In an AR app in which you annotate objects or buildings in a camera view, I want to understand the role that the different hardware components on the phone (iPhone/Android) play in achieving the AR effect. Please elaborate on the following:

  • Camera: provides the 2D view of reality.
  • GPS: provides the longitude and latitude of the device.
  • Compass: direction with respect to magnetic north.
  • Accelerometer: (does it have a role?)
  • Altimeter: (does it have a role?)

An example: if the camera view is showing the New York skyline, how does the information from the hardware listed above help me annotate the view? Assuming I have the longitude and latitude of the Chrysler Building and it is visible in my camera view, how does one calculate accurately where to annotate its name on the 2D picture? I know that given two (longitude, latitude) pairs, you can calculate the distance between the points.
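For reference, the distance between two (longitude, latitude) pairs that I mentioned is typically computed with the haversine formula; a minimal sketch (assuming a spherical Earth with mean radius 6371 km, which is accurate to roughly 0.5%):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points, in degrees."""
    R = 6371000.0  # mean Earth radius in metres (spherical approximation)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km:
print(round(haversine_m(0.0, 0.0, 1.0, 0.0)))
```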

+4  A: 
  • use the camera to get the field of view.
  • use the compass to determine the direction the device is pointing in. The direction determines the set of objects that fall into the field of view and need to be reflected with AR adorners.
  • use the GPS to determine the distance between your location and each object. The distance is usually reflected in the size of the AR adorner you show for that object or in the level of detail you show.
  • use the accelerometer to determine the horizon of the view (a 3-axis accelerometer sensitive enough to measure the force of gravity). The horizon can be combined with the object's altitude to position the AR adorners properly along the vertical axis.
  • use the altimeter for added precision in vertical positioning.
  • if you have detailed terrain/building information, you can also use the altimeter to determine the line of sight to the various objects and clip out (fully or partially) the AR adorners for partially obscured or invisible objects.
  • if the AR device is moving, use the accelerometers to determine its speed, and either throttle the number of objects downloaded per view or smartly pre-fetch the objects that will come into view, to optimize for the speed of view changes.

I will leave the details of calculating all this data from the devices as an exercise to you. :-)
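That said, the core of the exercise can be sketched. The following is a simplified illustration (all names are mine, not a real API), assuming a linear mapping of angles across the field of view, ignoring lens distortion and GPS/compass error; `distance_m` would come from the great-circle distance between the device and the object:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_position(device_lat, device_lon, heading_deg, pitch_deg,
                    obj_lat, obj_lon, obj_alt_m, device_alt_m,
                    h_fov_deg, v_fov_deg, screen_w, screen_h, distance_m):
    """Map an object's geographic position to (x, y) pixels, or None if off-screen."""
    # Horizontal: offset of the object's bearing from the compass heading,
    # normalized to [-180, 180) and mapped linearly across the horizontal FOV.
    rel = (bearing_deg(device_lat, device_lon, obj_lat, obj_lon)
           - heading_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > h_fov_deg / 2:
        return None  # object lies outside the camera's field of view
    x = (rel / h_fov_deg + 0.5) * screen_w
    # Vertical: elevation angle of the object above the device (altitude
    # difference over ground distance), offset by the accelerometer-derived
    # camera pitch, mapped across the vertical FOV.
    elev = math.degrees(math.atan2(obj_alt_m - device_alt_m, distance_m))
    rel_v = elev - pitch_deg
    if abs(rel_v) > v_fov_deg / 2:
        return None
    y = (0.5 - rel_v / v_fov_deg) * screen_h  # screen y grows downward
    return (x, y)

# Device at the origin facing north; object 1 degree due north at the same
# altitude lands at the centre of a 640x480 view:
print(screen_position(0, 0, 0, 0, 1, 0, 0, 0, 60, 40, 640, 480, 111000))
```

A real implementation would replace the linear angle-to-pixel mapping with a proper camera projection and fuse compass/accelerometer readings, but the roles of each sensor are as above: compass for the horizontal placement, accelerometer for the pitch, GPS for the bearing and distance, altimeter for the elevation term.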

Franci Penov
Thanks Franci. I have a much clearer picture of what is involved in an AR app.
Jacques René Mesrine