In an AR app where you annotate objects or buildings in a camera view, I want to understand the role that the different hardware components on the phone (iPhone/Android) play in achieving the AR effect. Please elaborate on the following (a sketch of how I imagine reading these sensors follows the list):
- Camera: provides the 2D view of reality.
- GPS: provides the longitude and latitude of the device.
- Compass: direction with respect to magnetic north.
- Accelerometer: (does it have a role?)
- Altimeter: (does it have a role?)
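For reference, this is roughly how I picture reading these sensors on iOS with CoreLocation and CoreMotion. The class and comments are my own and this is only a sketch, not production code; the only real APIs in it are the CoreLocation/CoreMotion calls:

```swift
import CoreLocation
import CoreMotion

// Sketch: subscribe to the GPS, compass, and accelerometer on iOS.
final class SensorReader: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let motionManager = CMMotionManager()

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()    // GPS: latitude/longitude (plus an altitude estimate)
        locationManager.startUpdatingHeading()     // compass: degrees from magnetic/true north
        motionManager.startAccelerometerUpdates()  // accelerometer: gravity vector, i.e. device tilt?
    }

    // GPS fix: where the phone is standing.
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let loc = locations.last else { return }
        print("lat \(loc.coordinate.latitude), lon \(loc.coordinate.longitude), alt \(loc.altitude) m")
    }

    // Compass fix: which way the phone is pointing.
    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        print("heading \(newHeading.trueHeading)° from true north")
    }
}
```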
An example: if the camera view is showing the New York skyline, how does the information from the hardware listed above help me annotate the view? Assuming I have the longitude and latitude of the Chrysler Building and it is visible in my camera view, how does one accurately calculate where to place the name on the 2D picture? I know that, given two pairs of (longitude, latitude), you can calculate the distance between the points.
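To make that last point concrete, here is my understanding of the distance calculation (haversine), plus the initial bearing, which I suspect is the quantity that actually matters for placing the label horizontally. The function names are mine, and the sample coordinates are only approximate:

```swift
import Foundation

/// Great-circle distance in meters between two (latitude, longitude) points,
/// using the haversine formula. Degrees in, meters out.
func haversineDistance(lat1: Double, lon1: Double, lat2: Double, lon2: Double) -> Double {
    let earthRadius = 6_371_000.0  // mean Earth radius in meters
    let dLat = (lat2 - lat1) * .pi / 180
    let dLon = (lon2 - lon1) * .pi / 180
    let a = sin(dLat / 2) * sin(dLat / 2)
        + cos(lat1 * .pi / 180) * cos(lat2 * .pi / 180) * sin(dLon / 2) * sin(dLon / 2)
    return 2 * earthRadius * atan2(sqrt(a), sqrt(1 - a))
}

/// Initial bearing from point 1 to point 2, in degrees clockwise from north.
func initialBearing(lat1: Double, lon1: Double, lat2: Double, lon2: Double) -> Double {
    let phi1 = lat1 * .pi / 180, phi2 = lat2 * .pi / 180
    let dLon = (lon2 - lon1) * .pi / 180
    let y = sin(dLon) * cos(phi2)
    let x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLon)
    let degrees = atan2(y, x) * 180 / .pi
    return (degrees + 360).truncatingRemainder(dividingBy: 360)
}

// Example: me near Bryant Park (~40.7536, -73.9832) looking toward the
// Chrysler Building (~40.7516, -73.9755).
let d = haversineDistance(lat1: 40.7536, lon1: -73.9832, lat2: 40.7516, lon2: -73.9755)
let b = initialBearing(lat1: 40.7536, lon1: -73.9832, lat2: 40.7516, lon2: -73.9755)
print("distance ≈ \(d) m, bearing ≈ \(b)° from north")
```

My current guess is that the horizontal placement is driven by (bearing − compass heading) compared against the camera's horizontal field of view, with the accelerometer supplying the pitch for the vertical placement, but I would like that confirmed and spelled out.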