Our eyes, with the help of our brain, can see and recognize a lot of objects. But they cannot see the facts or the story behind these objects. For that we have to turn to supplementary sources like encyclopedias or the internet. Sometimes the objects carry metadata: a text, a barcode, a QR code or RFID signals.
Instead of consulting other sources we can also turn to other eyes which see more and augment reality with additional information. Cameras in head-mounted displays, in virtual retinal displays or – much more common now – in smartphones are such eyes: take a picture, send this picture (or alternatively the GPS and compass information) to databases with picture recognition or with location-based information, retrieve data, integrate these data back into the phone's screen, and it's done: you can see the facts behind the reality.
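The GPS-and-compass variant of this pipeline can be sketched in a few lines. The following is a minimal illustration, not any particular app's implementation: the points of interest are a hypothetical in-memory stand-in for the remote database, and the coordinates are placeholders. Given the phone's position and compass heading, it computes the bearing to each point of interest and keeps those that fall inside the camera's field of view – exactly the information an AR browser overlays on the live image.

```python
import math

# Hypothetical stand-in for a location-based database;
# a real app would query a remote service instead.
POI_DB = [
    {"name": "Matterhorn", "lat": 45.9766, "lon": 7.6585},
    {"name": "Gornergrat", "lat": 45.9833, "lon": 7.7847},
]

def bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def pois_in_view(lat, lon, heading, fov=60):
    """Return the names of POIs inside the camera's field of view."""
    hits = []
    for poi in POI_DB:
        b = bearing(lat, lon, poi["lat"], poi["lon"])
        # smallest angular difference between bearing and heading
        diff = abs((b - heading + 180) % 360 - 180)
        if diff <= fov / 2:
            hits.append(poi["name"])
    return hits
```

Standing in Zermatt and pointing the phone southwest (heading ≈ 225°) would surface the Matterhorn entry; pointing southeast would surface Gornergrat instead.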
Or in the words of the industry (Vuzix Corp.): “For those not familiar with AR, it is an environment that includes both virtual reality and real-world elements. For instance, an AR user might wear translucent goggles; through these, he could see the real world, as well as computer-generated images projected on top of that world.”
There are quite a lot of applications emerging in the field of Augmented Reality (AR):
- Google Goggles. Basically this is a picture search via smartphone, but it’s also a GPS-based AR application.
- Wikitude World Browser. This smartphone browser uses the camera, GPS and compass information of the smartphone to integrate additional information into the viewed picture. “WIKITUDE World Browser presents the user with data about their surroundings, nearby landmarks, and other points of interest by overlaying information on the real-time camera view of a smart-phone.”
- ‘The Layar application works with GPS and compass to determine location and the direction the phone is facing. The camera input provides the reality. The location is used to retrieve the layer information with coordinates and other relevant data. This is retrieved from the Layar server where all partner Layar information is collected and processed. This is then combined, computed and displayed in the ‘active’ layer of the application.’
- Finding WiFi connections. ‘WorkSnug is at the cutting edge of mobile technology, employing Augmented Reality and location-based services to deliver an iPhone application that has already enjoyed huge global attention. The app shares our workspace review data for those on the move, searching for the nearest and best places to connect. Perfect for those in between meeting times. Simply point the phone and all becomes clear.’
- See also: http://www.kooaba.com/ . Not really AR but an interesting combination of picture search and personal library.
But where are the facts coming from?
Most of the applications get their data from open sources like Wikipedia or other community-based projects (e.g. wikitude.me). But there are also commercial databases under construction.
Among the facts behind the reality (whatever that is), statistics are some of the most interesting ones ;). How can real-world pictures be combined with statistical information? Official statistics do not deal with single objects. But single objects are embedded in broader contexts where statistics make sense and are available. The Matterhorn, for instance, is located in a commune, and this commune has a population of x, etc.
This could be a next step: localizing the object in its semantic context. More reflection and … access to the (open) data are needed. And this could open up a new way of telling a story and of bringing statistical facts to the people – on the spot: statistically augmented reality.
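The Matterhorn-to-commune idea above can be sketched as a simple chain of lookups. Everything here is hypothetical: in practice the object-to-commune step would come from a reverse-geocoding service and the figures from an official statistics database; the values below are illustrative placeholders only.

```python
# Hypothetical lookup tables -- placeholders for a reverse-geocoding
# service and an official statistics database.
OBJECT_TO_COMMUNE = {
    "Matterhorn": "Zermatt",
}

COMMUNE_STATS = {
    # illustrative figures, not official numbers
    "Zermatt": {"population": 5800, "area_km2": 243},
}

def augment(object_name):
    """Localize an object in its statistical context: object -> commune -> stats."""
    commune = OBJECT_TO_COMMUNE.get(object_name)
    if commune is None:
        return None  # object not (yet) linked to a statistical unit
    return {"object": object_name, "commune": commune, **COMMUNE_STATS[commune]}
```

The interesting design question is the middle step: once an object is mapped onto a statistical unit (commune, district, country), any statistic published for that unit becomes an overlay candidate.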
For more information about AR see http://www.augmented-reality.org/ (02-2010 under construction).