Usually “augmented reality” means using a camera-equipped device to view an overlay of information or digital controls on top of live video (say on an iPhone, or a webcam and desktop), but this is kind of the opposite: a camera+projector system that maps your intent onto everyday objects around the house for “invoked computing”.
Mostly I share this because I like this bananaphone demo:
In the banana scenario, a person takes a banana out of a fruit bowl and brings it up to his ear. A high-speed camera tracks the banana while a parametric speaker array directs sound at it in a narrow beam, and the person talks into the banana as if it were a conventional phone.
I can vouch that this works, and it’s pretty straightforward once you manage to grab and build the two or three additional Quartz Composer plugins. I had to fold in a newer version of the ARToolKit libs, and I swapped out the pattern bitmap used to recognize the AR target for one I already had on hand – the default sample1 and sample2 patterns weren’t working for me for some reason. Apart from that, Quartz Composer’s a lot of fun to use, almost like building eyecandy demos with patch cables and effects pedals, and it’s already on your system if you have Xcode.
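For anyone curious what that pattern swap amounts to under the hood, here’s a minimal sketch of the classic ARToolKit C API loading a custom marker and detecting it in a frame – my own illustration, not the Quartz Composer plugin’s actual code; the file name, marker width, and threshold are placeholders.

```cpp
// Sketch of loading a custom ARToolKit marker pattern and matching it
// against detections in a camera frame (classic ARToolKit C API).
#include <AR/ar.h>
#include <cstdio>

static int    patt_id       = -1;
static double patt_width    = 80.0;            // marker size in millimetres (assumed)
static double patt_center[2] = { 0.0, 0.0 };

bool loadCustomPattern()
{
    // arLoadPatt() reads a trained .patt bitmap and returns an ID that
    // detected markers are matched against later.
    patt_id = arLoadPatt("Data/my_pattern.patt");   // placeholder path
    if (patt_id < 0) {
        std::fprintf(stderr, "pattern load failed\n");
        return false;
    }
    return true;
}

void processFrame(ARUint8* image, int threshold)
{
    ARMarkerInfo* marker_info = nullptr;
    int           marker_num  = 0;
    double        trans[3][4];

    // Find all candidate markers in the current camera frame.
    if (arDetectMarker(image, threshold, &marker_info, &marker_num) < 0)
        return;

    // Keep the detection that matches our trained pattern and recover its
    // pose (a 3x4 transform from marker space to camera space).
    for (int i = 0; i < marker_num; ++i) {
        if (marker_info[i].id == patt_id) {
            arGetTransMat(&marker_info[i], patt_center, patt_width, trans);
            // trans can now drive whatever geometry the composition renders.
        }
    }
}
```

Training your own .patt file is the only extra step – once the ID matches, everything downstream of the detector is unchanged.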
Excellent use of AR for marketing: an in-store display that’s actually fun to play with, and it makes you pick up the box in order to see it come alive. Nice.
An AR iPhone simulator for the iPhone, with working controls. I can’t put it any better than this anonymous comment from the MAKE post: “Yo Dawg, i heard you like augmented reality, so we put an iphone in your iphone so you can touch while you touch.”
Magician Marco Tempest demonstrates a portable “magic” augmented reality screen. The system uses a laptop, a small projector, a PlayStation Eye camera (presumably with the IR filter popped out?), some IR markers so the corners of the canvas frame can be detected, an Arduino (?), and openFrameworks-based software developed by Zachary Lieberman. I really love this kind of demo – people on the street (especially kids) intuitively understand what’s going on. This work reminds me a lot of Zack Simpson’s Mine-Control projects, especially the use of cheap commodity hardware to create a fun spectacle.
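The core trick with those tracked corners is a perspective warp: once you know where the four corners of the canvas are, you warp your graphic so it lands exactly on the canvas and nowhere else. Here’s a rough sketch of that step using OpenCV rather than Lieberman’s openFrameworks code, and it assumes the corner blobs have already been found and that camera and projector views have been calibrated to each other – all of that is my own simplification.

```cpp
// Warp a source graphic onto a tracked quadrilateral (the canvas frame),
// producing a projector-sized frame that is black outside the canvas.
#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat warpContentToCanvas(const cv::Mat& content,
                            const std::vector<cv::Point2f>& canvasCorners,
                            const cv::Size& projectorSize)
{
    // Corners of the source graphic, in the same order as canvasCorners
    // (top-left, top-right, bottom-right, bottom-left).
    std::vector<cv::Point2f> srcCorners = {
        {0.0f, 0.0f},
        {static_cast<float>(content.cols), 0.0f},
        {static_cast<float>(content.cols), static_cast<float>(content.rows)},
        {0.0f, static_cast<float>(content.rows)}
    };

    // Homography that maps the graphic onto the tracked canvas quad.
    cv::Mat H = cv::getPerspectiveTransform(srcCorners, canvasCorners);

    // Render the warped graphic into a projector-sized frame; everything
    // outside the canvas stays black so only the screen gets lit.
    cv::Mat out;
    cv::warpPerspective(content, out, H, projectorSize);
    return out;
}
```

Run that per frame with freshly tracked corners and the image appears pinned to the canvas even as it moves, which is most of the “magic”.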
Real-Time Object Recognition on a Mobile Device. I’ve seen this done for product lookups like books and boxes of cereal at the store, but hadn’t considered the accessibility implications. Not a bad idea, assuming that it produces valid information most of the time. Also seems like it would be limited to objects of a specific scale?