Notes about augmented reality

December 2, 2011 permalink

Bananaphone Invoked Computing

Banana Phone And Pizza Box Laptop PC – Invoked Computing For Ubiquitous AR

Usually “augmented reality” means using a camera device to view an overlay of information or digital controls on top of a live video feed of some kind (say, on an iPhone or a webcam-equipped desktop). This is roughly the opposite: a camera-plus-projector system that maps your intents onto everyday objects around the house, for “invoked computing”.

Mostly I share this because I like this bananaphone demo:

In the banana scenario, the person takes a banana out of a fruit bowl and brings it up to their ear. A high-speed camera tracks the banana, and a parametric speaker array directs sound at it in a narrow beam. The person talks into the banana as if it were a conventional phone.
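The pipeline above, track the object with a camera and then aim a narrow sound beam at it, can be sketched in miniature. This is a toy Python version under loose assumptions: the camera frame is a synthetic binary mask rather than real video, and `find_centroid` / `steer_beam` are illustrative names I made up, not anything from the actual system.

```python
def find_centroid(mask):
    """Centroid of the 'on' pixels in a binary mask (list of rows of 0/1).
    Stands in for the high-speed camera's object tracker."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None
    return (xs / n, ys / n)

def steer_beam(centroid, frame_w, frame_h, fov_deg=60.0):
    """Convert an image-space centroid into pan/tilt angles (degrees) for
    the speaker array, assuming a simple linear field-of-view model."""
    cx, cy = centroid
    pan = (cx / frame_w - 0.5) * fov_deg
    tilt = (0.5 - cy / frame_h) * fov_deg
    return pan, tilt

# Synthetic 8x8 frame with a bright blob (the "banana") at upper right.
frame = [[1 if (6 <= x <= 7 and 1 <= y <= 2) else 0 for x in range(8)]
         for y in range(8)]
c = find_centroid(frame)
pan, tilt = steer_beam(c, 8, 8)
```

In the real system each frame would come from the tracker at high rate, and the angles would drive the parametric array's steering; the toy version just shows the frame-to-angle plumbing.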

(Via PhysOrg via ACM TechNews)

July 25, 2010 permalink

ARToolKit in Quartz Composer

Augmented Reality without programming in 5 minutes

I can vouch that this works, and it’s pretty straightforward once you manage to grab and build the two or three additional Quartz Composer plugins. I had to fold in a newer version of the ARToolKit libs, and I swapped out the pattern bitmap used to recognize the AR target for one I already had on hand, since the default sample1 and sample2 patterns weren’t working for me for some reason. Apart from that, Quartz Composer is a lot of fun to use, almost like building eye-candy demos with patch cables and effects pedals, and it’s already on your system if you have Xcode.

(Via Make)

December 22, 2009 permalink

Magician Marco Tempest Demonstrates a Portable AR Screen

Magician Marco Tempest demonstrates a portable “magic” augmented reality screen. The system uses a laptop, a small projector, a PlayStation Eye camera (presumably with the IR filter popped out?), some IR markers to make corner detection of the canvas frame possible, an Arduino (?), and openFrameworks-based software developed by Zachary Lieberman. I really love this kind of demo – people on the street (especially kids) intuitively understand what’s going on. This work reminds me a lot of Zack Simpson’s Mine-Control projects, especially the use of cheap commodity hardware to create a fun spectacle.
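The corner detection is what makes the trick work: once the four IR markers give you the canvas corners in camera space, you can solve for a homography that maps camera coordinates onto the screen content. A minimal sketch of that mapping in pure Python; the direct linear transform and the little Gaussian-elimination solver here are generic textbook versions, not anything from the actual openFrameworks code, and the corner coordinates are made up.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Solve for the 3x3 homography mapping four src points to four dst
    points (direct linear transform, with the bottom-right entry fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, pt):
    """Apply a homography to a 2-D point (with perspective divide)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Detected marker corners in camera space (a slightly skewed quad),
# mapped to a 640x480 virtual canvas.
corners = [(12.0, 8.0), (100.0, 14.0), (96.0, 90.0), (10.0, 84.0)]
canvas = [(0.0, 0.0), (640.0, 0.0), (640.0, 480.0), (0.0, 480.0)]
H = homography(corners, canvas)
```

With `H` in hand, any camera-space point on the physical frame can be warped into canvas coordinates, which is the step that lets the projected content “stick” to the screen as it moves.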

(Via Make)

May 27, 2009 permalink

Real-Time Object Recognition on a Mobile Device

Real-Time Object Recognition on a Mobile Device. I’ve seen this done for product lookups like books and boxes of cereal at the store, but hadn’t considered the accessibility implications. Not a bad idea, assuming it produces valid information most of the time. It also seems like it would be limited to objects at a specific scale?
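Recognition systems like this typically match feature descriptors from the camera frame against a database of known objects. A toy sketch of just the matching step, using nearest-neighbor search with Lowe’s ratio test to throw out ambiguous matches; the descriptors below are tiny made-up vectors, not real SIFT or SURF output, and I’m only assuming this class of pipeline since the post doesn’t say which method the device uses.

```python
def dist2(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match(query, database, ratio=0.75):
    """For each query descriptor, find its two nearest database descriptors
    and keep the match only if the best is clearly better than the
    runner-up (Lowe's ratio test). Returns (query_index, db_index) pairs."""
    matches = []
    for qi, q in enumerate(query):
        ranked = sorted(range(len(database)),
                        key=lambda di: dist2(q, database[di]))
        best, second = ranked[0], ranked[1]
        # Compare squared distances, so the ratio is squared too.
        if dist2(q, database[best]) < ratio ** 2 * dist2(q, database[second]):
            matches.append((qi, best))
    return matches

# Tiny fake 4-D descriptors: query[0] is a clear match for db[0];
# query[1] sits nearly equidistant between db[1] and db[2], so the
# ratio test should reject it as ambiguous.
db = [(1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 1.1, 0.0, 0.0)]
query = [(0.9, 0.1, 0.0, 0.0), (0.0, 1.05, 0.0, 0.0)]
result = match(query, db)
```

The scale question in the post is a real limitation for this kind of approach: descriptor matching tolerates some scale change, but a database trained on hand-held products won’t generalize to, say, buildings or coins without separate models.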