Phil Plait of Bad Astronomy lucidly explains display resolution, clearing up arguments about the iPhone 4’s Retina Display:
Imagine you see a vehicle coming toward you on the highway from miles away. Is it a motorcycle with one headlight, or a car with two? As the vehicle approaches, the light splits into two, and you see it’s the headlights from a car. But when it was miles away, your eye couldn’t tell if it was one light or two. That’s because at that distance your eye couldn’t resolve the two headlights into two distinct sources of light.
The ability to see two sources very close together is called resolution.
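The headlight example can be made quantitative with the small-angle approximation. The figures below are my own assumptions, not from Plait’s post: a resolving power of roughly one arcminute for a good human eye, and headlights about 1.5 m apart.

```python
import math

# Assumed: the human eye resolves roughly 1 arcminute under good conditions.
EYE_RESOLUTION_RAD = math.radians(1 / 60)

def max_resolvable_distance(separation_m, resolution_rad=EYE_RESOLUTION_RAD):
    """Distance beyond which two points separation_m apart merge into one.

    Small-angle approximation: angle ~= separation / distance.
    """
    return separation_m / resolution_rad

# Car headlights are roughly 1.5 m apart (another assumption).
d = max_resolvable_distance(1.5)
print(f"{d:.0f} m (~{d / 1609.344:.1f} miles)")  # about 5157 m, ~3.2 miles
```

Which lines up nicely with the "miles away" in the quote: past about three miles, two headlights really do collapse into one.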
DPI issues aside, the name “Retina Display” is awfully confusing given that “virtual retinal display” is already an established term for a very different technology…
An AR iPhone simulator for the iPhone, with working controls. I can’t put it any better than this anonymous comment from the MAKE post: “Yo Dawg, i heard you like augmented reality, so we put an iphone in your iphone so you can touch while you touch.”
The FAT LAB crew put the markup back in markup language, with their week dedicated to creating new applications and standardizing their existing work around a Graffiti Markup Language, an XML archive format describing tagging and gestural drawing. Rad.
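For a sense of what the format looks like: a GML file records strokes as timed point lists inside XML. The element names below (gml/tag/drawing/stroke/pt) follow the published GML examples, but treat this as an illustrative sketch rather than a spec-complete file.

```python
import xml.etree.ElementTree as ET

# A minimal GML-style tag: one stroke, three timestamped points.
GML_DOC = """<gml>
  <tag>
    <drawing>
      <stroke>
        <pt><x>0.0</x><y>0.0</y><time>0.00</time></pt>
        <pt><x>0.5</x><y>0.8</y><time>0.15</time></pt>
        <pt><x>1.0</x><y>0.2</y><time>0.30</time></pt>
      </stroke>
    </drawing>
  </tag>
</gml>"""

def stroke_points(gml_text):
    """Extract (x, y, time) tuples from every stroke in a GML document."""
    root = ET.fromstring(gml_text)
    return [
        (float(pt.findtext("x")), float(pt.findtext("y")),
         float(pt.findtext("time")))
        for pt in root.iter("pt")
    ]

print(stroke_points(GML_DOC))
```

The appeal of standardizing on something this simple is that any tool that can replay timed points — a robot arm, a projector, a browser — can render the same tag.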
Yann Tiersen’s Comptine D’un Autre Été, L’après-Midi played on six iPhones. The performance is far from perfect, but I have a soft spot for this piece and it’s fun to see someone trying to overcome the limitations of the tiny virtual keyboard.
Near real-time face detection on the iPhone using OpenCV. An obvious point to make, I know, but I still think it’s amazing that this would have been very difficult to do on any home computer just a few years ago, yet our mobile devices now handle the task with relative ease.
NewForestar’s NESynth, bringing 8-bit style waveforms to an iPhone app. It supports P2P collaboration with other iPhones, but if it had a full tracker built in, it’d be killer. The accelerometer-tilt pitch-bending and “Famicom controller mode” are neat additions.
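Much of that “8-bit” character comes from the NES pulse channels, which only offer a handful of duty cycles (12.5%, 25%, 50%, 75%). A minimal sketch of generating one such pulse wave — the parameters are my own, not anything from NESynth:

```python
def pulse_wave(freq_hz, duty=0.25, sample_rate=44100, duration_s=0.01):
    """Generate a naive (non-band-limited) pulse wave as floats in [-1.0, 1.0].

    duty is the fraction of each cycle spent high; the NES pulse channels
    expose 0.125, 0.25, 0.5, and 0.75.
    """
    samples = []
    for n in range(int(sample_rate * duration_s)):
        phase = (n * freq_hz / sample_rate) % 1.0  # position within the cycle
        samples.append(1.0 if phase < duty else -1.0)
    return samples

buf = pulse_wave(440.0, duty=0.25)  # an A4 with the thin 25%-duty NES timbre
```

Sweeping `freq_hz` over time is all the accelerometer pitch-bend would need to do.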
Real-Time Object Recognition on a Mobile Device. I’ve seen this done for product lookups like books and boxes of cereal at the store, but hadn’t considered the accessibility implications. Not a bad idea, assuming that it produces valid information most of the time. Also seems like it would be limited to objects of a specific scale?