My favorite part of these kinds of demos is when the audience goes wild (well, relatively) for the breakdancing elephant animation, even more than for the pseudo-3D graphics and psychedelic color scanline gimmicks.
I can vouch that this works, and it’s pretty straightforward once you manage to grab and build the two or three additional Quartz Composer plugins. I had to fold in a newer version of the ARToolkit libs, and I swapped out the pattern bitmap used to recognize the AR target for one I already had on hand – the default sample1 and sample2 patterns weren’t working for me for some reason. Apart from that, Quartz Composer’s a lot of fun to use, almost like building eyecandy demos with patch cables and effects pedals, and it’s already on your system if you have Xcode.
Research video demonstrating the ability to automatically select individual elements of a recorded song (the vocal track, a guitar solo, a ringing cellphone, etc.) by singing, whistling, or even Beavis & Butthead-style grunting in imitation. Not perfect, but it’s very clever. (I wish the video were embeddable…)
Kottke linked to this time-stitch-stretch video, which is kind of fun to watch. Reminds me of the 1990s video morphing work done using Elastic Reality, especially Michel Gondry’s video for Björk’s “Jóga” (which I think was done with ER…anyone know?)
The drawings in this collection were made by various users in a discussion forum on the website www.foreverdoomed.com. Using MS Paint, and other rudimentary computer drawing programs, users attempted to recreate their favorite album covers and let others on the forum guess the band and title from the artwork. […] Some gave themselves a limit of five minutes to recreate the most recognizable essentials.
I sort of like these. I’d forgotten the subtle charm of MS Paint’s spraycan, though I’d always envied MacPaint’s patterns.
Pretend to be Radiohead with this Instructable guide to 3D light scanning using a projector, a camera, and a bit of Processing! It’s designed to create the visualization seen in the video above, but you could also feed the point data to a 3D printer, an animation package, etc. Neat.
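If you do want to take the point data elsewhere, that step is surprisingly mundane: dump the vertices to a file any 3D package can import. Here’s a minimal Processing sketch of the idea, writing a point cloud out as a Wavefront OBJ file (the helix is stand-in data and the names are mine, not code from the Instructable):

PVector[] points;

void setup() {
  // Stand-in data: a little helix where a real scan's points would go.
  points = new PVector[300];
  for (int i = 0; i < points.length; i++) {
    float t = i * 0.1;
    points[i] = new PVector(cos(t), sin(t), t * 0.05);
  }
  // Each OBJ "v" line is one vertex; Blender, MeshLab, etc. read this directly.
  PrintWriter obj = createWriter("scan.obj");
  for (PVector p : points) {
    obj.println("v " + p.x + " " + p.y + " " + p.z);
  }
  obj.flush();
  obj.close();
  exit();
}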
To help further the field of computational photography, a team at Stanford is working on a homebrewed, open source digital camera that they can sell at cost to other academics in the field. Right now it’s pretty big and clunky-looking, but a camera that can be extended with the latest image processing techniques coming out of the labs would be very sexy indeed. There’s a recent press release about the team that’s worth reading, along with a video and an animation or two explaining the project.
Those who want to tinker with their existing store-bought cameras might want to check out the firmware hacks floating around out there, like the excellent CHDK software (GPL’ed, I think) that runs on most modern Canon point-and-shoot cameras. With a little elbow grease and some free tools you can add a lot of professional(ish) features and scripting support to your low-end camera.
The Xerox Star 8010 OS, an early GUI from 1981. I wish my desktop looked a bit more like this today. More interface awesomeness from this system on the DigiBarn Computer Museum site.
Big news for high school hacker nerds everywhere who want to give their graphing calculator’s Z80 processor a better workout than just crunching algebra problems. Also a very good reminder that yesterday’s strong encryption takes only a modest amount of time and hardware to crack today (in this case, it took one user with a dual-core Athlon about 75 days to break RSA-512). No, you can’t hide secrets from the future.
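For the curious, “breaking” the key here means factoring the public RSA modulus to recover the private signing exponent; the real attack reportedly used the general number field sieve, but the principle fits in a toy Processing sketch with a textbook-sized modulus (these numbers are illustrative, nothing like TI’s actual 512-bit key):

import java.math.BigInteger;

void setup() {
  BigInteger n = new BigInteger("3233"); // public modulus, secretly 61 * 53
  BigInteger e = new BigInteger("17");   // public exponent
  // Trial division stands in for the number field sieve; it only
  // succeeds because n is tiny, which is exactly the point of key size.
  BigInteger p = null;
  for (BigInteger i = new BigInteger("2");
       i.multiply(i).compareTo(n) <= 0;
       i = i.add(BigInteger.ONE)) {
    if (n.mod(i).equals(BigInteger.ZERO)) { p = i; break; }
  }
  BigInteger q = n.divide(p);
  BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
  BigInteger d = e.modInverse(phi); // the "secret" exponent, now not so secret
  println("p = " + p + ", q = " + q + ", d = " + d);
  exit();
}

With p and q in hand, anyone can compute d and sign their own code, which is exactly what the calculator folks wanted.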
Back when I was a young’un, we didn’t have to get around signing keys to run Z80 assembly; all it took was a homemade serial port interface and a copy of ZShell…
Logstalgia (aka ApachePong), a visualizer that turns Apache log file entries into an automated game of OpenGL Pong, with the server paddle hitting requests back at the calling visitors. Pipe your logs in over ssh and tail to get real-time infoviz. Hey, I’ve seen worse screensavers…
Real-Time Object Recognition on a Mobile Device. I’ve seen this done for product lookups like books and boxes of cereal at the store, but hadn’t considered the accessibility implications. Not a bad idea, assuming that it produces valid information most of the time. Also seems like it would be limited to objects of a specific scale?