This is a compelling use of AI for photographic manipulation (in my mind more practical than many of the other AI image generation examples that are flooding the art websites these days): basically the software can analyze a photograph, use AI to generate a pretty accurate depth map of the subject of the photo, and then use that for dynamic relighting (allowing you to add different artificial lights, color gels, etc.). You can try the web-based demo on your own photos! Neat.
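The post doesn't describe the tool's internals, but the core idea can be sketched: estimate per-pixel surface normals from the depth map's gradient, then shade each pixel with a simple Lambertian term toward a chosen light direction. Everything below (the function name, the shading model, the parameters) is an illustrative assumption, not the actual software's pipeline.

```python
import numpy as np

def relight_from_depth(image, depth, light_dir, ambient=0.2):
    """Relight an image using a per-pixel depth map (illustrative sketch).

    Surface normals are estimated from the depth gradient, then shaded
    with a simple Lambertian model toward `light_dir`.
    """
    # Surface normal ~ (-dz/dx, -dz/dy, 1), normalized per pixel.
    dzdy, dzdx = np.gradient(depth)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Lambertian term: clamp negative dot products (faces turned away).
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    shade = np.clip(normals @ light, 0.0, 1.0)

    # Mix a constant ambient term with the directional term, per channel.
    return image * (ambient + (1.0 - ambient) * shade)[..., None]
```

A real tool would add shadows, colored gels (a per-channel light color), and a learned depth estimator in front of this step; the sketch only shows why a depth map is enough to fake new lighting.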
In the 1890s the French physicist Gabriel Lippmann devised a new method of taking photographs that led to the first photographic recording of color:
Lippmann’s color photography process involved projecting the optical image as usual onto a photographic plate. The projection was done through a glass plate coated with a transparent emulsion of very fine silver halide grains on the other side. There was also a liquid mercury mirror in contact with the emulsion, so the projected light traveled through the emulsion, hit the mirror, and was reflected back into the emulsion.
The resulting plates are pretty cool looking, as seen in the video — very similar to the discovery of holography decades later — and technically they record a wider spectrum of color than our standard modern imaging techniques. He won the Nobel Prize in 1908 for his research, but the method was largely shelved due to the complexity of the process and the inability to make color prints, which also didn’t appear commercially until much later.
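The description above doesn't spell out why the mercury mirror matters, so here is the standard interference explanation (textbook physics, not taken from the article): the incoming and reflected waves form standing waves inside the emulsion, and silver is deposited in layers whose spacing encodes the wavelength.

```latex
% Standing-wave layer spacing in an emulsion of refractive index n,
% for light of vacuum wavelength \lambda:
\[
  \Lambda = \frac{\lambda}{2n}
\]
% On viewing, white light reflects constructively (Bragg reflection) only
% at the wavelength matching the recorded layer spacing, so each point on
% the plate reproduces the color that originally exposed it.
```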
In 2021 researchers at the École Polytechnique Fédérale de Lausanne published a paper on their research into Lippmann’s images, including a new method that lets us see the images closer to the original color captured in the photographic scene.
Good news, owners of Game Boy Cameras! New technology will now up-res and almost accurately colorize those grainy low-res spinach photos.
Jokes aside, there are some pretty amazing things being done these days in the world of neural net-trained image enhancements. See also this crazy research on using Google Brain to reasonably “zoom! enhance!” photos as small as 8×8 pixels (we used to laugh at crime drama TV shows and their unbelievable photo techniques…but now it’s getting pretty close…)
GelSight is a high-resolution, portable 3D imaging system from researchers at MIT: basically a small piece of translucent rubber injected with metal flakes. Watch the video to see some of the microscopic scans they’re able to get using this. I love non-showy SIGGRAPH tech demos like this one.
The man who created the first scanned digital photograph in 1957, Russell Kirsch, pioneer of the pixel, apologizes in the May/July issue of the Journal of Research of the National Institute of Standards and Technology. Now 81 years old, he offers up a replacement (sorta) for the square pixel he first devised: tessellated 6×6 pixel masks that offer much smoother images at lower overall resolution. The resulting file sizes are slightly larger but the improved visual quality is pretty stunning, as seen in the closeup above. His research was inspired by the 6th-century tile mosaics in Ravenna, Italy.
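The article's exact mask set isn't reproduced here, but the general idea (replace each 6×6 block with a two-level pattern chosen from a small library of square and triangular partitions) can be sketched like this; the candidate masks and function names are my own illustrative guesses, not Kirsch's actual set.

```python
import numpy as np

def _candidate_masks(k=6):
    """A few illustrative k-by-k binary partitions: flat, half-splits,
    and triangular splits (guesses at the kinds of shapes described)."""
    y, x = np.mgrid[0:k, 0:k]
    return [
        np.ones((k, k), bool),  # flat block: a single region
        y < k // 2,             # horizontal split
        x < k // 2,             # vertical split
        x > y,                  # diagonal split (triangle)
        x + y < k - 1,          # anti-diagonal split (triangle)
    ]

def encode_block(block):
    """Pick the two-region mask whose per-region means best fit the block,
    and return the reconstructed block."""
    best = None
    for mask in _candidate_masks(block.shape[0]):
        out = np.empty_like(block, dtype=float)
        for region in (mask, ~mask):
            if region.any():
                out[region] = block[region].mean()
        err = ((out - block) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, out)
    return best[1]
```

The payoff Kirsch describes follows from the storage side: instead of 36 samples per block, you store a mask index plus two intensity levels, yet edges that cross the block survive far better than with one big square pixel.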
There are a lot of comments out there complaining that square pixels are more efficient, that image and wavelet compression are old news, and so on. That’s true, but if you actually read the article you’ll find that the point isn’t so much the shape, the efficiency, or even the capture/display technology needed; rather, it’s that this could be a good way to reduce an image’s resolution while still retaining visual clarity, which matters in medical applications and in other situations where low-resolution images still get passed around.
My camera switches over to portrait-mode whenever it sees a painting or a drawing with a face in it. It stays in AUTO mode otherwise.
According to Popular Mechanics: “a chip inside the camera constantly scans the image in its viewfinder for two eyes, a nose, ears and a chin, making out up to 10 faces at a time before you’ve hit the shutter.”
I decided to test my camera—it’s a Canon PowerShot SX120—to see what it decides to regard as a face.
Artist James Gurney tests out his point-and-shoot’s facial recognition chip against works of art and illustration. A mixed bag, but a good reminder that this technology is getting better and cheaper (and subtle) all the time.
The American Society of Media Photographers has a new resource up for people working with digital images: dpBestflow rounds up the best practices and workflows for digital photography, in neat, easy-to-digest pieces, with tips on subjects ranging from camera file formats to desktop hardware to room lighting. If you look at their handy Quick Reference overview, be sure to note that each bullet point links to a more in-depth piece if you’re interested in drilling down for more info…
This papier-mâché Felix the Cat was the first image to be broadcast over experimental television in preparation for the first public RCA broadcast in 1928. Black and white and made of durable material, the figure revolved on a turntable, beaming out as a tiny test image so engineers could adjust the signal. Early TV technology fascinates me.
There’s more good info on early test patterns over at Design Observer.
Todd Vanderlin’s working on a project using OpenFrameworks and ARTag markers to simulate scratching a real record, but using a camera as the virtual needle. Nifty.
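The post doesn't detail the mapping, but a virtual needle like this typically converts the tracked marker's frame-to-frame rotation into a playback-speed multiple. A minimal sketch, assuming a roughly 33 rpm baseline (the function name, the rpm value, and the mapping itself are all my assumptions, not Vanderlin's code):

```python
import math

def playback_rate(prev_angle, angle, dt, rpm=33.3):
    """Map the change in a tracked marker's angle (radians) over dt
    seconds to a playback-speed multiple of a record at `rpm`."""
    nominal = 2 * math.pi * rpm / 60.0  # rad/s at normal playback speed
    # Wrap the angular difference into [-pi, pi] so crossing the
    # 0/2pi boundary doesn't produce a huge spurious jump.
    dtheta = math.atan2(math.sin(angle - prev_angle),
                        math.cos(angle - prev_angle))
    return (dtheta / dt) / nominal
```

Feed the result to the audio engine as a resampling ratio: 1.0 plays normally, negative values play the sample backwards, just like dragging a record against the needle.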