Color nerdery ahead: I’ve been a fan of the CIELAB color space ever since I discovered Lab mode in Photoshop 20-ish years ago — it’s so awesome and useful to be able to manipulate color channels separately from luminosity! — and so I’m thrilled that web design is heading that direction as well with the new OKLCH color space in CSS Color 4.
This article from Evil Martians about why they’ve made the switch to OKLCH is a great read on the ins and outs of the new color space and why you should consider using it over the older, more familiar standards. The TL;DR: unlike hexadecimal or RGBA values, Lab/LCH colors are much easier to read and adjust directly in CSS. Want to make a color more saturated? Just increase the middle value, chroma! Oh, and contrast between different colors is preserved so long as the lightness (the first value) remains the same, which makes conforming to the WCAG color-contrast accessibility guidelines that much easier.
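To see what that readability looks like in practice, here’s a quick sketch using CSS Color 4’s `oklch()` notation (the custom property names and specific values here are just made-up examples):

```css
:root {
  /* oklch(lightness chroma hue) */
  --brand: oklch(62% 0.14 250);       /* a medium blue */
  --brand-vivid: oklch(62% 0.24 250); /* more saturated: only the chroma changed */
  --brand-warm: oklch(62% 0.14 40);   /* hue rotated toward orange; lightness untouched */
}
```

Because all three share the same lightness, each should hold roughly the same contrast ratio against a given background — that’s the property that makes WCAG auditing easier.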
I also learned from this in-depth article that Adobe Photoshop has adopted the OKLab space as a “perceptual” option when generating color gradients. Look at how ugly that “classic” gradient is in their screenshot! Gradients in Photoshop have always been messed up, so this is a pretty huge change.
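CSS gradients can opt into the same kind of perceptual interpolation, via the color-interpolation-method syntax from CSS Images 4 (browser support varies; the class names below are just illustrative):

```css
/* Default sRGB interpolation: tends toward a muddy gray in the middle */
.classic    { background: linear-gradient(to right, #00f, #ff0); }

/* OKLab interpolation: a smoother, more perceptually even transition */
.perceptual { background: linear-gradient(to right in oklab, #00f, #ff0); }
```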
This is a compelling use of AI for photographic manipulation (in my mind more practical than many of the other AI image generation examples that are flooding the art websites these days): basically, the software analyzes a photograph, uses AI to generate a pretty accurate depth map of the subject, and then uses that map for dynamic relighting (allowing you to add different artificial lights, color gels, etc.). You can try the web-based demo on your own photos! Neat.
A nice write-up on color grading in films, especially after the 1990s advent of digital intermediates and LUTs (color lookup tables). Or, to put it more plainly: Why do movies all look like that these days??
In the 1890s the French physicist Gabriel Lippmann devised a new method of taking photographs that led to the first photographic recording of color:
Lippmann’s color photography process involved projecting the optical image as usual onto a photographic plate. The projection was done through a glass plate coated with a transparent emulsion of very fine silver halide grains on the other side. There was also a liquid mercury mirror in contact with the emulsion, so the projected light traveled through the emulsion, hit the mirror, and was reflected back into the emulsion.
The resulting plates are pretty cool looking, as seen in the video — very similar to the discovery of holography decades later — and technically they record a wider spectrum of color than our standard modern imaging techniques. He won the Nobel Prize in 1908 for his research, but the method was largely shelved due to the complexity of the process and the inability to make color prints, which also didn’t appear commercially until much later.
In 2021 researchers at the École Polytechnique Fédérale de Lausanne published a paper on their research into Lippmann’s images, including a new method that lets us see the images closer to the original colors captured in the photographed scene.
Side trivia: among Lippmann’s doctoral students at the Sorbonne was Maria Skłodowska, later a winner of multiple Nobel prizes herself, under her better-known married name: Marie Curie!
Good news, owners of Game Boy Cameras! New technology will now up-res and almost-accurately colorize those grainy, low-res spinach photos.
Jokes aside, there are some pretty amazing things being done these days in the world of neural net-trained image enhancements. See also this crazy research on using Google Brain to reasonably “zoom! enhance!” photos as small as 8×8 pixels (we used to laugh at crime drama TV shows and their unbelievable photo techniques…but now it’s getting pretty close…)