March 27, 2023 permalink
An Interactive Guide to Color & Contrast →
What a nicely designed resource on color theory, along with some material on its use in UX and data visualization!
October 23, 2022 permalink
From Andrew Somers, a great primer on how color vision works and how illuminated display technology maps perception to luminance contrast, color gamut, etc. Especially useful is his writeup not only of WCAG 2’s limitations for determining proper contrast to meet accessibility needs, but also of upcoming standards like APCA (the Accessible Perceptual Contrast Algorithm) that will pave the way for more useful and relevant a11y guidelines.
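For context, the WCAG 2 contrast check he critiques boils down to a ratio of relative luminances. A minimal sketch in plain JavaScript (8-bit sRGB inputs assumed; this is the WCAG 2 formula, not APCA):

// WCAG 2.x relative luminance and contrast ratio (the formula whose
// shortcomings Somers discusses, not the newer APCA math).
function linearize(channel) {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r, g, b) {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(rgb1, rgb2) {
  const l1 = relativeLuminance(rgb1[0], rgb1[1], rgb1[2]);
  const l2 = relativeLuminance(rgb2[0], rgb2[1], rgb2[2]);
  const lighter = Math.max(l1, l2);
  const darker = Math.min(l1, l2);
  return (lighter + 0.05) / (darker + 0.05);
}

// #777 gray on white scores roughly 4.48:1, just under the 4.5:1 WCAG AA
// threshold for normal text: one of the borderline cases where the formula
// and perceived readability don't always agree.
console.log(contrastRatio([119, 119, 119], [255, 255, 255]).toFixed(2));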
September 12, 2021 permalink
If you’re super nerdy and experienced with using Photoshop’s individual color channels to make enhancements or custom masks, you might have noticed that the blue channel has very little influence on the overall sharpness of an RGB image. It never occurred to me that this is inherently a function of our own human eyesight, which is unable to focus blue light properly compared with other wavelengths!
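If you want to poke at this outside of Photoshop, here’s a rough way to compare how much edge detail each channel of a photo carries, in browser JavaScript. (This measures the image itself rather than your perception of it, and camera demosaicing and JPEG chroma subsampling muddy the comparison; assume the photo is already drawn into a canvas.)

// Crude per-channel "sharpness": mean absolute difference between
// horizontally adjacent pixels, tallied separately for R, G, and B.
function channelEdgeEnergy(canvas) {
  const ctx = canvas.getContext('2d');
  const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = image.data;
  const totals = [0, 0, 0]; // R, G, B
  let count = 0;
  for (let y = 0; y < image.height; y++) {
    for (let x = 0; x < image.width - 1; x++) {
      const i = (y * image.width + x) * 4;
      totals[0] += Math.abs(data[i] - data[i + 4]);
      totals[1] += Math.abs(data[i + 1] - data[i + 5]);
      totals[2] += Math.abs(data[i + 2] - data[i + 6]);
      count++;
    }
  }
  return totals.map(t => (t / count).toFixed(2)); // higher = more edge detail
}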
I like how the paper from the Journal of Optometry that this article links to straight up dunks on our questionably-designed eyeballs:
In conclusion, the optical system of the eye seems to combine smart design principles with outstanding flaws. […] The corneal ellipsoid shows a superb optical quality on axis, but in addition to astigmatism, it is misaligned, deformed and displaced with respect to the pupil. All these “flaws” do contribute to deteriorate the final optical quality of the cornea. Somehow, there could have been an opportunity (in the evolution) to have much better quality, but this was irreparably lost.
This also made me wonder if this blue-blurriness has anything to do with the theory that cultures around the world and through history tend to develop words for colors in a specific order, with words for “blue” appearing relatively late in a language’s development. Evidently that’s likely so, because hard-to-distinguish colors take longer to identify and classify, even for test subjects who have existing words for them!
August 6, 2012 permalink
Hmm, @harthvader has written some impressive neural network, machine learning, and image detection stuff, shared on her GitHub — wait, she’s combined these things into a JavaScript cat-detecting routine?! Okay, that wins.
var cats = kittydar.detectCats(canvas);
console.log("there are", cats.length, "cats in this photo");
console.log(cats[0]);
// { x: 30, y: 200, width: 140, height: 140 }
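The only wiring needed before that call is getting a photo into a canvas element. A rough sketch, assuming kittydar is already loaded on the page (the image filename is just a placeholder):

// Draw an image into a canvas, then hand the canvas to kittydar.
var img = new Image();
img.onload = function () {
  var canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  canvas.getContext("2d").drawImage(img, 0, 0);

  var cats = kittydar.detectCats(canvas);
  for (var i = 0; i < cats.length; i++) {
    console.log("cat at", cats[i].x, cats[i].y, "size", cats[i].width, "x", cats[i].height);
  }
};
img.src = "photo-of-cats.jpg"; // placeholder path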
You can try out Kittydar here.
(Via O’Reilly Radar)
June 27, 2012 permalink
The Internet has become self-aware, but thankfully it just wants to spend some time scrolling Tumblr for cat videos. From the NY Times, How Many Computers to Identify a Cat? 16,000:
[At the Google X lab] scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.
Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats. The neural network taught itself to recognize cats, which is actually no frivolous activity.
(Photo credit: Jim Wilson/The New York Times)
July 15, 2011 permalink
A new approach to computer vision object recognition: simulated heat-mapping:
The heat-mapping method works by first breaking an object into a mesh of triangles, the simplest shape that can characterize surfaces, and then calculating the flow of heat over the meshed object. The method does not involve actually tracking heat; it simulates the flow of heat using well-established mathematical principles, Ramani said. …
The method accurately simulates how heat flows on the object while revealing its structure and distinguishing unique points needed for segmentation by computing the “heat mean signature.” Knowing the heat mean signature allows a computer to determine the center of each segment, assign a “weight” to specific segments and then define the overall shape of the object. …
“A histogram is a two-dimensional mapping of a three-dimensional shape,” Ramani said. “So, no matter how a dog bends or twists, it gives you the same signature.”
In other words, recognizing discrete parts (like fingers or facial features) of an object in front of the camera should be much more accurate with this approach than with older techniques like simple edge detection. Uses for real-time recognition are apparent (more accurate Dance Central!), but it seems like this would also be a boon for character animation rigging?
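The real computation needs a proper triangle mesh and some serious linear algebra, but the core intuition (start heat at a point and let the shape’s structure show up in how it spreads) fits in a toy sketch. This is only a loose illustration on a tiny stick-figure graph, not the paper’s actual algorithm:

// Toy heat diffusion on a small branched graph standing in for a mesh.
// Put a unit of heat on one vertex, step the heat equation du/dt = -L*u
// forward with explicit Euler, and record each vertex's time-averaged
// temperature. The averages reflect distance from the source and local
// connectivity, the kind of structural fingerprint the real method builds.
const edges = [[0, 1], [1, 2], [2, 3], [1, 4]]; // vertex 1 is a junction
const n = 5;

// Graph Laplacian L = D - A.
const L = Array.from({ length: n }, () => new Array(n).fill(0));
for (const [a, b] of edges) {
  L[a][b] -= 1; L[b][a] -= 1;
  L[a][a] += 1; L[b][b] += 1;
}

let u = [1, 0, 0, 0, 0];        // all the heat starts at vertex 0
const dt = 0.1;
const steps = 200;
const meanTemp = new Array(n).fill(0);

for (let s = 0; s < steps; s++) {
  u = u.map((ui, i) => ui - dt * L[i].reduce((acc, lij, j) => acc + lij * u[j], 0));
  for (let i = 0; i < n; i++) meanTemp[i] += u[i] / steps;
}

console.log(meanTemp.map(t => t.toFixed(3)));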
(Via ACM TechNews)
June 24, 2011 permalink
From Moby Dick, chapter 42, “The Whiteness of the Whale”:
Is it that by its indefiniteness it shadows forth the heartless voids and immensities of the universe, and thus stabs us from behind with the thought of annihilation, when beholding the white depths of the milky way? Or is it, that as in essence whiteness is not so much a color as the visible absence of color, and at the same time the concrete of all colors; is it for these reasons that there is such a dumb blankness, full of meaning, in a wide landscape of snows – a colorless, all-color of atheism from which we shrink? And when we consider that other theory of the natural philosophers, that all other earthly hues – every stately or lovely emblazoning – the sweet tinges of sunset skies and woods; yea, and the gilded velvets of butterflies, and the butterfly cheeks of young girls; all these are but subtile deceits, not actually inherent in substances, but only laid on from without; so that all deified Nature absolutely paints like the harlot, whose allurements cover nothing but the charnel-house within; and when we proceed further, and consider that the mystical cosmetic which produces every one of her hues, the great principle of light, for ever remains white or colorless in itself, and if operating without medium upon matter, would touch all objects, even tulips and roses, with its own blank tinge – pondering all this, the palsied universe lies before us a leper; and like wilful travellers in Lapland, who refuse to wear colored and coloring glasses upon their eyes, so the wretched infidel gazes himself blind at the monumental white shroud that wraps all the prospect around him. And of all these things the Albino Whale was the symbol. Wonder ye then at the fiery hunt?
June 23, 2011 permalink
Research continues on whether humans (and other animals) have the ability to perceive magnetic fields:
Many birds have a compass in their eyes. Their retinas are loaded with a protein called cryptochrome, which is sensitive to the Earth’s magnetic fields. It’s possible that the birds can literally see these fields, overlaid on top of their normal vision. This remarkable sense allows them to keep their bearings when no other landmarks are visible.
But cryptochrome isn’t unique to birds – it’s an ancient protein with versions in all branches of life. In most cases, these proteins control daily rhythms. Humans, for example, have two cryptochromes – CRY1 and CRY2 – which help to control our body clocks. But Lauren Foley from the University of Massachusetts Medical School has found that CRY2 can double as a magnetic sensor.
Vision is amazing, even more so when you take into account the myriad other things that animals and insects can detect beyond just our “visible” slice of the electromagnetic spectrum. See also: box jellyfish with their surprisingly complex (and human-like) set of 24 eyes.
September 24, 2010 permalink
Megane-ya (Seller of Eyeglasses), by Hokusai, circa 1811-1814, part of an incredibly great collection of health-related Japanese woodblock prints housed at the University of California, San Francisco. Having recently bought a new pair of glasses, I can relate.
(Via Pink Tentacle)
September 20, 2010 permalink
From Dentsu London, Making Future Magic:
This film explores playful uses for the increasingly ubiquitous ‘glowing rectangles’ that inhabit the world.
We use photographic and animation techniques that were developed to draw moving 3-dimensional typography and objects with an iPad. In dark environments, we play movies on the surface of the iPad that extrude 3-d light forms as they move through the exposure. Multiple exposures with slightly different movies make up the stop-frame animation.
We’ve collected some of the best images from the project and made a book of them you can buy: http://bit.ly/mfmbook
Read more at the Dentsu London blog:
http://www.dentsulondon.com/blog/2010/09/14/light-painting/
and at the BERG blog:
http://berglondon.com/blog/2010/09/14/magic-ipad-light-painting/
Take that, Picasso.
July 31, 2010 permalink
Another excellent short episode of Radiolab, featuring a conversation with two people I wouldn’t expect to hear on stage together:
Oliver Sacks, the famous neuroscientist and author, can’t recognize faces. Neither can Chuck Close, the great artist known for his enormous paintings of … that’s right, faces.
Oliver and Chuck–both born with the condition known as Face Blindness–have spent their lives decoding who is saying hello to them. You can sit down with either man, talk to him for an hour, and if he sees you again just fifteen minutes later, he will have no idea who you are. (Unless you have a very squeaky voice or happen to be wearing the same odd purple hat.)
If you’re interested in the science of face perception, I stumbled across this relevant paper this week: Cortical Specialization for Face Perception in Humans (pdf) co-authored by Tel Aviv University’s Galit Yovel.
July 4, 2010 permalink
Real-time 3D capture at 60fps using a cheap webcam and a simple projected pattern of light points. The structured-light code is open source and looks like a pretty cool project.
(Via Make)
June 11, 2010 permalink
Phil Plait of Bad Astronomy lucidly explains display resolution, clearing up arguments about the iPhone 4’s Retina display technology:
Imagine you see a vehicle coming toward you on the highway from miles away. Is it a motorcycle with one headlight, or a car with two? As the vehicle approaches, the light splits into two, and you see it’s the headlights from a car. But when it was miles away, your eye couldn’t tell if it was one light or two. That’s because at that distance your eye couldn’t resolve the two headlights into two distinct sources of light.
The ability to see two sources very close together is called resolution.
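The iPhone 4 argument comes down to one number: the angle a single pixel subtends at your viewing distance versus the roughly one arcminute a normal eye can resolve. A quick back-of-the-envelope sketch (the viewing distances here are assumptions, not Plait’s exact figures):

// What angle does one pixel subtend, compared to the eye's ~1 arcminute
// resolving power for 20/20 vision? (The iPhone 4 is 326 pixels per inch.)
function pixelArcminutes(ppi, viewingDistanceInches) {
  const pixelSize = 1 / ppi; // inches
  const radians = 2 * Math.atan(pixelSize / (2 * viewingDistanceInches));
  return radians * (180 / Math.PI) * 60;
}

console.log(pixelArcminutes(326, 12).toFixed(2)); // ~0.88: unresolvable at arm's length
console.log(pixelArcminutes(326, 8).toFixed(2));  // ~1.32: resolvable if you hold it close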
DPI issues aside, the name “Retina display” is awfully confusing given that there’s similar terminology already in use for virtual retinal displays…
June 2, 2010 permalink
For those of you with eyes that aren’t easily categorized as simply “brown”, “blue”, or “hazel”: researchers have written up a new genetic model for human eye color phenotyping, published in PLoS Genetics.
We measured human eye color to hue and saturation values from high-resolution, digital, full-eye photographs of several thousand Dutch Europeans. This quantitative approach, which is extremely cost-effective, portable, and time efficient, revealed that human eye color varies along more dimensions than the one represented by the blue-green-brown categories studied previously. Our work represents the first genome-wide study of quantitative human eye color. We clearly identified 3 new loci, LYST, 17q25.3, TTC3/DSCR9, in contributing to the natural and subtle eye color variation along multiple dimensions, providing new leads towards a more detailed understanding of the genetic basis of human eye color.
(Via Nature)
January 17, 2010 permalink
John Balestrieri is tinkering with generative painting algorithms, trying to produce a better automated “photo -> painting” approach. You can see his works in progress on his tinrocket Flickr stream. (Yes, there are existing Photoshop / Painter filters that do similar things, but this one aims to be closer to making human-like decisions, and no, this isn’t in any way suggestive that machine-generated renderings will replace human artists – didn’t we already get over that in the age of photography?)
Whatever the utility, trying to understand the human hand in art through code is a good way to learn a lot about color theory, construction, and visual perception.
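For anyone who wants to start poking at that themselves, the classic first step is scattering strokes that sample their color from the photo. A toy sketch in canvas JavaScript, emphatically not Balestrieri’s approach (he’s after much smarter, more human-like decisions):

// Toy stroke-based rendering: random short strokes, each colored by the
// pixel underneath it. Assumes `src` is a canvas with the photo drawn in.
function roughPaint(src, strokeCount = 5000) {
  const out = document.createElement('canvas');
  out.width = src.width;
  out.height = src.height;
  const pixels = src.getContext('2d').getImageData(0, 0, src.width, src.height).data;
  const ctx = out.getContext('2d');
  ctx.lineWidth = 3;

  for (let i = 0; i < strokeCount; i++) {
    const x = Math.floor(Math.random() * src.width);
    const y = Math.floor(Math.random() * src.height);
    const p = (y * src.width + x) * 4;
    ctx.strokeStyle = 'rgb(' + pixels[p] + ',' + pixels[p + 1] + ',' + pixels[p + 2] + ')';
    const angle = Math.random() * Math.PI; // a smarter version would follow image gradients
    ctx.beginPath();
    ctx.moveTo(x, y);
    ctx.lineTo(x + Math.cos(angle) * 8, y + Math.sin(angle) * 8);
    ctx.stroke();
  }
  return out;
}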
(Via Gurney Journey)
January 10, 2010 permalink
My camera switches over to portrait-mode whenever it sees a painting or a drawing with a face in it. It stays in AUTO mode otherwise.
According to Popular Mechanics: “a chip inside the camera constantly scans the image in its viewfinder for two eyes, a nose, ears and a chin, making out up to 10 faces at a time before you’ve hit the shutter.”
I decided to test my camera—it’s a Canon Powershot SX120—to see what it decides to regard as a face.
Artist James Gurney tests out his point-and-shoot’s facial recognition chip against works of art and illustration. A mixed bag, but a good reminder that this technology is getting better, cheaper, and subtler all the time.
December 22, 2009 permalink
Magician Marco Tempest demonstrates a portable “magic” augmented reality screen. The system uses a laptop, small projector, a PlayStation Eye camera (presumably with the IR filter popped out?), some IR markers to make the canvas frame corner detection possible, Arduino (?), and openFrameworks-based software developed by Zachary Lieberman. I really love this kind of demo – people on the street (especially kids) intuitively understand what’s going on. This work reminds me a lot of Zack Simpson’s Mine-Control projects, especially with the use of cheap commodity hardware for creating a fun spectacle.
(Via Make)
December 21, 2009 permalink
MyDsReader, Nintendo DS text-to-speech document reading software for the visually impaired. It uses the Flite speech-synthesis engine designed for embedded devices, implements gesture controls for ease of use, and even throws in an integrated email and to-do client. Excellent use of homebrew development.
(Via GameSetWatch)
December 11, 2009 permalink
A paper in Nature Photonics describes the waveplate mechanism found in the eyes of mantis shrimp (stomatopods). These amazing critters can see hyperspectral color ranging from the infrared to the ultraviolet, can perceive different planes of circularly polarized light, and have eyes that operate and dart about (saccade) independently. The paper basically demonstrates that man-made materials science has a lot to learn if we want to catch up with nature’s technology.
Personal side question: what are these shrimp on the lookout for that’s led to such a sophisticated eye??
September 12, 2009 permalink
How did reclusive monks living in the year 700 or 800 AD draw the intricate lines of the Book of Kells, rendered by hand at sub-millimeter resolution (about the same level of detail as the engraving work found on modern money), hundreds of years before optical instruments became available, hundreds of years before the pioneering visual research of Alhazen? According to Cornell paleontologist John Cisne’s theory, their trick was in the detail and pattern: by keeping their eyes unfocused on the picture plane, the monks could superimpose their linework and judge the accuracy against the template using a form of temporary binocular diplopia (sort of like willing yourself to view a stereograph or one of those Magic Eye posters).
That’s amazing.
August 1, 2009 permalink
Update 11/3/2009: RadioLab did a short piece in October on this phenomenon, even discussing the Mr. Bean test with the Japanese researchers: http://blogs.wnyc.org/radiolab/2009/10/05/blink/
----------
From recent research out of Japan: “The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events.” In simpler words, the researchers found that audiences watching movies with action sequences have a strong tendency to synchronize their blinking so that they don’t miss anything good.
I’m not sure that this is interesting in and of itself, but it’s, um, eye-opening to think that we have our eyes closed for nearly 10% of our waking life. That’s roughly 10 full minutes of every movie lost to blinking. I imagine that editors already take this phenomenon into account, at least to some extent?
Full text is available in the Proceedings of the Royal Society B – Biological Sciences. Thanks, Creative Commons!
(Via NewScientist)
July 17, 2009 permalink
A fun LEGO Mindstorms NXT sequencer project from Damien Key of Domabotics. I like the simplicity of this design (and the whirring of the LEGO motor adds something to the sound, almost like the scratchiness of vinyl).
(Via)
May 27, 2009 permalink
http://www.youtube.com/watch?v=lUUxGvDqok4
Real-Time Object Recognition on a Mobile Device. I’ve seen this done for product lookups like books and boxes of cereal at the store, but hadn’t considered the accessibility implications. Not a bad idea, assuming that it produces valid information most of the time. Also seems like it would be limited to objects of a specific scale?