Hey, that’s not a very nice thing to call game developers! Oh, you mean literal slime molds…
British computer scientists are taking inspiration from slime to help them find ways to calculate the shape of a polygon linking points on a surface. Such calculations are fundamental to creating realistic computer graphics for gaming and animated movies. The quicker the calculations can be done, the smoother and more realistic the graphics. …
Adamatzky explains that the slime mould Physarum polycephalum has a complicated lifecycle with fruit bodies, spores, and single-cell amoebae, but in its vegetative, plasmodium, stage it is essentially a single cell containing many cell nuclei. The plasmodium can forage for nutrients and extends tube-like appendages to explore its surroundings and absorb food. As is often the case in natural systems, the network of tubes has evolved to be able to quickly and efficiently absorb nutrients while at the same time using minimal resources to do so.
The Internet will some day be a series of (feeding) tubes?
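For a sense of the kind of geometry the slime is being asked to compute, here's a minimal conventional baseline in Python: the convex hull of a scattered point set, the textbook version of "a polygon linking points on a surface" (the slime-mould work goes after harder, concave variants). The points and values below are purely illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Toy baseline: the convex hull of a random planar point set.
rng = np.random.default_rng(0)
points = rng.random((30, 2))        # 30 random points in the unit square

hull = ConvexHull(points)
polygon = points[hull.vertices]     # hull vertices, in counter-clockwise order
print(polygon)
```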
We prove NP-hardness results for five of Nintendo’s largest video game franchises: Mario, Donkey Kong, Legend of Zelda, Metroid, and Pokemon. Our results apply to Super Mario Bros. 1, 3, Lost Levels, and Super Mario World; Donkey Kong Country 1-3; all Legend of Zelda games except Zelda II: The Adventure of Link; all Metroid games; and all Pokemon role-playing games. For Mario and Donkey Kong, we show NP-completeness. In addition, we observe that several games in the Zelda series are PSPACE-complete.
Translation: video games might provide interesting fodder for complexity theory, and possibly provide a model for novel ways of looking at difficult decision problems. In any case, I just like seeing Metroid mentioned on the arXiv.
GelSight, a high-resolution, portable 3D imaging system from researchers at MIT, is basically what looks like a small piece of translucent rubber injected with metal flakes. Watch the video to see some of the microscopic scans they’re able to get using this. I love non-showy SIGGRAPH tech demos like this one.
Via New Scientist, research into an image processing technique designed to mask the actual physical position of the photographer, by creating an interpolated photograph from an artificial vantage point:
The technology was conceived in September 2007, when the Burmese junta began arresting people who had taken photos of the violence meted out by police against pro-democracy protestors, many of whom were monks. “Burmese government agents video-recorded the protests and analysed the footage to identify people with cameras,” says security engineer Shishir Nagaraja of the Indraprastha Institute of Information Technology in Delhi, India. By checking the perspective of pictures subsequently published on the internet, the agents worked out who was responsible for them. …
The images can come from more than one source: what’s important is that they are taken at around the same time of a reasonably static scene from different viewing angles. Software then examines the pictures and generates a 3D “depth map” of the scene. Next, the user chooses an arbitrary viewing angle for a photo they want to post online.
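As a rough sketch of that reprojection step (very much not the researchers' actual pipeline, which also has to estimate the depth map, fill holes, and blend between source images), here's how you'd push a reference view's pixels through a depth map into a new camera pose with plain numpy. The intrinsics, pose, and function name are all made up for illustration.

```python
import numpy as np

def reproject(depth, K, R, t):
    """Map each pixel of a reference view into a new, artificial view.

    depth : HxW array of per-pixel depths in the reference camera
    K     : 3x3 camera intrinsics (assumed shared by both views)
    R, t  : rotation (3x3) and translation (3,) of the new camera
            relative to the reference one
    Returns an HxWx2 array of (u', v') pixel coordinates in the new view.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project each pixel to a 3D point in the reference camera frame
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Rigid transform into the new camera frame, then project again
    pts_new = R @ pts + t.reshape(3, 1)
    proj = K @ pts_new
    proj = proj[:2] / proj[2:3]                 # perspective divide
    return proj.T.reshape(h, w, 2)
```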
Interesting stuff, but lots to contemplate here. Does an artificially-constructed photograph like this carry the same weight as a “straight” digital image? How often is an individual able to round up a multitude of photos taken of the same scene at the same time, without too much action occurring between each shot? What happens if this technique implicates a bystander who happened to be standing in the “new” camera’s position?
Many birds have a compass in their eyes. Their retinas are loaded with a protein called cryptochrome, which is sensitive to the Earth’s magnetic fields. It’s possible that the birds can literally see these fields, overlaid on top of their normal vision. This remarkable sense allows them to keep their bearings when no other landmarks are visible.
But cryptochrome isn’t unique to birds – it’s an ancient protein with versions in all branches of life. In most cases, these proteins control daily rhythms. Humans, for example, have two cryptochromes – CRY1 and CRY2 – which help to control our body clocks. But Lauren Foley from the University of Massachusetts Medical School has found that CRY2 can double as a magnetic sensor.
Actually, he does. Donald Duck accidentally (and somewhat accurately) described the chemical compound methylene nearly two decades before real-world scientists:
In 1963, the Disney Studio learned just how wide and faithful a readership [Carl] Barks had. A letter arrived from Joseph B. Lambert of the California Institute of Technology, pointing out a curious reference in “The Spin States of Carbenes,” a technical article soon to be published by P.P. Gaspar and G.S. Hammond (in Carbene Chemistry, edited by Wolfgang Kirmse, New York: Academic Press, 1964). “Despite the recent extensive interest in methylene chemistry,” read the article’s last paragraph, “much additional study is required…. Among experiments which have not, to our knowledge, been carried out as yet is one of a most intriguing nature suggested in the literature of no less than 19 years ago (91).” Footnote 91, in turn, directed readers to issue 44 of Walt Disney’s Comics and Stories. … A year later, the Studio received a letter from Richard Greenwald, a scientist at Harvard. “Recent developments in chemistry have focused much attention to species of this sort,” Greenwald commented. “Without getting technical let me say that carbenes can be made but not isolated; i.e. they cannot be put into a jar and kept on a shelf. They can, however, be made to react with other substances. Donald was using carbene in just such a manner, many years before ‘real chemists’ thought to do so.”
Mind-boggling stuff like this is why I keep reading science journals. We can already use photons to push and pinch things with their tiny momentum (amazing enough), but new research is underway into how to pull with photons:
Light is pushy. The physical pressure of photons is what allows for solar sail space missions that ride on sunlight, and what allows for dreams of lasers that will push those sails even faster. And light can trap objects, too: Optical tweezers can hold tiny objects in place. Pulling an object with light, however, is another matter. … Jun Chen’s research team says that the key is to use not a regular laser beam, but instead what’s called a Bessel beam. Viewed head-on, a Bessel beam looks like one intense point surrounded by concentric circles—what you might see when you toss a stone into a lake.
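If you want to see those concentric circles for yourself, the radial intensity profile of an ideal zeroth-order Bessel beam is just I(r) ∝ J0(k_r·r)², which takes a couple of lines of Python. The radial wavenumber below is an arbitrary illustrative value, not anything from Chen's paper.

```python
import numpy as np
from scipy.special import j0   # zeroth-order Bessel function of the first kind

k_r = 5.0                      # radial wavenumber, arbitrary units
r = np.linspace(0, 10, 1000)   # distance from the beam axis
intensity = j0(k_r * r) ** 2   # bright central spot plus concentric rings

# Each ring shows up as a successive local maximum of this profile
print(intensity[:10])
```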
A phenomenon called group glee was studied in videotapes of 596 formal lessons in a preschool. This was characterized by joyful screaming, laughing, and intense physical acts which occurred in simultaneous bursts or which spread in a contagious fashion from one child to another.
Technologist Adam Harvey explores using dazzle camouflage makeup to thwart face-detection software like OpenCV in his thesis project CV Dazzle (red squares in the image denote the drawings that were successfully identified by the software as faces). Summary article over at PopSci.
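For reference, the sort of stock detector being dazzled here is the Haar-cascade face detector that ships with OpenCV, which you can run in a few lines. Filenames and parameters below are placeholders, and the cv2.data path helper depends on your OpenCV build.

```python
import cv2

# Stock Viola-Jones-style face detector bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("portrait.jpg")                    # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                          # draw the telltale squares
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detected.jpg", img)
```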
Early experimental computer animation through mathematical modeling of a cat’s gait. Evidently, equations were written to model the basic skeleton form of the cat and its walk, and the computer was used to generate a shadow-like projection printed frame by frame onto paper using ASCII-like characters (this animation was done in 1968 on a Soviet BESM-4 mainframe, so I’m not sure what character set they’re actually using here). The result could then be filmed, inverted, and manually cleaned up. Not exactly something that would really take the animation world by storm, but it’s an interesting usage of mainframes for art.
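Just to make the idea concrete, here's a toy version of that projection-to-characters step in Python: map a 2D occupancy/brightness array onto a character ramp, one frame per printout. The ramp is arbitrary, and obviously the 1968 BESM-4 character set was something else entirely.

```python
import numpy as np

RAMP = " .:*#"   # from empty to solid; purely illustrative

def frame_to_text(frame):
    """frame: 2D array of values in [0, 1]; returns a printable string."""
    idx = np.clip((frame * (len(RAMP) - 1)).astype(int), 0, len(RAMP) - 1)
    return "\n".join("".join(RAMP[i] for i in row) for row in idx)

# A crude circular "shadow" standing in for one frame of the cat
y, x = np.mgrid[-1:1:20j, -1:1:40j]
print(frame_to_text((x**2 + y**2 < 0.5).astype(float)))
```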
Research video demonstrating an ability to automatically select individual elements of a recorded song (like the vocal track, guitar solo, ringing cellphone, etc) by singing, whistling, or even Beavis & Butthead style grunting in imitation. Not 100% perfect, but it’s very clever. (I wish the video was embeddable…)
Researchers from the Berlin Brain-Computer Interface project demonstrate their research into mind-control pinball, which is an important field of study if ever there was one. BUT HOW DO YOU NUDGE?
Also, the Addams Family table is a great choice for such a project (Fester would approve), but how cool would it have been if they’d hooked him up to the one-of-a-kind Sega/Stern museum table The Brain?
Converting heat energy directly into sound using tiny electrical conductors is a 100-year-old idea for an alternative to the mechanical voice coil wire + moving diaphragm design of traditional speakers, but new research recently submitted to Applied Physics Letters demonstrates a new, actually feasible approach to making these speakers-on-a-chip. Still way too quiet and underpowered for use as a loudspeaker, but might have some novel applications in the near future as research progresses.
I like the name given to the 100 year old invention, though: the thermophone.
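A small aside on the physics, sketched in Python: the sound comes from Joule heating, which goes as the square of the drive current, so a bare thermophone driven with a pure sine at f actually sings at 2f (adding a DC bias restores the fundamental). The numbers below are arbitrary, just to show the frequency doubling.

```python
import numpy as np

f = 1000.0                                        # drive frequency, Hz
t = np.linspace(0, 0.01, 10000)
i_pure = np.sin(2 * np.pi * f * t)                # pure AC drive
i_biased = 1.0 + 0.5 * np.sin(2 * np.pi * f * t)  # same drive on a DC bias

for name, i in [("pure AC", i_pure), ("DC-biased", i_biased)]:
    heat = i ** 2                                 # Joule heating is proportional to i^2
    spectrum = np.abs(np.fft.rfft(heat - heat.mean()))
    freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
    print(name, "loudest at", round(freqs[spectrum.argmax()]), "Hz")
# pure AC -> ~2000 Hz (doubled); DC-biased -> ~1000 Hz (fundamental survives)
```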
John Balestrieri is tinkering with generative painting algorithms, trying to produce a better automated “photo -> painting” approach. You can see his works in progress on his tinrocket Flickr stream. (Yes, there are existing Photoshop / Painter filters that do similar things, but this one aims to be closer to making human-like decisions, and no, this isn’t in any way suggesting that machine-generated renderings will replace human artists – didn’t we already get over that in the age of photography?)
Whatever the utility, trying to understand the human hand in art through code is a good way to learn a lot about color theory, construction, and visual perception.
To help further the field of computational photography, a team at Stanford is working on a homebrewed, open source digital camera that they can sell at cost to other academics in the field. Right now it’s pretty big and clunky-looking, but a camera that can be extended with the latest image processing techniques coming out of the labs would be very sexy indeed. There’s a recent press release about the team that’s worth reading, along with a video and an animation or two explaining the project.
Those who want to tinker with their existing store-bought cameras might want to check out the firmware hacks that are floating around out there, like the excellent CHDK software (GPL’ed, I think) that runs on most modern Canon digital point-and-shoot cameras. With a little bit of elbow grease and some free tools you can add a lot of professional(ish) features and scripting support to your low-end camera.
How did reclusive monks living in the year 700 or 800 AD draw the intricate lines of the Book of Kells, rendered by hand at sub-millimeter resolution (about the same level of detail as the engraving work found on modern money), hundreds of years before optical instruments became available, hundreds of years before the pioneering visual research of Alhazen? According to Cornell paleontologist John Cisne’s theory, their trick was in the detail and pattern: by keeping their eyes unfocused on the picture plane, the monks could superimpose their linework and judge the accuracy against the template using a form of temporary binocular diplopia (sort of like willing yourself to view a stereograph or one of those Magic Eye posters).
The so-called “Mother of All Demos” was the technology presentation given by Doug Engelbart of the Stanford Research Institute that introduced to the world a number of useful developments: hypertext, the computer mouse, timesharing, email, video conferencing… And this was a bit over forty years ago, just before the ARPANET went online. Pretty amazing times.
From recent research out of Japan: “The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events.” In simpler words, the researchers found that audiences watching movies with action sequences have a strong tendency to synchronize their blinking so that they don’t miss anything good.
I’m not sure that this is interesting in and of itself, but it’s, um, eye-opening to think that we have our eyes closed for nearly 10% of our waking life. That’s roughly 10 full minutes of every movie lost to blinking. I imagine that editors already take this phenomenon into account, at least to some extent?
Full text available in the Proceedings of the Royal Society B – Biological Sciences. Thanks, Creative Commons!
Another paper from the upcoming SIGGRAPH 2009 conference: Dark Flash Photography. The researchers have developed a camera flash that uses a combination of infra-red and ultra-violet light to illuminate a scene before capture, and an algorithm to denoise and color-correct the otherwise dimly-lit normal digital photo, producing a low-light image that is both noise-free and sharp (no need for long exposure, so no worry about camera shake or the subject moving). Seems like a killer idea, and immensely useful.
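This isn't the paper's actual algorithm (which works with spectral constraints in the gradient domain), but the earlier flash / no-flash "cross bilateral" idea it builds on is easy to sketch: use the sharp, low-noise flash exposure as an edge guide while smoothing the noisy ambient exposure. Assumes the opencv-contrib ximgproc module; filenames are hypothetical.

```python
import cv2

flash = cv2.imread("dark_flash.png")   # IR/UV flash exposure: sharp, low noise, odd colors
ambient = cv2.imread("ambient.png")    # dim ambient exposure: correct colors, noisy

# Joint (cross) bilateral filter: smooth `ambient`, but stop at edges found in `flash`.
# Args: guide image, image to filter, neighborhood diameter, sigmaColor, sigmaSpace.
denoised = cv2.ximgproc.jointBilateralFilter(flash, ambient, 9, 25, 9)

cv2.imwrite("denoised.png", denoised)
```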
The image above is the creepy-looking multi-spectral version – be sure to click through to their site to see the final photo compared with the noisy ambient light version.
(Via New Scientist. Photo: Dilip Krishnan, Rob Fergus)