Notes about SIGGRAPH

August 5, 2012

Selectively De-Animating Video

Another SIGGRAPH, another mind-bending example of video being freed from linear time — Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, and Ravi Ramamoorthi’s Selectively De-Animating Video:

We present a semi-automated technique for selectively de-animating video to remove the large-scale motions of one or more objects so that other motions are easier to see. The user draws strokes to indicate the regions of the video that should be immobilized, and our algorithm warps the video to remove the large-scale motion of these regions while leaving finer-scale, relative motions intact. However, such warps may introduce unnatural motions in previously motionless areas, such as background regions. We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner. Our technique enables a number of applications such as clearer motion visualization, simpler creation of artistic cinemagraphs (photos that include looping motions in some regions), and new ways to edit appearance and complicated motion paths in video by manipulating a de-animated representation.
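To get a feel for the core “freeze this region” step, here is a rough sketch of my own, not the authors’ code: it tracks feature points inside a user-supplied mask (standing in for the strokes), fits a per-frame similarity transform back to the first frame, and warps each frame to cancel that region’s large-scale motion. The paper goes much further, with spatially-varying warps and the graph-cut composite that keeps the background still; `frames`, `mask`, and the function name below are assumptions for illustration only.

```python
# Minimal sketch of immobilizing one region of a video (NOT the paper's method).
# Assumes OpenCV + NumPy; `mask` is a binary image marking the region to freeze.
import cv2
import numpy as np

def deanimate_region(frames, mask):
    """frames: list of HxWx3 uint8 BGR frames; mask: HxW uint8, 255 = freeze."""
    ref_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    # Features to track, restricted to the user-marked region.
    ref_pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7,
                                      mask=mask)
    valid = np.ones(len(ref_pts), dtype=bool)
    stabilized = [frames[0]]
    prev_gray, prev_pts = ref_gray, ref_pts
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        valid &= status.ravel() == 1
        # Similarity transform that sends the tracked points back to their
        # reference-frame positions, i.e. cancels the region's large-scale motion.
        M, _ = cv2.estimateAffinePartial2D(pts[valid], ref_pts[valid])
        h, w = gray.shape
        stabilized.append(cv2.warpAffine(frame, M, (w, h)))
        prev_gray, prev_pts = gray, pts
    return stabilized
```

Note that warping the whole frame like this drags the background along, which is exactly the artifact the paper’s graph-cut compositing with still frames is designed to remove.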

(Via O’Reilly Radar)

August 23, 2009

Touchable Holography

“Touchable Holography”, a hardware demo by researchers from the University of Tokyo at this year’s SIGGRAPH conference. It mostly builds on the work they presented last year, the “Airborne Ultrasound Tactile Display” (PDF), a device that focuses directional ultrasound to simulate haptic pressure, like the feeling of raindrops hitting your skin. I don’t think the current display counts as holography exactly (the image is formed by a concave mirror, just like Sega’s 1991 arcade game Time Traveler!), but reinforcing the illusion with the sensation of touch is a cool idea. Hopefully they can expand it to use more than one of their ultrasound boards so the sensation is more than one-dimensional. It’s also good to see researchers using inexpensive, off-the-shelf Wiimotes for projects like this.
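For context on how an array like this puts a pressure spot in mid-air: each transducer is driven with a small time offset so every wavefront reaches the chosen focal point at the same instant and adds up constructively there. The sketch below is the textbook phased-array calculation; the array geometry, frequency, and function names are my assumptions, not details of the group’s hardware.

```python
# Back-of-the-envelope phased-array focusing: delay each element so all
# wavefronts arrive at the focal point together (illustrative, not the
# Tokyo group's firmware).
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
FREQ = 40e3              # 40 kHz, a common airborne-ultrasound frequency

def focus_delays(element_positions, focal_point, c=SPEED_OF_SOUND):
    """element_positions: (N, 3) transducer coordinates in metres.
    Returns per-element firing delays in seconds: the farthest element
    fires first (zero delay) so all wavefronts reach `focal_point` together."""
    distances = np.linalg.norm(element_positions - focal_point, axis=1)
    return (distances.max() - distances) / c

# Example: a 10x10 grid at 10 mm pitch, focusing 20 cm above the array centre.
xs, ys = np.meshgrid(np.arange(10) * 0.01, np.arange(10) * 0.01)
elements = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
delays = focus_delays(elements, np.array([0.045, 0.045, 0.20]))
phase_lags = (2 * np.pi * FREQ * delays) % (2 * np.pi)  # same delays, as phases
```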

(Via Make)

July 24, 2009

Dark Flash Photography

Another paper from the upcoming SIGGRAPH 2009 conference: Dark Flash Photography. The researchers have developed a camera flash that uses a combination of infra-red and ultra-violet light to illuminate the scene during capture, plus an algorithm that denoises and color-corrects the otherwise dimly-lit normal digital photo, producing a low-light image that is both noise-free and sharp (no need for a long exposure, so no worry about camera shake or the subject moving). Seems like a killer idea, and immensely useful.
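A much simpler stand-in gives the flavour of the reconstruction step: use the sharp dark-flash capture as an edge guide while smoothing the noisy ambient shot. The sketch below does this with a plain guided filter rather than the authors’ spectral gradient-constraint optimization; the file names and parameters are hypothetical.

```python
# Denoise a dim ambient-light photo using the sharp IR/UV "dark flash" image
# as an edge guide, via a basic guided filter (a simpler stand-in for the
# paper's optimization). Assumes float images scaled to [0, 1].
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src`, with edges taken from `guide`."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    box = lambda x: cv2.blur(x, ksize)
    mean_g, mean_s = box(guide), box(src)
    var_g = box(guide * guide) - mean_g ** 2
    cov_gs = box(guide * src) - mean_g * mean_s
    a = cov_gs / (var_g + eps)      # per-pixel linear model: src ~ a * guide + b
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)

# ambient.png: noisy low-light shot; dark_flash.png: sharp but false-colour
# IR/UV capture of the same scene (hypothetical file names).
ambient = cv2.imread("ambient.png").astype(np.float32) / 255.0
dark = cv2.cvtColor(cv2.imread("dark_flash.png"),
                    cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
# Filter each ambient channel against the dark-flash luminance: keeps the
# ambient colours while borrowing the dark-flash image's clean edges.
denoised = np.dstack([guided_filter(dark, ambient[:, :, c]) for c in range(3)])
```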

The image above is the creepy-looking multi-spectral version – be sure to click through to their site to see the final photo compared with the noisy ambient light version.

(Via New Scientist. Photo: Dilip Krishnan, Rob Fergus)
