A recent computer vision paper, Learning to Cartoonize Using White-box Cartoon Representations, trains machine learning software to automatically “cartoonize” photographs — a process normally done by hand, known as rotoscoping (at least when applied to moving images). The results are strikingly similar to the work produced for the (excellent) Amazon show Undone or Richard Linklater and Bob Sabiston’s earlier animated classic Waking Life.
It’s interesting that their training data was “collected from Shinkai Makoto, Miyazaki Hayao and Hosoda Mamoru films.” The demo images look more akin to the American productions mentioned above than I’d expect given the background art of, say, a Studio Ghibli film.
The good news, for now, for human animators: I presume each of these images takes significant processing power to generate, and that the technique would struggle with frame-to-frame consistency even if it were applied to animation (?).
Another SIGGRAPH, another mind-bending example of video being freed from linear time — Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, and Ravi Ramamoorthi’s Selectively De-Animating Video:
We present a semi-automated technique for selectively de-animating video to remove the large-scale motions of one or more objects so that other motions are easier to see. The user draws strokes to indicate the regions of the video that should be immobilized, and our algorithm warps the video to remove the large-scale motion of these regions while leaving finer-scale, relative motions intact. However, such warps may introduce unnatural motions in previously motionless areas, such as background regions. We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner. Our technique enables a number of applications such as clearer motion visualization, simpler creation of artistic cinemagraphs (photos that include looping motions in some regions), and new ways to edit appearance and complicated motion paths in video by manipulating a de-animated representation.
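The paper’s actual pipeline involves user-drawn strokes, content-preserving warps, and graph-cut compositing, but the core idea — measure a region’s large-scale motion and cancel it so finer motions stand out — can be illustrated with a toy sketch. The version below is my own simplification, not the authors’ method: it estimates only integer translation via whole-frame phase correlation on a masked region, and “warps” by circularly shifting each frame to undo that translation. Function names (`estimate_shift`, `deanimate`) are invented for illustration.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to `ref`
    using phase correlation (normalized cross-power spectrum)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(frame)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-8  # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular peak coordinates into a signed shift range.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def deanimate(frames, mask):
    """Cancel the large-scale translation of the masked region in each frame,
    aligning every frame's masked content to the first frame."""
    ref = frames[0] * mask
    out = [frames[0]]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f * mask)
        # Rolling by the measured offset undoes the region's translation.
        out.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return out
```

In the real system the warp is spatially varying and the stabilized region is composited against still background frames via graph cuts, so motionless areas don’t wobble; the sketch above skips both steps and simply shifts the whole frame.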
(Via O’Reilly Radar)