Notes

Links and write-ups about beautiful things from around the web!

  • Selectively De-Animating Video

    Another SIGGRAPH, another mind-bending example of video being freed from linear time — Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, and Ravi Ramamoorthi’s Selectively De-Animating Video:

    We present a semi-automated technique for selectively de-animating video to remove the large-scale motions of one or more objects so that other motions are easier to see. The user draws strokes to indicate the regions of the video that should be immobilized, and our algorithm warps the video to remove the large-scale motion of these regions while leaving finer-scale, relative motions intact. However, such warps may introduce unnatural motions in previously motionless areas, such as background regions. We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner. Our technique enables a number of applications such as clearer motion visualization, simpler creation of artistic cinemagraphs (photos that include looping motions in some regions), and new ways to edit appearance and complicated motion paths in video by manipulating a de-animated representation.

    (Via O’Reilly Radar)
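
    The compositing step is a cousin of the classic graph-cut seam optimization from Graphcut Textures and Photomontage: for every pixel, choose between the warped video and a still frame, paying a penalty wherever the seam between the two sources would be visible. Here's a minimal single-frame sketch of that idea in Python with the PyMaxflow library – my own toy energy function, not the authors' actual system, which operates over whole video volumes:

      # Toy graph-cut composite: per pixel, choose the still frame or the
      # warped video frame, preferring the still frame inside the user's
      # "freeze" strokes and penalizing visible seams elsewhere.
      # pip install PyMaxflow numpy
      import numpy as np
      import maxflow

      def composite(still, warped, freeze_mask, smoothness=5.0):
          """still, warped: HxW float arrays; freeze_mask: True where the
          user wants the frozen still frame. Returns a boolean label map,
          True where the pixel should come from `warped`."""
          h, w = still.shape
          g = maxflow.Graph[float]()
          nodes = g.add_grid_nodes((h, w))

          # Data term: labeling a masked pixel "warped" costs 100,
          # labeling an unmasked pixel "still" costs 100.
          cost_warped = np.where(freeze_mask, 100.0, 0.0)
          g.add_grid_tedges(nodes, cost_warped, 100.0 - cost_warped)

          # Smoothness term: seams are cheap where the two sources already
          # agree, expensive where switching sources would be visible.
          seam_cost = smoothness * np.abs(still - warped) + 1e-3
          g.add_grid_edges(nodes, weights=seam_cost, symmetric=True)

          g.maxflow()
          return g.get_grid_segments(nodes)  # True = sink side = warped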

  • Pareidoloop

    What happens if you write software that generates random polygons, feeds the results through facial recognition software, and loops thousands of times until the generated image looks more and more like a face? Phil McCarthy’s Pareidoloop. Above, my results from running it for a few hours. Spooky. (A rough Python sketch of the loop follows at the end of this entry.)

    (More about his project on GitHub, and more about pareidolia in case the name doesn’t ring a bell)

    [8/5 Update: Hi folks coming in from BoingBoing and MetaFilter! Just want to reiterate that I didn’t write this software; the author is Phil McCarthy, @phl!]
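
    The loop itself is almost embarrassingly small. Here is a rough Python equivalent of the idea – McCarthy's real version is JavaScript in the browser, and the OpenCV Haar-cascade scoring below is my stand-in for his face detector, not his code:

      # Hill-climb toward a face: add random polygons, keep each addition
      # only if OpenCV's face detector likes the image at least as much.
      # pip install opencv-python numpy
      import random
      import numpy as np
      import cv2

      SIZE = 128
      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def face_score(img):
          # levelWeights from detectMultiScale3 serve as a rough confidence.
          _, _, weights = cascade.detectMultiScale3(
              img, scaleFactor=1.1, minNeighbors=1, outputRejectLevels=True)
          return float(max(weights)) if len(weights) else 0.0

      def random_polygon():
          n = random.randint(3, 5)
          pts = np.random.randint(0, SIZE, (n, 2), dtype=np.int32)
          return pts, random.randint(0, 255)   # vertices, gray level

      def render(polygons):
          img = np.full((SIZE, SIZE), 128, np.uint8)
          for pts, gray in polygons:
              cv2.fillPoly(img, [pts], int(gray))
          return img

      polygons, best = [], 0.0
      for step in range(100_000):               # expect hours, not seconds
          candidate = polygons + [random_polygon()]
          score = face_score(render(candidate))
          if score >= best:                     # keep what doesn't hurt
              polygons, best = candidate, score

      cv2.imwrite("almost_a_face.png", render(polygons))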

  • First Computer Graphics Film: AT&T Satellite

    Now that I have a retina display, I want a screensaver that looks as good as this 1963 AT&T microfilm video:

    This film was a specific project to define how a particular type of satellite would move through space. Edward E. Zajac made, and narrated, the film, which is considered to be possibly the very first computer graphics film ever. Zajac programmed the calculations in FORTRAN, then used a program written by Zajac’s colleague, Frank Sinden, called ORBIT. The original computations were fed into the computer via punch cards, then the output was printed onto microfilm using the General Dynamics Electronics Stromberg-Carlson 4020 microfilm recorder. All computer processing was done on an IBM 7090 or 7094 series computer.

  • Random Numbers Through a Quantum Vacuum

    Your random number generator not truly random enough for you? Maybe you should try some of the numbers coming off the Australian National University’s quantum vacuum random number server. Nothing like minute variations in a field of near-silence to get some unfettered randomness, I guess. They offer access to the vacuum through a few different forms of data – seen above is a chunk of their randomly-colored pixel stream. Science!

    (Via Science Daily)
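
    They expose the stream over a plain JSON API, so sampling the vacuum takes about five lines of Python. The endpoint below is the one documented on their site as of this writing – check qrng.anu.edu.au if it has moved or sprouted an API key:

      # Fetch sixteen quantum random bytes from ANU's vacuum-noise server.
      import json
      import urllib.request

      URL = "https://qrng.anu.edu.au/API/jsonI.php?length=16&type=uint8"

      with urllib.request.urlopen(URL) as resp:
          payload = json.load(resp)

      if payload.get("success"):
          print(payload["data"])  # e.g. [17, 203, 4, ...] – sixteen uint8s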

  • Kyle McDonald on Getting a Little Lost

    I’ve learned you have to be careful when you get lost in an idea. As an artist, you have to get a little lost. Otherwise you won’t discover anything interesting. But you have to avoid getting so lost that you’re unable to walk away and keep exploring.

    Media artist Kyle McDonald writes about the aftermath of his People Staring at Computers Apple Store project that drew attention last summer after he was investigated by the Secret Service.

  • Internet Protocol over Xylophone Players

    Somehow I missed a lecture and demo of this new networking technology in Austin back in May: Internet Protocol Over Xylophone Players (IPoXP) (PDF whitepaper), which puts a human element in the middle of sending IP packets from one computer to another. From Wired UK:

    As an LED lights up, the human participant strikes the corresponding key on the xylophone. Piezo sensors are attached to each xylophone, so that they are able to sense when a note is played on the other xylophone. The Arduino for the receiving computer senses the note and then converts it back into hexadecimal code. And when the second computer sends a return packet, the order of operations is reversed.

    The data can be sent at a rate of roughly 1 baud – still faster than the earlier, um, IP over Avian Carriers technology. Assuming the musicians don’t get bored, that is.

    It takes about 15 minutes to transmit a single packet, and that’s assuming the musician doesn’t hit any wrong notes – which, apparently, is rare. Geiger told NetworkWorld: “Humans are really terrible interfaces.” (Some back-of-the-envelope arithmetic on the encoding follows at the end of this entry.)

    Pedant note: yes, they are using a glockenspiel in the photo above, not a true xylophone, but I guess X is a cooler letter to have in your acronym…

    (Via ACM Tech News)
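
    The encoding is as simple as it sounds: each packet byte becomes two hexadecimal digits, and each digit one strike on a hex-labeled key. A sketch, with my own rough arithmetic on the throughput:

      # IPoXP-style "encoding": packet bytes -> hex digits -> key strikes.
      # The hex-digits-to-keys mapping follows the scheme the paper
      # describes; the timing arithmetic is my own back-of-the-envelope math.
      HEX_KEYS = "0123456789ABCDEF"        # sixteen keys, one per hex digit

      def packet_to_strikes(packet: bytes) -> list[str]:
          """Two strikes per byte: high nibble first, then low nibble."""
          return [HEX_KEYS[nib]
                  for b in packet
                  for nib in (b >> 4, b & 0x0F)]

      # A minimal 28-byte IPv4/UDP datagram (20-byte IP header + 8-byte
      # UDP header, empty payload) as a stand-in packet:
      strikes = packet_to_strikes(bytes(28))
      print(len(strikes), "strikes")       # 56 strikes, 4 bits conveyed each

    Even at an optimistic one strike per second, that’s nearly a minute of playing for the smallest packet imaginable; add LED-reading time, real payloads, and re-sends after wrong notes, and 15 minutes per packet stops sounding surprising.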

  • What Do You See When You Read

    In this way we are backwards phrenologists, we readers. Extrapolating physiques from minds.

    From Jacket Mechanical’s nice mini-essay on the difficulty of visualizing characters from novels, and how our minds fill in the textual lacunae with broad brushstrokes of personality rather than literal physical features.

    “Call me Ishmael.” What happens when you read this line? You are being addressed, but by whom? Chances are you hear the line (in your mind’s ear) before you picture the speaker. I can hear Ishmael’s words more clearly than I can see his face. (Audition requires different neurological processes than vision, or smell. And I would submit that we hear more when we read than we see). Picturing Ishmael requires a strong resolve.

    (Via Coudal Partners)

  • Google X Cat Image Recognition

    The Internet has become self-aware, but thankfully it just wants to spend some time scrolling Tumblr for cat videos. From the NY Times, How Many Computers to Identify a Cat? 16,000:

    [At the Google X lab] scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

    Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats. The neural network taught itself to recognize cats, which is actually no frivolous activity.

    (Photo credit: Jim Wilson/The New York Times)
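
    Under the headline, the mechanism is at heart an enormous unsupervised autoencoder: a network trained, with no labels, to compress frames and reconstruct them, whose units end up responding to recurring structure – faces, bodies, cats. Sixteen thousand processors are out of reach here, but the principle fits in a toy numpy sketch (nothing like the Google X architecture, just the flavor):

      # Toy one-hidden-layer autoencoder: learn to reconstruct inputs
      # through a narrow bottleneck, with no labels at all.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.random((500, 64))           # stand-in "frames": 500 x 64 px
      W1 = rng.normal(0, 0.1, (64, 16))   # encoder: 64 -> 16 bottleneck
      W2 = rng.normal(0, 0.1, (16, 64))   # decoder: 16 -> 64
      lr = 0.01

      for epoch in range(2000):
          H = np.tanh(X @ W1)             # encode
          Y = H @ W2                      # decode (linear output)
          err = Y - X                     # reconstruction error
          gW2 = H.T @ err                 # backprop, by hand
          gH = (err @ W2.T) * (1 - H**2)  # tanh derivative
          gW1 = X.T @ gH
          W1 -= lr * gW1 / len(X)
          W2 -= lr * gW2 / len(X)

      print("reconstruction MSE:", (err**2).mean())
      # Columns of W1 are the learned features; in the Google X experiment,
      # units deep in a much bigger stack lit up for faces and for cats.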

  • Slime Moulds Work on Computer Games

    Hey, that’s not a very nice thing to call game developers! Oh, you mean literal slime molds…

    British computer scientists are taking inspiration from slime to help them find ways to calculate the shape of a polygon linking points on a surface. Such calculations are fundamental to creating realistic computer graphics for gaming and animated movies. The quicker the calculations can be done, the smoother and more realistic the graphics. …

    Adamatzky explains that the slime mould Physarum polycephalum has a complicated lifecycle with fruit bodies, spores, and single-cell amoebae, but in its vegetative, plasmodium, stage it is essentially a single cell containing many cell nuclei. The plasmodium can forage for nutrients and extends tube-like appendages to explore its surroundings and absorb food. As is often the case in natural systems, the network of tubes has evolved to be able to quickly and efficiently absorb nutrients while at the same time using minimal resources to do so.

    The Internet will someday be a series of (feeding) tubes?
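
    The geometry problem hiding under the slime, stripped to its textbook baseline, is: given scattered points, link them into a bounding polygon. The convex version is a classic; here’s gift wrapping in a few lines of Python. (The slime-inspired approach goes after tighter, concave outlines – the harder part – but the setup is the same.)

      # Jarvis-march ("gift wrapping") convex hull: repeatedly pick the
      # point that keeps every other point to the left of the current edge.
      def cross(o, a, b):
          """Positive if o->a->b turns counterclockwise."""
          return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

      def gift_wrap(points):
          start = min(points)         # leftmost point is always on the hull
          hull, p = [], start
          while True:
              hull.append(p)
              q = points[0] if points[0] != p else points[1]
              for r in points:
                  if cross(p, q, r) < 0:   # r is more clockwise than q
                      q = r
              p = q
              if p == start:
                  return hull

      pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]
      print(gift_wrap(pts))           # [(0, 0), (4, 0), (4, 3), (0, 3)]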

  • DNA Sans Nanoscale Typeface

    DNA Sans, a typeface / character set of self-assembled DNA strands that have been shaped into pixel-like blocks. I know where I’m stashing my next steganographically hidden secret message! (Or maybe this could be used as graffiti for the Fantastic Voyage crew?)

    (Via Nature; the article is here for those with journal access)