Notes about graphics

October 23, 2022 permalink

Area 5150 — mindbending IBM 8088 demo

Another of the recent mind-boggling demoscene releases: here's a bonkers one pulling off tricks on vintage IBM 8088 PC hardware that the 16-color CGA graphics adapter shouldn't be capable of. Remember, this is a computer setup from circa 1981!

My favorite part of these kinds of demos is when the audience goes wild (well, relatively) for the breakdancing elephant animation, even more than for the pseudo-3D graphics and psychedelic color scanline gimmicks.

 

November 13, 2021 permalink

Open to Conversion

Screen shot from PCPaint in 4-color mode

Over on Tedium, a nostalgia-bomb roundup of 10 image file formats that time forgot. I wouldn't say that BMP or even TIFF are exactly forgotten, and VRML seems like the odd one out as a text-based markup language (though it's definitely in the zeitgeist this month with all of the nouveau metaverse talk), but many of these took me back to the good old days. Also, I didn't know that the Truevision TARGA hardware, remarkable for its time in the mid-1980s with millions of colors and alpha channel support, was an internal creation from AT&T (my dad worked for AT&T corporate back then, but all we got at home was the decidedly not-remarkable 2-color Hercules display on our AT&T 6300 PC). JPEG and GIF continue to dominate 30+ years later, but it's interesting to see what could have been, if only some of these other systems had jumped more heavily into file compression…

March 1, 2020 permalink

The Making of Brilliance

In 1985, computer graphics were exotic enough that using them for a TV commercial was the kind of thing you might save for a Super Bowl ad slot, as seen in this short documentary. I would not have guessed that the first significant use of CGI on TV was for an ad illustrating the sexy (?) futuristic appeal of _aluminum cans_.

(They don't mention it in the mini-doc, but the ad studio was clearly lifting the chrome-plated sexy-robot imagery of Japanese illustrator Hajime Sorayama.)

August 6, 2012 permalink

Kittydar

Hmm, @harthvader has written some impressive neural network, machine learning, and image detection stuff, shared on her GitHub — wait, she’s combined these things into a JavaScript cat-detecting routine?! Okay, that wins.

var cats = kittydar.detectCats(canvas);
console.log("there are", cats.length, "cats in this photo");
console.log(cats[0]);
// { x: 30, y: 200, width: 140, height: 140 }

You can try out Kittydar here.

(Via O’Reilly Radar)

July 25, 2012 permalink

Pareidoloop

What happens if you write software that generates random polygons, feeds the results through facial recognition software, and loops thousands of times, keeping only the changes that make the image look more and more like a face? Phil McCarthy’s Pareidoloop. Above, my results from running it for a few hours. Spooky.
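
The core of the idea is just a hill climb: mutate the image, keep the mutation only if the face detector likes it more. Here's a minimal sketch of that loop in JavaScript (mine, not McCarthy's code), assuming hypothetical helpers scoreFace(canvas) for the detector's confidence and randomPolygon()/drawPolygon() for the mutations:

// Hill-climbing sketch, not the actual Pareidoloop source.
// scoreFace, randomPolygon, and drawPolygon are assumed helpers.
function evolveFace(canvas, iterations) {
  var ctx = canvas.getContext("2d");
  var best = ctx.getImageData(0, 0, canvas.width, canvas.height);
  var bestScore = scoreFace(canvas);
  for (var i = 0; i < iterations; i++) {
    drawPolygon(ctx, randomPolygon(canvas.width, canvas.height)); // mutate
    var score = scoreFace(canvas);
    if (score > bestScore) {
      bestScore = score;              // keep the improvement
      best = ctx.getImageData(0, 0, canvas.width, canvas.height);
    } else {
      ctx.putImageData(best, 0, 0);   // revert the mutation
    }
  }
  return bestScore;
}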

(More about his project on GitHub, and more about pareidolia in case the name doesn’t ring a bell)

[8/5 Update: Hi folks coming in from BoingBoing and MetaFilter! Just want to reiterate that I didn’t write this software, the author is Phil McCarthy @phl !]

July 24, 2012 permalink

First Computer Graphics Film: AT&T Satellite

Now that I have a retina display, I want a screensaver that looks as good as this 1963 AT&T microfilm video:

This film was a specific project to define how a particular type of satellite would move through space. Edward E. Zajac made, and narrated, the film, which is considered to be possibly the very first computer graphics film ever. Zajac programmed the calculations in FORTRAN, then used a program written by Zajac’s colleague, Frank Sinden, called ORBIT. The original computations were fed into the computer via punch cards, then the output was printed onto microfilm using the General Dynamics Electronics Stromberg-Carlson 4020 microfilm recorder. All computer processing was done on an IBM 7090 or 7094 series computer.

June 27, 2012 permalink

Google X Cat Image Recognition

The Internet has become self-aware, but thankfully it just wants to spend some time scrolling Tumblr for cat videos. From the NY Times, How Many Computers to Identify a Cat? 16,000:

[At the Google X lab] scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats. The neural network taught itself to recognize cats, which is actually no frivolous activity.

(Photo credit: Jim Wilson/The New York Times)

October 4, 2011 permalink

Captain Picard’s Utah Teacup

As the Make post says, this 3D-printable model of Captain Picard’s teacup would be a good benchmark for the nascent fabrication technology (the image on the right is a photo of the original Star Trek prop, which was just an off-the-shelf Bodum teacup). That it could be seen as a sly progression from the famous Utah teapot makes it, I think, an especially worthy benchmark!

Obligatory: “Tea! Earl Grey. Hot.”

August 13, 2011 permalink

Nomographs

From a post titled The Art of Nomography on Dead Reckonings (a blog dedicated to forgotten-but-beautiful mathematical systems! I’d better subscribe to this one…):

Nomography, truly a forgotten art, is the graphical representation of mathematical relationships or laws (the Greek word for law is nomos). These graphs are variously called nomograms (the term used here), nomographs, alignment charts, and abacs. This area of practical and theoretical mathematics was invented in 1880 by Philbert Maurice d’Ocagne (1862-1938) and used extensively for many years to provide engineers with fast graphical calculations of complicated formulas to a practical precision.

Along with the mathematics involved, a great deal of ingenuity went into the design of these nomograms to increase their utility as well as their precision. Many books were written on nomography and then driven out of print with the spread of computers and calculators, and it can be difficult to find these books today even in libraries. Every once in a while a nomogram appears in a modern setting, and it seems odd and strangely old-fashioned—the multi-faceted Smith Chart for transmission line calculations is still sometimes observed in the wild. The theory of nomograms “draws on every aspect of analytic, descriptive, and projective geometries, the several fields of algebra, and other mathematical fields” [Douglass].
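
The classic beginner's example is the parallel-scale nomogram for z = x + y: three parallel vertical scales, with the middle one compressed to half the unit length, so that a straightedge laid across a value of x on the left scale and y on the right scale crosses the middle scale exactly at x + y. A quick numeric sketch of that geometry (my toy example, not from the post):

// Parallel-scale nomogram for z = x + y.
// Outer scales at u = 0 and u = 1 use m pixels per unit;
// the middle scale at u = 0.5 uses m / 2, so a straight line
// through x and y crosses it at the height for x + y.
var m = 10;
function leftMark(x)   { return { u: 0.0, v: m * x }; }
function rightMark(y)  { return { u: 1.0, v: m * y }; }
function middleMark(z) { return { u: 0.5, v: (m / 2) * z }; }

// Height where the line through leftMark(x) and rightMark(y) crosses u = 0.5:
function crossing(x, y) { return (m * x + m * y) / 2; }

console.log(crossing(3, 4));   // 35
console.log(middleMark(7).v);  // 35 -> the straightedge reads off z = 7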

More about nomograms and abacs on Wikipedia.

(Via O’Reilly Radar)

July 15, 2011 permalink

Simulated Heat Mapping for Computer Vision

A new approach to computer vision object recognition: simulated heat-mapping:

The heat-mapping method works by first breaking an object into a mesh of triangles, the simplest shape that can characterize surfaces, and then calculating the flow of heat over the meshed object. The method does not involve actually tracking heat; it simulates the flow of heat using well-established mathematical principles, Ramani said. …

The method accurately simulates how heat flows on the object while revealing its structure and distinguishing unique points needed for segmentation by computing the “heat mean signature.” Knowing the heat mean signature allows a computer to determine the center of each segment, assign a “weight” to specific segments and then define the overall shape of the object. …

“A histogram is a two-dimensional mapping of a three-dimensional shape,” Ramani said. “So, no matter how a dog bends or twists, it gives you the same signature.”

In other words, recognizing discrete parts (like fingers or facial features) of an object in front of the camera should be much more accurate with this approach than with older techniques like simple edge detection. Uses for real-time recognition are apparent (more accurate Dance Central!), but it seems like this would also be a boon for character animation rigging?
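
I don't have the paper's actual algorithm, but the basic ingredient, diffusing simulated heat across a mesh treated as a graph of connected vertices, is easy to sketch; per-segment averages of the resulting values are the kind of thing a "heat mean signature" would be built from:

// Toy heat diffusion over a mesh graph (a sketch of the general idea,
// not Ramani's method). adjacency[i] lists the vertices connected to
// vertex i; source is a vertex held at temperature 1.
function diffuseHeat(adjacency, source, steps) {
  var n = adjacency.length;
  var heat = new Array(n).fill(0);
  heat[source] = 1;
  for (var s = 0; s < steps; s++) {
    var next = heat.slice();
    for (var i = 0; i < n; i++) {
      var nbrs = adjacency[i];
      if (nbrs.length === 0) continue;
      var sum = 0;
      for (var j = 0; j < nbrs.length; j++) sum += heat[nbrs[j]];
      // relax each vertex toward the average of its neighbors
      next[i] = 0.5 * heat[i] + 0.5 * (sum / nbrs.length);
    }
    next[source] = 1; // keep the source hot
    heat = next;
  }
  return heat; // per-vertex temperatures after the given number of steps
}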

(Via ACM TechNews)

November 21, 2010 permalink

Catmull Interview

They didn’t think it was relevant. In their minds, we were working on computer-generated images—and for them, what was a computer-generated image? What was an image they saw on a CRT? It was television.

Ed Catmull, co-founder of Pixar and pioneer of computer graphics, on the time he and his nascent team were brought in to ILM during the filming of the second Star Wars movie.

From an ACM Queue interview between Catmull and Pat Hanrahan. There are also some good quotes about incubator projects like ARPA providing protection for new ideas, arts education, and the role of artist-scientists in the graphics field.

July 26, 2010 permalink

Non-Square Pixels

The man who created the first scanned digital photograph in 1957, Russell Kirsch, pioneer of the pixel, apologizes in the May/July issue of the Journal of Research of the National Institute of Standards and Technology. Now 81 years old, he offers up a replacement (sorta) for the square pixel he first devised: tessellated 6×6 pixel masks that offer much smoother images with lower overall resolution. The resulting file sizes are slightly larger, but the improved visual quality is pretty stunning, as seen in the closeup above. His research was inspired by the 6th-century tile mosaics in Ravenna, Italy.

There are a lot of comments out there complaining that square pixels are more efficient, that image and wavelet compression is old news, and so on. That's true, but if you actually read the article you'll find the point isn't so much the shape, the efficiency, or even the capture/display technology needed. Rather, this could be a good method for reducing the resolution of images somewhat while still retaining visual clarity, which matters in medical applications and other situations where low-resolution images are still tossed around.
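
I haven't tried to reproduce Kirsch's actual mask set, but the flavor of the idea, swapping each coarse square pixel for a small binary mask plus two representative intensity levels fitted to the underlying detail, looks roughly like this:

// Rough sketch of the variable-shape-pixel idea (not Kirsch's exact method):
// split one 6x6 block of grayscale samples at its mean into an on/off mask
// with two representative intensities.
function maskForBlock(block) { // block: array of 36 values in 0..255
  var mean = block.reduce(function (a, b) { return a + b; }, 0) / block.length;
  var loSum = 0, loCount = 0, hiSum = 0, hiCount = 0;
  var mask = block.map(function (v) {
    if (v > mean) { hiSum += v; hiCount++; return 1; }
    loSum += v; loCount++; return 0;
  });
  return {
    mask: mask,                           // which of the 36 cells are "on"
    lo: loCount ? loSum / loCount : mean, // intensity for the "off" cells
    hi: hiCount ? hiSum / hiCount : mean  // intensity for the "on" cells
  };
}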

Bonus: the man in the demo photo above is his son, the subject of the first-ever digital photograph!

(Via ScienceNews)

July 25, 2010 permalink

Matisse Photos

Art and Science Collide in Revealing Matisse Exhibit from Northwestern News on Vimeo.

Computational image processing researchers at Northwestern University teamed up with art historians from the Art Institute of Chicago to investigate the colors originally laid down by Matisse while he was working on Bathers by a River:

Researchers at Northwestern University used information about Matisse’s prior works, as well as color information from test samples of the work itself, to help colorize a 1913 black-and-white photo of the work in progress. Matisse began work on Bathers in 1909 and unveiled the painting in 1917.

In this way, they learned what the work looked like midway through its completion. “Matisse tamped down earlier layers of pinks, greens, and blues into a somber palette of mottled grays punctuated with some pinks and greens,” says Sotirios A. Tsaftaris, a professor of electrical engineering and computer science at Northwestern. That insight helps support research that Matisse began the work as an upbeat pastoral piece but changed it to reflect the graver national mood brought on by World War I.

The Art Institute has up a nice mini-site about Bathers and the accompanying research, including some great overlays on top of the old photos to show the various states the painting went through during the years of its creation.

(Via ACM TechNews)

July 25, 2010 permalink

ARToolKit in Quartz Composer

Augmented Reality without programming in 5 minutes

I can vouch that this works, and it’s pretty straightforward once you manage to grab and build the two or three additional Quartz Composer plugins successfully. I had to fold in a newer version of the ARToolkit libs, and I swapped out the pattern bitmap used to recognize the AR target to match one I already had on hand – the default sample1 and sample2 patterns weren’t working for me for some reason. Apart from that, Quartz Composer’s a lot of fun to use, almost like building eyecandy demos with patch cables and effects pedals, and it’s already on your system if you have Xcode.

(Via Make)

June 11, 2010 permalink

iPhone Resolution

Phil Plait of Bad Astronomy lucidly explains display resolution, clearing up arguments about the iPhone 4’s Retina display technology:

Imagine you see a vehicle coming toward you on the highway from miles away. Is it a motorcycle with one headlight, or a car with two? As the vehicle approaches, the light splits into two, and you see it’s the headlights from a car. But when it was miles away, your eye couldn’t tell if it was one light or two. That’s because at that distance your eye couldn’t resolve the two headlights into two distinct sources of light.

The ability to see two sources very close together is called resolution.
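
The argument comes down to angular resolution: a person with good vision resolves detail down to roughly one arcminute, so the question is whether one pixel subtends less than that at a normal viewing distance. The arithmetic is easy to check (my numbers, using the commonly quoted 326 ppi and a 12-inch viewing distance):

// Angle in arcminutes subtended by one pixel at a given viewing distance.
function pixelArcminutes(ppi, distanceInches) {
  var pixelSize = 1 / ppi;                              // inches per pixel
  var radians = 2 * Math.atan(pixelSize / (2 * distanceInches));
  return radians * (180 / Math.PI) * 60;                // radians -> arcminutes
}

console.log(pixelArcminutes(326, 12).toFixed(2)); // ~0.88, just under the
                                                  // ~1 arcminute acuity limit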

DPI issues aside, the name “Retina display” is awfully confusing given that there’s similar terminology already in use for virtual retinal displays.

March 20, 2010 permalink

Rapid Prototyping with Ceramics

If you’re the sort of lab that’s engineering a method of printing ceramic materials using rapid prototyping machines, I suppose it’d make sense that you’d already have made some real-life polygonal Utah teapots! I never thought about it before, but for the 3D graphics humor value I really, really want one of these now. You can read about the Utanalog project and see finished photos (and a video explaining the whole thing) over on the Unfold blog.

January 18, 2010 permalink

MS Paint Album Art Recreations

Album art recreated quickly for the Windows 3.1 era:

The drawings in this collection were made by various users in a discussion forum on the website www.foreverdoomed.com. Using MS Paint, and other rudimentary computer drawing programs, users attempted to recreate their favorite album covers and let others on the forum guess the band and title from the artwork. […] Some gave themselves a limit of five minutes to recreate the most recognizable essentials.

I sort of like these. I’d forgotten the subtle charm of MSPaint’s spraycan, though I’d always envied MacPaint’s patterns.

(Via Coudal Partners)

January 17, 2010 permalink

John Balestrieri’s Generative Painting Algorithms

John Balestrieri is tinkering with generative painting algorithms, trying to produce a better automated “photo -> painting” approach. You can see his works in progress on his tinrocket Flickr stream. (Yes, there are existing Photoshop / Painter filters that do similar things, but this one aims to be closer to making human-like decisions, and no, this isn’t in any way suggestive that machine-generated renderings will replace human artists – didn’t we already get over that in the age of photography?)

Whatever the utility, trying to understand the human hand in art through code is a good way to learn a lot about color theory, construction, and visual perception.
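
For a taste of what "human-like decisions" can mean in code, stroke-based renderers in the tradition of Aaron Hertzmann's painterly-rendering work sample the photo and lay strokes along edges rather than copying pixels. A toy canvas pass (my sketch, not Balestrieri's algorithm):

// Toy stroke-based rendering pass. src and dst are same-size <canvas> elements.
function paintStrokes(src, dst, strokeCount) {
  var sctx = src.getContext("2d");
  var dctx = dst.getContext("2d");
  var img = sctx.getImageData(0, 0, src.width, src.height);
  function luma(x, y) {
    var i = (y * img.width + x) * 4;
    return 0.299 * img.data[i] + 0.587 * img.data[i + 1] + 0.114 * img.data[i + 2];
  }
  for (var n = 0; n < strokeCount; n++) {
    var x = 1 + Math.floor(Math.random() * (img.width - 2));
    var y = 1 + Math.floor(Math.random() * (img.height - 2));
    var i = (y * img.width + x) * 4;
    // orient the stroke across the luminance gradient, so it follows edges
    var angle = Math.atan2(luma(x, y + 1) - luma(x, y - 1),
                           luma(x + 1, y) - luma(x - 1, y)) + Math.PI / 2;
    dctx.strokeStyle = "rgb(" + img.data[i] + "," + img.data[i + 1] + "," + img.data[i + 2] + ")";
    dctx.lineWidth = 4;
    dctx.beginPath();
    dctx.moveTo(x, y);
    dctx.lineTo(x + 8 * Math.cos(angle), y + 8 * Math.sin(angle));
    dctx.stroke();
  }
}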

(Via Gurney Journey)

January 10, 2010 permalink

The FAT LAB Crew Put the Markup Back in Markup Language

GML = Graffiti Markup Language from Evan Roth on Vimeo.

The FAT LAB crew put the markup back in markup language, with their week dedicated to creating new applications and standardizing their existing work around a Graffiti Markup Language, an XML archive format describing tagging and gestural drawing. Rad.

See also: the new DustTag and Fat Tag Deluxe iPhone apps.

December 22, 2009 permalink

FluidPaint: an Interactive Digital Painting System using Real Wet Brushes

FluidPaint: An Interactive Digital Painting System using Real Wet Brushes. An experimental project by Tom Van Laerhoven of the Hasselt University Expertise Centre for Digital Media in Belgium. Unlike previous digital painting applications, this one uses actual water (detected by a surface-level IR emitter) to record strokes on the surface, and it more accurately models the tip of the brush being used, whether rounded or fanned; it can even simulate a sponge. Looks like it makes some convincing watercolor-like images.

More info: Brush Design for Interactive Painting Applications (PDF)

(Via John Nack at Adobe)

December 22, 2009 permalink

Magician Marco Tempest Demonstrates a Portable AR Screen

Magician Marco Tempest demonstrates a portable “magic” augmented reality screen. The system uses a laptop, a small projector, a PlayStation Eye camera (presumably with the IR filter popped out?), some IR markers to make the canvas frame corner detection possible, an Arduino (?), and openFrameworks-based software developed by Zachary Lieberman. I really love this kind of demo – people on the street (especially kids) intuitively understand what’s going on. This work reminds me a lot of Zack Simpson’s Mine-Control projects, especially with the use of cheap commodity hardware for creating a fun spectacle.

(Via Make)

November 28, 2009 permalink

dpBestflow: Digital Photography Best Practices

The American Society of Media Photographers has a new resource up for people working with digital images: dpBestflow rounds up the best practices and workflows for digital photography, in neat, easy-to-digest pieces, with tips on subjects ranging from camera file formats to desktop hardware to room lighting. If you look at their handy Quick Reference overview, be sure to note that each bullet point links to a more in-depth piece if you’re interested in drilling down for more info…

(Via John Nack)

September 17, 2009 permalink

The Stanford Frankencamera

To help further the field of computational photography, a team at Stanford is working on a homebrewed, open source digital camera that they can sell at cost to other academics in the field. Right now it’s pretty big and clunky-looking, but a camera that can be extended with the latest image processing techniques coming out of the labs would be very sexy indeed. There’s a recent press release that’s worth reading about the team, along with a video and an animation or two to explain the project.

Those who want to tinker with their existing store-bought cameras might want to check out the firmware hacks floating around out there, like the excellent CHDK software (GPL’ed, I think) that runs on most modern Canon digital point-and-shoot and dSLR cameras. With a little bit of elbow grease and some free tools you can add a lot of professional(ish) features and scripting support to your low-end camera.

(Via John Nack)

August 23, 2009 permalink

Touchable Holography

“Touchable Holography”, a hardware demo by researchers from the University of Tokyo at this year’s SIGGRAPH conference. This mostly builds on the work they presented last year involving their “Airborne Ultrasound Tactile Display” (PDF), a device that shoots out directional ultrasound to simulate haptic pressure, like the impact rain has when it hits your skin. I don’t think this current display counts as holography exactly (the image is made with a refracting mirror, just like Sega’s 1991 arcade game Time Traveler!), but being able to reinforce the illusion with the sensation of touch is a cool idea. Hopefully they can expand it to use more than one of their ultrasound boards so they can simulate a feeling that’s more than one-dimensional. Also good to see that researchers are using the inexpensive, off-the-shelf Wiimotes for projects like this.

(Via Make)

July 24, 2009 permalink

Dark Flash Photography

Another paper from the upcoming SIGGRAPH 2009 conference: Dark Flash Photography. The researchers have developed a camera flash that uses a combination of infrared and ultraviolet light to illuminate a scene before capture, and an algorithm to denoise and color-correct the otherwise dimly-lit normal digital photo, producing a low-light image that is both noise-free and sharp (no need for a long exposure, so no worry about camera shake or the subject moving). Seems like a killer idea, and immensely useful.

The image above is the creepy-looking multi-spectral version – be sure to click through to their site to see the final photo compared with the noisy ambient light version.

(Via New Scientist. Photo: Dilip Krishnan, Rob Fergus)

July 21, 2009 permalink

ANSI Art Generator from Drastic

Rad, there’s an online ANSI art generator! Relive the glory days of BBSes and dodgy w4r3z nfo files right in your browser. I remember wasting a lot of time back in junior high making colorful DOS menus using ansi.sys and batch files. Better than launching Windows 3.1!

Check it out, make some art: ansi.drastic.net (The drawing program seems to be broken for me under Firefox 3.5.1, but your mileage may vary)

(Via Waxy)
