In contrast to the flood of hyperbolic pieces about ChatGPT and other LLM-based AI (“It will revolutionize productivity!” / “It will destroy all creative fields!”), I appreciate Dave Karpf’s pointing out that these things are really best thought of as cliché generators, and that in some contexts it’s OK for the results to satisfice:
The AI isn’t going to give you the optimal Disney World itinerary; it’s going to give you basically the same trip that everyone takes. It isn’t going to recommend the ideal recipe for your tastes; it’s just going to suggest something that works.
And that sounds great, because both of those tasks are obnoxious time-sinks. (Yes, please, recommend a basic meal that my kids might eat! Offer me the same bog-standard Disney vacation that everyone else eventually settles on!)
This is a compelling use of AI for photographic manipulation (in my mind more practical than many of the other AI image generation examples that are flooding the art websites these days): basically the software can analyze a photograph, use AI to generate a pretty accurate depth map of the subject of the photo, and then use that for dynamic relighting (allowing you to add different artificial lights, color gels, etc.). You can try the web-based demo on your own photos! Neat.
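The pipeline is easy to sketch, even if the product’s internals are proprietary: estimate a per-pixel depth map, derive surface normals from it, and re-shade. Here’s a minimal stand-in using the open-source MiDaS depth model (my assumption; no idea what the actual software uses) plus simple Lambertian shading:

```python
# Sketch of depth-based relighting: estimate depth, derive normals,
# re-shade. MiDaS is an assumed stand-in for whatever model the real
# product uses. Needs torch, opencv-python, numpy.
import cv2
import numpy as np
import torch

img = cv2.imread("portrait.jpg")                     # any photo
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

with torch.no_grad():
    depth = midas(transform(rgb)).squeeze().numpy()
depth = cv2.resize(depth, (img.shape[1], img.shape[0]))
depth = (depth - depth.min()) / (np.ptp(depth) + 1e-6)

# Surface normals from the depth gradients.
dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=5)
dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=5)
normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# Lambertian shading from a virtual key light, up and to the left.
light = np.array([-0.5, -0.5, 0.7])
light /= np.linalg.norm(light)
shade = np.clip(normals @ light, 0.0, 1.0)[..., None]

relit = np.clip(img.astype(np.float32) * (0.4 + 0.9 * shade), 0, 255)
cv2.imwrite("relit.jpg", relit.astype(np.uint8))
```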
Lots that I agree with in this post, including this short paragraph that speaks to the web3 of 2022 but also reminds me of what excited me in the early days of learning about the WWW:
People should have ownership and control of their data online. Users should be able to connect to services and then move between them freely without having to ask permission from any big tech companies. Creators should be fairly compensated for their work. Communities and movements should easily be able to form groups and collaborate together to achieve their goals.
From the NY Times, one of the more interesting science reports I’ve read lately: there exist a number of species of plants that thrive in metal-rich environments, soaking up the heavy elements that can then be harvested and used for industrial purposes (traditional farming has a lot of downsides, but perhaps not as many as mining operations?).
Slicing open one of these trees or running the leaves of its bush cousin through a peanut press produces a sap that oozes a neon blue-green. This “juice” is actually one-quarter nickel, far more concentrated than the ore feeding the world’s nickel smelters.
This quote is evocative of the speculative-fiction ring the whole subject has:
The language of literature on phytomining, or agromining, hints of a future when plant and machine live together: bio-ore, metal farm, metal crops. “Smelting plants” sounds about as incongruous as carving oxygen.
A love letter from IEEE Spectrum about the 1980s BBS phenomenon, with an emphasis on how BBSes and the FidoNet message system spurred the creation of local social networks among users, a local quality that’s mostly been lost in our current global social media platforms.
An add-on for FidoNet called Echomail, written by a developer in Dallas, Texas, took simple conversational forums like this across the nation. For a fictionalized account of this history, see the second season of Halt and Catch Fire (OK, that show is riffing more on the Lucasfilm Games-developed Quantum Link Club Caribe, but it’s of the right era and zeitgeist).
Norbert Landsteiner wrote up a post about something that’s retro-technology-typography-nerdy beyond even my usual limits and understanding: a thorough explication and an interactive demo of how the late-1940s IBM 026 key punch (the typewriter keyboard/workstation machine that operators would use to poke the holes in the computer program punchcards of that era) was able to also print tiny human-readable letters and words at the top of the cards for easy reference.
Basically, IBM encoded the alphabet and other special characters onto a clever postage-stamp-sized print head that would run along the top of the punchcard, with wires to each “dot” enabling the printing of each encoded character in turn, effectively an early dot-matrix printer. (It’s not easy to see, but if you squint at the image you’ll see that the red dots form the “A” character, upside-down; you’ll see it more easily if you play with the demo and choose other characters.)
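For flavor, here’s a toy of the general dot-matrix idea, with a made-up 5×7 bitmap glyph; the 026’s actual code plate and wiring geometry were quite different:

```python
# Toy illustration of dot-matrix character printing: a per-character
# bitmap drives a column of "wires" one dot-row at a time.
# The 5x7 glyph below is invented; the real IBM 026 used its own
# code-plate geometry.
GLYPHS = {
    "A": [
        "  #  ",
        " # # ",
        "#   #",
        "#####",
        "#   #",
        "#   #",
        "#   #",
    ],
}

def print_interpretation(text):
    """Render text the way a card interpreter might: row by row."""
    for r in range(7):
        print(" ".join(GLYPHS.get(ch, ["?" * 5] * 7)[r] for ch in text))

print_interpretation("AAA")
```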
In other insect news, a case of life imitating (well, at least acting similar to) network transmission protocols:
This feedback loop allows TCP to run congestion avoidance: If acks return at a slower rate than the data was sent out, that indicates that there is little bandwidth available, and the source throttles data transmission down accordingly. If acks return quickly, the source boosts its transmission speed. The process determines how much bandwidth is available and throttles data transmission accordingly.
It turns out that harvester ants (Pogonomyrmex barbatus) behave nearly the same way when searching for food. … A forager won’t return to the nest until it finds food. If seeds are plentiful, foragers return faster, and more ants leave the nest to forage. If, however, ants begin returning empty handed, the search is slowed, and perhaps called off.
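The shared feedback rule is simple enough to simulate in a few lines. A toy version, with invented numbers, where returns regulate departures the way acks regulate a TCP congestion window (additive increase, multiplicative decrease):

```python
# Toy simulation of the shared feedback rule: outgoing rate rises
# with successful returns and falls off without them, loosely like
# TCP congestion control. All parameters are invented.
import random

rate = 1.0  # foragers (or packets) dispatched per tick
for tick in range(50):
    seed_density = 0.8 if tick < 25 else 0.1  # food gets scarce halfway
    sent = int(rate)
    returned = sum(random.random() < seed_density for _ in range(sent))
    if returned >= sent * 0.5:
        rate += 1.0                 # brisk returns: ramp up (additive increase)
    else:
        rate = max(1.0, rate / 2)   # slow returns: back off (multiplicative decrease)
    print(f"tick {tick:2d}: sent {sent:2d}, returned {returned:2d}, new rate {rate:.1f}")
```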
Now that I have a retina display, I want a screensaver that looks as good as this 1963 AT&T microfilm video:
This film was a specific project to define how a particular type of satellite would move through space. Edward E. Zajac made, and narrated, the film, which is considered to be possibly the very first computer graphics film ever. Zajac programmed the calculations in FORTRAN, then used a program written by Zajac’s colleague, Frank Sinden, called ORBIT. The original computations were fed into the computer via punch cards, then the output was printed onto microfilm using the General Dynamics Electronics Stromberg-Carlson 4020 microfilm recorder. All computer processing was done on an IBM 7090 or 7094 series computer.
I’ve learned you have to be careful when you get lost in an idea. As an artist, you have to get a little lost. Otherwise you won’t discover anything interesting. But you have to avoid getting so lost that you’re unable to walk away and keep exploring.
Somehow I missed a lecture and demo of this new networking technology in Austin back in May: Internet Protocol Over Xylophone Players (IPoXP) (PDF whitepaper), which puts a human element in the middle of sending IP packets from one computer to another. From Wired UK:
As an LED lights up, the human participant strikes the corresponding key on the xylophone. Piezo sensors are attached to each xylophone, so that they are able to sense when a note is played on the other xylophone. The Arduino for the receiving computer senses the note and then converts it back into hexadecimal code. And when the second computer sends a return packet, the order of operations is reversed.
The data can be sent at a rate of roughly 1 baud, which is still faster than the earlier, um, IP over Avian Carriers technology.
Assuming the musicians don’t get bored, that is. It takes about 15 minutes to transmit a single packet, provided the musician doesn’t hit any wrong notes. That’s rare, though, apparently. Geiger told NetworkWorld: “Humans are really terrible interfaces.”
Pedant note: yes, they are using a glockenspiel in the photo above, not a true xylophone, but I guess X is a cooler letter to have in your acronym…
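Anyway, the encoding itself is trivial to mimic. A sketch of the nibble-to-note idea, with an invented note mapping (the paper defines its own):

```python
# Sketch of the IPoXP idea: each hex digit of a packet byte maps to
# one of sixteen xylophone notes. The note names and mapping here are
# invented; the paper's actual scheme is its own.
NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5",
         "D5", "E5", "F5", "G5", "A5", "B5", "C6", "D6"]

def packet_to_notes(payload: bytes):
    """One note per hex nibble, high nibble first."""
    seq = []
    for b in payload:
        seq.append(NOTES[b >> 4])
        seq.append(NOTES[b & 0x0F])
    return seq

def notes_to_packet(seq):
    nibbles = [NOTES.index(n) for n in seq]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

notes = packet_to_notes(b"ping")
assert notes_to_packet(notes) == b"ping"
print(len(notes), "notes for a 4-byte payload:", " ".join(notes))
```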
Hey, that’s not a very nice thing to call game developers! Oh, you mean literal slime molds…
British computer scientists are taking inspiration from slime to help them find ways to calculate the shape of a polygon linking points on a surface. Such calculations are fundamental to creating realistic computer graphics for gaming and animated movies. The quicker the calculations can be done, the smoother and more realistic the graphics. …
Adamatzky explains that the slime mould Physarum polycephalum has a complicated lifecycle with fruit bodies, spores, and single-cell amoebae, but in its vegetative, plasmodium, stage it is essentially a single cell containing many cell nuclei. The plasmodium can forage for nutrients and extends tube-like appendages to explore its surroundings and absorb food. As is often the case in natural systems, the network of tubes has evolved to be able to quickly and efficiently absorb nutrients while at the same time using minimal resources to do so.
The Internet will someday be a series of (feeding) tubes?
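For a taste of the geometry the slime mould is being recruited for, here’s the classic gift-wrapping computation of a hull polygon around scattered points; note this is the simpler convex cousin of the concave-hull problem the researchers are actually after:

```python
# Gift-wrapping (Jarvis march): the polygon "wrapping" a point set.
# The slime-mould work targets concave hulls; this is the simpler
# convex version, for flavor.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    points = sorted(set(points))
    if len(points) < 3:
        return points
    hull = []
    start = points[0]          # leftmost point is always on the hull
    p = start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if cross(p, q, r) < 0:  # r lies clockwise of the p->q line
                q = r
        p = q
        if p == start:
            break
    return hull

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(convex_hull(pts))  # the four corners, in counterclockwise order
```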
Seen above is a green disc, wax on brass, bearing an early recording of Hamlet’s “To be or not to be…” soliloquy that likely hasn’t been heard in over 125 years. The discs were created by Alexander Graham Bell’s Volta Laboratory in the late 19th century and sent to the Smithsonian for archiving as they were made, but the paranoid Bell failed to provide a playback mechanism, for fear that his competitors would appropriate his innovations.
Researchers at the Lawrence Berkeley National Laboratory are working on recovering these early audio recordings with a system called IRENE/3D, which creates 3D optical scans of the old record-like discs:
Using methods derived from our work on instrumentation for particle physics we have investigated the problem of audio reconstruction from mechanical recordings. The idea was to acquire digital maps of the surface of the media, without contact, and then apply image analysis methods to recover the audio data and reduce noise.
The nifty thing about this form of hands-off scanning is that it can accommodate many types of otherwise mechanically incompatible media, from discs made of metal or glass to wax cylinders (quick, someone set this up to scan the Lazarus bowl!!). The 18-second snippet of Hamlet audio from the green disc above (maybe the voice of Bell himself?) has been posted on YouTube, or you can download more examples from the project in WAV and MP3 format.
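The core move, recovering a waveform from an image of a groove, can be cartooned in a few lines. A toy sketch, assuming a grayscale scan where the groove reads dark (the real IRENE pipeline does genuine 3D metrology and serious noise reduction):

```python
# Cartoon of the IRENE idea: recover audio from an *image* of a groove.
# Assumes a grayscale scan where each image row is one instant and the
# groove is the darkest pixel in that row; the real system is far more
# sophisticated.
import numpy as np
import wave

scan = np.random.rand(44100, 200)  # stand-in for a real scan (rows = time)

# Track the groove: darkest pixel position in each row.
positions = scan.argmin(axis=1).astype(np.float64)

# Lateral groove displacement is the audio signal.
signal = positions - positions.mean()
signal /= np.abs(signal).max() + 1e-9

with wave.open("recovered.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes((signal * 32767).astype("<i2").tobytes())
```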
Want to expose a rival’s poor security implementation? What better way than to demonstrate the weakness in public, in front of a gathered crowd? From a New Scientist story of very early 20th-Century hacktivism:
LATE one June afternoon in 1903 a hush fell across an expectant audience in the Royal Institution’s celebrated lecture theatre in London. Before the crowd, the physicist John Ambrose Fleming was adjusting arcane apparatus as he prepared to demonstrate an emerging technological wonder: a long-range wireless communication system developed by his boss, the Italian radio pioneer Guglielmo Marconi. The aim was to showcase publicly for the first time that Morse code messages could be sent wirelessly over long distances. Around 300 miles away, Marconi was preparing to send a signal to London from a clifftop station in Poldhu, Cornwall, UK.
Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. … Mentally decoding the missive, Blok [Fleming’s assistant] realised it was spelling one facetious word, over and over: “Rats”. A glance at the output of the nearby Morse printer confirmed this. The incoming Morse then got more personal, mocking Marconi: “There was a young fellow of Italy, who diddled the public quite prettily,” it trilled.
The radio-hacker was Nevil Maskelyne, a magician and rival inventor who was interested in developing wireless technology but had been frustrated by the broad patents granted to Marconi. Bonus trivia: Nevil’s father was John Nevil Maskelyne, magician and inventor of the pay toilet, and Nevil’s own son was Jasper Maskelyne, a magician and inventor (sensing a family pattern here?) who allegedly helped develop some of the famous optical diversions and camouflage trickery for the British military during WWII (his inflatable tanks remind me of the Potemkin Army thing I posted a couple of years back…)
Usually “augmented reality” involves using a camera device to view an overlay of information or digital controls on top of a video screen of some kind (say an iPhone or webcam/desktop), but this is kind of the opposite: a camera-plus-projector system that can map your intentions onto everyday objects around the house for “invoked computing”.
Mostly I share this because I like this bananaphone demo:
There is a banana scenario where the person takes a banana out of a fruit bowl and brings it closer to his ear. A high speed camera tracks the banana; a parametric speaker array directs the sound in a narrow beam. The person talks into the banana as if it were a conventional phone.
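The “high speed camera tracks the banana” step is ordinary computer vision. A crude webcam stand-in using color thresholding (the HSV ranges here are guesses, and the real system is surely sturdier):

```python
# Crude stand-in for the "track the banana" step: find the largest
# yellow blob in each webcam frame. The HSV thresholds are guesses.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # yellows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("banana?", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```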
I remember when Wired ran its May 1997 issue, focusing on the downfall and imminent demise of Apple with this striking (and to some, controversial) cover. Most of the “101 Ways to Save Apple” suggestions are, in hindsight, nonsensical (merge with Sega to make games!), a few were prescient (build a ~$250 PDA phone that can do email!), but one definitely stands out as the prize winner:
50. Give Steve Jobs as much authority as he wants in new product development. … Even if Jobs fails, he’ll do it with guns a-blazin’.
He definitely didn’t fail, by anybody’s standards. It’s hard to think of many individuals out there who have had a bigger impact on popular computing and technology, not to mention who have led the charge for design and innovation as still-relevant business ideals in the 21st Century. RIP Steve Jobs.
The Language Log on how science fiction often misses the mark with predictions of technology (the why is up for debate, of course):
Less than 50 years ago, this is what the future of data visualization looked like — H. Beam Piper, “Naudsonce”, Analog 1962:
She had been using a visibilizing analyzer; in it, a sound was broken by a set of filters into frequency-groups, translated into light from dull red to violet paling into pure white. It photographed the light-pattern on high-speed film, automatically developed it, and then made a print-copy and projected the film in slow motion on a screen. When she pressed a button, a recorded voice said, “Fwoonk.” An instant later, a pattern of vertical lines in various colors and lengths was projected on the screen.
This is in a future world with anti-gravity and faster-than-light travel.
The comments that follow are a great mix of discussion about science fiction writing (why do the galactic scientists in Asimov’s Foundation rely on slide rules?) and 1960s display technology limitations (vector vs. raster, who will win?). I like this site.
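And for the record, Piper’s “visibilizing analyzer” is what we now call a spectrogram: filter banks mapping sound into frequency bands, rendered as color. Today it’s a few lines of code, no high-speed film required:

```python
# The "visibilizing analyzer" as a spectrogram. The input here is a
# synthesized chirp standing in for a recorded "Fwoonk".
import numpy as np
import matplotlib.pyplot as plt

rate = 8000
t = np.linspace(0, 1.0, rate, endpoint=False)
fwoonk = np.sin(2 * np.pi * 220 * t * (1 + 0.5 * t))  # rising chirp

plt.specgram(fwoonk, NFFT=256, Fs=rate, noverlap=128)
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.savefig("fwoonk.png")
```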
Archaeology Magazine has a feature story about the “digital archeologists” behind Visual6502, the group “excavating” and fully remapping the inner workings of the classic 8-bit MOS Technology 6502 microprocessor. That might not sound interesting, but if you’ve been alive for more than 20 years you know the chip: it was the heart of early home computers ranging from the Apple I and Apple ][ to the Atari game consoles all the way up to the Nintendo NES.
Very cool and all, but in case you’re still not interested, here’s some excellent trivia slipped into the article:
In the 1984 film The Terminator, scenes shown from the perspective of the title character, played by Arnold Schwarzenegger, include 6502 programming code on the left side of the screen.
Whaaat!? The SFX team working on The Terminator went so far as to copy actual assembly code into their shots? That’s pretty awesome! So where’d they get it? It was copied from Apple II code published in Nibble Magazine (even the T-800 enjoys emulators when it’s not busy hunting down humanity, I guess).
As our magnetic and optical media become increasingly difficult to access and data starts to corrupt, what can we do to best preserve our electronic information for longer than the current 7-10 year bursts of time? One solution might be to transcode and compress it all to 2D barcodes printed onto microfilm. From AlphaGalileo:
The team further suggests that in order to reduce the amount of microfilm used for any given repository and so cut conversion and re-digitization times it would be possible to convert a stream of text into a bar-code type system that would still be entirely analogue but would rely on knowledge of the conversion key to return the data to digital form from microfilm. Using such a system could render a tested 170 kilobyte file that requires 191 pages of microfilm space as just 12 or so printed as a two-dimensional barcode. Such a barcode would incorporate redundancy and be self-checking unlike a straight digital to analogue image scan of the text. Further compression is possible, if colour microfilm and barcodes were used for storage. This may provide a valuable, low-maintenance additional back-up for the original digital objects in addition to preservation activities needed for the on-line access copies.
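The self-checking part is the interesting bit. A toy version of the idea, packing bits into a printable grid with row/column parity (a real system would use proper error correction rather than bare parity):

```python
# Toy 2D barcode: data bits in an 8-wide grid plus row/column parity
# bits, so single-bit scan errors are detectable. Real systems would
# use proper error correction (e.g. Reed-Solomon), not bare parity.
import numpy as np

def encode(data: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8)).reshape(-1, 8)
    row_parity = bits.sum(axis=1, keepdims=True) % 2
    grid = np.hstack([bits, row_parity])
    col_parity = grid.sum(axis=0, keepdims=True) % 2
    return np.vstack([grid, col_parity])  # 1 = black module, 0 = white

def check(grid: np.ndarray) -> bool:
    body, col_parity = grid[:-1], grid[-1]
    ok_rows = (body[:, :-1].sum(axis=1) % 2 == body[:, -1]).all()
    ok_cols = (body.sum(axis=0) % 2 == col_parity).all()
    return bool(ok_rows and ok_cols)

grid = encode(b"microfilm")
assert check(grid)
grid[2, 3] ^= 1                    # simulate a scanning error
print("clean scan?", check(grid))  # False: the error is caught
```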
Eyewriter 2.0 + Robot Arm = Livewriter. Combining the FFFFAT Lab’s inspirational Eyewriter project (named this week as one of Time’s top 50 inventions of 2010, and now glasses-free!) with their GML RoboTagger Sharpie Magnum-wielding robot arm, kids were able to try out the eye-tracking graffiti system to print out giant-sized tags of their own names. These projects touch on so many of my favorite areas of interest, so very cool.
Translation: sheets of entirely flexible, waterproof, implantable LEDs. Yes, yes, medical and biotech applications, but imagine how interesting the tattoos at raves will be in a few years!
As a demonstration of the technology the researchers put LED arrays through any number of experimental implementations. They deposited LEDs on aluminum foil, the leaf of a tree, and a sheet of paper; they wrapped arrays around nylon thread and tied it in a knot; and they distended LED arrays by inflating the polymer substrate or stretching it over the tip of a pencil or the head of a cotton swab. “Eventually the students just got tired” of devising new tests for the light-emitting sheets, Rogers says. “There was nothing that we tried that we couldn’t do.”
Then, in the early 1940s, Mr. Moyroud and Mr. Higonnet — electronics engineers and colleagues at a subsidiary of ITT (formerly International Telephone & Telegraph) in Lyon, France — visited a nearby printing plant and witnessed the Linotype [the older Victorian-era printing process that was still in use] operation.
“My dad always said they thought it was insane,” Patrick Moyroud (pronounced MOY-rood) said. “They saw the possibility of making the process electronic, replacing the metal with photography. So they started cobbling together typewriters, electronic relays, a photographic disc.”
The result, called a photo-composing machine — and in later variations the Lumitype and the Photon — used a strobe light and a series of lenses to project characters from a spinning disc onto photographic paper, which was pasted onto pages, then photoengraved on plates for printing.
If you’ve ever seen the older lead-alloy-fueled “hot metal” Linotype process you’d agree: it was crazy.
(Photo of the Lumitype/Photon wheel by Flickr user Jeronzinho)
The pallophotophone was an early audio recorder created by GE researcher Charles Hoxie in 1922. Rather than using magnetic wire or lacquer disks, the device captured audio waveforms on sprocketless 35 mm film as a series of 12 parallel tracks reflected from a vibrating mirror. It was used to record some of the world’s oldest surviving radio broadcasts on WGY, the radio station in Schenectady, New York, between 1929 and 1931.
As a forgotten optical medium, I guess its more modern analog would be laserfilm discs. Sort of working along the right path, but just not practical compared to other media coming out at the time. There’s more about the rediscovered pallophotophone recordings on the GE Reports blog.
From a recently declassified history (PDF) detailing the NSA’s computing equipment up to 1964 comes a description of their house-sized computer ABNER’s mercury-powered memory banks:
A succession of pulses (signal or no-signal) travels through an acoustic medium, say mercury, from one end to the other of a “delay line.” […] At the input end of the line is a crystal that converts an electrical pulse to a mechanical wave which travels through the mercury to the other end, where another crystal reconverts it to an electrical signal. The series of electrical signals is recirculated back to input, after passing through detector, amplifier, and driver circuits to restore the shape and strength of the pulses. Also, in the part of the cycle external to the delay line are input and output circuits and “clock” pulses for synchronization. In mercury, the pulses travel at the speed of sound, which is much slower than the speed of electrical signals, and thus the delay in going from one end of the line to the other constitutes a form of storage. […] In ABNER, the mercury tank was a glass tube about two feet long; the delay time was 384 microseconds, or eight words of 48 bits at one-megacycle-per-second rate. Thus the 1,024 words were contained in two cabinets holding 64 mercury delay lines each.
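The arithmetic in that passage checks out: 384 microseconds at one megacycle is 384 bit-times, which is eight 48-bit words per line, and 128 lines (two cabinets of 64) gives the 1,024 words. The recirculation scheme itself models nicely as a ring buffer; a sketch:

```python
# Model of one ABNER-style mercury delay line as a circulating ring
# of bits: 384 bit-times of storage at 1 Mc/s = eight 48-bit words.
from collections import deque

BITS_PER_WORD = 48
WORDS_PER_LINE = 8          # 384 us delay * 1 MHz = 384 bit slots
LINE_BITS = BITS_PER_WORD * WORDS_PER_LINE

line = deque([0] * LINE_BITS, maxlen=LINE_BITS)

def write_word(slot: int, value: int):
    """Overwrite word `slot` in place (a real machine gates new pulses
    into the stream at the right instants)."""
    for i in range(BITS_PER_WORD):
        line[slot * BITS_PER_WORD + i] = (value >> (BITS_PER_WORD - 1 - i)) & 1

def read_word(slot: int) -> int:
    bits = [line[slot * BITS_PER_WORD + i] for i in range(BITS_PER_WORD)]
    return int("".join(map(str, bits)), 2)

def tick():
    """One bit-time: the re-amplified pulse re-enters the input end."""
    line.rotate(-1)

write_word(3, 0xDEADBEEF)
for _ in range(LINE_BITS):  # one full 384-microsecond recirculation
    tick()
assert read_word(3) == 0xDEADBEEF
print(f"{128 * WORDS_PER_LINE} words total across 2 cabinets x 64 lines")
```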
ABNER was named after comic strip character Li’l Abner, reportedly because it was a big, hulking machine that “didn’t know anything”.
Phil Plait of Bad Astronomy lucidly explains display resolution, clearing up arguments about the iPhone 4’s Retina display technology:
Imagine you see a vehicle coming toward you on the highway from miles away. Is it a motorcycle with one headlight, or a car with two? As the vehicle approaches, the light splits into two, and you see it’s the headlights from a car. But when it was miles away, your eye couldn’t tell if it was one light or two. That’s because at that distance your eye couldn’t resolve the two headlights into two distinct sources of light.
The ability to see two sources very close together is called resolution.
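The underlying math is one line of trigonometry: the smallest separation a normal eye resolves at distance d is roughly d·tan(θ) for θ of about one arcminute. A quick back-of-envelope check against the iPhone 4’s 326 pixels per inch, assuming a 12-inch viewing distance (Plait’s post also walks through the sharper perfect-vision case):

```python
# One-arcminute rule of thumb: can a normal eye resolve 326 ppi pixels
# at a 12-inch viewing distance? Back-of-envelope only.
import math

theta = math.radians(1 / 60)        # ~1 arcminute of visual acuity
distance_in = 12.0                  # assumed viewing distance, inches
resolvable_in = distance_in * math.tan(theta)

ppi_limit = 1 / resolvable_in       # finest pixel pitch the eye can see
print(f"eye resolves ~{ppi_limit:.0f} ppi at {distance_in:.0f} in")
print("iPhone 4 at 326 ppi beats the eye:", 326 > ppi_limit)
```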
DPI issues aside, the name “Retina display” is awfully confusing given that there’s similar terminology already in use for virtual retinal displays…
David Hanson’s robots are by now somewhat familiar faces, including his Einstein robot currently being used as a research tool at Javier Movellan’s Machine Perception Lab at UCSD, and the punk rock conversationalist Joey Chaos. A less familiar face is that of Bina Rothblatt, the blonde at the end of the table in the above photograph. Bina is a robot commissioned by Sirius Satellite Radio founder Martine Rothblatt to look like her beloved wife.
Hanson Robotics is in a house in the neighborhood where I grew up in Richardson, Texas. They’re doing some interesting work in robot aesthetics and materials, crafting convincing android-type replicants in a studio environment that’s busy around the clock. Flickr user steevithak has a nice photo set up of some of the robots they were tinkering with in 2009.
Excellent use of AR for marketing: an in-store display that’s actually fun to play with, and it makes you pick up the box in order to see it come alive. Nice.
Russian balloon maker Rusbal is working on an order from the country’s defense ministry to supply full-scale inflatable military models. The realistic-looking hardware is used in battlefield positions and to protect Russian strategic installations from surveillance satellites, distracting snoops and protecting real combat units from strikes. They can look like real vehicles in the radar, thermal, and near-infrared bands, so they’d even look right through night-vision goggles.
And now from Shakespeare’s Macbeth (Act V Scene IV — you know, the cool part where the incoming army disguises itself as the Birnam forest):
MALCOLM: Let every soldier hew him down a bough
And bear’t before him: thereby shall we shadow
The numbers of our host and make discovery
Err in report of us.
Nothing much new, then. Simple visual misdirection is the magician’s greatest asset.
See also:
Edison’s Warriors, a great article in Cabinet about the U.S. 3132nd Signal Service Company in WWII, a sonic deception team that created strategic disruption using wire and tape recordings with acoustical engineering help from Bell Labs
PowerPoint makes us stupid. It’s dangerous because it can create the illusion of understanding and the illusion of control. […] Some problems in the world are not bullet-izable.
At no point has it even occurred to me, until right now, that I’m in fact typing e-words or e-sentences. I’ve not thought about adding an e-carriage return to separate this e-paragraph from the next e-paragraph.
A Turing Machine. Possibly the nicest assembly I’ve ever seen of 35mm film, servos, motors, and dry erase markers that’s actually capable of demonstrating the foundational theories of computing. A bit slow on the maths, but who’s complaining?
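If you want the foundational theory without the servos, a complete (toy) Turing machine fits in a screenful of Python; this one increments a binary number, with a made-up rule table:

```python
# A minimal Turing machine, for flavor: this one increments a binary
# number on the tape. States and rules invented for the demo.
RULES = {  # (state, symbol) -> (write, move, next_state)
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", "_"): ("1", 0, "halt"),
}

tape, head, state = list("1011"), 0, "right"
while state != "halt":
    symbol = tape[head] if 0 <= head < len(tape) else "_"
    if head == len(tape):           # extend tape with blanks as needed
        tape.append("_")
    elif head < 0:
        tape.insert(0, "_")
        head = 0
    write, move, state = RULES[(state, symbol)]
    tape[head] = write
    head += move

print("".join(tape).strip("_"))  # 1011 + 1 = 1100
```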
Research video demonstrating an ability to automatically select individual elements of a recorded song (like the vocal track, guitar solo, ringing cellphone, etc.) by singing, whistling, or even Beavis & Butthead-style grunting in imitation. Not 100% perfect, but it’s very clever. (I wish the video were embeddable…)
If you’re the sort of lab that’s engineering a method of printing ceramic materials using rapid prototyping machines, I suppose it’d make sense that you’d already have made some real-life polygonal Utah teapots! I never thought about it before, but for the 3D graphics humor value I really, really want one of these now. You can read about the Utanalog project and see finished photos (and a video explaining the whole thing) over on the Unfold blog.
Without answering, I handed the telephone to the applicant, and sat down. Then followed that queerest of all the queer things in this world,—a conversation with only one end to it. You hear questions asked; you don’t hear the answer. You hear invitations given; you hear no thanks in return. You have listening pauses of dead silence, followed by apparently irrelevant and unjustifiable exclamations of glad surprise, or sorrow, or dismay. You can’t make head or tail of the talk, because you never hear anything that the person at the other end of the wire says.
Mark Twain, writing an article for the June 1880 issue of The Atlantic on the oddity of telephone conversations. Still relevant in our age of disjointed retweets, wall posts, and other overheard messages.
A circa-1966 industry ad for Leon Maurer’s Animascope process for producing animation on the cheap: animation without drawing, and with fewer pesky artists! Similar to, but distinct from, rotoscoping, the process used high-contrast photography and actors in contrasty costumes, their skin painted white and contour lines painted on. The performers would then be filmed dancing around under bright light on a black-lined stage, and the resulting photography could be composited onto traditional background plates. Weird, but sort of a primitive version of mocap, and done for the same economical reasons.
(Via Cartoon Brew – for more info on the process, a good place to start might be this comment left by Brew reader Kustom Kool)
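In modern terms the optical trick is just a luminance key: keep the bright, white-painted performers, drop the dark stage, composite over the background plate. A sketch (the filenames are placeholders):

```python
# The Animascope trick as a modern luminance key: keep the bright,
# white-painted performer, drop the dark stage, composite over a
# background plate. Filenames are placeholders.
import cv2
import numpy as np

performer = cv2.imread("stage_frame.png")     # actor on a dark stage
background = cv2.imread("background_plate.png")

gray = cv2.cvtColor(performer, cv2.COLOR_BGR2GRAY)
_, matte = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
matte = matte[..., None].astype(np.float32) / 255.0

composite = performer * matte + background * (1.0 - matte)
cv2.imwrite("composite.png", composite.astype(np.uint8))
```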
Visions of the Amen, a voice-responsive kinetic sculpture by artist Mitchell Chan (demonstrated in this video by soprano Ashleigh Semkiw). The software is written in Processing, the hardware is controlled by the ArtBus interface being developed at the School of the Art Institute of Chicago. Kind of like a real-world oscilloscope.
GML (Graffiti Markup Language) drawings from 000000book.com are converted into DXF via a small Processing utility. Motion paths for a robot arm are developed from these DXF files using Rhino and MasterCam. The ABB IRB-4400 series arm is wielding a 2″ Montana Hardcore marker. Developed 11 January 2010 by Golan Levin and Jeremy Ficca in the CMU Digital Fabrication Laboratory (dFAB).
Co-produced by the CMU STUDIO for Creative Inquiry and the CMU Digital Fabrication Laboratory, in cooperation with FAT Lab and 000000book.com. For more information please see http://www.flong.com/blog/archives/565.
The GML RoboTagger. Automated calligraphy via the Graffiti Markup Language and an industrial robot arm gripping a giant Sharpie or Montana Hardcore magic marker. Tele-tag.
An AR iPhone simulator for the iPhone, with working controls. I can’t put it any better than this anonymous comment from the MAKE post: “Yo Dawg, i heard you like augmented reality, so we put an iphone in your iphone so you can touch while you touch.”
My camera switches over to portrait-mode whenever it sees a painting or a drawing with a face in it. It stays in AUTO mode otherwise.
According to Popular Mechanics: “a chip inside the camera constantly scans the image in its viewfinder for two eyes, a nose, ears and a chin, making out up to 10 faces at a time before you’ve hit the shutter.”
I decided to test my camera—it’s a Canon Powershot SX120—to see what it decides to regard as a face.
Artist James Gurney tests out his point-and-shoot’s facial recognition chip against works of art and illustration. A mixed bag, but a good reminder that this technology is getting better and cheaper (and subtle) all the time.
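You can run the same experiment on a desktop: OpenCV ships a classic Haar-cascade face detector, a reasonable stand-in for whatever Canon’s chip actually does:

```python
# Run a Gurney-style experiment: point a stock Haar-cascade face
# detector (a stand-in for the camera's chip) at a painting.
import cv2

img = cv2.imread("painting.jpg")  # any artwork with a face-ish subject
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"detected {len(faces)} face(s)")
cv2.imwrite("detected.jpg", img)
```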
The FAT LAB crew put the markup back in markup language, with their week dedicated to creating new applications and standardizing their existing work around a Graffiti Markup Language, an XML archive format describing tagging and gestural drawing. Rad.
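The format itself is plain XML: strokes made of time-stamped, normalized points. A minimal reader; the element names follow my reading of the GML spec, so treat them as approximate:

```python
# Minimal GML reader: strokes of time-stamped, normalized points.
# Element names (<tag><drawing><stroke><pt><x><y><time>) follow my
# reading of the spec; treat them as approximate.
import xml.etree.ElementTree as ET

SAMPLE = """<gml><tag><drawing>
  <stroke>
    <pt><x>0.10</x><y>0.20</y><time>0.00</time></pt>
    <pt><x>0.50</x><y>0.60</y><time>0.35</time></pt>
  </stroke>
</drawing></tag></gml>"""

root = ET.fromstring(SAMPLE)
for i, stroke in enumerate(root.iter("stroke")):
    pts = [(float(pt.findtext("x")), float(pt.findtext("y")),
            float(pt.findtext("time"))) for pt in stroke.iter("pt")]
    print(f"stroke {i}: {pts}")
```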
A Parallel Image, an installation by Gebhard Sengmüller in collaboration with Franz Büchinger, consisting of an array of sensors, 2500 wires, and small light bulbs to make an “electronic camera obscura” for lo-fi video transmission.
Yann Tiersen’s Comptine D’un Autre Été, L’après-Midi played on six iPhones. While far from a perfect, beautiful performance, I have a soft spot for this piece and it’s fun to see someone trying to overcome the limitations of the tiny virtual keyboard.
This “creative destruction” began in the ’60s, as did many things that we now both love and regret, and it was initially a spinoff of a project funded by US military agencies. […] Mephistopheles came to Faust in the form of a poodle. After all…in some versions of the story, he cannot enter your house unbidden — you have to invite him in, like a vampire.
From the ANIMAC to the Fairlight Computer Video Instrument, a nice roundup of mostly analog video-mangling technology from the 1960s to the 1980s. Lots of pictures and backstories, too.
“[…] gopher [was] an Edenic protocol of innocence (in comparison to HTML, the protocol of commerce and experience)”
Ars Technica checks in on Gopher, the largely-forgotten pre-www protocol for getting information from servers in a simple, hypertext format. It’s out there still, just like the old BBSes, telnet MUDs / MOOs / MUSHes, Usenet, etc., and still useful in some contexts. Very few contexts, maybe – I can’t imagine there’s much in the way of Gopher pr0n or warez trading to give continued backwater life to the old medium, but hey, 4chan’s /b/ is available through Gopher…
What would things be like if Gopherspace’s concision had won out over HTTP’s ability to cram graphics and ads onto every resource? Sounds like our current mobile web app landscape.
The difference is that a special type of ink and pen are used. When the voter fills in a bubble on the ballot using the pen, a previously invisible secret code appears in that space. The voter can record the code or codes and then check them later online. If the code is found in an online database, it means the voter’s ballot was counted correctly. Each ballot has its own randomly assigned codes, to prevent this process from revealing which candidates a voter selected.
Using a bit of invisible ink and a unique code to help fight election fraud. Not a bad idea, really, in that it gives at least one form of anonymous checksum to add to the evidence trail. The trouble is whether it will end up confusing the average voter. At least it’s better than trusting in a closed software-based system with no paper trail…
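The data model behind the scheme fits in a few lines: every bubble on every ballot gets an independent random code, so revealing one code confirms your ballot without revealing your vote. A sketch (real systems of this kind bind the codes with cryptographic commitments; this is just the bookkeeping):

```python
# Sketch of the confirmation-code idea: every bubble on every ballot
# gets an independent random code, so a published code confirms the
# ballot was counted without revealing the vote. The candidate names
# and code format are invented.
import secrets

CANDIDATES = ["A", "B", "C"]

def make_ballot():
    return {c: secrets.token_hex(3) for c in CANDIDATES}  # per-bubble codes

ballots = {bid: make_ballot() for bid in ("0001", "0002")}

# Voter 0001 marks candidate "B" and writes down the revealed code.
revealed = ballots["0001"]["B"]

# Later, the election site publishes only (ballot_id, code) pairs.
published = {("0001", ballots["0001"]["B"]), ("0002", ballots["0002"]["A"])}

print("my ballot was counted:", ("0001", revealed) in published)
# Note the published pair doesn't say which candidate the code was for.
```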
From the SIGGRAPH 2004 emerging technologies demo, here’s the CirculaFloor, for when you want to play a bit of live-action Mario Bros. The tiles automatically rearrange themselves holonomically (albeit a bit slowly) depending on what direction the user is trying to walk.
here’s a toast to Alan Turing
born in harsher, darker times
who thought outside the container
and loved outside the lines
and so the code-breaker was broken
and we’re sorry
yes now the s-word has been spoken
the official conscience woken
– very carefully scripted but at least it’s not encrypted –
and the story does suggest
a part 2 to the Turing Test:
1. can machines behave like humans?
2. can we?
Interactive audio/visual installation for Mekanism’s “After School Special” art show. Location: Gray Area Foundation for the Arts, http://www.gaffta.org/
The “world’s oldest [working] computer”, the c.1949 Harwell/WITCH, is undergoing restoration for display at Bletchley Park’s National Museum of Computing, and will be exhibited next to the Colossus Mk2. I’d make them play chess against each other.
The so-called “Mother of All Demos,” the technology presentation given by Doug Engelbart of the Stanford Research Institute, introduced to the world a number of useful developments: hypertext, the computer mouse, timesharing, email, video conferencing… And this was a bit over forty years ago, just before the ARPANET went online. Pretty amazing times.
This papier-mâché Felix the Cat was the first image to be broadcast over experimental television in preparation for the first public RCA broadcast in 1928. Black and white and made of durable material, he revolved on a turntable, beaming out as a tiny test image so engineers could adjust the signal. Early TV technology fascinates me.
There’s more good info on early test patterns over at Design Observer.
Static: an Interactive Approach to Animation by Jack Lykins, using a turntable and MIDI controller via Max/MSP Jitter to drive the playback of an animation sequence. (via Cartoon Brew)
The science journal Nature reviews the new book Inventing Futurism: The Art and Politics of Artificial Optimism by Christine Poggi. The review itself is a decent synopsis of the Futurist movement in art and literature and the role that modern technology played in shaping European political thought in the early 20th Century. (Note: the Italian Futurist utopian dream devolved rapidly into the very frightening march of fascism, and would eventually become our model for Blade Runner-style sci-fi dystopia…not something to idealize, but worth learning a lesson from)
The Futurists imagined a world governed by electricity. Their electrical fantasies, writes Poggi, take a Utopian turn in their vision and evolve into an orgy of violence. They saw Italy as being “fertilized” by electricity, banishing hunger, poverty, disease and work. Air temperature and ventilation would be controlled automatically, telephones would be wireless, and crops and forests would spring up at speed. But in this world of ease and plenty, fierce competition would arise over superabundant industrial production. War would break out, fought by “small mechanics” whose flesh resembled steel. Deploying “steel elephants” and battery-powered trains from afar, they would wage a thrilling interplanetary war.
Todd Vanderlin’s working on a project using OpenFrameworks and ARTag markers to simulate scratching a real record, using a camera as the virtual needle. Nifty.
In 1903, the specialty watch company Helios built a trial run of miniature Boilerplates. The master of the hoax, an expert on Victorian automata, Paul Guinan, “tried” to “rebuild” one of these. The head resembles gas masks that soldiers wore in World War I, but as ornamental brass. The chest is as tubular as a Franklin stove, but gleaming with Baroque detail. Its knobby limbs were fully articulated, like an armature for special effect stop-motion seventy years later, or a thing in The City of Lost Children. […] For over a century, thousands of boilerplates have come down to us. They wait patiently. Patience has always been a virtue of the boilerplate; and of all hoaxes, including the Wizard of Oz himself.
Gurney was still haunted by the Baroque search for a perfect vacuum, by the study of the phlogiston, as part of the philosophy of nature. So, like a mad Jesuit, he built a piano that played glowing bottles filled with burning hydrogen.