A nice piece on how the real problem with communication between individuals isn’t so much that the conversation is lossy; it’s the failure to acknowledge and correct for that “signal loss”.
Adopting the mindset that lossiness is a fact of life has another benefit: that of beginning to see communication not as simply a transference but as a generative space. That is, we often think of communication as simply moving understanding from one place to another, the way we might move electrons from a substation to a home. This assumption is behind a lot of otherwise well-intentioned efforts to reduce or even eliminate synchronous communication, as it can seem wholly inefficient compared with other methods. But the best communication makes way for something new to emerge in the exchange. It’s not passive but generative, not mere delivery but a creative transformation.
Getting burned out on playing Wordle? Want something that’s more about the letters individually? Want to play around with taxonomies of multicultural letterforms for the sake of science? (who wouldn’t??)
Glyph is a newly-launched game that will help researchers better understand how crowdsourced individuals around the world perceive the shapes, textures, and patterns of letters from 45 different written languages. The video below explains how it works:
Over on Vice, an interesting write-up on a growing movement in the field of linguistics: that the sounds of words or letterforms themselves can have direct relationships to their referent:
The Color Game did more than show how languages form over time, it violated a long-standing rule in linguistics: the rule of arbitrariness. In the subject of semiotics, or the use of signs and symbols to convey meaning, most students are taught about the theories of linguist Ferdinand de Saussure. He wrote that the letters and words in many writing and language systems have no relationship to what they refer to. The word “cat” doesn’t have anything particularly cat-like about it. The reason that “cat” means cat is because English speakers have decided so—it’s a social convention, not anything ingrained in the letters c-a-t. […] But the idea that words, or other signs, do actually relate to what they’re describing has been gaining ground. This is called iconicity: when a spoken or written word, or a gestured sign, is iconic in some way to what it’s referring to.
Aside from familiar English onomatopoeia like bang, chirp, etc., see the takete/maluma effect or bouba/kiki effect, as examples of words that “sound” like something. From the Vice article:
This effect extends beyond made up words. In 2021, researchers wrote about how words in English like ball, globe, balloon and hoop have more round vowels and sounds, compared to angular or spiky objects, such as spike, fork, cactus and shrapnel.
If you’re super nerdy and experienced with using Photoshop’s individual color channels to make enhancements or custom masks, you might have noticed that the blue channel has very little influence on the overall sharpness of an RGB image — it never occurred to me that this is inherently a function of our own human eyesight, which is unable to properly focus on blue light, in comparison to other wavelengths!
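That perceptual quirk has a concrete counterpart in how video standards weight the channels: the classic Rec. 601 luma formula gives blue only about 11% of perceived brightness (connecting this to the channel-sharpness observation is my gloss, not from the article), which a few lines of Python can make tangible:

```python
# Rec. 601 luma weights: blue contributes the least to perceived brightness,
# which is one reason the blue channel carries so little apparent detail.
R_WEIGHT, G_WEIGHT, B_WEIGHT = 0.299, 0.587, 0.114

def luma(r, g, b):
    """Perceived brightness of an RGB pixel (channels in 0..255)."""
    return R_WEIGHT * r + G_WEIGHT * g + B_WEIGHT * b

# A full swing in the blue channel barely moves luminance...
blue_only = luma(0, 0, 255)    # about 29
# ...while the same swing in green dominates it.
green_only = luma(0, 255, 0)   # about 150

if __name__ == "__main__":
    print(f"blue-only luma:  {blue_only:.1f}")
    print(f"green-only luma: {green_only:.1f}")
```

The three weights sum to 1.0, so a pure white pixel still comes out at full brightness; blue just gets the smallest share.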
In conclusion, the optical system of the eye seems to combine smart design principles with outstanding flaws. […] The corneal ellipsoid shows a superb optical quality on axis, but in addition to astigmatism, it is misaligned, deformed and displaced with respect to the pupil. All these “flaws” do contribute to deteriorate the final optical quality of the cornea. Somehow, there could have been an opportunity (in the evolution) to have much better quality, but this was irreparably lost.
I’ve long wanted a name for the dot-dot-dot-dot sound effects that have stood in for speech in video games since the early NES days — here described as “beep speech” which works for me! This video is a great roundup on how those beeps function, how they’ve been used over the years (up through Simlish and the even more recent Animal Crossing hybrid speech, which is actual vocalization skewed into pseudo-gibberish), and how they’ve been tweaked when games are translated and localized for different cultures’ languages.
From the files of things I don’t know much about: best practices for Japanese web typography, a nice short primer. Web fonts are problematic enough in the West, and we don’t even have the character set troubles introduced by having multiple alphabets, the huge glyph set and calligraphic history of kanji, the need to be interspersed with Latin characters and ruby characters…
Chibi has its roots in kobun, or classical Japanese. The kobun noun tsubi 粒つび, which means “tiny, rounded thing,” eventually evolved into the verb tsubu 禿つぶ, which describes something becoming worn down or sharp edges getting rounded out.
A good example would be a calligraphy brush losing its hairs (because that’s what they used to write back in the days of classical Japanese).
The reading of tsubu eventually changed to chibiru 禿ちびる, and suddenly we’re not too far from chibi.
I had a brief curiosity about what chibi fully means and came across what may be my new favorite post on etymology. Thorough history, examples, usage issue notes and cautions, and even a drawing guide. More word explications should come with an adorable drawing guide.
“In A Descriptive Handbook of Modern Water Colours, by J. Scott Taylor…. London: Winsor and Newton, 1887, neutral tint is described as ‘A compound shadow colour of a cool neutral character. It is not very permanent, as the gray is apt to become grey by exposure’. Has anyone besides this author ever made a distinction of meaning between gray and grey? I do not know how the distinction is to be converted in speaking unless the words are differently pronounced” (1897).
Glad to know that the gray / grey split in English has been confusing people for well over 115 years. What’s going on in pigment company Winsor & Newton’s world, where gray eventually turns into grey? An interesting read about the etymology of the mysterious color and its uncertain linguistic origins.
From the NY Times review of the updated 2011 “digital” edition of How to Win Friends and Influence People:
The following sentence, which appears on Page 80, is so inept that it may actually be an ancient curse and to read it more than three times aloud is to summon the cannibal undead: “Today’s biggest enemy of lasting influence is the sector of both personal and corporate musing that concerns itself with the art of creating impressions without consulting the science of need ascertainment.”
I don’t think they like the book’s modernized language.
Thanks to the Alamo Drafthouse’s always-amazing preshow entertainment reels, Marsha and I have had this Three Stooges earworm stuck in our heads for the past week. Maybe now you’ll be stuck with it too.
The Language Log on how science fiction often misses the mark with predictions of technology (the why is up for debate, of course):
Less than 50 years ago, this is what the future of data visualization looked like — H. Beam Piper, “Naudsonce”, Analog 1962:
She had been using a visibilizing analyzer; in it, a sound was broken by a set of filters into frequency-groups, translated into light from dull red to violet paling into pure white. It photographed the light-pattern on high-speed film, automatically developed it, and then made a print-copy and projected the film in slow motion on a screen. When she pressed a button, a recorded voice said, “Fwoonk.” An instant later, a pattern of vertical lines in various colors and lengths was projected on the screen.
This is in a future world with anti-gravity and faster-than-light travel.
The comments that follow are a great mix of discussion about science fiction writing (why do the galactic scientists in Asimov’s Foundation rely on slide rules?) and 1960s display technology limitations (vector vs. raster, who will win?). I like this site.
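In modern terms, Piper’s “visibilizing analyzer” is just a spectrogram with a frequency-to-color mapping. A minimal sketch of the idea (the naive DFT and the red-to-violet ramp are my own illustration, not anything from the story):

```python
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform; returns magnitude per frequency bin."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def bin_to_color(k, n_bins):
    """Map a frequency bin to an RGB color running from dull red (low) toward violet (high)."""
    frac = k / max(n_bins - 1, 1)
    # crude ramp, purely illustrative: red fades as blue rises
    return (int(255 * (1 - frac * 0.5)), 0, int(255 * frac))

# A pure tone shows up as a single tall "line" in one bin -- Piper's vertical bars.
n = 64
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
mags = dft_magnitudes(tone)
peak = max(range(len(mags)), key=mags.__getitem__)

if __name__ == "__main__":
    print(f"peak bin: {peak}, color: {bin_to_color(peak, len(mags))}")
```

Feed it “Fwoonk” instead of a sine wave and you’d get a cluster of colored vertical lines, which is exactly what the 1962 machine prints to film.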
From the post Language of Food: Ice Cream, a fascinating article linking the history of gunpowder, ice cream, linguistics, and even a bit of marketing insight:
Something similarly beautiful was created as saltpeter and snow, sherbet and salt, were passed along and extended from the Chinese to the Arabs to the Mughals to the Neapolitans, to create the sweet lusciousness of ice cream. And it’s a nice thought that saltpeter, applied originally to war, became the key hundreds of years later to inventing something that makes us all smile on a hot summer day.
The behind-the-scenes of one of my favorite Spike Jonze music videos, Pharcyde’s Drop (original video). The group had to learn to rap backwards to create the right lipsync for the effect, so Jonze hired a professional linguist to help transcribe the reversed audio track!
As always in such cases, there’s an interesting question whether this should be thought of as a superficial deviation from an underlyingly square rhythm, or rather as a different draw from a set of available polyrhythmic patterns. For some more discussion, see e.g. “Rock syncopation: Stress shifts or polyrhythms?”, 11/26/2007. Note in any case that the mixture of four-beat and three-beat (lyric) lines evokes the traditional English ballad meter, whatever we’re to make of the variations in alignment.
From a Language Log article on musical onomatopoeia:
Ryan Y. wrote to ask about words for “the sounds instruments make”. He points out that in English, “Drums go ‘rat-a-tat’ and ‘bang,’ bells go ‘ding dong,’ and sad trombones go ‘wah wah’”, but he notes that there are some gaps that he finds surprising:
Few instruments are as popular in the US as the guitar, but I have no idea what sound a guitar makes. There are gaps even for the standard high school band/orchestra instruments. What sound does a violin make? A flute? For that matter, what sound does an orchestra make? A rock group?
Is there a compelling explanation as to why we have words for the sounds of bells, trombones, and tubas, but not guitars? Why do we lack words for the sounds of groups of instruments? Do, say, Italians have a word for the sound a violin makes? Do the French have a word for the sound of a French Horn?
Good insight in the comments about different possible sound associations. For me, the question just makes me think of Eh Cumpari!, a novelty song that got drilled into my head by the overhead music system at the bookstore I used to work at.
Probably one of the very worst things about the English writing system (and it has a huge long list of bad things about it) is that it very clearly employs 27 letters in the spelling of words but there is a huge and long-standing conspiracy to market it as having only 26. Insane, but that’s what English has done.
From an appropriately enigmatic post on Language Log regarding our forgotten letter.
Hate reading content on the web that reveals the endings to the movies and shows you haven’t watched yet?
Enter graduate student Sheng Guo of Yangzhong, China, a Ph.D. student in the computer science department at Virginia Tech’s College of Engineering and his advisor, Naren Ramakrishnan, a professor of computer science. The men have developed a data mining algorithm that uses linguistic cues to spot and flag spoilers before you read them, thus saving much frustration for those who enjoy being surprised. Guo recently presented his findings at the 23rd International Conference on Computational Linguistics held in Beijing.
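The paper’s actual features aren’t described in the announcement, but the general shape of a linguistic-cue spoiler flagger can be sketched with a toy example. The cue list and threshold below are invented for illustration; the real model learns its cues from labeled data rather than a hand-written list:

```python
import re

# Invented cue words for illustration only, not from Guo and Ramakrishnan's model.
SPOILER_CUES = {"dies", "killer", "ending", "twist", "reveals", "finale"}
THRESHOLD = 1  # flag a sentence once it contains this many cues

def spoiler_score(sentence):
    """Count how many spoiler cue words appear in a sentence."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    return len(words & SPOILER_CUES)

def flag_spoilers(text):
    """Split text into sentences and return those scoring at or above the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if spoiler_score(s) >= THRESHOLD]

review = ("The cinematography is gorgeous. "
          "The twist is that the narrator dies in the finale.")
flagged = flag_spoilers(review)
```

A reader-facing tool would then blur or hide the flagged sentences rather than just listing them.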
Almost everyone thinks “Greensleeves” is a sad song—but why? Apart from the melancholy lyrics, it’s because the melody prominently features a musical construct called the minor third, which musicians have used to express sadness since at least the 17th century. The minor third’s emotional sway is closely related to the popular idea that, at least for Western music, songs written in a major key (like “Happy Birthday”) are generally upbeat, while those in a minor key (think of The Beatles’ “Eleanor Rigby”) tend towards the doleful.
The tangible relationship between music and emotion is no surprise to anyone, but a study in the June issue of Emotion suggests the minor third isn’t a facet of musical communication alone—it’s how we convey sadness in speech, too. When it comes to sorrow, music and human speech might speak the same language.
Or to quote Nigel Tufnel: “It’s part of a trilogy, a musical trilogy I’m working on in D minor which is the saddest of all keys, I find. People weep instantly when they hear it, and I don’t know why.”
At no point has it even occurred to me, until right now, that I’m in fact typing e-words or e-sentences. I’ve not thought about adding an e-carriage return to separate this e-paragraph from the next e-paragraph.
Designers love noodling about perfecting the design of chairs. Linguists seem to love discussing why “cellar door” is cited as the most beautiful phrase in English. From Language Log’s The Romantic Side of Familiar Words:
And in fact the specific meaning of cellar door isn’t quite as irrelevant as people imagine. The undeniable charm of the story — the source of the delight and enchantment that C. S. Lewis reported when he saw cellar door rendered as Selladore — lies in the sudden falling away of the repressions imposed by orthography (which is to say, civilization) to reveal what Dickens called “the romantic side of familiar things.” It’s the benign cousin of the disquietude we may feel when familiar things are suddenly charged with strange and troubling feelings, which Freud analyzed in his essay on the Unheimlich or uncanny. As Freud observed, heimlich can mean either “homey, familiar,” or “concealed, withheld, kept from sight.” He goes on: “‘Unheimlich’ is customarily used, we are told, as the contrary only of the first signification of ‘heimlich’, and not of the second. …” But he notes that the second meaning is always present as well: “everything is unheimlich that ought to have remained secret and hidden but has come to light.” Something is unheimlich, he says, because it “fulfils the condition of touching those residues of animistic mental activity within us and bringing them to expression.”
The unheimlich object, that is, is a kind of portal to the romance and passion that lie just beneath the surface of the everyday. In the world of fantasy, that role is suggested literally in the form of a rabbit hole, a wardrobe, a brick wall at platform 9¾. Cellar door is the same kind of thing, the expression people keep falling on to illustrate how civilization and literacy put the primitive sensory experience of language at a remove from conscious experience — “under a spell, so the wrong ones can’t find it” — until it’s suddenly thrown open. It would be hard to make that point using rag mop.
[I]t raises the question of how this particular nonsense word came into wide use at MIT. It seems reasonable to pursue this question, and reasonable that there would be some discernable answer. After all, there’s a whole official document, RFC 3092, explaining the etymology of “foobar.” It could be interesting to know what sort of nonsense word “zork” is, since it’s quite a different thing, with very different resonances, to borrow a “nonsense” term from Edward Lear or Lewis Carroll as opposed to Hugo Ball or Tristan Tzara. “Zork,” of course, doesn’t seem to derive from either humorous English nonsense poetry or Dada; the possibilities for its origins are more complex.
From Post Position’s “A Note on the Word ‘Zork’”, investigating the nonsense term that would, in the late ’70s, become synonymous with interactive fiction and the birth of popular computer gaming. Maybe Get Lamp will soon clear up some of this for us.
There’s been a bit of a blogstorm over the impending release of the sequel to Freakonomics, the obviously-titled SuperFreakonomics. The authors are being taken to task for allegedly questionable science and statistics work, accused of oversimplifying or distorting their results for the sake of contrariness. There’s good discussion and links on Language Log’s post on the controversy:
Overall, the promotion of interesting stories in preference to accurate ones is always in the immediate economic self-interest of the promoter. It’s interesting stories, not accurate ones, that pump up ratings for Beck and Limbaugh. But it’s also interesting stories that bring readers to The Huffington Post and to Maureen Dowd’s column, and it’s interesting stories that sell copies of Freakonomics and Super Freakonomics. In this respect, Levitt and Dubner are exactly like Beck and Limbaugh.
We might call this the Pundit’s Dilemma — a game, like the Prisoner’s Dilemma, in which the player’s best move always seems to be to take the low road, and in which the aggregate welfare of the community always seems fated to fall. And this isn’t just a game for pundits. Scientists face similar choices every day, in deciding whether to over-sell their results, or for that matter to manufacture results for optimal appeal.
In the end, scientists usually over-interpret only a little, and rarely cheat, because the penalties for being caught are extreme. As a result, in an iterated version of the game, it’s generally better to play it fairly straight. Pundits (and regular journalists) also play an iterated version of this game — but empirical observation suggests that the penalties for many forms of bad behavior are too small and uncertain to have much effect. Certainly, the reputational effects of mere sensationalism and exaggeration seem to be negligible.
Although it is increasingly difficult to gauge what people can be expected to know, it is probably safe to assume that most readers are familiar with Ockham’s razor – roughly, the principle whereby gratuitous suppositions are shaved from the interpretation of facts – enunciated by a Franciscan monk, William of Ockham, in the fourteenth century. Ockham’s broom is a somewhat more recent conceit, attributable to Sydney Brenner, and embodies the principle whereby inconvenient facts are swept under the carpet in the interests of a clear interpretation of a messy reality.
To elaborate that point briefly – While Ockham’s razor clearly has an established important and honourable place in the philosophy and practice of science, there is, despite its somewhat pejorative connotations, an honourable place for the broom as well. Biology, as many have pointed out, is untidy and accidental, and it is arguably unlikely that all the facts can be accounted for early in the investigation of any given biological phenomenon. For example, if only Charles Darwin had swept under the carpet the variation he faithfully recorded in the ratios of inherited traits in his primulas, as Mendel did with his peas, we might be talking of Darwinian inheritance and not Mendelian. Clearly, though, it takes some special sophistication, or intuition, to judge what to ignore.