If I had to average a list of numbers, I would probably do it like this:

averagelist(List, Avg) :- 
  length(List, N), sumlist(List, Sum), 
  Avg is Sum / N.

This resembles the actual mathematical definition. Then you could just make a list of numbers and average that. @lurker is right, this is a terrible way to go, but it would work:

average(N, Avg) :- 
  findall(I, between(1, N, I), Is),
  averagelist(Is, Avg).

This is building up abstraction. But of course, this is for a class, and the important thing is not to use Prolog or learn declarative programming or solve actual problems, but rather to perform meaningless inductive calisthenics to prove you understand recursion. So a “better” (i.e. worse, but likelier to be accepted by a clueless professor) solution is to take the procedural code:

average(list) ::= 
  sum := 0
  count := 0
  repeat with i ∈ list
    sum := sum + i
    count := count + 1
  return sum / count

and convert it into equivalent Prolog code:

average(List, Result) :- average(List, 0, 0, Result).

average([], Sum, Count, Result) :- Result is Sum / Count.
average([X|Xs], Sum, Count, Result) :- 
  Sum1 is Sum + X,
  succ(Count, Count1),
  average(Xs, Sum1, Count1, Result).

The list result of my findall/3 must be delicately hand-assembled using only tools available in the 18th century lest anyone develop a sense that Prolog can be used effectively in fewer than 40 lines of code:

iota(N, Result)        :- iota(1, N, Result).
iota(X, Y, [X|Result]) :- X < Y, succ(X,X1), iota(X1, Y, Result).
iota(X, X, [X]).

Then you could build averagelist/2 without the taint of library code (of course, you’ll have to write length/2 and sumlist/2, and probably member/2 even though it isn’t used, but just because it’s clever and useful and it sort of seems like it should be in the source file next to all this other stuff we might need), but it would look generally like this:

average(N, Avg) :-
  iota(N, List),
  averagelist(List, Avg).
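
(For the record, here is roughly what those hand-rolled helpers might look like. This is my own sketch, not part of the original answer, and the my_ prefixes are just so they don’t clash with the real library predicates.)

% my_length(+List, -N): N is the number of elements in List.
my_length([], 0).
my_length([_|Xs], N) :- my_length(Xs, N0), succ(N0, N).

% my_sumlist(+List, -Sum): Sum is the sum of the numbers in List.
my_sumlist([], 0).
my_sumlist([X|Xs], Sum) :- my_sumlist(Xs, Sum0), Sum is Sum0 + X.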

Now, of course, it will be pointed out that the introduction of additional predicates that are not directly answers to the take-home assignment is illegitimate and will be penalized, since doing so leads to readability, maintainability, breaking problems down into manageable pieces, and other things that are not directly related to the goal of the assignment (to make Prolog appear tedious yet opaque). So we could now look at this and realize that if we want to flatten these two predicates together, we ought to be able to do so by just smushing together their state variables and doing all the work of both, like this:

average(N, Avg) :- average(1, N, 0, 0, Avg).

average(X, Y, Sum, Count, Avg) :-
    X < Y,
    Sum1 is Sum + X,
    succ(Count, Count1),
    succ(X, X1),
    average(X1, Y, Sum1, Count1, Avg).
average(X, X, Sum, Count, Avg) :-
    Sum1 is Sum + X,
    succ(Count, Count1),
    Avg is Sum1 / Count1.

Now this is starting to look like Professor of Programming Languages code! We went from basically four little readable lines to nine or ten repetitive lines and a lot of bookkeeping and state! I think we’re on the right track now; let’s review how it works:

  1. average/2 is just a call to average/5 with our state initialized (no sum, no count, starting value = 1).
  2. average/5 has two cases: a base case where the count-up-to value and the current-count value are equal, and an inductive case where the current-count is less.
  3. add up the blah blah blah you get the point

The key takeaways here are: 1) Prolog has a terse, high-level, readable and comprehensible standard library, which you are prohibited from using in school, and 2) any procedural loop can be made into working Prolog by creating a recursive helper predicate and moving the code around.

June 9, 2017 prolog






Update 2025-05-09: There is now a page on this topic on sona pona! Read that instead, the below is very incomplete and out of date!

Natural Semantic Metalanguage is a theory that claims there is a common set of semantics underlying all natural languages. This is a descriptive theory, but we can also use it to evaluate constructed languages and perhaps use it prescriptively to help us create effective constructed languages (or at least let us restrict them consciously rather than accidentally).

I’ve taken the chart of NSM semantic primes from 2016 and written the Toki Pona equivalent of each prime in it, and crossed each equivalent off a list of Toki Pona words. The result is a mapping of NSM to Toki Pona, but it also tells me a few other things:

  • There are some primes in NSM that have no direct representation in Toki Pona. This highlights areas of Toki Pona that contribute to the sense that there are things you can’t say in it.
  • There are several overloaded words in Toki Pona that function as more than one NSM prime. This helps explain the “fuzziness” you feel using Toki Pona.
  • The non-primes in Toki Pona’s lexicon form a useful minimal vocabulary for people interested in language construction.

Interestingly, there are only a few primes that appear to be handled grammatically in Toki Pona. Almost all the primes in NSM are realized as independent Toki Pona words. This suggests to me that Toki Pona is extremely well-constructed.

Now, onto the chart:

NSM Prime Toki Pona
I-ME mi
YOU sina
SOMEONE jan
SOMETHING-THING ijo
BODY sijelo
PEOPLE jan mute
KIND <none>
PART wan

In the first section, we see a pretty good mapping from NSM prime to Toki Pona. KIND has no mapping, which makes sense, because Toki Pona is in general very bad at making distinctions of type (by design). I’ve marked wan in bold to draw attention to the fact that it represents multiple NSM primes. In fact, wan means one, unit, element, part, piece, or make one—in other words, as a noun it means a single thing or one of many things, but as a verb it means to unify things into a whole, so it is a kind of auto-antonym, which is surprising in a constructed language but not uncommon in natural ones.

NSM Prime Toki Pona
WORDS nimi
THIS ni
THE SAME sama
OTHER-ELSE ante
ONE wan
TWO tu
MUCH-MANY mute
ALL ali
SOME <none>
LITTLE-FEW lili

mute and lili are both pretty polyvalent; mute winds up covering a lot of scenarios.

NSM Prime Toki Pona
TIME-WHEN tenpo
NOW tenpo ni
MOMENT tenpo
(FOR) SOME TIME <none>
A LONG TIME tenpo mute
A SHORT TIME tenpo lili
BEFORE tenpo pini
AFTER tenpo kama

As you can see, Toki Pona is fairly weak at time. tenpo pini and tenpo kama mean “finished time” and “time to-come” and that’s about all you get. There’s definitely no distinction between points and intervals. The ambiguity here is probably intentional—intended to focus your attention on the here-and-now rather than placing sentences at arbitrary points in time and space (unlike Lojban).

NSM Prime Toki Pona
WANT wile
DON’T WANT wile ala
FEEL pilin
DO pali
SAY toki
KNOW sona
SEE lukin
HEAR kute
THINK pilin

Toki Pona has a full set of the mental predicates; the only ambiguity is the merging of “think” and “feel”, which was definitely an intentional choice by the inventor to steer discourse in a certain direction.

NSM Prime Toki Pona
HAPPEN kama
BE (SOMEWHERE) lon
LIVE ali
DIE moli
THERE IS <none>
BE SOMEONE/SOMETHING <none>
(IS) MINE pi mi
MOVE tawa

Here we see lon used to place things and people as well as define them. It doesn’t seem like a perfect fit for any of these primes, but it is closer than nothing. Toki Pona probably relies more on the null copula to say things like “this is a cat” (ni li soweli). I’ve underlined ali because “life” is an oblique meaning. Toki Pona doesn’t seem to have a word for “live.”

NSM Prime Toki Pona
TOUCH pilin
INSIDE insa
PLACE-WHERE-SOMEWHERE tomo
HERE lon
ABOVE sewi
BELOW anpa
ON ONE SIDE poka
NEAR <none>
FAR weka

I’ve underlined tomo because the sense “a general place” is definitely secondary to the sense “a room (indoors),” so it is an oblique association with the NSM prime. Also, weka means “away” in a sort of vague way that might mean far but isn’t the main sense. And once again we have pilin for something sensory. That there are no real words for near and far is probably intentional—again, Toki Pona emphasizes the here-and-now—the surprise is that there is not really a word for “here” at all. It’s apparently just always implied; soweli mute li lon means both “there are cats here” and “lots of cats exist,” and no distinction between them is possible.

NSM Prime Toki Pona
NOT-DON’T ala
CAN ken
BECAUSE tan
IF la
MAYBE ken la
LIKE-AS-WAY nasin

la functions as a strange bit of grammar in Toki Pona, separating an “adverb” or “context” from the rest of the sentence. Conditionals are handled this way, as are temporal constructions. I think it’s likely that this area was intended to be simple and weak but grew more complex as the community expanded. Anyway, all the primes are available here, but not all as single words.

NSM Prime Toki Pona
VERY kin
MORE-ANYMORE mute
SMALL lili
BIG suli
BAD ike
GOOD pona
TRUE lon

Once again, mute and lili reappear as common adjectives. lon in the sense of “true” is probably oblique. There’s no explicit way of saying false; you would simply negate the statement somehow.

So, what does that leave? Quite a bit:

  • a (ah, ha, uh, oh, ooh, aw, well)
  • akesi (non-cute animal, reptile, amphibian)
  • anu (or)
  • awen (stay, wait, remain)
  • en (and)
  • esun (market, shop)
  • ilo (tool, device, machine)
  • jaki (dirty, gross, filthy)
  • jelo (yellow)
  • jo (have, contain)
  • kala (fish, sea creature)
  • kalama (sound, noise, voice)
  • kasi (plant, leaf, herb, tree, wood)
  • kepeken (use)
  • kili (fruit, pulpy vegetable, mushroom)
  • kiwen (hard thing, rock, stone, metal, mineral, clay)
  • ko (semi-solid or squishy substance)
  • kon (air, wind, smell, soul)
  • kule (color, paint)
  • kulupu (group, community, society, company, people)
  • lape (sleep, rest)
  • laso (blue, blue-green)
  • lawa (head, mind)
  • len (clothing, cloth, fabric)
  • lete (cold)
  • linja (long, very thin, floppy thing)
  • lipu (flat and bendable thing)
  • loje (red)
  • luka (hand, arm)
  • lupa (hole, orifice, window, door)
  • ma (land, earth, country)
  • mama (parent, mother, father)
  • mani (money, material wealth, currency, dollar)
  • meli (woman, female, girl, wife, girlfriend)
  • mije (man, male, boy, husband, boyfriend)
  • moku (food, meal, eat, drink)
  • monsi (back, rear end, butt, behind)
  • mu (cute animal noise)
  • mun (moon)
  • musi (fun, playing, game, recreation, art)
  • nanpa (number)
  • nasa (silly, crazy, foolish, drunk, strange, stupid, weird)
  • nena (bump, nose, hill, mountain, button)
  • noka (leg, foot)
  • o (vocative)
  • oko (eye)
  • olin (love)
  • ona (she, he, it, they)
  • open (open, turn on)
  • pakala (blunder, accident, mistake)
  • palisa (long, mostly hard object)
  • pan (grain, cereal)
  • pana (give, put, send, place, release, emit, cause)
  • pimeja (black, dark)
  • pini (end, tip)
  • pipi (bug, insect, spider)
  • poki (container, box, bowl, cup, glass)
  • seli (fire, warmth, heat)
  • selo (outside, surface, skin, shell, bark, shape, peel)
  • seme (what, which)
  • sike (circle, wheel, sphere, ball, cycle)
  • sin (new, fresh, another, more)
  • sinpin (front, chest, torso, face, wall)
  • sitelen (picture, image, draw, write)
  • soweli (animal, especially land mammal, lovable animal)
  • suno (sun, light)
  • supa (horizontal surface, e.g. furniture, table, chair, pillow, floor)
  • suwi (candy, sweet food)
  • taso (only, sole, but)
  • telo (water, liquid, juice, sauce)
  • unpa (sex, sexuality)
  • uta (mouth)

More details on the Toki Pona vocabulary can be found at the semi-official wordlist, by consulting pu, the official Toki Pona Book, or perhaps by reading a tutorial.

Ideas for further work:

  • Create an NSM-complete Toki Pona by filling in the missing primes and disambiguating multi-valent words
  • Create an inflected Toki Pona by converting NSM primes into bound morphemes on the remaining TP lexicon

April 27, 2017 language






I want you to close your eyes for a second and picture your biggest hero. Here’s mine.

hint: it’s buckaroo banzai

The man you see pictured here is Buckaroo Banzai. According to the highly informative documentary The Adventures of Buckaroo Banzai Across the 8th Dimension, Buckaroo is both a physicist and a neurosurgeon, and he also heads a rock band and runs the fan club. One gets the sense these are just a few of the salient features of a fairly rich backstory.

Importantly, early in the film, during what appears to be an informal job interview of Sidney, another neurosurgeon, Buckaroo asks whether Sidney can dance. The implication is that he’s not interested in bringing a neurosurgeon onto his team who can’t also perform in the band.

Obviously, we can’t run our own workplaces like Buckaroo does. But, there’s a cue in here about how to live. I see programmers talking about how to be better programmers by learning, learning, learning about programming. Acquire new languages, try new frameworks, have side projects, be constantly writing code. I see other programmers talking about how to be better programmers by meditating, by working out, hiking the wilderness and getting to 4% body fat, by getting better sleep and eating healthier foods. And I think they’re both missing the point.

The way to be a better programmer is to be a better, bigger human. Indulge your interests. Call your mom. Care about yourself, your family, your friends, strangers. Buckaroo Banzai didn’t have his band to be a better physicist. His being a physicist wasn’t there to improve his being a neurosurgeon or vice versa. They were simply expressions of his being. Sidney doesn’t have to dance to be a good neurosurgeon; he has to dance because he’s human and Buckaroo wants to be around humans. (Sidney can’t dance for shit, by the way.)

Dr. Sidney Zweibel wants to cure rodeo clownism

So quit trying to be a better programmer so you can write more code faster and better and be more powerful at it so you can be a better cog and get more head pats. Be a better programmer to help save the planet from red lectroids from the 8th dimension. That’s a much better reason than to get a job at Google or Facebook, of all fucking places. And give her your coat.

Because you’re perfect.

March 1, 2017






Here’s a philosophical question about the “White Elephant” game: is it more likely that everyone will leave with a gift they like or that at least a few people will be miserable?

If you don’t know the game, the basic rules are these: every person brings one present to the game. An order is determined for players to take turns. On each turn, you may either open a present or steal a present from someone else.

This question came up over the weekend because my wife and I had a dispute about the run-off rules she came up with. What happens to you after your present is stolen by someone else? My suggested rule was basically recursion: you get to open or steal, but you can only steal things that haven’t yet been stolen this round. (A round begins when someone who hasn’t played yet chooses to open a present or steal one.) Her suggested rule was that you open a new present, full stop.

I did some math and realized that run-off rule preference is probably biased by whether you think it’s more likely that everybody is trying to get to their preferred gift and there isn’t much competition for that gift, or whether you think it’s likely that someone brought a highly-desirable gift that everybody is after. Is everyone in your party a beautiful and unique snowflake, or is somebody Michael Scott?

Let’s say you have N people participating (let’s call them person A, B, …), and each person brings a gift (let’s call them a, b, …). Let’s model preferences as simply as possible: each person has a single gift which is their preferred gift. The possible preferences for the case of two people are very simple: person A could prefer gift a or gift b, and person B could prefer gift a or gift b. Let’s write person A’s preference next to person B’s preference, so ab represents person A preferring gift a (their own) and person B preferring gift b, also their own. There are clearly four possibilities here: aa, ab, ba, bb. In two cases, everybody can leave happy (ab, ba), but physics prevents satisfaction in both the aa and bb cases.

It should already be clear that this looks like two binary bits (a=0, b=1; 00, 01, 10, 11), so it’s starting to look like we have an obvious encoding for this problem. The 3-person scenario is very similar: 000, 001, 002, 010, 011, 012, … 222. The cases where everyone is happy are 012, 021, 102, 120, 201, 210. So we have N^N person-desire possibilities, against N! ideal outcomes. If you plot these against each other, you will see that total outcomes quickly exceed ideal outcomes. In our particular case we had 12 participants; the likelihood of the ideal outcome is 12! / 12^12, or about 0.00537%. This means someone’s going home unhappy 99.995% of the time.
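
If you want to sanity-check that arithmetic, here is a minimal Prolog sketch (my own, with made-up predicate names, and Prolog only because it’s the house language around here) that computes N! / N^N directly:

% factorial(+N, -F): F is N!.
factorial(0, 1).
factorial(N, F) :- N > 0, N1 is N - 1, factorial(N1, F1), F is N * F1.

% all_happy_probability(+N, -P): the chance that N independent, uniformly random
% single preferences happen to form a permutation of the gifts, i.e. N! / N^N.
all_happy_probability(N, P) :-
  factorial(N, F),
  P is F / (N ** N).

% ?- all_happy_probability(12, P).
% P is about 5.37e-5, i.e. the 0.00537% quoted above.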

You could argue that this is being melodramatic. For one thing, people probably actually have a preference list. Maybe they are getting their second choice. Most white elephants have post-game exchanges. In the game we actually played, I know with some confidence that one person left unhappy. On the other hand, I don’t know what could have been done about that; some people are just built for misery.

I’m not sure how to analytically tackle the question of whose algorithm results in more happiness. I think a stronger model may be necessary: maybe we should model preference lists and assign an integer score for happiness instead of the boolean flavor of a single preference and a “did I get it?” boolean value. Or, maybe this is a sufficient model; we could always simulate and see. One thing that bothers me about Liz’s rule is that player one never gets a chance to make a choice, even if they are stolen from. Liz’s retort was that if they are not stolen from, it doesn’t matter which ruleset is used, and if there’s always an iPod, it’s just going to keep getting stolen, and lots of people are going to be miserable anyway, so who cares.

Another question I’d like to analyze, which I think can be done with combinatorics, is: what is the likelihood of M ≤ N people being unsatisfied? The work above is for one or more; what’s the exact curve for 1, 2, 3, etc. people being dissatisfied?
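
Estimating that curve by simulation is at least straightforward under the single-preference model above. Here is a rough Monte Carlo sketch (again my own hypothetical code, not an answer to the combinatorics question): each of N people prefers a uniformly random gift, and since a given gift can make at most one of its admirers happy, even a best-case distribution leaves N minus the number of distinct preferred gifts people unsatisfied.

% random_preferences(+N, +Gifts, -Prefs): N uniformly random gift choices.
random_preferences(0, _, []).
random_preferences(N, Gifts, [P|Ps]) :-
  N > 0,
  random_between(1, Gifts, P),   % from SWI-Prolog's library(random)
  N1 is N - 1,
  random_preferences(N1, Gifts, Ps).

% min_unhappy(+Prefs, -Unhappy): fewest people who must go home unsatisfied.
min_unhappy(Prefs, Unhappy) :-
  length(Prefs, N),
  sort(Prefs, Distinct),         % sort/2 drops duplicates
  length(Distinct, D),
  Unhappy is N - D.

% ?- random_preferences(12, 12, Ps), min_unhappy(Ps, U).

Tallying U over many runs would give the curve for 1, 2, 3, … unsatisfied people, at least under this admittedly optimistic model.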

December 28, 2016 math






About the only social media achievement I am proud of in my life is that I am currently one of the top ten answerers on the Prolog tag of Stack Overflow. I’m especially proud of this because it’s something I’ve achieved by passion alone. I am not an especially talented Prolog programmer. Nor am I a particularly active Stack Overflow user. I don’t use Prolog professionally… in fact I tend to follow a “mama’s cooking” philosophy toward it, which is probably less than it deserves. I even have a longstanding feud with one of the other dudes.

But I continue to enjoy answering questions on the Prolog tag of Stack Overflow. I tried Documentation once for about an hour and decided it was absolutely the stupidest idea I had ever tried out. In fact I rather hope it fails.

Each Tag is an island

If you are a casual Stack Overflow user—in other words, someone who programs but doesn’t answer questions—you may not realize that different tags are informally governed by different groups in different ways. Stack Overflow overall is not against helping people with homework, but the Prolog tag is very much against it. Why? Because most people do not encounter Prolog as a momentary flight of fancy; instead, they get it as coursework towards some degree. A lot of seemingly innocuous questions have “tells” that give this away, such as a genealogy problem, or a subway route problem, or whatever.

Prolog questions are not exactly in abundant supply. With high-volume tags like Postgres or even Haskell, you have to answer quickly. There are a lot of people who care about pruning duplicate questions. You can gain reputation quickly for a single good answer because there are a lot of eyeballs on a decent question. I’m not prepared to call this a virtuous cycle; it’s just how these things work on a higher-profile tag.

In the Prolog tag, there will be a handful of questions each day, there is no time pressure to answer in a minute or less, and a great answer on a genre-defining question might get ten upvotes. One upvote is probably the mean on the Prolog tag. There are so few Prolog experts on the site that it can take multiple days for an obviously bad question to get closed. So it’s a lot more common to answer a duplicated question.

I actually like this aspect of the Prolog tag. It gives me a chance to hone my writing. Some of my very best writing has gone into Stack Overflow. And I try to write to a specific audience: the person asking the question. I try to meet them where they are. I can guess with (I believe) reasonable accuracy what nonsense notions they believe about what Prolog is doing, because I have had those notions, and recently. Prolog is unusual, in that there are not a lot of great precedents for how it works. So there are a lot of wrong mental models of what it is doing out there.

Another interesting thing about Prolog is that there really aren’t any bad books about it. There were, in the 80s, but the books that are still circulating now are excellent. There is no shortage of great documentation about Prolog, both tutorial and detailed.

“Documentation” is useless for Prolog

Let me summarize the situation:

  • Prolog already has ample (but confusing) documentation
  • There are lots more ways to fail to understand Prolog than ways to understand it
  • Clues about your mental model’s deficiencies are often obvious in the way you ask the question (“what does append/3 return?”)
  • The community is too small to use Stack Overflow’s moderation functions effectively, and the incentives don’t add up for us to use them anyway

What does Documentation bring to the table?

  • Another way to document something already amply documented
  • No help for approaching different mental models
  • Moderation-heavy process for editing and managing documents

What problem is Documentation trying to solve? I guess the idea here is to find a way to move questions (especially frequently-duplicated questions) into some kind of wiki. But that’s more stuff to read for people who are already having trouble reading. Stack Overflow isn’t just question-and-answer; it’s programmer-to-programmer technical support. Documentation isn’t for anybody. It doesn’t speak to this use case. It isn’t rewarding to work on. For #prolog folks, it’s a step in the wrong direction. And indeed, almost nothing has happened on it since the day of its debut, because it obviously does nothing for us. There are 16 topics.

What about other tags?

Consider Postgres. The tag is pretty high-volume. There are a lot of server admin questions and a lot of beginner SQL query questions, but also a lot of people. The same mental model thing happens, to a lesser degree. It should be “documented” ten or a hundred to one compared to Prolog. Instead, it has 22 topics to Prolog’s 18. Why?

I think it’s because the same comments apply. Documentation is broadcast; Q&A is point-to-point. Wrong mental models of SQL are a common problem that is mostly not addressed by reading the documentation.

Another point on Postgres: it has some of the best official documentation of any project anywhere. What’s there to say about Postgres that isn’t already said in the official documentation? Examples?

Why do we like Stack Overflow?

Stack Overflow is great. We use it all the time, even if we hate it, because Google finds answers to our questions there. It’s the universal programming help desk. Stack Overflow solved a legitimate problem: taking a bunch of programmer technical support discussion and restructuring it into a resource.

Ever searched for something and found the answer on a forum like Java Coder Ranch? Where you have to skip to page 19 to try and find the actual answer to the problem, only to find out you went too far, and now you have to page backwards slowly to avoid all the “Thanks!” shit? Had to stare at the page for a little too long to try and figure out if you’re looking at the date the comment was left or the date the schmuck joined the forum?

Stack Overflow took a thing that was already happening (asking questions and getting answers) and stripped away the bad and useless shit that was holding that information captive. They came up with a more sensible concept of what was going on and developed a totally new interaction for it.

So… what’s the problem Documentation solves? What thing do we all do that Documentation somehow improves? The truth, we both know it, is that Documentation doesn’t fit into any existing workflow. It doesn’t even fit into Stack Overflow’s workflow, and it was built to fit in there.

Documentation also deprives the programmer of the feeling of helping someone. Which feels better, giving a dollar to a guy on the street, or dropping a dollar into a jar that says “for the homeless” on it? Knowing you are helping a specific person is a huge motivator. Documentation deprives you of that source of pleasure. All the complaints about Documentation on Stack Overflow have the same flavor: “it feels like a lot of work.” It is a lot of work, and so is answering Stack Overflow questions! But answering those questions feeds our desire to be charitable to each other, in a way that writing examples for documentation just does not.

So… the reward scheme is messed up, the value-add is unclear, it’s demotivating, it’s labor-intensive, it’s not going to do anything to staunch the flow of bad questions, it doesn’t clearly improve any real existing problems while creating some new ones… what is Documentation good for?

Stack Exchange: Documentation is a fail. Drop it.

December 23, 2016






In case you missed it, the Go guys announced an official font today.

Fonts and typography are like a weird little undercurrent in programming. Knuth famously took about a decade off from his massive Art of Computer Programming project to invent TeX and METAFONT. He did this because he found the second edition printing much uglier than the first and decided he needed to fix this problem.

Knuth spent a decade mastering typesetting and typography. And his book is incredibly beautiful. Knuth, the being of pure light and glory that he is, also designed the only font any of us has needed to use since 1975. That’s why everybody uses Computer Modern today, right?

computer modern snapshot, courtesy Wikipedia

Unfortunately, Knuth spewed enough bullshit about aesthetics and typography that programmers started to believe him. After all, if he could compute the time complexity of shell sort, he must be able to perceive things others of us cannot. TeX and LaTeX became the official arbiters of aesthetics amongst programmers, and just as rejecting the user-hostility of Linux was the mark of an inferior computer user, rejecting the horrifying ugliness of Computer Modern helped maintain hostilities between programmers and designers that only began to erode after Web 2.0 started to take shape.

Plan 9 also manifested this tradition with its incredibly ugly color scheme and, surprise, custom-made fonts. Take a look at the “beauty” of Plan 9:

snapshot of Plan 9 via 9Front

This is justified by Rob Pike (recognize the name from Go?) according to the 9Front FQA as:

…the color scheme is (obviously) deliberate. the intent was to build on an observation by edward tufte that the human system likes nature and nature is full of pale colors, so something you’re going to look at all day might best serve if it were also in relaxing shades.

What an odd move on the part of the Go team! I like go fmt, though; it really represents an attempt to address one of the ways programmers create more work and bullshit for each other, by homogenizing things. Maybe the official Go font will help with that? I doubt it, because font choices on the part of one programmer really don’t affect another.

They’re supposedly doing this to get past “encumbrance” issues with using other people’s fonts, but that sounds like a misunderstanding or a red herring. In practice, there are already free and unencumbered fonts, and most of us just don’t wed our interfaces to specific fonts that closely.

It seems a lot more likely to me that this is a funnily obvious manifestation of a bug in the amber. At worst, maybe just a weird piece of Russ Cox’s brain that occasionally explodes out into the real world; at best, a complete set of fossilized notions from the glory days of Unix, that:

  1. typography is important. (I agree that it is, but people seem to survive using Word and Pages without lightning burns)

  2. our weird aesthetics based on math are better than the outside world’s aesthetics, which seem to be based on the crazy notion that good looking things should look good (#MadLads!!!)

  3. perhaps most sadly funny, the idea that Unix and its inheritors in Plan 9 and Go have something positive and meaningful to say about typography and aesthetics today, after all the phototypesetters have been put to bed and nobody uses troff anymore, or even TeX outside academia.

A missed kerning opportunity from the announcement

I think it would be nice if number 3 were true. TeX is still an amazing piece of technology. The things that Knuth discovered really should be enshrined in a better system that isn’t quite as stuck in the 70s. (Did you know that the way you run external commands from TeX is by writing to file descriptor #18?!) LuaTeX 1.0 is out; maybe that will save everything? You can use real fonts now… but we need to accept that real fonts are made by real typographers using software that is not Emacs.

Why do we care so much about this? Maybe we just spent more time than most staring at text on a screen.

I don’t think the Unix world has made its final statement about typesetting. But for us to get there, maybe we should try to let go of typography. The battle’s over; Adobe won.

November 17, 2016