Ho lee shit.
I’ve been burning with anticipation for this album since they released “It’s Just Us Now” a couple months ago. You heard it, right?
This is an amazing song. At just the moment when it should go nuts, they draw back and create more room for Alexis to sing. Wow! And the video manages to reinforce it somehow. I’ve been listening to this song every other day since it came out, and generally pestering my friends about it.
And then today, they released the rest of the album. Guess what? This thing is fucking incredible. Every song on here is just as powerful and surprising, in a totally unique way. Music you could head bang to, or have a moment of transcendence, or shed a tear to.
Congratulations, you fucking did it! Your first album, you were just a mess, but we loved it. Your second album… I think you must have been studying music theory, and suddenly there was structure. A bit too much structure. Third album: good songs! But this, this is the album where it finally comes together, where the unbridled chaos of the first album meets the intelligence and emotion hiding inside you, and it feels like you finally said something. Something brave and messy, something felt and understood and meant. It’s thrash, it’s pop, it’s dance, it’s total chaos. It’s sweet, it shreds, it’s scared, it’s raining fire. Is that a goddamn saxophone?
And it’s bold! It’s triumphant and glorious! It’s really incredible… an amazing end to a pretty trying week. I’m pretty sure it’s going to be on repeat for me for the next couple years, replacing The Griswolds’ Be Impressive.
A response to What’s an Engineer to Do?
To fritter away a huge lead in the race. That’s what Apple’s doing. And Apple’s fans, like me, are so upset about it because we see the trajectory.
Apple in 2002 was facing an uphill battle, but they were going against idiotic PC manufacturers that didn’t care about build quality or design aesthetics, and OS vendors that didn’t care much about developers or power users. The hardware side was racing to make things faster and cheaper, and the software side was really excited about lock-in and not much else. So Apple racked up a lot of geek cred by making decent hardware that looked good and was enjoyable to use, and by basing the OS on something very geek/programmer friendly.
The industry has now had 14 years to copy from Apple’s playbook, and they have. Dell, HP, Microsoft—they all have good looking hardware that’s inexpensive and well spec’ed. When Apple’s was the best, geeks didn’t mind paying more for it—it’s the best! But geeks do mind paying more for middling or mediocre. Geeks do mind having to explain that Apple is somehow better despite being the most expensive of all and somehow not having the best hardware. And frankly, I don’t think most of us care that much about 3 mm. (Does Apple have an anorexia problem?)
But the bigger issue for me is that I belong to that class of developer whose life is fairly portable. I could run IntelliJ on anything, Emacs on anything. I need Unix, I need sed and awk and sh and all that stuff, but otherwise, I’m not really wedded to the Mac—at work, anyway. But Microsoft has spent at least six years or so trying desperately to get us developers to give Windows a shot, and now there’s Bash-on-Ubuntu-on-Windows, the worst named Unix subsystem of them all. But I tried it out and it works out of the box.
So what’s keeping me with Apple exactly? In 2010, they have the best hardware, the best phone and the best laptop and the mini for my parents, everything’s in iTunes and life is great. In 2016, their phone has been getting worse for the last two years (the iPhone 6 is too big for my hands), I have the watch and never wear it because I don’t care about it, the new MBP leaves me cold, and I’m hearing nothing but negativity about the port situation. In 2010, where would I go? To Lenovo or Dell and get some thick ass brick that breaks on the way to the house and won’t run Linux? But it’s not 2010, it’s 2016. Windows comes with Unix, Dell and Microsoft have superb looking hardware. I’m thinking about ditching my smart phone and my smart watch anyway. What’s keeping me here? The dock is pretty?
Things are just starting to get wonky. Apart from the stupid touch bar, the iPhone has to lose the headphone jack so it can be a micron thinner, but the MBP gets to keep it? How do you plug an iPhone into the MBP anyway? And why do they keep making macOS look more like iOS when they’re never going to let macOS have a touchscreen? Why do all the other products feel like they’re fire-and-forget? My Apple TV for instance—I love it, but I’m the worst guy to ask; I have no cable, no dish, I’m not a sports fan or an HBO subscriber. I literally just watch Netflix and Hulu. What happened to all the games that were coming to Apple TV? Why does Vevo somehow fuck up rendering on my Apple TV?
“Why would you buy a PC anymore?” Because sometimes you have shit you have to do, and for that, you need Unix. I need Unix, but I’m not sure I need macOS. And that uncertainty is probably why Tim Cook should think about this question. I promise I’m not the only developer who feels this way, and if the developers leave your platform, it’s going to get spooky on Mac. And faster than you’d expect.
I’ve been worried about flour more than programming (outside work, anyway). Here’s everything I’ve discovered.
Bread
This has been my interest lately.
The fundamental thing to know is that you’re after a certain ratio of about 5:3 flour to liquid. Beyond that you need some salt (about 1/2 tsp per cup, or more) and some yeast. The amount of yeast is mostly a function of when you want to bake; if you can let it rise for a day, you can use 1/4 or 1/2 tsp. I have been doing 1/2 tsp and baking in 18-24 hours.
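The ratios above work out to simple arithmetic. Here’s a quick sketch; the 5:3 ratio is by weight as described, but the 500 g starting point and the gram conversions for salt (a cup of flour at ~120 g, a teaspoon of salt at ~6 g) are my own approximations for illustration:

```python
# Scale a basic lean dough from a flour weight, using the ratios above.
# The gram conversions for cups and teaspoons are rough approximations.

def bread_formula(flour_g):
    """Return ingredient amounts for a lean dough, given flour in grams."""
    water_g = flour_g * 3 / 5            # 5:3 flour-to-liquid, by weight
    # Roughly 1/2 tsp salt per cup of flour: a cup of flour is ~120 g
    # and a tsp of salt is ~6 g, so ~3 g of salt per 120 g of flour.
    salt_g = (flour_g / 120) * 3
    yeast_tsp = 0.5                      # for a slow 18-24 hour rise
    return {"flour_g": flour_g, "water_g": round(water_g),
            "salt_g": round(salt_g, 1), "yeast_tsp": yeast_tsp}

print(bread_formula(500))
# e.g. 500 g flour -> 300 g water, ~12.5 g salt, 1/2 tsp yeast
```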
You decide whether you want a uniform crumb or a bunch of big old air pockets. For a baguette or a nice boule, you probably want air pockets; if that’s the case, don’t add fat (oil or butter), because it will interfere with that, and plan on letting it rise more in the final proof. If you want a uniform crumb, adding fat is fine, and you should do a better job of squeezing out the air between proofs.
When you first get started, you tend to conflate the recipe with the shape and the baking method. These things are fairly independent; you can follow Jim Lahey’s important recipe and then make baguettes with it following this Food Wishes recipe. In fact, if you read both, you’ll probably notice that the difference between the two recipes is pretty minute.
I made a great loaf of white bread following Breaducation’s recipe, but I didn’t take the documentation all that seriously, or the temperatures, or the mixing steps… in fact, apart from the actual recipe, the only thing I really took was his kneading method, which is superb. And I screwed up and added too much honey.
By the way, it’s pretty hard to make a sweet yeast bread. Yeast wants to eat sugar, so if you give it a sugar-rich environment, it will basically eat all the sugar. So the fact that I screwed up and added double the amount of honey the recipe called for really had no perceptible effect on the outcome.
That’s the funny thing about bread. You can actually hose it pretty badly and it will still be OK. Get the ratio of flour and water mostly right, and bake it long enough, and it will be basically alright.
Spritz it with water on the way into the oven. I haven’t noticed a huge difference using the cup-o-water method and a good dampening with the spray bottle.
Another great recipe is this Real Irish Soda Bread recipe from Serious Eats. It’s trivial and you have amazing bread when it comes out. It’s not sweet, really; it’s actually fairly salty, but it takes butter and jam better than anything. The internet will tell you that you can’t use baking soda at high elevation to leaven this bread, but it works fine here at 4500 feet.
I have grown to favor King Arthur flour, especially their bread flour, although for the soda bread you should really use all-purpose. In general, if it’s a yeast bread I want to use bread flour; if it’s a flatbread or a quick bread, all-purpose. I don’t know if this rule is perfect, but it’s the one I’ve noticed and been following.
By the way, if you just try following some of these recipes, you’ll make some really amazing bread, on the first try. It’s not that hard.
Biscuits and Scones
The trick with biscuits and scones is: keep the butter cold, cut it up, and cut it into the flour. You want the dough to be lumpy with bits of butter. Don’t overmix. You really aren’t after doughiness in these recipes. If it seems like it might fall apart if you look at it funny, it’s probably perfect.
Scones and good biscuits are similar. If you want to make one of those bizarre American oversweet triangle cakes that Starbucks sells, try making cream scones first and you’ll have a much better experience.
Pancakes
The recipe in Ratio is one of the best I’ve ever tried, and after you read it you will see how easy it is to modify.
Pasta
Get yourself a pasta machine and make pasta all the time. It’s really easy. You almost can’t overwork the dough. In Ratio, Michael Ruhlman basically says, one egg per adult serving, plus 1.5 times that weight in flour. I find using Bob’s Red Mill semolina flour gives amazing results. You can cut it with AP flour or whole wheat flour or whatever. I make a big pile and fold the eggs in with a fork and then just sort of work it for a few minutes, throw it in the fridge under some olive oil for 20 minutes. I’m not even sure that step is necessary.
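Ruhlman’s ratio from the paragraph above reduces to a one-liner. A sketch, where the 50 g weight for a shelled large egg is my own approximation:

```python
# Ruhlman's pasta ratio as arithmetic: one egg per adult serving,
# plus 1.5x the eggs' weight in flour (i.e. 3:2 flour to egg by weight).
# The 50 g figure for a shelled large egg is an approximation.

def pasta_dough(servings, egg_weight_g=50):
    """Return egg count and flour weight for fresh pasta."""
    eggs = servings                  # one egg per adult serving
    egg_g = eggs * egg_weight_g
    flour_g = egg_g * 1.5            # 1.5x the eggs' weight in flour
    return {"eggs": eggs, "flour_g": flour_g}

print(pasta_dough(4))                # 4 eggs, 300 g flour
```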
Get the pasta machine out and roll it to the 2nd to last setting for everything. Well, I might do the last setting for ravioli. Send it through the first setting a few times, folding it in half between, until it really looks like a big noodle. Then go one setting at a time. It’s pretty easy; I’ve made pasta from scratch probably about 50 times and the first time it came out just as good as the last, minus my nerves.
So that’s what’s been on my mind lately.
I have a project of 140,000 lines of Java + 3000 lines of JS in an Angular application. I have Maven build about 20 separate Java modules and then exec Grunt to build the Angular front-end. The build takes 2 minutes on a warm machine that has run the build before. One minute of that is Grunt. Fully half of my build time is taken up by the 2% of my codebase in Javascript. This is to “build” artifacts for a language that does not get compiled in any real sense on my end at all.
The reasons for this inefficiency are many: my 3000 line Angular program somehow drags in 300 MB of node_modules and 30 MB of bower_components; Grunt is supposedly less efficient than Gulp because it has to make a lot of disk copies; my template is probably too fancy, etc. But this is a 50x discrepancy. There must be a lot of negative multipliers combining here to lead to inefficiency like this. And that is what the software engineers of the world don’t like about Node: the whiff of amateur that pervades the system.
Maven is not a good program. It’s verbose (input and output), it inflicts XML on humans, its documentation is awful. But it has a much more appropriate level of flexibility. It has made the NRAO’s Java code a lot more maintainable. Anybody can check anything out and run mvn package to build it, and mvn deploy to deploy it (well, usually). And this is what makes Maven a great program, even though it is also an awful program. It’s prescriptive and pervasive. It wouldn’t be great if it were optional.
People don’t like that Java is slow and plodding in terms of change. But let me point out that Maven was established long before Grunt was ever a thing, and off in Node.js land, they’ve already gone through Gulp and are on to Webpack and a few other build systems besides. This time next year, I’m sure Webpack will be a memory and something else will have taken root. In five years, there probably will have been five or ten more Javascript “build” tools that all do less work than Maven less efficiently. And Maven will still be the de facto build tool for Java, and my packages that use it will still build the same way as they do today—with lots of unhelpful, pedantic logging; with the same bullshit in the XML file to convince Java that I don’t need compatibility with Java 1.3; with the same dependency hell that I have learned how to deal with. Because that’s what Java is like.
I don’t think Node or any other technology can be inherently bad. But, technologies can be immature, or they can foster a community that encourages amateurish (or bad, or insecure) behavior. Eventually, maybe someone falls in love with it and leads a big crusade to clean things up. Or maybe it becomes so popular that heartless buck-chasing programmers hammer on it until the old ways become unpleasant and new ways take root (this is what happened to PHP). But it feels like we’re dealing with Node.js in its awkward teenage years. A great Cambrian explosion of ideas, mostly bad ideas, playing out on npm. A lot of libraries with an air of perfunctoriness. “It connects, but there’s no error handling.” This kind of thing is upsetting to engineers, and we are mean.
The difference between Java and Javascript is very small. It’s basically that Java was designed to kidnap Unix and Windows developers and haul them off to Sun’s niche platform, whereas Javascript was designed to help high schoolers impress their long-distance girlfriend. Java has never become cool, despite squeezing out frameworks with sassy names like Play and Ninja. They’re like having a cool Dad who lets you drink a beer with him when you’re 17—it’s your senior year, after all.
Side note, a lot of the appeal of Node is that you have Javascript on the front-end and the back-end. You know, so you can pass JSON around. Is it hard to pass JSON to or from any language today? Not really. But you could have the same domain objects! But nobody really does that. The work that is done in the front-end and the back-end is different. You could make a big fat object and use it on both sides, but how are you going to avoid trying to send mail from the front-end or writing DOM nodes from the back-end? This is really about maintaining comfort. Teenage-you didn’t like hanging out with Dad’s friends. Awwwwkk-ward.
Maybe Javascript will grow up someday. Who knows? Sometimes high schoolers grow into excellent engineers. Only time will tell.
What makes Stack Overflow so successful? I was thinking about this the other day. What came before Stack Overflow? Mostly forums and mailing lists. And in a forum or a mailing list, what you get is a pile-up of data in basically chronological order.
This turns out to be a difficult structure for asking and answering questions. So what Stack Overflow really did was this:
- Constrain the domain of possibilities to questions & answers
- Elaborate the types of communication that happen in that context, their operators and semantics
You can talk about anything on a forum, or a mailing list. It’s not confined to technical questions and answers. But supposing that’s all you’re doing, the chronology gets in the way. You wind up helping people serially with the same problems over and over. Some idiot chimes in with unhelpful, unproductive advice. You have to read through it and so does the next guy coming through. In a long forum chain on a single topic, the software may be evolving, and the first ten or hundred pages of dialogue may not be relevant to the current release. The forum can’t order posts by helpfulness or they won’t make sense because of the implicit context.
Stack Overflow fixes these problems by adding constraints. You can’t have a free-form response to a question; you have to either Answer or leave a Comment. The semantics of comments are that if they are low quality, they can be hidden. The semantics of answers are that they can be reordered according to their utility. There are different operators for questions, answers and comments. And the whole system is built around various functions of questions, answers and comments.
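The constrained model described above can be sketched as a data structure. This is purely my own illustration, not Stack Overflow’s actual schema: a question only accepts Answers and Comments, comments can be hidden, and answers get reordered by utility rather than chronology.

```python
# A sketch of the constrained Q&A model: no free-form replies, only
# typed objects with their own operators. Hypothetical, for illustration.
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    hidden: bool = False            # low-quality comments can be hidden

@dataclass
class Answer:
    text: str
    votes: int = 0
    comments: list = field(default_factory=list)

@dataclass
class Question:
    title: str
    answers: list = field(default_factory=list)
    comments: list = field(default_factory=list)

    def ranked_answers(self):
        # Answers are ordered by utility, not by when they were posted.
        return sorted(self.answers, key=lambda a: a.votes, reverse=True)

q = Question("Why is my build slow?")
q.answers.append(Answer("Profile it first.", votes=3))
q.answers.append(Answer("Cache your dependencies.", votes=12))
print([a.text for a in q.ranked_answers()])  # highest-voted answer first
```

Contrast with a forum thread, where all of this would be one undifferentiated chronological list.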
How many other systems are there that suffer from chronological pile-ups due to lack of constraints, operators and semantics? One that comes to mind is bug/issue trackers like JIRA. Sure, we have a lot of objects in the system—issues, milestones, components, etc. But at the end of the day, each ticket allows an unlimited chronologically-sorted pile-up. Is there a workaround in that pile? Maybe; grab a cup of coffee and start reading, buddy. How do you distinguish a developer’s request for more information from a user’s response, or from administrivia about which release a fix should go in? You read the comments, in order.
I’m not aware of a system that solves this by not allowing unconstrained “replies” to tickets, but I think that would be an interesting piece of software to use.
Thus, programs must be written for people to read, and only incidentally for machines to execute. — SICP
I have come to feel that this mindset is mostly hogwash outside academia.
The principal value of a program is its utility when executed. This seems obvious, but it is overtly contradicted by the cliche above. The market for programs that cannot be executed (or are not primarily to be executed) is precisely the book market. There are no paper programmers. Programming is a profession because there is a desire for new software to execute.
There is something beautiful about arresting statements like the above. The deception of the quote is that it feeds software’s narcissism. The code is important—it’s what makes it go, and we spend all day in there reading it and writing it. We have to be able to understand it to extend it or debug it. But if I fail to communicate clearly to a human, that won’t stop the program from entering production; the “incidental” detail of it being wrong will.
I put a lot of stock in Brooks’s quote, “Build the first one to throw away, because you will.” It isn’t obvious to management why this is true, but any working programmer knows that it is, because the first act of programming is discovery. In practice, usually the customer gets the “benefit” of the throwaway first one because budgets are constrained and we are bad at estimation. This means that the first one you deliver really represents your first guess as to how this problem might be solved. It’s often wildly off-base and wrong, and you hope against hope that the user will find it useful anyway.
This leads to the second material limitation of literate programming, which is that if you were doing literate first, you have either just written a book about the wrong approach to the problem, which incidentally is also the throwaway program, or you have expended twice the resources to produce a book when what was desired was a program. A third option, which I have seen in practice, is that you have produced a book of negligible value, because although the book-production toolchain was employed and literate code was written, almost no effort went into forming the book-as-literature—that effort went directly into the code anyway.
This doesn’t mean there are no literate programs I wish existed, which I would enjoy reading. I would deeply love to take in a complete implementation of ACCRETE. The original, if possible. But the ACCRETE I want to read is a simplified core; the ACCRETE I want to run is flush with features and functionality. Similarly, I would love to read a presentation of the core of Postgres, but I fear if I tried to read it as-is I would be snowed by the details and complexity. In other words, I’m not convinced that programs of didactic value are necessarily the same programs I want to execute.
The success of projects like IPython Notebook, org-babel and Mathematica seems to indicate that there is a desire for “live documents,” which is to say, rich documents with built-in calculations. Prior to these technologies, people used Word and Excel, possibly even using something like OLE to embed Excel calculations in Word documents, but the process is clunky and limiting. Mathematica, I think, innovated here first, and with Mathematica Player, people could share Mathematica documents, which work somewhat like lab reports. A New Kind of Science showed that you could do a large document this way, but that doesn’t seem to be the way people use it. fpcomplete’s blogging scheme shows that this style is good for essay-length documents. This raises the question: is there a place for book-length live documents? I’m inclined to say no, because I cannot imagine a book-length single calculation, and live documents often represent performing a single exemplary calculation.
When I imagine reading my fantasy ACCRETE book online, I picture something like a fractal document. At first, you get a one-sentence summary of the system. You can then zoom in and get one paragraph, then one page. Each section, you can expand, at first from a very high-level English description, to pseudocode elaborated with technical language, finally to the actual source code. But this sounds like a very labor-intensive production. You would begin with the completed code and then invest significant additional time into it.
I don’t know if you could create the same experience in a linear manner. I suppose what you would do is have an introduction which unfolds each layer of the high-level description up to the pseudocode. Then, each major subsystem becomes its own chapter, and you repeat the progression. But the linearity defeats the premise that you could jump around.
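The zooming structure I’m imagining could be modeled as a tree whose nodes carry the same content at several levels of detail. This is purely hypothetical, a sketch of the idea rather than any real system:

```python
# A sketch of the "fractal document" idea: each node holds the same
# content at increasing levels of detail, from a one-sentence summary
# down toward actual source code. Entirely hypothetical.
from dataclasses import dataclass, field

@dataclass
class FractalNode:
    # levels[0] is the one-sentence summary; later entries add detail.
    levels: list
    children: list = field(default_factory=list)

    def render(self, depth):
        # Clamp to the most detailed level this node actually has.
        return self.levels[min(depth, len(self.levels) - 1)]

accrete = FractalNode(
    levels=[
        "Simulates planetary accretion.",
        "Dust bands coalesce around injected nuclei, which sweep up "
        "mass until the disk is exhausted.",
        "while dust_remains(): nucleus = inject(); accrete(nucleus)",
    ],
)
print(accrete.render(0))   # zoomed all the way out: one sentence
print(accrete.render(2))   # zoomed in: pseudocode
```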
If you consider every programmer that has written a book about programming, that’s probably the market for literate programming. This is a tiny fraction of the people coding for a living. Books do require code that works, and literate programming represents tooling in support of that use case. Outside of authors, I’m not convinced there is much call for it.
Programs are written for humans to execute, and only incidentally for other programmers to read.
Edit 2016-02-11: Read John Shipman’s rebuttal of this article.