In my experience, there are two extreme visions of reality embodied in two philosophies of programming language design. The first is Dijkstra’s view, the view embodied by Milner’s guarantee for ML that a well-typed program cannot “go wrong” at runtime. This perspective treasures correctness and asserts that all wonderful things flow from it: reliability, readability, and so forth. The languages supporting this view of reality are primarily Haskell and ML. I’ll call this the welded-shut view.
The second vision is the biological vision, the notion that the system is like an organism that can change and evolve over time. This view is embodied by Lisp, Smalltalk and Erlang. I’ll call this the organic view.
C is not a language I would consider a hallmark of either trend. But C manages, in fact, to do a great job of permitting reliable software development. Why is that? I argue it has a lot to do with POSIX and the nature of the C/Unix standard library. In short, any function that can fail returns a status code. If it must also produce values, you pass pointers to result locations into the function, and it records the results there.
This is decidedly unsexy. But it’s a great reminder that anything can fail: memory allocation, file opening, file closing, anything really.
In the organic languages, there is a strong preference for exception mechanisms: errors are handled in a separate control flow, which keeps the simple code looking simple. You can still do simple things simply in C, despite the ugliness of passing result arguments into functions, by ignoring the status codes. But then your program won’t abort at the right time; it will probably keep running past a malfunction.
There is a strong argument against exceptions, one I rarely see made, which is that they make life hard if you want to actually implement robust software. Nothing in a function’s signature indicates that it may, instead of returning a value, throw you some horrible exception, let alone which exceptions those might be. In the C/POSIX world, by contrast, the error codes a function may return are very much part of its documented signature.
Java almost went this route with checked exceptions, but it retained unchecked runtime exceptions, so the result is a half measure. In the few places where checked exceptions are used, they feel like a hassle: so many other things that throw exceptions don’t force you to deal with them, so why should this one?
I find it surprising, then, that the welded-shut languages usually also provide an exception mechanism. If I want to make a Haskell program do the right thing when a file cannot be opened, I have to catch a runtime exception (raised with ioError, handled with catch). This, when Haskell already has a perfectly worthy way of representing a computation that may produce one of two results (Either). If one wants to write truly reliable code in Haskell or even Standard ML, one must discover by reading the documentation which exceptions might be thrown in which circumstances, and handle them.
If you go asking around about how one is to achieve reliability in an organic language, you’ll get an earful about automated testing. Over the last weekend I wrote a smallish program in Smalltalk using TDD with the Autotest facility. I have to say, I can see that TDD would obviate much of the need for type checking, and it catches categories of problem that would be hard to encode in a type checker. The nice thing about Haskell’s type checker, though, is that it spares me from writing a whole class of those tests.
I find I gravitate more towards the welded-shut languages, but I find it interesting that neither they nor the organic languages have really found a place in industry. Most of the world uses languages like C and Java, which furnish only the most rudimentary type checking, which do not really allow things defined at compile time to change, but which do allow dynamic loading of third-party code. I wonder why this is. Traditionally, systems like Smalltalk and Lisp have had, alongside their “liveness,” a tacit expectation that if anything goes wrong, the user is right there and can handle the condition or exception manually. I suspect this has something to do with their unpopularity. Erlang doesn’t seem to have this weakness.
Another thing you probably wouldn’t expect: while I’m unaware of a way to dynamically load code into a running Haskell or ML process, there are often few good ways to interface the organic languages with the outside world either. Lisp and Smalltalk are about the only image-based languages around; they try very hard to be not just a language but a whole environment. Smalltalk takes this to the extreme, but both have evolved pretty strong walls to keep the world out. Haskell and ML are definitely not image-based; programs written in these languages interface with the OS like everything else, without erecting walls of their own.
It seems like there are more than a few oddities here. I would expect the organic systems to be better at interfacing with the world, and I would expect ML and Haskell to avoid exception systems. Instead we find that generally isn’t the case. Also interestingly, Erlang achieves high reliability by accepting that processes fail and planning for it. This seems antithetical to the welded-shut approach of proving that nothing can go wrong. And yet highly reliable systems are written in Erlang.
- Would a language like ML or Haskell, but without exception handling, be interesting? Would it improve reliability?
- Why is reliability not correlated with strong, static typing?
- Is there a reason organic systems are not written in organic languages? Or is this an artifact of their rarity of use?