An interesting paper from Alessandro Warth and Alan Kay, introducing what they call worlds, has been mentioned here and there. It reinforces a recent theory of mine that programming languages should give us more visibility into their running environment, something that has been sorely lacking so far.

Side effects are getting back on the radar of language designers and implementers. They were already kind of a pain in the neck, but we learned how to deal with them by applying some engineering best practices. Namely, lots of test cases. And for some, the benefits of introducing assignment clearly outweighed the pain, which is why until recently functional languages weren’t so hot. But Moore’s law has failed us and here comes the era of gazillion coreness. Suddenly, side effects become a Major Pain In Teh Ass.

Why, you may ask? Well, have you ever written a significantly multi-threaded application that required some state sharing? You get locks. Lots of locks. And those sit at the language level, around every single data mutation you have. Your multicore processor starts waiting, its super effective pipelining rendered ineffective.
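
To make that concrete, here is a minimal sketch of the classic pattern (in Haskell, only to stay consistent with the examples below): a shared counter guarded by an MVar acting as a lock, so every increment has to take and release it. The names and numbers are mine, purely for illustration.

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar
import Control.Monad (forM_, replicateM_)

-- a shared counter protected by an MVar acting as a lock:
-- every single increment has to take and release the lock
main :: IO ()
main = do
  counter <- newMVar (0 :: Int)
  forM_ [1 .. 4] $ \_ -> forkIO $                    -- four worker threads
    replicateM_ 10000 $
      modifyMVar_ counter (\n -> return (n + 1))     -- lock, mutate, unlock
  threadDelay 1000000                                -- crude wait for the workers
  readMVar counter >>= print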

One answer to this is purity. Nothing can be changed: it is or it is not. It’s convenient because when there are no side effects, the same piece of code called many times with the same inputs will always produce the same result. As a consequence, where and when it’s called matters less. And you can start parallelizing fairly easily. Just a data point: if you check the programming language shootout (micro-benchmark warning), Haskell is 12th and 21st on single-processor architectures (Intel and AMD). On a quad-core it’s in 4th position.
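
To illustrate, here’s a tiny sketch using Haskell’s parallel package: because the work function below is pure, parMap can evaluate the calls on as many cores as are available without changing the result. The function itself is just a stand-in I made up.

import Control.Parallel.Strategies (parMap, rdeepseq)

-- a pure, side-effect-free function: same input, same output, always
work :: Int -> Int
work n = sum [i * i | i <- [1 .. n]]

-- because work is pure, the runtime is free to evaluate the list
-- elements on several cores without changing the answer
main :: IO ()
main = print (sum (parMap rdeepseq work [100000, 200000, 300000, 400000]))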

The problem with purity is that side effects are just unavoidable. So you have to get by and find a way to model them. In Haskell you get monads, which are slightly unwieldy. In Erlang, there’s message passing and some functions are allowed to mutate state (mostly for I/O).
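
As a rough taste of the monadic style, here’s a sketch of a “mutable” counter modelled with the State monad (from the mtl package): the mutation is threaded explicitly through the types instead of happening behind your back. The tick name is mine, not standard library code.

import Control.Monad.State

-- "mutation" modelled purely: the counter is threaded through the State monad
tick :: State Int Int
tick = do
  n <- get        -- read the current counter
  put (n + 1)     -- "mutate" it
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- prints (2,3)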

So far there hasn’t been much effort in the opposite direction: putting a leash on those damn side effects in imperative programming languages. Worlds are an interesting step in that direction. Warth and Kay propose a language-level construct to control side effects by introducing isolated but related environments.

Worlds can be “sprouted” from the root environment or from another world. Changes applied inside a world are contained to it. A world can then be merged back into its parent. So in a sense it’s similar to software transactional memory, as already implemented in Haskell (again!) or Clojure (and coming to Scala from what I’ve heard). An example from the paper might help make it clearer:

A = thisWorld;             // capture the current (parent) world
r = new Rectangle(4, 6);   // r.h is 6 in A
B = A.sprout();            // B is a child world of A
in B { r.h = 3; }          // visible only inside B
C = A.sprout();            // C is another child of A
in C { r.h = 7; }          // visible only inside C
C.commit();                // merge C's changes back into A

At the end of this execution, our rectangle will have a height of 7. Without the commit, the rectangle in the main environment would still have a height of 6.
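
To see the STM parallel mentioned above, here’s a rough Haskell sketch using the stm package: everything done inside atomically is invisible to other threads until the transaction commits, much like a sprouted world being merged back into its parent. The account transfer is just an example of mine, not something from the paper.

import Control.Concurrent.STM

-- the intermediate states inside 'atomically' are isolated from other
-- threads; either the whole transaction commits or none of it does
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  a <- readTVar from
  writeTVar from (a - amount)
  b <- readTVar to
  writeTVar to (b + amount)

main :: IO ()
main = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 0
  transfer alice bob 30
  readTVarIO alice >>= print  -- 70
  readTVarIO bob   >>= print  -- 30

One visible difference: an STM transaction commits on its own when the atomically block ends, whereas a world keeps its changes around until you explicitly call commit on it.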

Interesting, isn’t it? Although I’m not quite sure how usable that would be in practice. It seems to me that if that facility were available in a given language (they’ve implemented it in JavaScript), most programmers would ignore it entirely when developing libraries.

So worlds are probably not the last word (I know, lame), but I think they’re a step in the right direction. I’m looking forward to more cross-paradigm pollination.