One of the things I have been thinking about lately is database schema evolution. The flip side of database schema changes is the corresponding changes they entail in the client code. Combining the two with live system upgrades can give rise to inconsistencies. Even ignoring databases, loading code into a multithreaded Lisp image can result in undesired behaviour, since loading is non-atomic.
This problem was discussed on comp.lang.lisp several years ago, with the general consensus being that some sort of atomic loading mechanism was needed to address it.
However, atomic loading alone does not eliminate inconsistencies for systems that provide services via asynchronous protocols, such as web applications. Any user in the middle of a workflow consisting of a chain of requests to HTTP resources will be affected if any of those resources change.
Pascal Costanza gave a lightning talk where he presented a solution utilizing context-oriented programming. Each set of updates was integrated into the system as a new layer. Atomicity is provided by the fact that layers can be activated atomically and can be made completely independent of each other. By associating layers with web application sessions, Pascal was able to preserve consistency for users who were in the middle of a session at the time of an update. This is incredibly clever and impressed me very much.
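A minimal sketch of the idea using ContextL-style layers (the web-application helpers render-template and session-uses-v2-p are hypothetical, and the exact ContextL syntax may differ slightly):

```lisp
;; Sketch only: one layer per set of updates, ContextL-style.
;; Helper functions here are made up for illustration.
(deflayer v2-protocol)

(define-layered-function render-checkout-page (session))

;; Base behaviour: what users who started before the update keep seeing.
(define-layered-method render-checkout-page (session)
  (render-template "checkout-v1.html" session))

;; Updated behaviour, confined entirely to the new layer.
(define-layered-method render-checkout-page :in-layer v2-protocol (session)
  (render-template "checkout-v2.html" session))

;; Each request activates the layers recorded in its session, so enabling
;; the new layer for new sessions is the atomic switch; existing sessions
;; keep running the old behaviour until they end.
(defun handle-request (session)
  (if (session-uses-v2-p session)
      (with-active-layers (v2-protocol)
        (render-checkout-page session))
      (render-checkout-page session)))
```

Because layer activation is dynamically scoped to the request, no request ever observes a half-applied update.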
In a fascinating discussion of protocol versioning on Jane Street's OCaml blog a while back, someone advocated essentially the same approach, implemented by running two servers with different versions of the application. Pascal's context-oriented approach not only eliminates the need for the extra hardware (one server for each version of the protocol), but also provides an elegant way to structure the code for multi-version protocols, and to apply other, orthogonal updates that affect users of all protocols provided by the system; such updates would otherwise require backporting patches.
If database access is implemented as an abstraction layer that acts as a protocol, the same technique can be applied to schema changes as well.
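Under that reading, a schema migration becomes just another layer over the access protocol. A hedged sketch in the same ContextL style (the query function, table names, and columns are all made up for illustration):

```lisp
;; Sketch: a schema change expressed as a layer over the database protocol.
;; The query interface shown is illustrative, not a real API.
(deflayer new-schema)

(define-layered-function find-user-email (db user-id))

;; Old schema: email stored directly in the users table.
(define-layered-method find-user-email (db user-id)
  (query db "SELECT email FROM users WHERE id = ?" user-id))

;; New schema: contact details normalized out into their own table.
;; Activating this layer switches clients to the new schema atomically.
(define-layered-method find-user-email :in-layer new-schema (db user-id)
  (query db "SELECT address FROM contacts WHERE user_id = ? AND kind = 'email'"
         user-id))
```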
Very much apropos of this topic, Nick Levine gave a lightning talk on using load and fasls for patch distribution. You can find the code from his talk here.
Shriram Krishnamurthi gave a very nice talk about hooking kids on Scheme by letting disadvantaged inner-city middle-schoolers put lambdas in their lambdas so they can make video games on their cellphones. Purely functional programming in the style of How to Design Programs turns out to be a really great way to teach algebra. Shriram also dislikes the OLPC project and thinks Etoys is weak.
Gerald Sussman gave a talk that essentially argued against formal methods for systems design, and for extensible systems with flexible failure modes.
One of the key reasons why MIT shifted its introductory CS curriculum away from the SICP-style bottom-up abstraction-building approach is that the behaviour of modern system components is too complicated for them to be either well specified or modelled in a reductionist way. A lot of the time engineers have to resort to an empirical approach to discover the actual semantics of a sub-component.
To manage this complexity, Sussman argued for open systems that permit a great degree of extensibility and continue to function under various failure modes. An example he used was an autopilot system: it should continue to work under unexpected conditions such as control surface failures. This echoes some of Richard Gabriel's ideas about mob software. Although Gabriel was at the conference, the term itself never came up; Richard did, however, object to macros during the macro debate, saying they would be a hindrance to constructing systems worked on by thousands of programmers.
Formal specifications cannot avoid fail-fast behaviour of the whole system under unexpected circumstances. A great quote from Sussman was: "proofs considered harmful." What Sussman was arguing for, in his words, was "high-quality abstract specifications, not low-quality well-specified ones."
Sussman presented several ideas about taking inspiration from biological systems, among them biological degeneracy, which can best be summed up as "doing different things to get the same result," and the fact that a relatively small number of genes are responsible for common physiological structures in organisms.
The bio-mimetic argument was echoed again in a lightning talk by someone whose name I unfortunately cannot recall or look up (as the full list of lightning talks has not yet been posted anywhere). One of the things mentioned was parallelism in multi-cellular systems - of course I had to jump in and point out that the best lesson we can take away from parallelism in the real world is a rejection of the concept of global state (more on that in an upcoming post).
Olin Shivers gave essentially the same talk about his system for expressing scoping and iteration in terms of graphs as he did at Danfest. The discussion at the end was quite good; one of the things Shivers emphasized that echoed Sussman's keynote was the fact that unspecified behaviour actually gave you more opportunity for expressive power.
I gave a lightning talk based on a blog post I made previously, arguing that the criterion for whether language X is a Lisp should be how well you can write a metacircular evaluator for that language. No one booed or threw things at me, and some very smart people made insightful comments, so I think it went ok. The question of static typing came up, and after the talk I had a long heated discussion with a couple of people trying to explain what type systems are and what they do. Typed calculi need to be taught as part of an undergraduate computer science curriculum - most people have a very skewed understanding of what type systems are, unfortunately reinforced by huge design mistakes in many popular programming languages.
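To make the criterion concrete, here is the heart of a toy metacircular evaluator in Common Lisp (a sketch handling only variables, quote, if, and lambda application - not the evaluator from the talk):

```lisp
;; Toy metacircular evaluator for a tiny subset of Lisp.
;; Environments are association lists of (symbol . value).
(defun evaluate (expr env)
  (cond ((symbolp expr) (cdr (assoc expr env)))   ; variable lookup
        ((atom expr) expr)                        ; numbers etc. self-evaluate
        ((eq (first expr) 'quote) (second expr))
        ((eq (first expr) 'if)
         (if (evaluate (second expr) env)
             (evaluate (third expr) env)
             (evaluate (fourth expr) env)))
        ((eq (first expr) 'lambda)                ; close over current env
         (list :closure expr env))
        (t (apply-closure (evaluate (first expr) env)
                          (mapcar (lambda (e) (evaluate e env))
                                  (rest expr))))))

(defun apply-closure (closure args)
  ;; closure = (:closure (lambda params body) saved-env)
  (let* ((lam (second closure))
         (params (second lam))
         (body (third lam))
         (saved-env (third closure)))
    (evaluate body (pairlis params args saved-env))))
```

For example, (evaluate '((lambda (x) (if x x 0)) 42) nil) evaluates to 42. The point of the criterion is how little machinery the host language makes you build before you get here.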