Turing100 Event Report: Work of Butler Lampson – Systems

(This is a live-blog of Neeran Karnik's talk on Butler Lampson, as part of the Turing100 Lecture Series happening at Persistent. Since it is being typed during the talk, please forgive the typos and bad structuring of the article. This article is also very incomplete.)

Butler Lampson has made contributions across a wide range of computer science fields. Here is the Turing Award citation:

For contributions to the development of distributed, personal computing environments and the technology for their implementation: workstations, networks, operating systems, programming systems, displays, security and document publishing.

The number of different areas of computer science touched there is breathtaking.

Systems Built

Here is just a sampling of Lampson's work, which gave rise to entire areas of computer hardware and software:

  • The first personal computer:
    • The first personal computer in the world, the Xerox Alto, was conceived in a 1972 memo written by Lampson.
    • Important contributions of the Alto:
      • First “personal” computer
      • First computer that used a desktop metaphor
      • First computer to use a mouse-driven graphical interface
    • Lampson later worked on the follow-up workstation designs, the Dorado and the Wildflower (research prototypes), which later resulted in a successful commercial product (the Star).
  • The Bravo Editor
    • In 1974, Lampson designed the first WYSIWYG editor in the world, which shipped with the Xerox Alto. This work can ultimately be seen as having led to the development of Microsoft Word.
  • The Xerox 9700 and Dover Laser Printers
    • The first laser printer was designed in 1969 at Xerox PARC, and Lampson worked on its electronic design.
  • The SDS 940 Time-sharing system
    • The first general-purpose time-sharing system.

And those were just the systems he built.

What about more fundamental contributions to computer science? Here is a list:

  • The two-phase commit protocol.
    • This is the fundamental building block of all transactional processing in databases that are spread out across machines and/or geographies (a rough sketch of the protocol follows this list).
  • The CAL time-sharing system
  • Programming Languages
    • Mesa and SPL: languages for systems programming. Modern threads developed from this work.
    • Euclid: first programming language to use verification
  • Security:
    • Access matrix model, unifying capabilities and ACLs
    • Theory of principals speaking for other principals
    • Microsoft Palladium
    • Scrubbing disk storage
    • Research on how economic factors affect security
  • Networking
    • Co-inventor of Ethernet!
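
Since the two-phase commit protocol comes up above, here is a minimal, illustrative sketch in Python (not Lampson's original formulation): a coordinator first asks every participant to prepare and vote, and only if all vote yes does it log and announce a commit; otherwise everyone aborts. The Participant class and two_phase_commit function are hypothetical names, and a real implementation would also handle timeouts, participant crashes, and recovery from the coordinator's log.

```python
class Participant:
    """A hypothetical in-memory participant that can tentatively prepare, then commit or abort."""
    def __init__(self, name, will_vote_yes=True):
        self.name = name
        self.will_vote_yes = will_vote_yes
        self.state = "idle"

    def prepare(self, txn):
        # Phase 1: durably stage the change and vote yes/no.
        self.state = "prepared" if self.will_vote_yes else "aborted"
        return self.will_vote_yes

    def commit(self, txn):
        self.state = "committed"

    def abort(self, txn):
        self.state = "aborted"


def two_phase_commit(participants, txn, coordinator_log):
    """Commit txn only if every participant votes yes in the voting phase."""
    votes = [p.prepare(txn) for p in participants]   # phase 1: collect votes
    decision = "commit" if all(votes) else "abort"
    coordinator_log.append((decision, txn))          # log the decision before announcing it
    for p in participants:                           # phase 2: announce the decision
        if decision == "commit":
            p.commit(txn)
        else:
            p.abort(txn)
    return decision == "commit"


# Example: one reluctant participant forces a global abort.
log = []
ok = two_phase_commit([Participant("db1"), Participant("db2", will_vote_yes=False)], "txn-42", log)
print(ok, log)   # False [('abort', 'txn-42')]
```

The key property is that the decision is made and logged in one place, so a participant that prepared and then crashed can ask the coordinator which way the transaction went.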

How is Systems Research different?

Butler Lampson was one of the few great computer scientists who spent a lot of time in the laboratory with actual hardware, getting his hands dirty. This is not the kind of work normally associated with Turing Award winners, but it is the kind of work that has really given us the actual hardware that we use in computing today.

Some thoughts on why systems programming is different, and more difficult:

Systems research involves designing and building large computing systems, complex computing systems, or both: computers (from tablets to supercomputers), networks, storage and other hardware, operating systems, programming languages, and other infrastructure software.

Systems design is different from other parts of computer science (e.g. algorithm design) because its external interface is to the real world (and hence is imprecise, subject to change, and generally underspecified); there are lots of moving parts (i.e. more internal structure and more internal interfaces); module-level design choices have wider implications for the end product; and the measure of success is less clear than in other fields. There is no such thing as an optimal answer, so avoiding terrible designs is more important than finding the best one.

Hints on Computer System Design

This is a paper written by Lampson giving hints on how to build good systems. He calls them hints because in systems work there are no infallible rules, only hints that guide your thinking. Werner Vogels, CTO of Amazon, who oversees some of the most complex and scalable computing infrastructure in the world, is a fan of this work. He finds these hints very useful and says they are even more important today because they have withstood the test of time.

These hints talk about functionality (what you’re doing), speed (are you doing it quickly enough?), and fault-tolerance (will you keep doing it?). In systems, interface design is the most important part: do this well and the other stuff will follow.

  • Hints about Functionality/Interface:

    • Do one thing at a time, and do it well
      • KISS – Keep It Simple, Stupid
      • Don’t generalize – because generalizations are wrong
      • Don’t overload interfaces with too much work
      • “Perfection is reached not when there is no longer anything to add, but when there is no longer anything to take away” – Antoine de Saint-Exupéry
    • An interface should capture the minimum essentials
      • Don’t promise more than you can deliver
    • Cost of an interface should be predictable
    • Better to have basic and fast functionality, rather than slow and complex
      • Clients who need speed are happy with the basic, fast interface
      • Clients who need complexity can build it on top of your basics
    • Don’t hide the power
      • If a lower-layer abstraction is more powerful, a higher-layer abstraction should not hide that power; it should expose it. An abstraction should conceal only undesirable properties.
    • Keep the basic interfaces stable; parts of the system that will be used by many different modules should not change
    • Treat the first system that you’re building as a prototype that you will throw away
    • Divide and conquer
      • Reduce the problem to smaller pieces that you can solve more easily
      • Bite off as much as you can handle right now, and then leave the rest for the next iteration.
  • Hints on Speed

    • Split resources in a fixed way, rather than sharing them dynamically
      • Do not spend too much effort in trying to share resources
      • Overhead of multiplexing usually not worth it
      • Fixed allocation makes things faster and more predictable
    • Cache Answers to Expensive Computations
    • Dynamic Translation
      • Use one representation suited to how a structure is used and another suited to how it is implemented, and dynamically translate between them.
      • “All problems in computer science can be solved by adding a level of indirection”
    • Use hints
      • i.e. things that don’t have to be 100% accurate, but can improve performance if they are correct
      • For example, the routing tables used in internet packet routing can be incorrect or out of date. In most cases they work and give great performance; in the rare cases where they don’t, you can detect the failure and recover.
    • When in doubt, use brute force
      • Hardware is cheap
      • Brute force allows for cheaper faster implementation
      • e.g. chess-playing computers that use brute force have defeated grandmasters, while complex algorithms that try to “understand” chess have fared worse.
    • Compute in the background whenever possible
      • e.g. if a webpage update results in an email being sent, the email sending should happen in the background.
    • Use batch processing when possible
    • Safety First
      • Correctness is more important than speed
      • Being clever and optimal is over-rated
      • A general purpose system cannot really optimize, because it doesn’t really know the use cases
      • Hardware is cheap
  • Hints for Fault Tolerance
    • End-to-End Error Recovery
      • “Error detection and recovery at the application level is necessary. Error detection and recovery at lower layers is not necessary, and should only be done for performance reasons.”
      • Saltzer et al., the classic 1984 end-to-end arguments paper
      • Example: when transferring a file from one computer to another there are lots and lots of steps (the file moves from storage to memory to the network and back) and lots of sub-tasks (transferring individual chunks of the file); each can fail, so only an application-level check over the whole file guarantees the transfer succeeded (see the sketch after this list).
    • Log updates to record the truth about an object’s state
      • Current state of the object in memory is a hint
      • The true state of the object is what can be recovered from the storage and logs
    • Actions should be atomic or restartable
      • Either an action is all-or-nothing, or, if there is a failure partway through, the action should be able to restart from where it left off.
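
As a concrete illustration of the end-to-end hint (a minimal sketch under assumptions, not from the talk or the paper): no matter how many fallible sub-steps a file transfer involves, only a check the application performs over the whole file guarantees it arrived intact. The names sha256_of and transfer_with_end_to_end_check below are hypothetical.

```python
import hashlib
import shutil

def sha256_of(path):
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_with_end_to_end_check(src, dst, copy_fn=shutil.copyfile, retries=3):
    """Copy src to dst using copy_fn (which may involve many fallible sub-steps),
    then verify the result end-to-end and retry on mismatch."""
    for _ in range(retries):
        copy_fn(src, dst)                     # lower layers may do their own checks for speed...
        if sha256_of(src) == sha256_of(dst):  # ...but only this check covers the whole path
            return
    raise IOError("end-to-end verification failed after %d attempts" % retries)
```

Lower layers can still checksum individual chunks, but per the hint above that is a performance optimization (catching errors earlier), not what makes the transfer correct.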

Lampson himself suggests that reading the whole hints paper at once might be tiresome, so it is better to read it in small doses at bedtime. He also points out that he has sometimes ignored these hints himself, and has always regretted it.
