

QCon London 2014 Day 1


Is it really a year since last time? Doesn’t time fly … especially if you have a lot of things going on and not enough time to blog about them. I can’t believe it’s been more than 3 months since last time I blogged. Oh well, more on that later.

Today is the first day of QCon London, and it’s great to be back with the QCon attendee crew. One of the things I love about this conference is mixing with groups and techniques that I wouldn’t ordinarily overlap with, and finding out how problems are solved in different ways and in different languages. There’s always something new to learn.

Life, The Universe and Everything

The conference kicked off with Damian Conway talking about Life, The Universe and Everything. Life in this case is Conway’s Game of Life (no relation): the self-replicating patterns that can evolve from a few simple rules in a cellular automaton. He showed a Perl program for evolving a starting pattern under those rules, and used it to explain a ‘glider’ and a ‘glider gun’, which can be used to create repeating patterns.
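
The rules are simple enough that a generation step fits in a few lines; here is a minimal sketch in Python (not Damian’s Perl), keeping the live cells as a set of coordinates:

    from collections import Counter

    def step(live):
        # One generation of Conway's Game of Life; 'live' is a set of (x, y) cells
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell is alive in the next generation with exactly 3 neighbours,
        # or with 2 neighbours if it is already alive
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(4):
        glider = step(glider)   # after four steps the glider has moved one cell diagonally
    print(sorted(glider))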

He then talked about Turing machines and what they have been implemented in, including Lego, Meccano, and even the Game of Life itself.

Briefly he digressed into using Klingon as a programming language, noting that the word order of Klingon is Indirect Object, Object, Verb (IOV), which matches Reverse Polish Notation (RPN). As a result, he proposed a language using numbers and keywords derived from Klingon in order to run programs written in that language, in the same way he had suggested writing a programming language in Latin last year.
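
The RPN connection is that operands appear before the operator that consumes them, so evaluation needs nothing more than a stack; a minimal sketch of that evaluation model in Python (rather than in Klingon keywords):

    def rpn(tokens):
        # Evaluate Reverse Polish Notation: push operands, pop two when an operator appears
        stack = []
        ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a / b}
        for tok in tokens:
            if tok in ops:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))
        return stack.pop()

    print(rpn('3 4 + 2 *'.split()))   # (3 + 4) * 2 -> 14.0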

He finished by talking about Maxwell’s Demon, a thought experiment in which a door filters particles with a specific energy from one location to another (creating either a vacuum on one side or a temperature imbalance). He then demonstrated in simulation that, as the number of particles increased, when the door opened to let a particle through from right to left, another particle would coincidentally escape from left to right.

For the finale, he demonstrated a program written in Klingon that performed that same simulation. As always, Damian is a master of presentation, as well as of both dead and invented languages.

Continuous Deployment

There was a track dedicated to continuous deployment on the first day, which included speakers such as Daniel Schauenberg and Damon Edwards. In the first talk, Daniel Schauenberg covered continuous deployment at Etsy, including the fact that new joiners learn how to deploy the site to production on their first day.

Etsy measures the performance of applications in real time, using statsd and supergrep along with graphed results, to track the number of failures received, the response times of various pages, and other real-time stats.
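
As a rough illustration of the statsd side (the metric names and page handler below are made up for the example, not Etsy’s), each measurement is just a small plain-text datagram sent over UDP to the statsd daemon:

    import socket, time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def statsd(metric):
        # statsd metrics are plain-text UDP datagrams of the form "name:value|type"
        sock.sendto(metric.encode('ascii'), ('localhost', 8125))   # 8125 is statsd's default port

    def render_page():
        time.sleep(0.05)       # stand-in for real page-rendering work
        return True

    start = time.time()
    ok = render_page()
    statsd('pages.home.render:%d|ms' % ((time.time() - start) * 1000))   # a timing metric
    statsd('pages.home.ok:1|c' if ok else 'pages.home.error:1|c')        # a counter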

Damon Edwards talked more generally about Continuous Delivery and the process by which it can be enabled, including looking for waste (process steps that add little value beyond what automated systems could provide), as well as being able to tie the original business request to the set of changes that are subsequently deployed into production.

By measuring key performance indicators (such as the length of time it takes from an original request to the change appearing in production, as well as the mean time to detect problems and the mean time to repair them), it is possible to see how improvements to the process translate into benefits for the company.
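
As a trivial illustration with made-up timestamps, each of those indicators is just the difference between two recorded events:

    from datetime import datetime

    def hours_between(start, end):
        return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

    # Hypothetical timestamps for one change and one incident
    lead_time = hours_between('2014-03-03T09:00', '2014-03-06T17:00')   # request raised -> live in production
    mttd      = hours_between('2014-03-07T02:00', '2014-03-07T02:20')   # failure occurs -> failure detected
    mttr      = hours_between('2014-03-07T02:20', '2014-03-07T03:05')   # failure detected -> failure repaired
    print(lead_time, mttd, mttr)   # roughly 80, 0.33 and 0.75 hours respectively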

The Raspberry Pi Carputer

Simon Ritter talked about how to use a Raspberry Pi to graph data collected from a modern car’s electronics, using an ELM327 plugged into the car’s on-board OBD diagnostic port and providing the results over either WiFi or Bluetooth. The adapter provides an AT-style command set (PDF), including AT MA to monitor all communication, and AT MT x to monitor communications for a specific device id.
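
A minimal sketch of driving the adapter from Python: the AT and OBD commands are from the ELM327 command set, but the device path, baud rate and use of pyserial are assumptions for the example:

    import serial   # pyserial, assuming the ELM327 is paired as a Bluetooth serial device

    port = serial.Serial('/dev/rfcomm0', 38400, timeout=1)   # device path is an assumption

    def elm(cmd):
        port.write((cmd + '\r').encode('ascii'))
        return port.read_until(b'>').decode('ascii', errors='replace')   # the ELM327 prompts with '>'

    print(elm('ATZ'))    # reset the adapter and print its version string
    print(elm('ATE0'))   # turn command echo off
    print(elm('010C'))   # OBD mode 01, PID 0C: engine RPM (returned as raw hex to be decoded)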

Combining this with an accelerometer (MPU9150) via the Raspberry Pi’s I²C interface, and using the i2c-dev and i2c-bcm2708 kernel modules along with i2cdetect -y 1 to list the addresses of the components on the bus, he was able to read the accelerometer data and display the measured G load on the device itself.
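
A rough sketch of that read path, assuming the python-smbus bindings and the MPU9150 at its default bus address of 0x68 (the register numbers come from the MPU9150 register map; the scaling assumes the default ±2g range):

    import smbus   # python-smbus, packaged alongside i2c-tools

    bus = smbus.SMBus(1)   # bus 1, i.e. /dev/i2c-1, as listed by i2cdetect -y 1
    MPU = 0x68             # default MPU9150 address on the bus

    bus.write_byte_data(MPU, 0x6B, 0)   # PWR_MGMT_1: take the chip out of sleep mode

    def read_word(reg):
        # Each axis is a 16-bit big-endian two's-complement value split across two registers
        value = (bus.read_byte_data(MPU, reg) << 8) | bus.read_byte_data(MPU, reg + 1)
        return value - 65536 if value & 0x8000 else value

    scale = 16384.0        # LSB per g at the default +/-2g full-scale range
    print(read_word(0x3B) / scale,   # X acceleration in g
          read_word(0x3D) / scale,   # Y acceleration in g
          read_word(0x3F) / scale)   # Z acceleration in g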

Simon then demonstrated a standalone Raspberry Pi with a touch-screen HDMI device and a data track, along with a video of the device in action in a real car. Simon plans to write a blog post describing this in the near future.

Continuous Delivery

Dave Farley talked about his experiences at LMAX building a continuous delivery pipeline (based on his book of the same title). In their set-up, each commit would trigger a build and a corresponding set of unit tests, and builds that passed would be pushed into an artifact repository.

A set of monitoring jobs would then note when a new build was available and kick off a set of parallelized automated testing jobs to validate a wider set of rules in a connected environment. After these jobs completed, the artifact in the repository would be tagged with an appropriate flag, and the cycle would repeat with the latest available build.

Once the automated tests were complete, the build was automatically deployed to a user testing environment where humans could work with the installation, checking it visually and performing exploratory testing. Once this was complete, the build would be tagged again and move on to the next stage: automatic deployment to a wider set of hosts with an anonymised copy of the production data, to verify that it didn’t break anything and that automated upgrade scripts correctly handled the existing data.

Finally the build would be eligible for pushing to production, though since LMAX is a high-performance, low-latency system these pushes were done outside of business hours.

The key points were:

  • Once a release candidate is built from a commit, it is not rebuilt again
  • The state of each release candidate is a property of the artifact repository
  • Triggers listening to state changes in the repository drive the process
  • Having everything automated (testing, setting up environments, deploying) reduces the chance of human error
  • Stateful processes can be migrated iff there is more than one host (for redundancy) and it’s possible to dump and re-load the in-memory state
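
A toy sketch of that state-machine view of the pipeline (the stage names and class below are illustrative, not LMAX’s actual tooling): a candidate is published once, and triggers promote it from stage to stage by updating metadata held in the repository:

    # Stages a release candidate moves through; purely illustrative names
    STAGES = ['unit-tested', 'acceptance-tested', 'uat-approved', 'staging-verified', 'production']

    class ArtifactRepository:
        def __init__(self):
            self.state = {}   # build id -> current stage

        def publish(self, build_id):
            # A commit was built and unit tested once; the binary is never rebuilt after this
            self.state[build_id] = STAGES[0]

        def promote(self, build_id):
            # A trigger watching for state changes ran the next stage and records the result
            self.state[build_id] = STAGES[STAGES.index(self.state[build_id]) + 1]

    repo = ArtifactRepository()
    repo.publish('build-1234')
    repo.promote('build-1234')   # automated acceptance tests passed
    print(repo.state)            # {'build-1234': 'acceptance-tested'}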

Exploiting Loopholes in CAP

Michael Nygard talked about exploiting loopholes in the CAP theorem, which states that out of Consistency, Availability and Partition tolerance, you can only have a maximum of two. Typically partition tolerance is the one that is most desirable to keep, which means the choice usually comes down to consistency versus availability.

However, although the theorem is focussed on precise definitions, some of those assumptions can be weakened to provide a workable solution; Michael talked about various ‘loopholes’ (really, just relaxing some constraints) to improve certain aspects of the problem:

  1. Since reading mutable state is the problem, if reading is not required then the problem goes away by default, though this is not particularly useful!
  2. If the content is written but never modified (WORM) then reads will always be consistent.
  3. If the databases are always self-consistent – in other words, all database constraints are met – then reads will always be consistent even if some are out-of-date.
  4. Consistency and availability can both be provided within a subset of the system, if that subset is in a well-defined area (e.g. within a data centre) whose connections to the rest may be severed.
  5. Bounded consistency – the core may be consistent (e.g. read/write data) whilst external mirrors will be eventually consistent (e.g. caches).
  6. Stop using distributed systems (as useful as option 1).
  7. Use a more reliable network that provides guaranteed connectivity.
  8. Use an external reference point (such as a GPS time signal) to ensure that operations can be linearized, as used by Google Spanner.
  9. Redefine availability to mean read but not write, so that reads always succeed but writes may fail in the case of a partition (see the sketch after this list).
  10. Consider observable consistency, in which it is enough that updates occur and that their results appear consistent whenever they are actually observed.
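
As a toy illustration of loophole 9 (availability redefined as read-only during a partition):

    class PartitionedStore:
        # Reads remain available during a partition; writes are refused instead of diverging
        def __init__(self):
            self.data = {}
            self.partitioned = False

        def read(self, key):
            return self.data.get(key)   # always answers, possibly with stale data

        def write(self, key, value):
            if self.partitioned:
                raise RuntimeError('write rejected: no quorum reachable')
            self.data[key] = value

    store = PartitionedStore()
    store.write('balance', 100)
    store.partitioned = True
    print(store.read('balance'))        # 100 - reads still succeed while partitioned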

Michael has talked about this before; you can see his slides from QCon San Francisco as well as an interview on InfoQ.

Tim Bray on the future of the browser

Tim Bray concluded the day with a discussion on the future of the browser – presented, naturally, in a set of browser tabs.

The key point he made is that most client-side developers end up developing applications three times: once for the web, once for iOS and once for Android. For iOS and Android the development toolkits are mature and well built, but the web has lacked equivalent tooling.

Part of the problem is that there are too many libraries, and there are problems with JavaScript and the DOM model. However, URLs and HTTP as an underlying transport model are right; so the future might be there, but perhaps JavaScript needs to be removed from the equation. He showed some Go examples and advertised how server applications could be built in Go, taking advantage of its concurrency.

Wrap up of Day 1

QCon is a great conference, and this year is off to a flying start. One of the main things to get out of the conference is to meet and talk to new people, as well as to explore new technologies and processes. On to day 2!