Tuesday at QCon London kicked off with an architecture overview of what the trends have looked like over the last twenty years. It was a reasonable start to the day, but not as strong as the previous day’s keynotes.
There were some great tracks available; in particular, Architectures You’ve Always Wondered About (hosted by Wes Reisz), Close to the Metal (hosted by Martin Thompson) and Modern CS in the Real World (hosted by <a href="https://twitter.com/adriancolyer">Adrian Colyer</a>) were all ones I was interested in. Of course I could only go to a subset of them, which didn’t help …
The big hitter was probably the Architecting Google Docs talk by Micah Lemonik. Unfortunately this wasn’t recorded, but my tweet stream has a few photos of parts. Apparently Google Docs started off as a web-based Excel spreadsheet server; a small company called 2WebTechnologies had a product called XL2Web that provided a remote viewing platform for spreadsheet content, by interpreting an Excel spreadsheet on the server and then rendering a remote HTML view. Since the leading browser was IE SP1 at the time, the data had to be stored on the server in order to come through. The collaboration was an accident: two people edited a document at the same time and they both saw the results. Fortunately the model they had built had no non-commutative operations, which meant that operations could be replayed in any order. It’s the same model in the Google Docs SDK today; and the fact that the APIs existed helped mobile adoption later on. Scaling is through sharding and consistency trade-offs; for a popular document in read/write mode, users may be switched to a read-only version that may be delayed, thereby trading consistency for availability. It turns out that once the unit of consistency is too large for a single server, it can no longer be kept consistent, which means documents sometimes have to be sharded at a finer granularity, like per chapter of a book. And finally, Google Storage has a lot of data.
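The point about commutative operations is worth unpacking: if every edit commutes with every other, replicas can apply the same operations in whatever order they arrive and still converge. A minimal sketch of that idea (the names here are my own illustration, not anything from Google’s actual model):

```java
import java.util.*;

// Illustrative only: if every operation commutes, replicas that apply the
// same set of operations in different orders converge to the same state.
public class CommutativeReplay {
    // An "operation" tags a line of text with a unique id; applying it inserts
    // the line keyed by id, so apply(a) then apply(b) equals apply(b) then apply(a).
    static SortedMap<Integer, String> apply(SortedMap<Integer, String> doc,
                                            int id, String text) {
        SortedMap<Integer, String> next = new TreeMap<>(doc);
        next.put(id, text);
        return next;
    }

    public static void main(String[] args) {
        SortedMap<Integer, String> empty = new TreeMap<>();
        // Two editors submit operations concurrently; each replica sees a different order.
        SortedMap<Integer, String> replicaA = apply(apply(empty, 1, "hello"), 2, "world");
        SortedMap<Integer, String> replicaB = apply(apply(empty, 2, "world"), 1, "hello");
        System.out.println(replicaA.equals(replicaB)); // prints "true"
    }
}
```

Real collaborative editing is far harder than this (text insertion at a position does not naturally commute, which is why operational-transform machinery exists), but it shows why a commutative model makes replay safe.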
Other stand-out presentations for me included the quest for low latency with concurrent Java, in which Martin Thompson talked about the ways in which Java applications can take latency hits, especially from garbage collection and false sharing. His open-source Aeron messaging system uses an out-of-process media driver that runs on the client and delivers messages outside of the Java process, so that client-side garbage collections don’t introduce unnecessary jitter from GC pauses. Messages are passed by appending into a rotating set of buffers, and shared memory is used to allow the client to write directly into the memory space read by the driver. He also compared the standard queue implementations in Java, and showed that they either generated garbage, took locks, or both, and that the latencies (particularly at the 99th percentile) became significant once contention between multiple producers kicked in. He then introduced the ManyToOneConcurrentArrayQueue and showed benchmarks demonstrating its effectiveness over the other standard queue mechanisms. The whole presentation is well worth watching when it’s available; in the meantime, you can read more on Martin’s blog and look at the code samples on GitHub.
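To give a flavour of why a many-producers/one-consumer queue can avoid both garbage and locks, here is a minimal sketch of the idea; this is my own simplified illustration, not Agrona’s actual ManyToOneConcurrentArrayQueue implementation. Producers race on a single tail counter with CAS, while the lone consumer owns the head, so neither side allocates per message or takes a lock:

```java
import java.util.concurrent.atomic.*;

// Simplified sketch of a many-producers, single-consumer array queue.
// Not the real Agrona code: it omits padding against false sharing,
// memory-ordering refinements, and batch draining.
public class MpscArrayQueue<E> {
    private final AtomicReferenceArray<E> buffer;
    private final int mask;                           // capacity must be a power of two
    private final AtomicLong tail = new AtomicLong(); // slots claimed by producers via CAS
    private long head;                                // only touched by the single consumer

    public MpscArrayQueue(int capacity) {
        buffer = new AtomicReferenceArray<>(capacity);
        mask = capacity - 1;
    }

    public boolean offer(E e) {
        long t;
        do {
            t = tail.get();
            if (t - head > mask) return false;        // full (stale head read only errs safe-side)
        } while (!tail.compareAndSet(t, t + 1));      // claim a slot without a lock
        buffer.set((int) (t & mask), e);              // publish the element
        return true;
    }

    public E poll() {                                 // single consumer only
        int idx = (int) (head & mask);
        E e = buffer.get(idx);
        if (e == null) return null;                   // empty, or claimed but not yet published
        buffer.set(idx, null);
        head++;
        return e;
    }
}
```

The pre-sized array means no per-message allocation (hence no garbage), and the only contended operation is the CAS on the tail, which is exactly where the 99th-percentile latencies in Martin’s benchmarks come from under multi-producer load.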
In the same vein I enjoyed <a href="https://twitter.com/giltene">Gil Tene</a>’s talk on hardware transactional memory. Although HTM was attempted in Haswell, it was disabled due to firmware flaws – but now the latest generation of Broadwell chips have HTM enabled through the TSX instructions. These allow a transaction to begin (with a call-out to a cleanup/retry location if it fails); all cache state modified during the subsequent instructions stays in the cache until the commit, at which point the modified changes are written back, or the state is restored to what it was at the beginning of the transaction and the retry logic is called. This allows for some specific improvements (particularly lock elision, which gives effectively free optimistic locking), and if the locks in question guard a variety of different data structures, the speculation can provide increased concurrent throughput. I recorded an interview with Gil whilst at QCon London, which I hope will be made available on the InfoQ site in the near future.
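TSX itself isn’t exposed to Java code, but the speculate-then-validate shape of lock elision has a software analogue in the JDK’s own StampedLock: read without acquiring anything, then validate that no writer intervened, falling back to a real lock only on conflict. A small sketch of that pattern (the Point example is my own, loosely following the StampedLock javadoc style):

```java
import java.util.concurrent.locks.StampedLock;

// Software analogue of optimistic lock elision: readers speculate without
// taking the lock, validate afterwards, and retry pessimistically on conflict.
public class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = lock.writeLock();
        try { x += dx; y += dy; }
        finally { lock.unlockWrite(stamp); }
    }

    public double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();   // speculative: no lock acquired
        double cx = x, cy = y;                   // racy reads of the fields
        if (!lock.validate(stamp)) {             // a writer intervened: fall back
            stamp = lock.readLock();
            try { cx = x; cy = y; }
            finally { lock.unlockRead(stamp); }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```

The hardware version does this per cache line with no validation code at all, which is why it can elide locks across arbitrary data structures rather than just fields the programmer remembered to re-read.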
Another Googler talk, on distributed systems in practice, trumped the persistent memory changes. However, although the presenter worked on the distributed build design, there were many references to academic papers and relatively little practical information. It mostly seemed to be a case of Papers We Love, in a similar vein to the keynote yesterday. One useful nugget: if you have systems that may fail, building in a lease (renewed by a pull-style heartbeat) and then disconnecting a client whenever heartbeat failures occur is a way to maintain overall stability, especially if the client’s job is resubmittable. And of course, a multi-stage pipeline can be easily scaled if you separate the stages with queues and then have multiple consumers; but that’s distributed scaling 101.
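The lease idea is simple enough to sketch; the class and method names below are my own invention, not anything from the talk. Each client must keep renewing its lease with a heartbeat, and the server evicts any client whose lease has expired, so one hung client can’t degrade the whole system as long as its work can be resubmitted:

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative lease table: clients renew with heartbeat(); the server
// periodically calls evictExpired() and reschedules the evicted clients'
// (resubmittable) work elsewhere.
public class LeaseTable {
    private final long leaseMillis;
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    public LeaseTable(long leaseMillis) { this.leaseMillis = leaseMillis; }

    public void heartbeat(String clientId) {
        lastHeartbeat.put(clientId, System.currentTimeMillis());
    }

    // Returns the clients to disconnect because their lease has lapsed.
    public List<String> evictExpired() {
        long now = System.currentTimeMillis();
        List<String> expired = new ArrayList<>();
        lastHeartbeat.forEach((id, t) -> {
            if (now - t > leaseMillis) expired.add(id);
        });
        expired.forEach(lastHeartbeat::remove);
        return expired;
    }
}
```

The stability property comes from the lease being the only source of truth: the server never waits on a silent client, it just lets the lease lapse and moves on.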
Briefly: I also attended <a href="https://twitter.com/mon_beck">Monica Beckwith</a>’s talk on JIT and GC, in which she went through the many configuration flags and tweaks that can be used to understand the garbage collector in Java, and <a href="https://twitter.com/justincormack">Justin Cormack</a>’s talk on running unikernels. They were both good talks, but the devil is in the details and there’s far too much detail for me to be able to reproduce it here.
There wasn’t an evening keynote, or for that matter, an evening after-hours party. I think that’s the first time I’ve been to a QCon where there hasn’t been such an event. For first-timers, you probably wouldn’t have noticed; but for long-term attendees it was a little bit of a surprise.