
AlBlue’s Blog

Macs, Modularity and More

QCon Day 1


I'm attending QCon London this week (disclaimer: I write for InfoQ, which organises QCon). As usual, the conference organisation is pretty good; the schedule is available on-line and also as an iPhone application, which appears to have been improved for this year's conference. The individual talk descriptions now show you where the room is, but it would be nice if this information were available in the talk overview itself, or indeed as a physical layout of the building. The QEII conference centre is a veritable rat's maze to negotiate.

The start of the conference didn't have a snazzy video like last time; instead, we got a breakdown of the upcoming tracks and their talks. In all honesty, it went by a bit too fast for people to know what to expect, and you have to think that by that point the more organised attendees had probably already made up their minds about who and what they were going to see. Maybe tomorrow these will be cut down to just the presenter's name and abstract, instead of the full bio and summary we got today (which overran somewhat).

Keynote

The opening keynote was Craig Larman, who has coached a number of agile scrum teams over the years for a variety of big-name companies. His talk was on scaling agile development across multiple sites using scrums-of-scrums: for 5-10 scrums, split the work out across teams by having one representative from each team step up to form a central scrum, which divvies up the work between them; the representatives then return to their teams to implement it. To scale further, simply repeat at the next layer up, with an area owner acting as scrum leader who acquires tasks for his set of scrums, which then break the work down into individual units.

(Photo: a slide reading “Stop using anything from Rational for scrum/agile development”, via Twitpic)

Several of the images used were taken from http://www.odd-e.com/resources/images_scaling_thinking_book/, including the scrum-of-scrums and scrum-of-scrum-of-scrums, as well as from the book Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum, if you want to find out more. He did come up with a few notable one-liners (maybe the joke about the death penalty was more amusing for Americans?), including “Branching for customization is the path to hell” and “stop using anything from Rational for scrum/agile development”.

The other nugget of useful information was his recommendation of http://www.magicwhiteboard.co.uk (@magicwhiteboard), a roll of tear-off whiteboard sheets that can be stuck on a wall like a giant Post-it note and wiped clean after use. I'm sure there will be a spike in sales as a result.

Unifying Search

Jason Hunter, of Servlet API fame, talked about MarkLogic Server, which runs the MarkMail search tool (amongst other things). The database is part content repository (it stores the full content, rather than taking an index-only approach) and can be used to perform arbitrary queries on the data contained therein.

Most of the solutions appeared to be based on term lists (“the answer is always termlists”), which allow documents to be indexed and a set of matching documents to be returned for any query. For documents-that-contain queries, a single term per termlist is all that's needed; for most other operations, term pairs are sufficient. When searching for a full phrase (“the quick brown fox” was used a few times as an example), it gets broken down into word pairs, e.g. {the, quick}, {quick, brown}, {brown, fox}, and a search over multiple termlists finds their intersection. For some kinds of query this answer won't be exact, but a brute-force search over the returned subset of documents can determine which ones actually matched.
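
To make the termlist idea concrete, here's a minimal sketch of word-pair indexing and phrase search by intersection; it's my own illustration in plain Java, not MarkLogic's actual data structures, and the class and method names are invented for the example.

```java
import java.util.*;

// A toy in-memory index: each word pair maps to the set of documents containing it.
public class TermListSearch {

    private final Map<String, Set<Integer>> termLists = new HashMap<>();
    private final Map<Integer, String> documents = new HashMap<>();

    public void index(int docId, String text) {
        documents.put(docId, text.toLowerCase());
        String[] words = text.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < words.length; i++) {
            String pair = words[i] + " " + words[i + 1];
            termLists.computeIfAbsent(pair, k -> new HashSet<>()).add(docId);
        }
    }

    // Phrase query: intersect the termlists of each adjacent word pair, then
    // brute-force check the (small) candidate set, since the pairs may occur
    // in a document without being contiguous. Single-word queries are omitted.
    public Set<Integer> phraseSearch(String phrase) {
        String needle = phrase.toLowerCase();
        String[] words = needle.split("\\s+");
        Set<Integer> candidates = null;
        for (int i = 0; i + 1 < words.length; i++) {
            Set<Integer> docs = termLists.getOrDefault(
                    words[i] + " " + words[i + 1], Collections.emptySet());
            if (candidates == null) {
                candidates = new HashSet<>(docs);
            } else {
                candidates.retainAll(docs); // termlist intersection
            }
        }
        if (candidates == null) {
            return Collections.emptySet();
        }
        candidates.removeIf(id -> !documents.get(id).contains(needle));
        return candidates;
    }

    public static void main(String[] args) {
        TermListSearch search = new TermListSearch();
        search.index(1, "the quick brown fox jumps over the lazy dog");
        search.index(2, "the quick red fox and the brown bear");
        System.out.println(search.phraseSearch("quick brown fox")); // prints [1]
    }
}
```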

The only snag is scalar ranges; in this case the information needs to be stored in sorted order, and thus special knowledge of which terms to extract is required. MarkLogic can generate additional indexes (upon user request) to identify such data pairs. A similar index can be created for geo-located data, with a latitude/longitude pair being searched for, followed by an even-odd path test to determine inclusion within a polygon, in the same way that graphics cards perform hit testing.
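
The even-odd test mentioned here is the classic ray-casting point-in-polygon check; the sketch below is my own, working on plain latitude/longitude arrays rather than any MarkLogic index structure.

```java
// Even-odd (ray casting) point-in-polygon test: cast a ray from the point and
// count edge crossings; an odd count means the point is inside the polygon.
public class PointInPolygon {

    public static boolean contains(double lat, double lon,
                                   double[] polyLat, double[] polyLon) {
        boolean inside = false;
        int n = polyLat.length;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            boolean crosses = (polyLat[i] > lat) != (polyLat[j] > lat)
                    && lon < (polyLon[j] - polyLon[i]) * (lat - polyLat[i])
                             / (polyLat[j] - polyLat[i]) + polyLon[i];
            if (crosses) {
                inside = !inside; // toggle on each crossing
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        double[] lats = {0, 0, 10, 10}; // a simple square
        double[] lons = {0, 10, 10, 0};
        System.out.println(contains(5, 5, lats, lons));  // true
        System.out.println(contains(15, 5, lats, lons)); // false
    }
}
```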

Finance in the Information Era

Sean Park gave a highly entertaining account of the development of finance from its inception through to its current day, with some interesting views as to where it will go in the future. Sadly, the video cameraman wasn't available and so the session wasn't recorded; nor are the slides available on-line.

The basic gist was that the industrial revolution started with the age of steam (1750), followed by the age of electricity (1830), the age of electronics (1970, with the Intel 4004) and now the age of cloud computing (2006, with Amazon Web Services). However, the financial services industry (and the legislation that supports it) is still largely rooted in the years before electronics, where paper documentation trails are the norm and physical co-location near physical exchanges (Wall St, the London Stock Exchange) is still important, even with the disappearance of physical stock brokers.

The key problem is reality: in part, reputation is based on assets in the real world, which can be modelled but may not exist in the electronic world. However, as the likes of PayPal and eBay have started to change the way we do business, how long will it be before traditional businesses follow that model as well?

Do's and Don'ts in Android

Lars Hesel (@larshesel) talked about his experience of developing a mobile phone application for a bank. One problem cited was that although Android is the same across platforms, hardware support varies across different devices. In particular, cameras were singled out as challenging: some devices reported resolutions which weren't in fact supported, and different devices had different versions of the Android operating system installed. This meant buying a small number of devices to verify specific hardware compatibility when a general bug couldn't be found.
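
As an illustration of the defensive coding this pushes you towards, the 2011-era android.hardware.Camera API lets you ask a device which picture sizes it claims to support; the selection strategy below is my own sketch, not code from the talk, and (as the speaker noted) the advertised sizes still need verifying on real hardware.

```java
import android.hardware.Camera;

// Pick a picture size the device claims to support, no larger than the
// requested bounds, falling back to the smallest advertised size.
public class CameraCapabilities {

    public static Camera.Size choosePictureSize(Camera camera, int maxWidth, int maxHeight) {
        Camera.Parameters params = camera.getParameters();
        Camera.Size best = null;
        Camera.Size smallest = null;
        for (Camera.Size size : params.getSupportedPictureSizes()) {
            if (smallest == null || size.width * size.height < smallest.width * smallest.height) {
                smallest = size;
            }
            if (size.width <= maxWidth && size.height <= maxHeight
                    && (best == null || size.width * size.height > best.width * best.height)) {
                best = size; // largest advertised size that fits the requested bounds
            }
        }
        // Some devices advertise sizes they can't actually capture, so the
        // chosen size should still be exercised on the physical device.
        return best != null ? best : smallest;
    }
}
```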

Other challenges included the fact that crash logs rarely identify which hardware, or which version of the operating system, was involved. This can be worked around by installing a specific exception handler (much like on iOS) and logging that information, provided it's kept simple.
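
A minimal sketch of that suggestion, assuming a standard Android Application subclass; the class name, log tag and exactly what gets logged are my own choices, since the talk didn't show code.

```java
import android.app.Application;
import android.os.Build;
import android.util.Log;

// Installs a default uncaught-exception handler that records which hardware
// and OS version the crash happened on, then defers to the previous handler.
public class CrashReportingApplication extends Application {

    private static final String TAG = "CrashReporter";

    @Override
    public void onCreate() {
        super.onCreate();
        final Thread.UncaughtExceptionHandler previous =
                Thread.getDefaultUncaughtExceptionHandler();
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread thread, Throwable throwable) {
                // Keep it simple: just the device model and OS version alongside the trace.
                Log.e(TAG, "Crash on " + Build.MANUFACTURER + " " + Build.MODEL
                        + ", Android " + Build.VERSION.RELEASE
                        + " (SDK " + Build.VERSION.SDK_INT + ")", throwable);
                if (previous != null) {
                    previous.uncaughtException(thread, throwable); // let the system finish crashing
                }
            }
        });
    }
}
```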

Drawing performance was directly related to the number of classes and method calls involved – a 4-5 fps difference was cited between using OO and non-OO code in drawing methods.
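
As a rough idea of what "non-OO" drawing code can look like in practice (this is my own sketch, not the speaker's code or figures), the usual Android advice is to keep allocations and call depth out of onDraw(), reusing a pre-allocated Paint and drawing from flat primitive arrays:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

// A custom view whose onDraw() avoids per-frame allocation and deep call
// chains: state is pre-allocated and the data is a flat float array.
public class TickerView extends View {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG); // allocated once, not per frame
    private float[] points = new float[0]; // line segments as (x0, y0, x1, y1) groups

    public TickerView(Context context) {
        super(context);
        paint.setStrokeWidth(2f);
    }

    public void setPoints(float[] points) {
        this.points = points;
        invalidate(); // schedule a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // A single drawLines() call over a primitive array, rather than one
        // small draw object and virtual call per data point.
        canvas.drawLines(points, paint);
    }
}
```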

He summarised by saying that iPhone developers have it easy, since the variation in hardware is much easier to support than the variety of Android devices, particularly when Android devices aren't updated as often.

Mobile App Privacy

Graham Lee (@iamleeg) gave an entertaining talk on mobile app privacy. The takeaway was that we are training users to skip through dialogs. In the first example, installing Angry Birds from the Android Market resulted in a dialog with an OK button (as well as a brief note that the app needed internet access). Presenting information at an inappropriate time is likely to just result in users dismissing it rather than taking any notice of it.

The other problem is that users aren't rational or predictable. Decisions made at one point may no longer be relevant once new information arrives later; so asking up front for a decision that depends on future events is unlikely to be productive.

An example of information being delivered in an appropriate way is the password change dialog, which shows a password strength indicator between the password text field and the OK button. Since the strength is displayed in the dialog itself, the user can easily identify a poor or weak password before pressing OK, but is neither prevented from continuing nor confronted with a subsequent yes/no dialog box.

Graham recommended four books:

Making Apps That Don't Suck

Mike Lee (@bmf) rounded off the day's sessions with a classic presentation on application design. He gave the presentation dressed as a pirate, along with a bit of his background: he has been involved with Omni, Tapulous, Delicious Monster, and later United Lemur and Apple.

His rules for making great things:

  1. Assume we suck
  2. Figure out why we suck
  3. Suck less

To find out why things suck, learn from experience and notice when they do. Then think about what sucks about them and derive rules that you can apply to other experiences.

Trism was cited as a particularly bad example: it had a splash screen advertising the app you already had, followed by a modal dialog, and options which were shown but couldn't be enabled. Even when you wanted to quit, a nag screen asked you to put your name in for the high-score table.

Mike's advice was to ask for feedback, and to go looking for it if you don't find it; and to work in a team of at least two, because your mistakes are invisible to you (otherwise you wouldn't make them in the first place). It works the other way too: try to find others' mistakes, but try not to be a douchebag about it.

“Surprise and delight” is Apple's business model. When implementation details are exposed to the end user, we (as programmers) aren't doing our job properly – as such, the Linux boot screen is the worst advert for Linux ever. Also, never let them see you making it: what you wanted to make and what you actually made are two different things, so don't apologise for the difference.

He concluded with a plea not to build crap (“Stop making crap. The world has enough of that already, and competition is fierce.”). To make great things, first you must refuse to make things that suck. Also, don't compete on price; there's always someone (maybe in a different country) who can do it cheaper. Life is too short to waste time on things that suck.

Innovation at NASA

With the final landing of the Discovery shuttle today, it was fitting that QCon rounded off with what's happening at NASA. Mark Powell gave a good set of highlights about the tools that NASA and JPL use, from Eclipse RCP to schedule work on the Mars rovers to using Kinect to interact with graphic displays and wraparound simulations.

The images from Mars are processed by a massively parallel cloud computing system and tessellated into different resolutions to give a Google Earth-style flyover of the terrain, as well as a geospatial search for nearby objects. Being able to merge all of these into a single application takes a lot of processing power, but the cloud-based approach can be accessed from all over the world.

With QCon Day 1 coming to a close, and QCon Day 2 just around the corner, it's time to sign off. Follow the tweets at @alblue or the hashtags #qcon and/or #qconlondon.