The conference concluded with a series of lightning talks: five-minute talks intended to give a flavour of the subject matter rather than any detail. Perhaps unsurprisingly, all of the talks came in at just under five minutes, with none being cut short. Here's a (brief) overview of each:
- Dan North (Thoughtworks) on Getting Lean
- Compared software development processes to manufacturing processes; the argument being that supply-chain management maps onto ongoing software development, with changes queued into the development process so that they arrive just in time on the user's system.
- Steve Freeman (M3P) and Nat Pryce on jMock
- jMock now uses interfaces and a literate-programming style to describe tests, so an IDE's auto-completion can help fill in the parts of the test (though static imports can be a problem for some IDEs). It uses interfaces rather than classes so that data can be passed around while only exposing the methods applicable to the type in question.
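jMock itself is Java, but the interface-constrained idea translates to other mocking tools; here's a minimal Python sketch using the standard library's `unittest.mock.create_autospec` (the `Mailer` interface is a made-up example), where the mock only exposes the methods the type declares:

```python
from unittest.mock import create_autospec

class Mailer:
    """A hypothetical interface: only send() is part of the contract."""
    def send(self, to: str, body: str) -> None: ...

# The mock is constrained to the interface: calls are signature-checked,
# and attributes the type doesn't declare raise AttributeError.
m = create_autospec(Mailer, instance=True, spec_set=True)
m.send("a@example.com", "hi")
m.send.assert_called_once_with("a@example.com", "hi")
```

Because the mock is built from the type, an IDE can also auto-complete the available methods, which is the point Freeman and Pryce were making about jMock.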
- Christine Newman (Progressive Insurance) on ROI of testing
- Given that testing is a cost (including test-first development and other techniques such as pair programming), learn to justify it with numbers: for example, how many production problems were prevented, risks were mitigated, labour hours were saved, and so on. Being able to put a number on money saved is a good way of justifying money spent.
- Andrin von Rechenberg (Google) on mobile devices
- Ade Oshineye (Thoughtworks) on testing heresies
- There were five separate points: mutation testing is practical (mutating the code and re-running the tests shows whether your tests cover that code); static analysis can work (FindBugs, IntelliJ, PMD); test other people's code (to validate whether a library checks the data it's passed, and to verify that new versions of the library behave consistently); use production for testing (it shows that the system works on real data); and make the users test the system (by analysing the logs to find out what they are doing).
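As a toy illustration of the mutation-testing point (a hand-rolled sketch, not any particular tool), the snippet below parses a made-up function, flips its `+` to a `-`, and checks that the test suite notices the mutant:

```python
import ast

src = """
def price_with_tax(price, rate):
    return price + price * rate
"""

def run_tests(fn):
    # The "test suite": returns True if all assertions pass.
    try:
        assert fn(100, 0.2) == 120
        return True
    except AssertionError:
        return False

class SwapAddToSub(ast.NodeTransformer):
    # A single mutation operator: turn every '+' into '-'.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def load(tree):
    # Compile a (possibly mutated) syntax tree back into a callable.
    code = compile(ast.fix_missing_locations(tree), "<mutant>", "exec")
    ns = {}
    exec(code, ns)
    return ns["price_with_tax"]

original = load(ast.parse(src))
mutant = load(SwapAddToSub().visit(ast.parse(src)))

print(run_tests(original))  # True: the suite passes on the real code
print(run_tests(mutant))    # False: the suite "kills" the mutant, so it covers this code
```

If the mutant had survived (tests still passing), that would suggest the suite doesn't actually exercise the mutated expression.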
- James Richardson (Time4Tea) on automated testing
- A quick discussion of the benefits of automated testing, and how to sell it by encouraging best practice and leading by example.
- James Lyndsay (Workroom Productions) on automated tricks for manual testers
- Not all testing can be automated (e.g. analysing logs), so having tools to process the results (e.g. awk) is important. He also observed that virtual machine technology such as VMware and Xen is a great way of supplying systems that are guaranteed to work, without incompatible software or configuration getting in the way.
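As a rough illustration of the log-processing point (in Python rather than awk, against a made-up log format), the kind of one-liner tally this covers looks like:

```python
import re
from collections import Counter

# A hypothetical application log; real formats will differ.
log = """\
2024-05-01 10:00:01 INFO  request handled in 12ms
2024-05-01 10:00:02 ERROR timeout talking to db
2024-05-01 10:00:03 WARN  retrying
2024-05-01 10:00:04 ERROR timeout talking to db
"""

# Tally the log levels (date, time, then level at the start of each line).
levels = Counter(m.group(1) for m in re.finditer(r"^\S+ \S+ (\w+)", log, re.M))
print(levels["ERROR"])  # 2
```

The equivalent awk would be a one-line `{ print $3 }` piped through `sort | uniq -c`; the point is having some tool at hand, not which one.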
- Jordan Dea-Mattson (IMPAC Medical Systems) on defence in depth
- Any test strategy has multiple single points of failure, so to reduce the risk, integration is needed across all levels (unit, integration, system, functional, in the wild) as well as across tools (defect/feature tracking, test-case management, task tracking).
- Curtis Poe (Perl Foundation) on the Test Anything Protocol
- A language-agnostic testing protocol that can be used to log test results and summarise them afterwards. A pathological aversion to XML means that TAP is a text-oriented protocol, and importantly it can be generated from any language (not only Perl, though it has been most popular with scripting languages). Here's a sample:
    1..4
    ok 1 - Input file opened
    not ok 2 - First line of the input valid
    ok 3 - Read the rest of the file
    not ok 4 - Summarized correctly # TODO Not written yet
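TAP's line-oriented format is simple enough to emit from a few lines of code in any language; here's a sketch (`tap_report` is a made-up helper, not part of any TAP library) that produces the same stream as the sample above:

```python
def tap_report(results):
    """Render (description, ok, directive) tuples as a TAP stream."""
    lines = [f"1..{len(results)}"]  # the plan: how many tests to expect
    for n, (desc, ok, directive) in enumerate(results, start=1):
        status = "ok" if ok else "not ok"
        suffix = f" # {directive}" if directive else ""
        lines.append(f"{status} {n} - {desc}{suffix}")
    return "\n".join(lines)

report = tap_report([
    ("Input file opened", True, None),
    ("First line of the input valid", False, None),
    ("Read the rest of the file", True, None),
    ("Summarized correctly", False, "TODO Not written yet"),
])
print(report)
```

A separate harness (Perl's `prove` is the canonical one) then parses the stream and summarises the passes, failures, and TODOs.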