AlBlue’s Blog

Macs, Modularity and More

Google Go is the slowest compiled language

2009

Google's Go (not to be confused with Go!, the existing programming language; see issue 9) was released to a reasonable amount of press last week. After an initial ho hum, there's basically not a great deal to like about the language. And for all of the arguments about speed, Go is the slowest compiled language of them all, barely better than JavaScript's V8 engine. In fact, its main competitor Erlang is slightly slower, but the rest are interpreted languages (or the JVM in interpreted mode).

Even the language is pretty ugly. Yes, programs are small – but compare that to (say) Scala and you're looking at an equivalent amount of code. Plus, there are all the problems with the dichotomy of pointers and values, which leads to problems like dereferencing nil pointers and panicking at runtime, and one wonders what the point of such a language is (even if it does have range-checked arrays). Surely a decent language eschews arrays anyway, and simply permits a fold, map or filter over a collection of data, rather than making you manage archaic concepts like arrays in the first place?

Then there's the problem of global state, coupled with the design decision not to use exceptions. I guess the developers of Go got burnt by C++ exceptions, but then C++ is such an ugly language that you get what you pay for. In Go's case, however, it may well result in state errors, because functions return an error code rather than throwing an exception. As a result, it falls into all of the traps that such languages provide: the ability to ignore a function's error result, and meaningless error values. One can argue (fairly successfully) that the only reason for providing multi-value returns is to explicitly avoid exception handling.

There isn't even any modularisation. Go is purely a programming language, and despite languages over the past ten years showing that there are much greater benefits to be had with runtime linking than with static linking, all of the code behind the 'fast compilation' claim is being built as a single module. In any sufficiently large system where compilation time is a factor (and one that can't be remediated to some extent by parallel compilation, both inter- and intra-module), the cost can be mitigated by sufficient partitioning of your system: if you have a large system that is taking a long time to compile, you're doing it wrong. If you need to make a change, you should only need to recompile a single module and then have the program re-linked at the next launch. (Yes, for cleanliness one should do a full clean build nightly; but that's more of a build problem than a compile problem.)

Arguably, over and above the poor choice of name, the worst argument is the one about speed. The compiler may be fast, but nothing further optimises the code at runtime. The JVM and other JIT systems take (relatively) simple code and make it faster as it runs; LLVM, which was discounted for being 'too slow', might yet turn out to produce faster compiled programs than the current Go infrastructure.

Will the implementation get better, or faster? You'd like to think so. However, no amount of speed will save a language from basic design errors like the omission of exception handling. And even with the might of Google's name behind it, a polished turd is still a turd.