With the release of Mars.1 and Neon.2 today, I thought it would be good to see what effect (if any) the optimisations that I’ve been working on have had. So I took Neon and Mars for a spin, and compared the outputs.
I’ve been timing the startup of an application with the
org.eclipse.osgi/debug traces; specifically, the
loader trace (which prints a message
whenever a class is being loaded), and
bundleTime (which measures the
execution time of each Bundle-Activator). By running the application, then
hitting the red stop button when the main window is available, it’s possible
to get a measure of how fast the application starts up. I don’t do ‘quit’
from the application (because that would cause more classes to be loaded);
instead I just terminate the JVM at the right point.
However, to automate this, and to allow others to experiment as well, I invested
some time in creating various launch configurations and pushed them up
so that others could replicate my setup. Essentially, there’s an E4 application
with no content, which starts up and then shuts down shortly afterwards. There
are some external tools that can process the resulting traces to extract the
numbers hidden in the output.
That’s the good news. The bad news is that the optimisations so far haven’t had much of an effect; in fact, start-up of Neon is slightly slower than Mars at the moment. Starting an Eclipse SDK instance (the original SDK from http://download.eclipse.org/eclipse/downloads/ as opposed to the EPP, so I can get measurements without automated reporting) led to a start-up time of around 6.1s for Mars and 6.3s for Neon.
In fact, the start-up of the empty project has remained around the same, at
1.8s (after files are in the cache). Strangely enough, if you look at the
list of classes loaded, there are a few more classes being loaded (such as
o.e.core.internal.content.ContentType) which weren’t there before. On the
plus side, the total byte size has dropped slightly (about 4k) and we’re now
down to 21 activators, from 30 before. The reduction came from the
activators’ removal, as well as from a number of other inner classes; for example,
the migration of inner classes to lambdas
reduced the number of separate classes loaded. (Lars, if you want to
take another one, then
WBWRenderer would be a good one … but never mind that now.)
Now, the additional
.content. classes are suspicious, if only because I
have pushed a few changes to that bundle recently. I originally thought that the
removal of static references
was at fault, but it turns out that
the move to declarative services
caused the problem.
How can that happen, I hear you ask? Well, it’s a damn good question, because it took me a while to work that out as well. And the other question – what to do about it – is also an interesting one :)
As a side-note, measuring performance of Eclipse at start-up is a little
challenging. Unlike correctness testing (where you can simply run tests in
the IDE), for performance testing there’s a
variation depending on whether you are running the code from a JAR or from
a project in the workspace (different class loader resolutions are used, there
are different code paths for loading the content depending on how it is
packaged, and there are
different mechanisms for accessing resources and the like). You can test some
deltas before and after, but to test it for real, installing it into a host
workspace and restarting is the minimum requirement.
There are also differences between the builds published by the Eclipse infrastructure and the ones that you build yourself; the published code has been signed, and has often gone through pack200 processing and unprocessing. So the code that ultimately gets delivered is not quite the same as what you can test locally. Other minor differences include the version of the Java runtime and compiler, as well as a whole host of other potential issues. It’s less a science than an exercise in minimising the variations between tests.
Anyway; back to DS. The
changes
included changing the ExtensionRegistry listener to an instance method, and
using DS to assign it (instead of the prior Activator). Why does this single
change cause additional classes to be resolved?
It turns out this is a side-effect of the way DS works. When a referenced service
is set, DS reads the bind method’s name from the component XML file and then uses
getDeclaredMethod() to find it, before invoking it reflectively.
This is not far off what the Activator used to do before. So why does
this do anything different now?
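Concretely, the reflective binding DS performs is roughly equivalent to the following sketch (the component class and method names here are illustrative, not the actual Equinox DS code):

```java
import java.lang.reflect.Method;

// Hypothetical component standing in for the real ContentTypeManager;
// the method name and parameter type are illustrative only.
class ContentComponent {
    Object registry;

    public void setRegistry(Object registry) {
        this.registry = registry;
    }
}

public class DsBindSketch {
    // Roughly what a DS runtime does when binding a reference: look up
    // the bind method named in the component XML by reflection, then
    // invoke it on the component instance with the service object.
    static void bind(Object component, String methodName, Object service)
            throws Exception {
        Method m = component.getClass()
                .getDeclaredMethod(methodName, Object.class);
        m.setAccessible(true);
        m.invoke(component, service);
    }

    public static void main(String[] args) throws Exception {
        ContentComponent c = new ContentComponent();
        bind(c, "setRegistry", new Object());
    }
}
```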
Well, the main reason is the way that Java reflection is implemented.
Although the code calls the single-method variant
getDeclaredMethod(), internally this expands to
getDeclaredMethods() and then filters the results
afterwards. As a result, you’re not just getting your method; you’re getting
all methods. This means that all classes named as exceptions, parameters
or return types in that class will subsequently be loaded, even though
they are completely unnecessary. Although they aren’t actually initialized
(their static blocks aren’t run), the
Class objects need to be defined
so that they can act as placeholder types in methods that we don’t even need. This
then recurses to super-interfaces and super-classes (but not their contents),
which results in the additional classes being loaded.
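A simplified sketch of that lookup path (modelled loosely on the OpenJDK implementation, not the actual JDK code) shows why: the single-method lookup iterates over all declared methods, and reflecting over each method forces its parameter and return types to be resolved.

```java
import java.lang.reflect.Method;
import java.util.Arrays;

public class SearchMethodsSketch {
    // What getDeclaredMethod() effectively does internally: fetch *all*
    // declared methods, then filter by name and parameter types.
    // Reflecting over every method is what drags in the types used by
    // methods we never asked for.
    static Method search(Class<?> clazz, String name, Class<?>... params)
            throws NoSuchMethodException {
        for (Method m : clazz.getDeclaredMethods()) {
            if (m.getName().equals(name)
                    && Arrays.equals(m.getParameterTypes(), params)) {
                return m;
            }
        }
        throw new NoSuchMethodException(name);
    }

    public static void main(String[] args) throws Exception {
        // Finds String.length() by scanning all of String's methods.
        System.out.println(search(String.class, "length").getName());
    }
}
```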
So we traded off the loading of the single Activator class of 5k for four classes which are 54k in size. Oops. Not a sensible trade-off.
The advantage that DS gives us is that it’s not acquired until it’s first used. This should be a boon because it means that we can defer the cost of loading these more expensive classes until we really need it. And do we need a content type manager for an empty window?
Aargh. It’s another Activator. This time, its call to
open() triggers the
initialization of the very service that we’re trying to be lazy in loading.
As a side note, this is why we need a
Suppliers factory. Instead of having
all these buggy references to eagerly activated ServiceTrackers, we should
be delegating to a single implementation that would Do The Right Thing,
including deferring opening the tracker until it’s been accessed for the
first time. (It would also help Tom, who would in future be able to replace this
implementation with other non-OSGi implementations, such as ServiceLoader or
whatever might come out of Jigsaw.)
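As a sketch of what such a Suppliers factory might look like (hypothetical; not an existing Eclipse API), the expensive acquisition – such as opening a ServiceTracker – would live inside the supplier and only run on first access:

```java
import java.util.function.Supplier;

// Hypothetical lazy supplier: defers an expensive acquisition (e.g.
// opening a ServiceTracker) until get() is first called, then caches
// the result. Thread-safe via double-checked locking on a volatile.
public class Lazy<T> implements Supplier<T> {
    private final Supplier<T> acquire;
    private volatile T value;

    public Lazy(Supplier<T> acquire) {
        this.acquire = acquire;
    }

    public static <T> Supplier<T> of(Supplier<T> acquire) {
        return new Lazy<>(acquire);
    }

    @Override
    public T get() {
        T result = value;
        if (result == null) {
            synchronized (this) {
                result = value;
                if (result == null) {
                    // Only now do we pay the acquisition cost.
                    value = result = acquire.get();
                }
            }
        }
        return result;
    }
}
```

The point of the indirection is that the classes referenced by the acquisition are only loaded on first use, not at bundle activation.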
The alternative of course is to replace it with a bunch of DS components;
either of these solutions would work. If we can defer the accessing of the
IContentTypeManager service, then none of the classes would be loaded.
Unfortunately, there’s no way of fixing the JDK. The new DS specification
permits injecting services into fields (though I’m not sure if this is exposed
in PDE’s DS implementation yet). This wouldn’t help in this particular
instance, because the setting of the extension registry needs side-effects,
which aren’t visible if only a field is set. In addition, the corresponding
field lookup will perform the same resolution; and there are likely to be more
fields defined with implementation classes than methods (which
should generally take interfaces).
We could split the implementation to reduce the number of declared methods;
for example, having a
ContentTypeManagerDS subclass that exposes only the DS-required
methods may reduce the number of classes that need to be resolved.
Another alternative is to have a delegate which implements the interface and
forwards the implementation methods; but in this case, IContentTypeManager
is such a large interface (with several super-interfaces and nested types)
that this doesn’t buy much. Or we could just revert the commits.
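The subclass split might look something like the following (hypothetical names; the real ContentTypeManager stands in here as a stub). Because getDeclaredMethods() only returns methods declared on the class itself, reflecting over the DS-facing subclass resolves just its one signature:

```java
// Stub standing in for the real implementation class, which declares
// many methods referencing many other types.
class ContentTypeManager {
    protected Object registry;
    // ... many other methods, referencing many other types, elided ...
}

// DS binds against this subclass; it declares only the bind method, so
// getDeclaredMethods() here resolves only this one signature.
public class ContentTypeManagerDS extends ContentTypeManager {
    public void setRegistry(Object registry) {
        this.registry = registry;
    }
}
```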
The good news is that this doesn’t particularly affect the SDK; the content
type manager is used there anyway, and so these classes are loaded regardless. In the real
test – how long it takes to spool up the SDK – these classes
are likely to be loaded in any case. It’s only in the startup of a simple
E4 application that you’re likely to notice the difference; and this has a
potential solution in addressing the
PlatformActivator and friends.
Update 2015-10-06: I have been asked to stop submitting micro optimisations to Eclipse.