Get OpenJDK (Zero) and Cacao with OpenJDK's class library running on BugLabs hardware. Furthermore, start getting the Zero port's Shark JIT compiler running and list the open issues.
Zero + OpenJDK class library
In this variant the HotSpot virtual machine is implemented as a C++-based interpreter. This combination works reliably and supports all of HotSpot's features, such as JVMTI.
Since Zero has no JIT compiler, runtime performance is poor: a typical Swing program takes half a minute to show up.
Cacao + OpenJDK class library
Combining the Cacao virtual machine with the OpenJDK class library results in a faster VM (thanks to Cacao's ARM JIT) at the cost of fewer features; JVMTI, for example, is unsupported in Cacao.
Shark
The Shark JIT compiler, which can be enabled as part of the Zero port of HotSpot, uses LLVM's JIT compiler to compile Java methods and falls back to the C++-based interpreter for methods that have not been compiled yet.
Adding OpenJDK build recipes to BugLabs' Poky-based build system made it necessary to add a number of helper libraries and programs. These are:
In the case of giflib it was also necessary to add the library to OpenEmbedded itself, since it was not available there at that time.
The support for LLVM in OpenEmbedded has been improved vastly, which in turn required work on its CMake support (i.e. proper out-of-tree builds).
Highly customizable build recipes
The cross-compilation build process of OpenJDK is quite complex and, as of now, still requires certain patches. The build recipes for the three OpenJDK variants (Cacao, Zero and Shark) have been written to allow maximum reuse as well as customizability (e.g. applying further patches). This was done to minimize the effort of updating to new IcedTea releases.
Untangle Cacao from OpenJDK
The build recipes for the Cacao VM and OpenJDK, which together form the binary package openjdk-6-cacao-jre, allow updating the virtual machine independently of the class library. When users opt to install the JRE package, the VM package is installed automatically.
Open issues with Shark
The Shark JIT compiler depends on the Low-Level Virtual Machine (LLVM) project, which provides the necessary JIT backend for the ARM architecture. At this time this backend is not as complete as those for other architectures such as x86 and PowerPC.
- Lack of atomic operations support in LLVM
The foremost issue is the missing support for atomic operations. Although these operations are available from, e.g., GCC's helper libraries, they cannot be used directly from LLVM. One would need to extend LLVM so that it can generate the machine code for a function that, for example, atomically swaps two integers. LLVM exposes atomic operations in its intermediate representation (which Shark uses) as so-called intrinsic functions. A workaround for the missing support for these intrinsics is to replace their use in Shark with calls to existing C functions. By doing so it became possible to run simple HelloWorld applications with Shark.
- LLVM regressions
LLVM contains an extensive testsuite. Running it for the ARM architecture with the current SVN snapshot reveals 13 regressions. It is expected that these need to be fixed before Shark can run non-trivial applications.
- Compilation performance
Even though Shark in its current form can only run simple Java applications, there is a noticeable delay before the actual code is executed. Using a profiler such as OProfile, it should be possible to find out whether the delay is caused by Shark or by LLVM itself.
- Lack of architecture specific optimizations
LLVM provides means to detect the CPU type (i.e. the supported ARM ISA) at runtime. This in turn is used to generate more efficient machine code sequences where possible. As of now LLVM only generates ARMv6 instructions in a few selected cases.