
Traps and pitfalls in modernizing an enterprise application using "supersonic subatomic Java"


This is a post about the first steps and first pitfalls in modernizing an old enterprise application using Quarkus, the "supersonic subatomic Java" framework, as Red Hat positions it.


Initial setup


At the end of 2019 I was invited to join a project in our company where an old monolithic application was to be split into microservices. The basic reasoning behind this decision was that the framework used in the application is nearing its end of life. The application would have to be rewritten in any case, so why not split it into microservices along the way?


For the last 10 years I have been working mostly with Java, and we had specialists with Java knowledge on the project, so we decided to give Java-based frameworks a try for the back-end functionality.
OK, let's use Spring Cloud for that purpose, was our first thought. But then we had a look at Quarkus, which had been released at the end of 2019. We decided to give it a try, keeping in mind the building of native applications with GraalVM.


From our perspective, native applications could give us the following benefits:


  • shorter container start-up time
  • reduced resource consumption of the container and the application

We were also aware of the possible drawbacks of this approach:


  • no experience with the Quarkus framework in our team
  • significantly less community feedback available, since the framework is very young

First success with a "hello world" application


To start with something, we decided to write a prototype of a very simple CRUD REST microservice. We took the hibernate-panache-quickstart starter, adapted it to our simple entity, and ported it from Maven to Gradle. So we just followed the guides from the official documentation, and the first toy application was ready.


The first run of the quarkusDev Gradle task was very impressive.



First of all, the application started really fast. Second, it activated the so-called live coding mode: you can change your source code, Quarkus tracks the changes and triggers recompilation (and restart) of the application automatically. The third nice thing was the swagger-ui functionality available in the development profile.
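For reference, dev mode is started with a single Gradle task (the swagger-ui path below assumes the quarkus-smallrye-openapi extension and the default Quarkus 1.x configuration):

# start live coding (dev) mode; Quarkus recompiles and restarts on source changes
./gradlew quarkusDev

# with the smallrye-openapi extension, swagger-ui is then served at
# http://localhost:8080/swagger-ui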


First problems with native compilation


Let's go native, we thought, and opened the native compilation guide. For that, GraalVM has to be installed. We took the corresponding Docker image for GraalVM 19.2.1, installed Gradle, put our sources inside, and started the build.
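The containerized build looked roughly like this; treat it as a sketch rather than a ready recipe (the image tag matches the GraalVM version mentioned above, paths are illustrative):

# start a GraalVM 19.2.1 container with the project mounted
docker run -it --rm -v "$PWD":/project -w /project oracle/graalvm-ce:19.2.1 bash

# inside the container: native-image is shipped as a separate GraalVM component
gu install native-image

# run the native build task of the quarkus-gradle-plugin
./gradlew buildNative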


The first surprise we discovered was the necessity to downgrade the target Java version of our application from 11 to 8 (as of Quarkus 1.1.0.Final). Since we hadn't used and didn't plan to use any Java-11-specific features, this change was rather harmless for us. The second surprise was the rather slow build process: native compilation took about 10 times longer than the JVM one. The third surprise was the rather substantial memory consumption of native compilation (we were forced to increase the memory available to the Docker container to 8 GB).
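The downgrade itself is a one-liner in the Gradle build script (Groovy DSL):

// target Java 8 bytecode, as required for native compilation with Quarkus 1.1.0.Final
sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8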


Finally, the native application was built. We started it for test purposes from the same container in which we had built it, and the start-up time was amazingly short.


A comparison of the CPU and memory consumption for the container in a Kubernetes cluster can be found below.


Problems with measuring test coverage


We ran into the second pitfall when we tried to build a corresponding pipeline for our toy project and measure unit test coverage. Rather quickly we discovered that the unit test coverage report from the JaCoCo Gradle plugin doesn't reflect the real picture. A closer look into the warning messages in the console and the corresponding Quarkus guide revealed the source of our problems. According to the guide, we needed to get rid of JaCoCo online instrumentation and use offline instrumentation instead. While this seems to be a trivial task for Maven, the Gradle JaCoCo plugin doesn't support this functionality (see the Stack Overflow discussion). Luckily, the same discussion and the guides outline how this problem can be solved with the corresponding Ant tasks; for a detailed description of those tasks, refer to the JaCoCo documentation. A sketch of this approach is shown below.
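A minimal sketch of offline instrumentation wired into Gradle via the JaCoCo Ant tasks (task names and paths are illustrative; adjust them to your project layout):

configurations {
    jacocoAnt   // holds the JaCoCo Ant tasks
}

dependencies {
    jacocoAnt "org.jacoco:org.jacoco.ant:0.8.5"
    // the agent runtime must be on the test classpath for offline-instrumented classes
    testRuntimeOnly "org.jacoco:org.jacoco.agent:0.8.5:runtime"
}

// replace compiled classes with offline-instrumented copies before the tests run
task instrumentClasses(dependsOn: compileJava) {
    doLast {
        ant.taskdef(name: 'instrument',
                    classname: 'org.jacoco.ant.InstrumentTask',
                    classpath: configurations.jacocoAnt.asPath)
        ant.instrument(destdir: "$buildDir/instrumented-classes") {
            fileset(dir: "$buildDir/classes/java/main")
        }
    }
}

test {
    dependsOn instrumentClasses
    // instrumented classes first on the classpath; tell the agent where to write the exec file,
    // from which the report is then generated with the matching JaCoCo Ant ReportTask
    classpath = files("$buildDir/instrumented-classes") + classpath
    systemProperty 'jacoco-agent.destfile', "$buildDir/jacoco/test.exec"
}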


Embedding the native image into a Docker container


The next step after successfully compiling our code to a native application was embedding it into a corresponding Docker image. After some investigation, we decided to use the UBI image from Red Hat (ubi8 in the minimal configuration). This step went surprisingly smoothly. Moreover, we found an additional benefit of shipping native applications in Docker images without a JRE: beyond the reduced size of such images, they also contain significantly fewer vulnerabilities.
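The resulting Dockerfile is essentially the Dockerfile.native shipped with the Quarkus quickstarts, only adapted to the Gradle output directory:

FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
# the *-runner binary is produced by the native build
COPY build/*-runner /work/application
RUN chmod 775 /work
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]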


Below you can find security scans from our Quay repository for the JVM application image (based on openjdk:11-jre-slim) and the native application image (based on ubi8-minimal).



The hard path from a hello-world project to a real-world microservice


First attempt and lost hope


Encouraged by the benefits of native images, we started building our first CRUD REST microservice that might be useful inside the project. One of the prerequisites for the migration of the monolithic application to a bunch of microservices was that, as a first step, we wanted to reuse the existing Oracle database instance for the microservices. We planned to introduce a separate Oracle schema for each microservice, so that they stay decoupled. The final step (when the whole application is rewritten) would then be to replace the Oracle database with an appropriate database for each microservice.
In the meantime, a new version of Quarkus had been released which enabled Java 11 support, so for the real-world microservice we set Java 11 as the default target.


For a library to be compiled natively with Quarkus, it needs to be wrapped in a Quarkus extension. Rather quickly we found that no Quarkus extension had been implemented for the Oracle JDBC driver. Although it was planned to introduce one, the work is currently on ice due to license concerns.
We tried to enable native compilation without introducing a Quarkus extension, as described in this comment. Unfortunately, even with these hints, the Gradle native compilation task just failed without any useful error message. There was really no clue about the native compilation errors in our application, even with the --debug and --stacktrace options for the Gradle task.
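For completeness, this is the most verbose invocation we tried, which still yielded nothing useful:

# maximum verbosity for the native build task of the quarkus-gradle-plugin
./gradlew buildNative --debug --stacktrace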


Useful logs of the native-image tool


Rather frustrated, we put this activity on ice. In the meantime we set up a pipeline for building a JVM-based Quarkus application in our CI/CD tool.


After the break we came up with some further ideas on how to proceed. Taking a closer look at the Gradle debug output for native compilation, we noticed the command string used to invoke the native-image tool of GraalVM.


Below you can find an extract of that output:


/opt/graalvm/bin/native-image -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-DCoordinatorEnvironmentBean.transactionStatusManagerEnable=false -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 -J-Duser.language=en -J-Dfile.encoding=ANSI_X3.4-1968 --enable-all-security-services --allow-incomplete-classpath -H:ReflectionConfigurationFiles=... --initialize-at-run-time=... -O0 --verbose -H:+TraceClassInitialization -H:IncludeResources=... -H:+ReportExceptionStackTraces -H:-SpawnIsolates -H:EnableURLProtocols=http -H:+ReportUnsupportedElementsAtRuntime -H:IncludeResourceBundles=... -H:ResourceConfigurationFiles=... --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -H:+JNI -jar <jar_file_of_application> -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:-IncludeAllTimeZones --enable-all-security-services -H:-SpawnIsolates --no-server -H:-UseServiceLoaderFeature -H:+StackTrace <native_application_name>

So we took this string and tried to use the native-image tool directly, without wrapping it in the Gradle task. As a result, we received an error message from the tool saying that the InitialCollectionPolicy value is wrong. Googling for similar problems showed that this is related to the way the shell parses command line arguments, as described in this post. After escaping the $ signs, as sketched below, we moved a little bit further.
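When the command is pasted into a shell, the unescaped $BySpaceAndTime part is expanded as an (empty) shell variable, so the inner-class name has to be escaped or single-quoted:

# without escaping, the shell swallows "$BySpaceAndTime" and native-image sees a truncated class name
native-image ... \
  -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy\$BySpaceAndTime \
  ...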


The next task was to get rid of compilation errors related to classes that were initialized at build time but should only be initialized at run time. Namely, you have to list all such classes of your application, or of the libraries not suited for native compilation, using the --initialize-at-run-time option, as described here. This turned out to be a rather time-consuming task, but finally we managed it and native compilation finished without errors. After that we put our configuration for the native-image tool into the buildNative Gradle task:


buildNative {
    additionalBuildArgs = [
     ...
    ]
}
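Just for illustration, such arguments might look like the following (the class names here are hypothetical placeholders, not our actual list):

buildNative {
    additionalBuildArgs = [
        // hypothetical examples of classes that must not be initialized at build time
        '--initialize-at-run-time=oracle.jdbc.driver.OracleDriver,oracle.net.nt.TcpNTAdapter',
        '-H:+ReportExceptionStackTraces'
    ]
}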

Hope is lost again


Encouraged by the progress, we tried to start our native application and… got a runtime error on the attempt to establish a connection to the Oracle database, with only an error code and without the appropriate error message. Only after specifying that the Oracle JDBC driver resource bundle should be included into the native application, via the -H:IncludeResourceBundles option, did we get the error message text:



Tracing agent joins the game


After some brainstorming, we decided to apply another approach to the problem, namely the one described in the GraalVM tracing agent guide. So we executed our Java application with the tracing agent enabled and ran the majority of the possible usage scenarios. As a result, we got reflect-config.json, proxy-config.json and resource-config.json files, which contained a rather large number of classes to be considered.
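Running the agent boils down to a single JVM flag (the agent ships with GraalVM; paths and the jar name are illustrative):

# record reflection, resource and proxy usage while exercising the application
$GRAALVM_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/native-config \
  -jar build/my-service-runner.jar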


The next step was to perform native compilation with the JSON configuration gathered by the tracing agent, passed via -H:ReflectionConfigurationFiles, -H:ResourceConfigurationFiles and -H:DynamicProxyConfigurationFiles. Compilation ran successfully and we tried to start the native application again. Surprisingly, this time everything ran fine, and we were able to get results back from our application via the REST endpoint.
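Wired into the build, this might look like the following (the file paths are illustrative and match the agent output directory sketched above):

buildNative {
    additionalBuildArgs = [
        '-H:ReflectionConfigurationFiles=src/main/resources/native-config/reflect-config.json',
        '-H:ResourceConfigurationFiles=src/main/resources/native-config/resource-config.json',
        '-H:DynamicProxyConfigurationFiles=src/main/resources/native-config/proxy-config.json'
    ]
}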


After several minutes of triumph we realized that we now had to strip the configuration files from the tracing agent down to the minimal configuration that doesn't raise runtime errors in the application. We accomplished this routine task, ending up with significantly reduced configuration files; in our case it also turned out that DynamicProxyConfigurationFiles could be skipped completely.


Comparison of JVM-based and native applications


Having got the native application working, we deployed both variants to our managed Kubernetes cluster and compared start-up time and resource utilization.


The picture below shows CPU (black line) and RAM usage for the applications during a GET operation:



From the picture one can see that RAM usage is approximately 10 times lower for the native image than for the JVM-based one, while CPU usage is approximately the same.


The graphs are taken from pods with the following resource limits:
JVM application:


            limits:
               memory: "1024Mi"
               cpu: 1500m
            requests:
               memory: "400Mi"
               cpu: 100m

Native application:


            limits:
               memory: "256Mi"
               cpu: 100m
            requests:
               memory: "180Mi"
               cpu: 50m

If we compare the application start-up times, we get the following picture:
JVM application:



Native application:



So, the start-up of the native application takes about 4 times less time than that of the JVM-based one. Note that both applications run under the significant resource restrictions shown above.


Lessons learned


Looking back at all the effort we spent to get a native application running in our container orchestration infrastructure, we think it was worth it.


From our point of view, using the Quarkus framework plus a native application offers the following benefits compared to Spring JVM-based applications:


  • Reduced resource consumption
  • Nice development features (live coding, swagger-ui in dev mode)
  • Potentially fewer vulnerabilities in the Docker images hosting the application
  • Build-up of know-how regarding the Quarkus framework and GraalVM features

And the following drawbacks:


  • While a simple hello-world application may work well, the adoption of real-world applications might require significant effort
  • A significantly smaller amount of documentation available, compared to e.g. the Spring framework