This article is part of a series based on the book ‘Quarkus for Spring Developers’.
Java in the Age of Dinosaurs: The J2EE Era and Application Servers
In 2025, Java turned 30. That it remains at the top of the industry is thanks to the solidity of its ecosystem, but the landscape into which it was born was drastically different from today's.
When Java emerged, concepts like cloud, containers, or Kubernetes didn’t even exist. Applications didn’t run alone; they were “hosted” inside massive Application Servers. These servers were giants that handled everything: transactions, caching, security, and connection pooling for multiple applications simultaneously.
The heart of this era was the Java 2 Platform, Enterprise Edition (J2EE) specification. In practice, this meant dealing with the infamous EJBs (Enterprise JavaBeans). For a simple feature to "exist" in JBoss or WebLogic, developers were forced to implement complex interfaces and write massive XML deployment descriptors just to get things running.
The result was an extremely heavy environment:
- Boot Time: Servers took many minutes to start.
- Resource Consumption: They required massive amounts of memory before the first line of business logic even executed.
Over time, lighter alternatives like Apache Tomcat emerged. They didn’t attempt to implement the full J2EE spec, focusing only on web essentials (like Servlets), which reduced resource consumption and improved response times, setting the stage for what was to come.
The Rise of Spring: Simplicity and POJOs
The true villain of the J2EE era wasn’t just the server weight—it was the infinite bureaucracy. This complexity sparked a resistance movement led by Rod Johnson, culminating in the birth of the Spring Framework.
Spring’s proposal was revolutionary for its time: Simplicity. Instead of complex EJB interfaces, Spring introduced POJOs (Plain Old Java Objects)—simple Java classes without heavy framework dependencies that could be easily tested outside an application server.
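To make the contrast concrete, here is a minimal sketch of a POJO (the class and its logic are hypothetical, purely for illustration): a plain Java class with no EJB interfaces to implement, which you can instantiate and unit-test without any application server running.

public class OrderService {

    private final double taxRate;

    public OrderService(double taxRate) {
        this.taxRate = taxRate;
    }

    // Pure business logic: no framework interfaces, no server, no XML
    public double totalWithTax(double amount) {
        return amount * (1 + taxRate);
    }

    public static void main(String[] args) {
        // Testable with a plain "new"; no container bootstrap required
        OrderService service = new OrderService(0.25);
        System.out.println(service.totalWithTax(100.0)); // 125.0
    }
}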
The End of "XML Hell" and the Start of Dependency Injection
Spring brought Dependency Injection (DI) to center stage. Instead of components implementing server contracts and being wired together through sprawling deployment descriptors, classes simply declare what they depend on, and the Spring container connects them.
Over time, Spring evolved into Spring Boot, which practically eliminated the need for XML configuration files, replacing them with annotations and “convention over configuration.” Developers could finally focus on @RestController or @Service, while the framework handled the rest.
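As a brief illustrative sketch (the class names are made up, and it assumes the spring-boot-starter-web dependency), this is roughly what that style looks like: the controller receives its collaborator through constructor injection, with no XML in sight.

import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private final GreetingService service;

    // Spring finds the @Service bean and injects it here; no XML wiring needed
    public GreetingController(GreetingService service) {
        this.service = service;
    }

    @GetMapping("/hello")
    public String hello() {
        return service.greet();
    }
}

@Service
class GreetingService {
    String greet() {
        return "Hello, Spring!";
    }
}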
Breaking the Monolith: The Emergence of Microservices
Eventually, companies realized that massive applications (monoliths) were hard to scale and maintain. If you had 10 teams working on the same project, any small change required testing and redeploying the entire system. It was a slow and risky process.
The solution was Microservices Architecture: breaking the system into small, independent pieces that communicate with each other.
The Role of Spring Boot (2014)
In this scenario, Spring Boot became the great enabler. It popularized the "Fat JAR" concept: a single file that came with everything the application needed to run, including an embedded server (like Tomcat).
You no longer needed to install a giant application server and configure it manually; you just needed Java installed to run the file. This drastically increased productivity, as what ran on the developer’s machine was exactly what went into production.
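For illustration, here is a minimal Spring Boot entry point (assuming spring-boot-starter-web on the classpath). Running this single main method boots the embedded Tomcat; there is no external server to install or configure.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        // Boots the embedded server packaged inside the Fat JAR
        SpringApplication.run(Application.class, args);
    }
}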
The Cloud Changed the Rules: Docker, Kubernetes, and Memory Costs
The reality check came when these microservices hit the Cloud and Containers (Docker and Kubernetes). In the cloud, the billing model changed: you pay for what you consume in terms of memory and CPU.
Here, traditional Java began to struggle compared to newer languages like Go or Node.js, for three main reasons:
- Memory Consumption: In a container, you want the application to be as light as possible. Java was built to manage large memory heaps and often consumes a lot of RAM just to "sit idle."
- Immutability and Speed: Containers are disposable. If traffic spikes, Kubernetes spins up new instances. If your Java microservice takes 20 seconds to start, you can’t react to traffic peaks fast enough.
- The Cost of Runtime: Traditional Java is highly dynamic. Frameworks use Reflection to scan the entire application every time it starts. In the cloud, repeating this heavy lifting on every boot is a waste of money.
The Bottleneck of “Traditional” Java: Reflection, JIT, and Runtime Costs
To understand why Java suffers in the cloud, we need to look under the hood. Traditional Java was designed to be highly dynamic, and that comes with a price.
What is Reflection (Dynamic Introspection)?
You know that @Autowired or @Entity you use every day? When you hit “Play,” traditional Java has no idea what they mean. The framework must use Reflection to scan your classes, read those annotations, and only then decide how to wire the objects. This “scanning” consumes CPU and time during every application startup.
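To make this tangible, here is a deliberately simplified sketch of that startup scan (the @Wire annotation and the classes are hypothetical stand-ins for @Autowired and your beans):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

public class TinyInjector {

    // A hypothetical DI annotation, standing in for @Autowired or @Inject
    @Retention(RetentionPolicy.RUNTIME)
    @interface Wire {}

    static class Repository {}

    static class GreetingService {
        @Wire Repository repository; // discovered only at runtime
    }

    public static void main(String[] args) throws Exception {
        GreetingService service = new GreetingService();
        // The "scan": inspect every field, read its annotations, and only
        // then decide what to instantiate and inject. Real frameworks do
        // this across thousands of classes on every single startup.
        for (Field field : GreetingService.class.getDeclaredFields()) {
            if (field.isAnnotationPresent(Wire.class)) {
                field.setAccessible(true);
                field.set(service, field.getType().getDeclaredConstructor().newInstance());
            }
        }
        System.out.println("Injected: " + service.repository);
    }
}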
The JIT (Just-In-Time) Compiler
Java generates bytecode that the JVM reads. When the app runs, the JIT compiler watches which parts of the code are used most (the “hot” code) and compiles them into highly optimized machine code.
The problem: this process happens while your application is already running. In a cloud environment where containers go up and down constantly, the JVM spends precious CPU warming up and re-optimizing code on every single boot, before requests are served at full speed.
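You can observe the warm-up effect with a rough, deliberately unscientific experiment; absolute numbers vary by machine, but the hot call is consistently much faster than the cold one:

public class JitWarmup {

    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) total += i;
        return total;
    }

    public static void main(String[] args) {
        // Cold: the first call runs as interpreted bytecode
        long start = System.nanoTime();
        sum(100_000);
        System.out.println("Cold call: " + (System.nanoTime() - start) + " ns");

        // The JIT notices the "hot" method and compiles it to machine code
        for (int i = 0; i < 20_000; i++) sum(100_000);

        // Hot: the same call now runs optimized machine code
        start = System.nanoTime();
        sum(100_000);
        System.out.println("Hot call:  " + (System.nanoTime() - start) + " ns");
    }
}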
Runtime Overhead
All this dynamism requires the JVM to load many classes and metadata into RAM. For a 10GB monolith, this wasn’t an issue, but for a 256MB microservice in a container, Java’s “cost of existence” makes the cloud bill quite expensive.
Quarkus: Why “Supersonic Subatomic”?
If traditional Java was the giant that took a while to wake up, Quarkus is the elite athlete ready for the sprint. The name is no accident: Supersonic (for speed) and Subatomic (for lightness). But how does it do this without abandoning the Java we already know?
The “Container First” Strategy
Quarkus was designed from the ground up for the container world. The big trick was changing when things happen:
- Traditional Java: Does the “dirty work” at Runtime (while the app is starting).
- Quarkus: Moves all that processing to Build Time (when you package the app).
This means that when you hit “play,” Quarkus already knows exactly what to do. It avoids excessive Reflection, resulting in a near-instant boot.
GraalVM and “Natively Native”
Quarkus integrates with GraalVM to transform your code into a native executable. During the build, everything your application doesn't use is removed (dead-code elimination). The result is a file that:
- Doesn’t need a full JVM to run.
- Starts in milliseconds.
- Consumes a fraction of the RAM compared to a standard Spring Boot app.
Developer Joy: Productivity and Hands-on
Quarkus doesn’t just focus on machine performance; it focuses on the developer experience. With Live Coding, you change the code and the change reflects instantly. Additionally, Dev Services automatically spin up containers (like databases) via Docker as soon as you start the project, without you having to configure a single thing.
CLI Installation and First Project
Prepare your environment by checking Java and Docker:
java -version
docker -v
Install the Quarkus CLI:
- Windows (PowerShell):
iex "& { $(iwr https://ps.quarkus.io/install.ps1) }"
- Linux/macOS (via SDKMAN!):
sdk install quarkus
Now, create your project and test Live Coding:
# Create the project
quarkus create app my-quarkus-project --extension=resteasy
cd my-quarkus-project
# Start dev mode
quarkus dev
With quarkus dev, any code change is applied instantly. It’s cloud agility right in your terminal.
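To try it, open the generated resource class; the contents below are typical of a fresh project, though the exact greeting text and imports (jakarta.ws.rs on recent versions, javax.ws.rs on older ones) depend on your Quarkus version. Change the returned string while quarkus dev is running, refresh http://localhost:8080/hello, and the new text appears without a restart.

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // Edit this string while dev mode runs; Live Coding picks it up instantly
        return "Hello from Quarkus";
    }
}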
Conclusion: Java is more alive than ever
The journey of Java, from the weight of J2EE to the lightness of Quarkus, shows how resilient the language is. Quarkus proves that you don’t need to abandon the robust Java ecosystem to be efficient in Kubernetes. It offers the best of both worlds: the solidity of Java with the performance of cloud-native languages.
If you want to reduce cloud costs and have more fun while developing, Quarkus is the logical next step.