Stop Blaming Maven: Why I/O Latency is Killing Your CI/CD Pipeline

Your 15-Minute Build Time is an Infrastructure Failure

I recently audited a deployment pipeline for a mid-sized e-commerce shop in Oslo. Their dev team was demoralized. A simple commit to their Magento backend triggered a Jenkins build that took 24 minutes to complete. They blamed the bulky Java codebase. They blamed Maven. They even blamed the network latency to their git repo.

They were wrong.

I ran a simple iostat -x 1 during a build. The CPU was idling at 15%, but %iowait was screaming at 80%. Their hosting provider (one of the big international giants) had put them on a standard SSD instance, likely oversold, sharing IOPS with a few hundred other noisy tenants. Every time npm install or mvn package ran, the disk queue depth spiked, and the CPU just sat there waiting for data.
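
If you want to run the same check, kick off a build and watch the extended device statistics (iostat ships in the sysstat package). The tell-tale signs are a high %iowait on the CPU line and a saturated device underneath it:

# install sysstat first if needed: apt-get install sysstat  (or: yum install sysstat)
# extended per-device stats, refreshed every second while the build runs
iostat -x 1
# watch %iowait (CPU waiting on disk) plus avgqu-sz and %util (device saturation)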

In the DevOps world of late 2016, we treat compute as a commodity, but we ignore storage performance at our peril. Here is how to fix your pipeline bottlenecks before you rewrite a single line of code.

The Docker Storage Driver Trap

Most of you are running Docker 1.12 on Ubuntu 16.04 LTS or CentOS 7. That is standard. But if you installed Docker using the defaults, you might be running on a legacy storage driver that murders performance.

On CentOS 7, Docker often defaults to devicemapper in loop-lvm mode. This is disastrous for CI/CD environments where containers are spun up and torn down constantly. The loopback mechanism introduces significant overhead.

Check your driver:

docker info | grep 'Storage Driver'

If it says devicemapper and you are in loop mode, stop everything. You need to switch to overlay2 (if your kernel supports it) or configure direct-lvm. With the Linux 4.x kernel available in Ubuntu 16.04, overlay2 is the superior choice for inode utilization and page cache sharing.
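
You can confirm loop mode from the same docker info output. With devicemapper in loop-lvm, Docker reports sparse loopback files under /var/lib/docker as its backing storage (the field names here are as I recall them from Docker 1.12; check your own output):

# non-empty output means you are on loop-lvm
docker info 2>/dev/null | grep -i 'loop file'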

Here is how we configure /etc/docker/daemon.json on CoolVDS production nodes to force the overlay2 driver, which drastically reduces the I/O tax during the docker build layer creation process:

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
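
After writing that file, restart the daemon for the change to take effect. Be aware that switching storage drivers hides any images and containers created under the old driver, so plan to re-pull or rebuild. Something like:

systemctl restart docker
docker info | grep 'Storage Driver'    # should now report overlay2
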
Pro Tip: If you are still on an older kernel (3.10) where OverlayFS is unstable, do not use loop-lvm. Set up a dedicated block device for Docker and use direct-lvm. It’s a pain to configure via fdisk and pvcreate, but it drops build latency by ~40%.
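
For reference, here is roughly what that direct-lvm setup looks like, as a sketch assuming a spare block device at /dev/sdb (the device name is illustrative; double-check against the Docker 1.12 devicemapper documentation before running it on a production node):

# carve the dedicated device into an LVM thin pool for Docker
pvcreate /dev/sdb
vgcreate docker /dev/sdb
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

Then point /etc/docker/daemon.json at the pool instead of overlay2:

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true"
  ]
}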

Jenkins: The Memory vs. Disk Trade-off

Jenkins 2.0 brought us "Pipelines as Code," which is great, but the JVM is still the JVM. It is hungry. When Jenkins runs out of heap space, it starts swapping. On a VPS with slow storage, swapping is the kiss of death: the system becomes unresponsive, and your build agents time out.

In /etc/default/jenkins, you must explicitly define your heap boundaries. Never let the JVM guess. On a 4 GB VPS, leaving the heap at its defaults is an open invitation for the OOM killer to take out your build process.

# explicit heap sizing prevents swap thrashing
JAVA_ARGS="-Xmx2048m -Djava.awt.headless=true"
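
Beyond capping the heap, tell the kernel to avoid swapping in the first place. Lowering vm.swappiness is a standard sysctl; the value of 10 below is simply a sensible starting point for a dedicated CI node, not gospel:

# check whether the box already dips into swap during builds
free -m && vmstat 1 5

# prefer dropping page cache over swapping out the Jenkins JVM
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-ci-swappiness.conf
sysctl -p /etc/sysctl.d/99-ci-swappiness.conf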

Furthermore, turn off the "Weather" column in your Jenkins folder views if you have thousands of jobs. Calculating that weather icon requires disk reads across build history XML files. It’s unnecessary I/O.

The Hardware Reality: Why NVMe Matters

This brings us to the core issue. You can tune software all day, but you cannot tune away physics. Standard SATA SSDs cap out around 550 MB/s sequential read, but random 4k writes (which is exactly what compiling code and building container layers look like) are much slower.
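
Don't take a datasheet's word for it; measure the volume you actually build on. fio (available in the standard repos) can simulate that small-random-write pattern. The job below is just a reasonable starting point, and the target directory is wherever your workspaces live:

# 60-second 4k random-write test against the build volume
fio --name=ci-randwrite --directory=/var/lib/jenkins \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --size=1G --runtime=60 --time_based --group_reporting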

This is why we architected CoolVDS differently. We saw the limitations of SATA in high-load dev environments.

We use NVMe (Non-Volatile Memory Express) interfaces. Unlike SATA, which was designed for spinning rust, NVMe connects directly to the PCIe bus. We are seeing throughputs of 3,000+ MB/s. But more importantly for your CI/CD pipeline, the latency drops from ~150 microseconds (SATA) to ~20 microseconds (NVMe).
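
Those latency figures are easy to sanity-check with ioping (a small utility packaged in most distros); point it at the volume holding your Docker data or Jenkins workspaces:

# 20 latency samples against the Docker volume
ioping -c 20 /var/lib/docker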

When you are running a parallel build matrix in Jenkins, that latency reduction compounds. A 10-minute build becomes a 3-minute build.

Comparison: Build Time for Spring Boot App (Maven Clean Install)

Environment        | Storage Type     | Time to Complete
Generic Cloud VPS  | Shared SATA SSD  | 8m 45s
CoolVDS Instance   | Dedicated NVMe   | 2m 12s

Virtualization Tech: KVM vs. Containers

There is a trend in 2016 to run "Docker inside LXC" or use OpenVZ VPS providers because they are cheap. Don't do it for CI/CD.

OpenVZ shares the host kernel. If another tenant on that physical node triggers a kernel panic, you go down with them. Furthermore, Docker capabilities inside OpenVZ are often restricted or require hacky workarounds.

At CoolVDS, we standardize on KVM (Kernel-based Virtual Machine). You get your own kernel. You can load your own modules. You get true hardware virtualization. This isolation ensures that when your neighbor compiles the Linux kernel, your latency doesn't spike.
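
Not sure what your current provider actually sold you? On a systemd distro (CentOS 7, Ubuntu 16.04) one command settles it:

# "kvm" = full hardware virtualization; "openvz" or "lxc" = shared host kernel
systemd-detect-virt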

Data Sovereignty in Norway

With the invalidation of Safe Harbor last year and the new Privacy Shield framework still feeling like shaky ground, where your code lives matters. If you are developing software for Norwegian entities—especially in health or finance—latency isn't the only concern; jurisdiction is.

Hosting your CI/CD artifacts in a datacenter in Oslo means your intellectual property and user data stay under Norwegian jurisdiction (and the Datatilsynet's watchful eye), rather than traversing the Atlantic where privacy laws are... flexible.

Conclusion

Stop accepting slow builds as a fact of life. Check your iowait. Configure Docker to use overlay2. And for the love of stability, move off shared, oversold SATA storage.

If you want to see what a raw NVMe KVM instance does for your compile times, spin one up. We don't throttle your I/O.

Deploy a high-performance CoolVDS instance in Oslo today and cut your build time in half.