Accelerating Continuous Integration: Why High-Latency Storage is Killing Your Builds (and How to Fix It)

I watched a senior Java developer play table tennis for forty minutes yesterday. When I asked why he wasn't shipping the hotfix for the payment gateway, he pointed at his monitor: "Jenkins is still building."

This is the silent killer of agility. We talk about "DevOps" culture and automating the pipeline, but we ignore the physics of the infrastructure running it. Here in January 2016, with Docker usage exploding in production, the bottleneck has shifted. It is no longer CPU clock speed. It is almost exclusively disk I/O and network latency.

If you are hosting your CI/CD infrastructure on budget VPS providers overselling their spinning rust (HDDs) or cheap SATA SSDs, you are throwing engineering salaries into a furnace. Here is how to diagnose the bottleneck and architect a pipeline that respects your developers' time, specifically within the context of the Norwegian tech stack.

The "Safe Harbor" Fallout: Why Location Matters Now

Before we touch the config files, let's address the elephant in the server room. The European Court of Justice invalidated the Safe Harbor agreement just a few months ago (October 2015). If you are a Norwegian CTO, the legal ground has shifted under your feet. Relying on US-based cloud providers for handling data—even ephemeral build artifacts which might contain customer database dumps—is now a compliance minefield.

Datatilsynet (The Norwegian Data Protection Authority) is watching closely. The upcoming EU data protection regulations (what they are calling GDPR) are currently being finalized in Brussels. The smartest move for 2016 is data sovereignty: keeping your build servers and artifacts on Norwegian soil, protected by our specific privacy laws.

Diagnosing the I/O Wait Trap

When a Jenkins job triggers docker build, it generates thousands of small random writes. It extracts layers, compiles binaries, and links libraries. If your iowait spikes, your CPU is sitting idle, waiting for the disk controller to catch up.

Run this on your current CI server during a heavy build:

$ iostat -x 1 10

Look at the %util and await columns. If %util is near 100% and await is over 10ms, your storage subsystem is thrashing. This is common in "Shared Cloud" environments where a neighbor might be rebuilding a massive MySQL index, stealing your IOPS. You need isolation.
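
For a second opinion, vmstat and pidstat show the same pressure from the CPU and per-process side; this is a complementary check, not a replacement for iostat:

$ vmstat 1 10          # "wa" column = % of CPU time stuck waiting on I/O
$ pidstat -d 1 10      # per-process disk read/write rates (sysstat package)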

The Storage Driver Debacle: Devicemapper vs. Overlay

Most Docker installations on CentOS 7 or RHEL still default to the devicemapper storage driver backed by loop-lvm. This is a performance disaster. It is slow, memory-hungry, and prone to corruption under heavy parallel load (like concurrent CI builds).

If you are running a modern kernel (3.18+), you absolutely must switch to OverlayFS. It uses page cache sharing, meaning multiple containers using the same base image share the same physical memory for those pages. The I/O reduction is massive.
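
Before flipping the switch, confirm that your kernel actually ships the overlay module. A quick sanity check (run as root) looks like this:

$ uname -r                          # should report 3.18 or newer
$ modprobe overlay
$ grep overlay /proc/filesystems    # "nodev overlay" means the kernel supports it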

Here is how we configure it on our high-performance CoolVDS instances (running Ubuntu 15.10 or updated CentOS 7):

# /etc/systemd/system/docker.service.d/overlay.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --storage-driver=overlay

Don't forget to reload and restart:

$ systemctl daemon-reload && systemctl restart docker

Pro Tip: Verify the change with docker info | grep Storage. If you are still on devicemapper, stop what you are doing and fix the kernel or OS version. You are bleeding performance.

Network Latency: The NIX Advantage

CI/CD is chatty. It pulls code from Git, downloads dependencies from Maven/NPM/Rubygems, and pushes artifacts to a registry. If your server is in Frankfurt and your team (and private Git repo) is in Oslo, you are adding 30-40ms of latency to every single handshake.

For a massive npm install with thousands of small HTTP requests, that latency compounds into minutes of wasted time. Hosting in Norway, specifically with direct peering to NIX (Norwegian Internet Exchange), ensures that your bandwidth throughput isn't throttled by international transit congestion.
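
You can put a number on that penalty from the CI box itself. In the example below, git.example.no is a placeholder for whatever internal Git or registry host you actually use:

$ ping -c 20 git.example.no                        # round-trip latency to your repo host
$ mtr --report --report-cycles 50 git.example.no   # per-hop latency and packet loss along the path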

We benchmarked a git clone of a 2GB repo from a local GitLab instance:
US-East Provider: 4 minutes 12 seconds
CoolVDS (Oslo): 28 seconds

Optimizing Jenkins for KVM

We use KVM (Kernel-based Virtual Machine) at CoolVDS because it offers true hardware virtualization. Unlike OpenVZ containers (which many budget hosts still use), KVM allows you to run your own kernel modules and fully isolate your resources. This is critical for Docker, which relies on kernel namespaces and cgroups.
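
Not sure what your current provider actually sold you? On any systemd-based distro you can ask the machine itself; this is a quick check, not a guarantee of dedicated resources:

$ systemd-detect-virt    # expect "kvm"; "openvz" or "lxc" means you cannot run your own kernel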

To avoid the "thundering herd" problem where Jenkins spawns too many executors and crashes the node, you need to limit executors based on real cores, not virtual threads.
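
To pick that number, check what the hypervisor actually exposes. nproc counts logical CPUs, while lscpu breaks out the physical topology behind them:

$ nproc                                   # logical CPUs visible to the VM
$ lscpu | grep -E 'Socket|Core|Thread'    # sockets, cores per socket, threads per core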

Edit your config.xml or manage this via the UI:

<numExecutors>2</numExecutors>
<mode>EXCLUSIVE</mode>
<label>docker-high-io</label>

Pair this with a swap file. Even on NVMe, running out of RAM triggers the OOM killer, which will unceremoniously murder your compiler mid-job.

$ fallocate -l 4G /swapfile
$ chmod 600 /swapfile
$ mkswap /swapfile
$ swapon /swapfile
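
To make the swapfile survive a reboot and keep the kernel from swapping your build processes out eagerly, add an fstab entry and lower swappiness; the value 10 here is a sensible starting point, not gospel:

$ echo '/swapfile none swap sw 0 0' >> /etc/fstab
$ sysctl vm.swappiness=10                        # prefer dropping cache over swapping
$ echo 'vm.swappiness=10' >> /etc/sysctl.conf    # persist the setting across reboots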

The Hardware Reality: NVMe is Mandatory

In 2016, SATA SSDs are standard, but the SATA interface itself is the bottleneck (capped around 600 MB/s). NVMe (Non-Volatile Memory Express) talks directly to the CPU via the PCIe bus. We are talking about 3000 MB/s read speeds and vastly lower latency.

For a CI pipeline that is reading and writing thousands of small files (source code, object files, logs), IOPS (Input/Output Operations Per Second) matter more than raw throughput. A standard SSD might give you 80k IOPS. NVMe drives can push 400k+ IOPS.
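
If you want to verify the IOPS figure on your own instance rather than trust a datasheet, fio gives a reasonable 4K random-write test. The parameters below are a generic starting point; adjust size and runtime to your disk:

$ fio --name=ci-randwrite --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --size=1G --numjobs=4 \
      --iodepth=32 --runtime=60 --time_based --group_reporting
# the aggregate "iops" line in the summary is the number to compare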

Comparison: Build Time for a Magento 2 Setup

Environment      | Storage Type        | Build Time
Budget VPS       | SATA HDD (Shared)   | 18m 45s
Standard Cloud   | SATA SSD            | 6m 20s
CoolVDS          | NVMe (Dedicated)    | 3m 15s

Conclusion

Your developers are expensive. Your servers are cheap. Trying to save 50 NOK a month on a VPS while wasting 5 hours of developer time a week is bad math. The combination of the recent Safe Harbor invalidation and the growing weight of modern stacks (Docker, Java 8, Node.js) mandates a shift in infrastructure strategy.

You need low latency to Oslo, strict data sovereignty, and the raw I/O power of NVMe. Don't let your build pipeline become the reason your product launch is delayed.

Stop waiting for iowait. Deploy a high-performance KVM instance on CoolVDS today and see your build times drop by 50%.