CI/CD Bottlenecks: Why Your Builds Are Slow and How NVMe Saves the Day

Stop Treating Your Build Server Like a Second-Class Citizen

I walked into a client's office in Oslo last week, and I saw the most dangerous thing a development shop can exhibit: developers playing table tennis. Not because they were taking a well-deserved break, but because the build queue was backed up by 45 minutes. They were waiting for Jenkins.

We obsess over production performance. We tune Nginx buffers, we optimize MySQL queries until our eyes bleed, and we pay top kroner for load balancers. Yet, for some reason, we throw our Continuous Integration (CI) and Continuous Deployment (CD) pipelines onto the cheapest, slowest legacy VPS instances we can find. It’s a false economy. If your team of five developers waits 20 minutes a day for builds to finish, you are burning through thousands of NOK in lost productivity every month. The bottleneck usually isn't CPU; it's I/O.
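
Run the numbers: five developers losing 20 minutes a day is 100 developer-minutes daily, or roughly 35 hours over a 21-day working month. At, say, 800 NOK per loaded developer-hour, that is about 28,000 NOK a month spent watching a progress bar.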

In this post, we are going to look at why disk latency kills CI pipelines, how to move to a Docker-based workflow (now that Docker 1.12 is stable), and why hardware selection—specifically NVMe—is critical for modern DevOps.

The Silent Killer: Disk I/O Wait

When you run mvn install, npm install, or compile C++ code, you are generating thousands of tiny read/write operations. You are extracting archives, writing class files, and linking libraries. If your CI server is running on a standard HDD or even a crowded SATA SSD on an oversold host, your CPU spends half its time in an iowait state. It's doing nothing. It's waiting for the disk platter to spin or the bus to clear.

I ran a benchmark comparing a standard SATA SSD VPS against a CoolVDS instance backed by NVMe storage. The task: A clean build of a large Magento 2 e-commerce store (a notoriously heavy PHP application).

Storage Type                 Read IOPS   Build Time      Status
Traditional HDD (7.2k RPM)   ~80-120     14 min 32 sec   Unacceptable
Standard SATA SSD            ~5,000      4 min 15 sec    Passable
CoolVDS NVMe                 ~20,000+    1 min 48 sec    Optimal
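
Numbers like these are easy to sanity-check yourself. fio is the standard tool for this; here is a minimal sketch of a 4k random-read test (the job name, size, and runtime are arbitrary, so adjust them for your disk):

sudo apt-get install -y fio

# 4k random reads with direct I/O, bypassing the page cache
fio --name=randread-test --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --size=1G \
    --runtime=60 --time_based --group_reporting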

You can verify your own disk latency right now. Log into your build server and install sysstat on Ubuntu 16.04:

sudo apt-get update
sudo apt-get install -y sysstat
iostat -x 1 10    # extended device stats, one-second samples, ten iterations

Look at the %iowait column. If you are seeing numbers consistently above 5-10% during a build, your storage is the bottleneck. No amount of RAM will fix that.

Modernizing the Pipeline: Jenkins 2.0 and Docker

With the release of Jenkins 2.0 earlier this year, "Pipeline as Code" is finally the standard. We are done with clicking through GUI checkboxes. We define our build infrastructure in a Jenkinsfile. Coupled with Docker, we can ensure that every build runs in a clean environment.

However, Docker needs a modern kernel with full namespace and cgroup support, and inside a virtual machine that means the guest must run its own kernel rather than share the host's. This is where the underlying hypervisor matters. At CoolVDS, we use KVM (Kernel-based Virtual Machine), which gives every instance a dedicated kernel and the strict isolation needed to run Docker containers without the "noisy neighbor" effect common in OpenVZ and other container-based VPS hosting.
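
Not sure what your current provider runs? You can check from inside the guest in seconds (systemd-detect-virt ships with systemd on Ubuntu 16.04):

systemd-detect-virt    # prints "kvm" on a KVM guest, "openvz" on OpenVZ
uname -r               # on OpenVZ this is the host's kernel, not yours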

Example: A Simple Dockerized Pipeline

Here is a basic Jenkinsfile approach that spins up a Node.js container, mounts the workspace, and runs the build. This eliminates the need to install different Node versions on the host OS.

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'node:6.9.1'
                    args '-v /tmp/npm-cache:/root/.npm' // Cache on host NVMe
                }
            }
            steps {
                sh 'npm install'
                sh 'npm test'
            }
        }
    }
}

Pro Tip: Notice the args line? We are mapping the npm cache to the host. If you skip this, you download the internet on every single build. On a CoolVDS instance with 10Gbps uplinks and low latency to NIX (Norwegian Internet Exchange), re-downloading is fast, but a local disk cache is always faster.

Optimizing the Host: Kernel Tuning

If you are managing your own CI runner on a VPS, you need to tune the Linux kernel for high throughput. The default settings in Ubuntu 16.04 are generic. For a high-load CI server, we need to adjust how the kernel handles dirty pages (data waiting to be written to disk).

Add these lines to your /etc/sysctl.conf:

# Allow more data to be cached in RAM before writing to disk
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10

# Increase the number of open files (vital for npm/java builds)
fs.file-max = 2097152

Apply them with sudo sysctl -p. This allows your build process to burst write operations to RAM first, smoothing out the I/O spikes.
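
A quick sanity check that the new values are live:

sudo sysctl -p                                                # reload /etc/sysctl.conf
sysctl vm.dirty_ratio vm.dirty_background_ratio fs.file-max   # print the running values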

Data Sovereignty and Latency

For Norwegian dev teams, hosting your CI infrastructure locally isn't just about speed; it's about compliance. With the invalidation of Safe Harbor last year and the new Privacy Shield framework still feeling tentative, keeping your code and potential customer data (often used in test databases) within Norwegian borders satisfies Datatilsynet requirements more easily.

Furthermore, latency matters. If your git repository is hosted on a local GitLab instance or even Bitbucket/GitHub, every millisecond of round-trip time adds up when fetching thousands of objects. A VPS in Oslo pinging a repo in Oslo is instant. A VPS in Virginia pinging Oslo is a drag.
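
This is easy to measure from the build server itself; a quick sketch, with gitlab.example.com standing in for your real remote:

ping -c 5 gitlab.example.com    # round-trip time to your git host
time git fetch --all            # the cumulative cost over thousands of objects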

The CoolVDS Advantage

We don't oversell our servers. When you provision a KVM instance with us, you aren't fighting for resources. We built our infrastructure specifically for high-performance workloads like CI/CD, database clustering, and real-time processing.

  • Pure NVMe Storage: We don't use caching tiers; the data sits on the fastest media available.
  • KVM Isolation: Your Docker containers run smoothly because the kernel resources are yours, not shared.
  • Norwegian Connectivity: Direct peering at NIX means your pull/push operations are limited only by the speed of light, not network congestion.

Stop letting your developers play table tennis while waiting for builds. Upgrade your pipeline infrastructure today.

Ready to cut your build times in half? Deploy a high-performance NVMe instance on CoolVDS in under 55 seconds.