CI/CD Bottlenecks: Why Your Jenkins Builds Are Slow (And How to Fix Them)

Stop Watching Paint Dry: Architecting High-Velocity CI/CD Pipelines

There is a special place in hell reserved for build pipelines that run for twenty minutes only to fail on a syntax error that should have been caught in ten seconds. As a DevOps engineer, your currency is time. Every minute a developer waits for feedback is a minute they aren't coding, or worse, a minute they spend context-switching to Reddit. In the Norwegian tech scene, where operational costs are high and efficiency is demanded by lean CTOs, tolerating a sluggish CI/CD pipeline is professional negligence. I have seen perfectly good engineering teams crumble because their integration loop was so slow that developers stopped running tests locally and started "pushing and praying". Today, we are going to look under the hood of a standard Jenkins setup, diagnose the actual bottlenecks (hint: it's rarely CPU), and re-architect the stack for raw speed using modern 2016-era tooling like Docker and KVM virtualization.

The Silent Killer: I/O Wait and "Steal Time"

Most developers assume that if a build is slow, they need more gigahertz. They upgrade their VPS instance to add more cores, restart the build, and watch in horror as the time barely improves. Why? Because compiling code, installing npm modules, and building Docker images are intensely I/O-heavy operations. When you run npm install or mvn clean install, you generate thousands of tiny write operations. On a standard shared hosting platform or a budget VPS provider, you are sharing the disk queue with hundreds of noisy neighbors. If another tenant on the physical host decides to run a backup or a database migration, your I/O wait times spike and your CPU sits idle, waiting for the disk to confirm each write. In top this shows up as a high %wa (I/O wait) figure, often accompanied by %st (steal time) when the hypervisor hands your cycles to someone else, and both are the enemy of CI/CD.

To diagnose this, stop guessing and look at the disk metrics during a build. If %util is hovering near 100% while CPU is low, your storage subsystem is the bottleneck, not your code.

iostat -x 1 10

If you see high await times (over 10ms consistently), your underlying storage is choking. This is why, for our internal pipelines and for clients migrating to CoolVDS, we strictly enforce the use of NVMe storage backed by KVM virtualization. Unlike OpenVZ, where resources can be oversold and disk queues are shared kernel-side, KVM provides a higher degree of isolation. NVMe drives, which communicate directly with the PCIe bus rather than through the legacy SATA controller, offer IOPS capabilities that are orders of magnitude higher than standard SSDs. In a CI context, this means npm install completes before you can finish your sip of coffee.
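If you want a number rather than a feeling, benchmark the disk directly. fio is the standard tool for this; the profile below is a reasonable stand-in for CI-style small random writes, not an official benchmark, so adjust the size and runtime to taste:

# 4K random writes with direct I/O -- roughly what a dependency install
# or image build does to the disk, minus the fsyncs.
fio --name=ci-randwrite --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 \
    --group_reporting --runtime=60 --time_based

Run it on your current provider and on the target instance; the IOPS and latency lines in the summary tell you immediately whether the storage is the problem.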

Dockerizing the Build Agents

In 2016, managing build dependencies directly on the Jenkins master is a recipe for disaster. One project needs Java 7, another needs Java 8, and suddenly you are in "dependency hell", juggling update-alternatives. The solution is to treat your build agents as ephemeral Docker containers. This ensures that the environment is pristine for every build and identical to production. However, Docker introduces its own overhead if not configured correctly. The devicemapper storage driver, which is the default on CentOS and the fallback on Ubuntu 14.04 when the aufs module is missing, runs in slow loopback mode out of the box. We recommend using aufs or overlay where possible for faster layer caching.
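Not sure which driver your daemon is actually running? Check, and switch if needed. The snippet below assumes the stock Ubuntu 14.04 setup where daemon flags live in /etc/default/docker (systemd-based distros configure this differently), and note that overlay needs a 3.18+ kernel:

# Show the storage driver currently in use
docker info | grep -i 'storage driver'

# On Ubuntu 14.04 the upstart init script reads daemon flags
# from /etc/default/docker. Switch to overlay and restart.
echo 'DOCKER_OPTS="--storage-driver=overlay"' | sudo tee -a /etc/default/docker
sudo service docker restart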

Here is how you can spin up a Jenkins slave using Docker, ensuring that the workspace is persisted but the environment is disposable. Note the mount of the Docker socket, which lets the agent spawn sibling containers on the host (often loosely called "Docker-in-Docker", though it avoids running a second daemon):

#!/bin/bash
# Provisioning a Jenkins slave on Ubuntu 14.04 Trusty
# Assumes Docker 1.9+ is installed and a JNLP node named "docker-slave"
# has already been created on the Jenkins master.

JENKINS_URL="https://jenkins.your-domain.no/"   # root URL of the master, not the JNLP file
JENKINS_SECRET="your-secret-key-from-master"    # shown on the node's agent page
AGENT_NAME="docker-slave"

docker run -d --name jenkins-slave \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/jenkins-workspace:/home/jenkins/agent \
  jenkinsci/jnlp-slave \
  -url "$JENKINS_URL" "$JENKINS_SECRET" "$AGENT_NAME"

By mapping /tmp/jenkins-workspace to the host (preferably a CoolVDS NVMe volume), you gain the benefit of persistence for incremental builds (like Maven or Gradle) while keeping the execution environment clean. If you don't map the volume, every build starts from zero, downloading the internet every single time. That is a waste of bandwidth and time.
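As a concrete (hypothetical) example of what that buys you, here is a disposable Maven build container that reuses a dependency cache living on the host volume; the image tag and paths are placeholders for your own project:

# Disposable build container, persistent dependency cache.
# /var/cache/m2 sits on the host NVMe volume; inside the container it is
# just a bind mount, so repeated builds skip the dependency download.
docker run --rm \
  -v /tmp/jenkins-workspace/myapp:/usr/src/app \
  -v /var/cache/m2:/root/.m2 \
  -w /usr/src/app \
  maven:3.3-jdk-8 \
  mvn -B clean package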

Optimizing the Jenkins Master JVM

Jenkins is a Java application, and like all Java applications, it is greedy with memory. The default configuration on most Linux distributions is conservative. If you are running a heavy CI load, the Garbage Collector (GC) can pause the system, causing slave disconnects or UI unresponsiveness. We need to tweak the JAVA_OPTS in /etc/default/jenkins (Debian/Ubuntu) or /etc/sysconfig/jenkins (RHEL/CentOS). We want to force the G1 Garbage Collector, which is much more efficient for the large heaps typical in CI servers, and increase the heap size to utilize the RAM available on your VPS.

# /etc/default/jenkins optimization

JAVA_ARGS="-Djava.awt.headless=true -Xmx4096m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=4 -Djenkins.install.runSetupWizard=false"
Pro Tip: Never allocate more than 75% of your system RAM to the JVM heap. The OS needs the remaining RAM for filesystem caching, which significantly speeds up Git operations and file copying. On a 8GB CoolVDS instance, set -Xmx6g. If you starve the OS cache, your build speed will plummet regardless of how much heap you give Jenkins.
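If you want evidence that G1 is keeping its pause-time promise rather than taking it on faith, append GC logging to the same variable. These are stock Java 8 flags; the log path is just a suggestion:

# Append to JAVA_ARGS to record pause times; rotate or clean the log yourself.
JAVA_ARGS="$JAVA_ARGS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/jenkins/gc.log"

# After a restart, confirm the flags actually reached the running JVM:
ps -ef | grep '[j]enkins.war'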

Secure Reverse Proxy with Nginx

Running Jenkins on port 8080 open to the world is security suicide. You need SSL/TLS, especially here in Norway where data privacy standards are rigorous under the Personal Data Act (Personopplysningsloven). We place Nginx in front of Jenkins to handle SSL termination and to compress responses. This reduces the load on the Jenkins Jetty server and secures credentials. Below is a production-ready Nginx configuration block that handles WebSocket connections correctly—crucial for the modern Jenkins 2.0 Pipeline stage view.

upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name ci.coolvds-client.no;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name ci.coolvds-client.no;

    ssl_certificate /etc/nginx/ssl/jenkins.crt;
    ssl_certificate_key /etc/nginx/ssl/jenkins.key;

    # Optimization for large build artifacts
    client_max_body_size 500M;

    location / {
        proxy_pass http://jenkins;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Fix for Jenkins 2.0 Pipeline WebSocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 90s;
    }
}
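Before you point the team at the new hostname, sanity-check the config and the proxy path. The commands below assume a distro-packaged Nginx on a pre-systemd init (swap in systemctl where relevant), and -k skips verification if the certificate is self-signed:

# Syntax-check the configuration, then reload without dropping connections
nginx -t
service nginx reload

# Verify the HTTP-to-HTTPS redirect and the proxied Jenkins login page
curl -I http://ci.coolvds-client.no/
curl -kI https://ci.coolvds-client.no/login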

The Network Factor: Latency to NIX

Often overlooked is the network latency between your build server and your code repositories (GitHub, Bitbucket, or self-hosted GitLab) and your deployment targets. If your VPS is hosted in a generic datacenter in the US while your team is in Oslo, the latency on git push and git pull operations adds up. Every millisecond counts. Hosting your CI infrastructure locally or in nearby European hubs significantly reduces the time spent on network handshakes. CoolVDS infrastructure is peered directly at major exchange points such as NIX (the Norwegian Internet Exchange), ensuring that when your build pushes a 500MB Docker image to the registry, it saturates the pipe rather than timing out.
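A crude but honest way to measure the damage is to time the metadata exchange against your real remote, once from the CI box and once from a workstation in Oslo, and compare. The repository URL below is a placeholder:

# Round-trip latency to the git remote
ping -c 10 github.com

# Wall-clock cost of a metadata-only fetch
time git ls-remote https://github.com/your-org/your-repo.git > /dev/null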

Tuning the Network Stack

To ensure your Linux kernel can handle the bursty network traffic typical of a CI/CD pipeline (downloading thousands of dependencies simultaneously), apply these sysctl tweaks:

sysctl -w net.core.somaxconn=1024
sysctl -w net.ipv4.tcp_tw_reuse=1
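Those -w flags do not survive a reboot. To make the tuning permanent, drop it into a sysctl config file; the filename below is arbitrary:

# Persist the tuning across reboots
cat <<'EOF' > /etc/sysctl.d/99-cicd.conf
net.core.somaxconn = 1024
net.ipv4.tcp_tw_reuse = 1
EOF
sysctl -p /etc/sysctl.d/99-cicd.conf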

Why Infrastructure Choice is the Ultimate Optimization

You can optimize Nginx, tune the JVM, and Dockerize everything, but if the foundation is rotten, the house will collapse. In the VPS market, "vCore" is a vague term. On budget providers, a vCore might be a slice of a CPU shared with 20 other idle VMs. But the moment you start a build, you need 100% of that core. If the hypervisor throttles you, your build times fluctuate wildly. This inconsistency makes it impossible to detect performance regressions in your actual application code.
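You can catch that throttling in the act. Kick off a build and watch the steal column; this works on any Linux guest, nothing provider-specific about it:

# Sample CPU counters once per second for a minute during a build.
# A persistently non-zero "st" column means the hypervisor is handing
# your cycles to another tenant; "wa" is time spent waiting on disk.
vmstat 1 60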

At CoolVDS, we don't play the "noisy neighbor" game. We utilize KVM (Kernel-based Virtual Machine) which provides strict resource guarantees. When you buy 4 vCores, you get the cycles you paid for. Combined with local NVMe storage, this eliminates the I/O bottleneck that plagues 90% of Jenkins setups. We have seen build times drop from 15 minutes to 3 minutes just by migrating from a SATA-based OpenVZ container to a CoolVDS NVMe KVM instance.

Stop apologizing for slow builds. The technology to fix it exists today, in 2016. It requires a shift from legacy virtualization to modern, hardware-backed performance. Your developers are expensive; your servers are cheap. Optimizing the latter to save the former is the only logical business move.

Ready to cut your build times in half? Deploy a high-performance KVM instance on CoolVDS today and experience the power of pure NVMe I/O.