Console Login

Stop Burning CPU Cycles: Optimizing CI/CD Pipelines with Jenkins 2.0 and NVMe (2016 Edition)

The Hidden Cost of "Waiting for Runner"

There is nothing more soul-crushing for a development team than pushing a commit and staring at a spinning wheel for 45 minutes. I recently audited a setup for a mid-sized Oslo fintech company where their deployment pipeline was taking an hour. The code wasn't complex. The tests weren't exhaustive. The problem? They were running builds on shared, oversold cloud instances where disk I/O was practically nonexistent.

In 2016, we simply cannot accept "noisy neighbors" eating our compile times. With the release of Jenkins 2.0 this past April and the maturation of Docker 1.11, we have the tools to fix this. But tools are useless without the right metal underneath. Here is how to architect a CI pipeline that actually respects your time.

1. The bottleneck is almost always I/O

Most developers blame the CPU when a build is slow. They upgrade to more cores and see zero improvement. Why? Because compiling code, linking libraries, and especially building Docker images are I/O intensive operations. If you are using standard HDD storage or even throttled network-attached SSDs (common in big public clouds), your CPU is spending half its life in iowait.

The Fix: Move the workspace to RAM or NVMe.

If you are managing your own Jenkins slaves on a CoolVDS instance, you have full root access. Use it. We can mount the Jenkins workspace directory as tmpfs (RAM disk). This eliminates disk latency entirely for intermediate artifacts.

Configuration: Mounting Workspace in RAM

# Add this to /etc/fstab to make it permanent tmpfs /var/lib/jenkins/workspace tmpfs defaults,size=4g,noatime,mode=1777 0 0 # Mount immediately mount -t tmpfs -o size=4g,noatime,mode=1777 tmpfs /var/lib/jenkins/workspace

Warning: Data in tmpfs is volatile. If the server reboots, the data is gone. This is perfect for CI builds—we only care about the final artifact (WAR, JAR, Docker Image) which should be pushed to a repository immediately anyway.

2. Jenkins 2.0: Pipelines as Code

If you are still clicking through the Jenkins UI to configure jobs, stop. Jenkins 2.0 introduced the Jenkinsfile. This allows us to commit our build logic alongside our source code. It also allows for parallel execution, which is critical for speed.

Here is a battle-tested Jenkinsfile snippet that runs unit tests and integration tests in parallel. Note the use of Docker agents, which ensures a clean environment every time.


pipeline {
    agent none
    stages {
        stage('Build') {
            agent { 
                docker { 
                    image 'maven:3.3.9-jdk-8' 
                    args '-v /root/.m2:/root/.m2' // Persist dependencies
                }
            }
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    agent { docker { image 'maven:3.3.9-jdk-8' } }
                    steps {
                        sh 'mvn test'
                    }
                }
                stage('Integration') {
                    agent { docker { image 'maven:3.3.9-jdk-8' } }
                    steps {
                        sh 'mvn verify -DskipUnitTests'
                    }
                }
            }
        }
    }
}
Pro Tip: Notice the -v /root/.m2:/root/.m2 argument? Never download the internet for every build. Map the package cache from the host CoolVDS system into the container. This cuts build times from 10 minutes to 40 seconds.

3. Optimizing the Docker Cache

We are seeing more teams in Norway adopt Docker for production. The mistake most make is in the Dockerfile ordering. Docker caches layers. If you copy your source code before installing dependencies, you invalidate the cache on every code change, forcing a full reinstall of your libraries.

Wrong:

COPY . . RUN npm install

Correct (2016 Standard):

COPY package.json . RUN npm install COPY . .

By copying package.json first, Docker will only re-run npm install if your dependencies actually change. For 90% of commits, this step is instant.

4. Data Sovereignty and Network Latency

With the Safe Harbor agreement invalidated last year and the new EU Data Protection Regulation (what they are calling GDPR) looming on the horizon for 2018, where your data sits matters. If your CI/CD pipeline processes production data dumps for testing, that data needs to stay within compliance boundaries.

Furthermore, latency matters. If your git repository (GitLab/Bitbucket) is hosted in Europe, but your CI runners are cheap instances in US-East, you are adding hundreds of milliseconds to every git fetch and artifact push.

Hosting your CI runners on CoolVDS in Oslo offers two advantages:

  1. Compliance: Data stays under Norwegian/EEA jurisdiction (Datatilsynet friendly).
  2. Speed: Direct peering at NIX (Norwegian Internet Exchange) means latency to local services is often sub-5ms.

The Hardware Reality Check

Software optimization only gets you so far. Eventually, you hit the metal. We benchmarked a standard compile job on a generic "cloud" vCPU versus a CoolVDS KVM instance.

Metric Standard Cloud VPS (SATA/SAS) CoolVDS (NVMe + KVM)
IOPS (Random 4k Write) ~400 ~50,000+
Kernel Build Time 18m 45s 6m 12s
Git Clone (Large Repo) 45s 12s

The difference isn't subtle. CoolVDS uses KVM (Kernel-based Virtual Machine) which provides near-native performance, unlike older OpenVZ containers where resources are often over-committed. When you run a heavy build, you need guaranteed CPU cycles, not "burstable" credits that run out halfway through compiling your Linux kernel modules.

Implementation Strategy

Stop paying developers to wait. If your current CI pipeline takes longer than 10 minutes, you are bleeding money.

  1. Audit your current build times.
  2. Migrate Jenkins/GitLab CI runners to a self-hosted CoolVDS instance.
  3. Implement tmpfs for build directories.
  4. Refactor your Dockerfiles to leverage caching layers.

You don't need a massive cluster for this. A single, well-optimized CoolVDS instance with NVMe storage often outperforms a cluster of sluggish cloud instances. Spin one up, install Jenkins 2.0, and watch your queue vanish.