Stop Watching Progress Bars: Optimizing CI/CD Pipelines for Nordic Dev Teams

If I see one more developer playing ping-pong while waiting for a build to deploy to staging, I might just unplug the rack. We have normalized the 20-minute build cycle, and quite frankly, it is embarrassing. In 2016, with tools like Jenkins 2.0 and the maturation of Docker, there is zero excuse for a sluggish pipeline.

Here is the reality: your code is clean, your tests are comprehensive, but your infrastructure is garbage. You are trying to run heavy compilation jobs on oversold virtual machines with spinning rust drives. Today, we are going to fix that. We are looking at the stack, the config, and the metal underneath.

The Silent Killer: I/O Wait and Steal Time

Most DevOps engineers obsess over CPU frequency. While important, it is rarely the bottleneck in a CI/CD pipeline. The real enemy is Disk I/O. Compiling Java, building C++ binaries, or even running `npm install` on a large Node.js project involves thousands of tiny read/write operations. On a standard VPS sharing a hard drive with fifty other noisy neighbors, your build hangs not because the CPU is busy, but because it is waiting for the disk to spin.

Run `top` on your current build server during a deployment and look at the CPU line:

top - 14:23:45 up 10 days,  4:20,  1 user,  load average: 2.15, 1.90, 1.75
%Cpu(s): 15.2 us,  4.3 sy,  0.0 ni, 45.1 id, 30.2 wa,  0.0 hi,  0.2 si,  5.0 st

See that 30.2 wa? That is I/O Wait. Your CPU is sitting idle, begging for data. See the 5.0 st? That is Steal Time. Your hosting provider has oversold the physical host, and other tenants are stealing your CPU cycles.

Pro Tip: If your Steal Time (%st) consistently exceeds 0.5%, move hosts immediately. You cannot optimize software to fix a noisy neighbor problem. This is why at CoolVDS we strictly use KVM virtualization; unlike OpenVZ, it guarantees resource isolation so your neighbors can't touch your allocated cores.
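If you want these numbers over time rather than a single snapshot, the sysstat tools will log them for you. A minimal check, assuming the package is available on your distribution:

# Ubuntu 16.04
sudo apt-get install -y sysstat

# CPU report: watch the %iowait and %steal columns (5-second samples, 12 rounds)
iostat -c 5 12

# extended per-device stats: %util near 100 means the disk is the bottleneck
iostat -x 5 12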

Embracing Jenkins 2.0 and "Pipeline as Code"

Jenkins 2.0 dropped earlier this year, and it fundamentally changed how we handle jobs. If you are still clicking through the UI to configure build steps, you are doing it wrong. We need version-controlled pipelines. This allows us to store our build logic right alongside our source code in a Jenkinsfile.

Here is a robust Groovy Jenkinsfile for a standard Maven project that runs its test stages in parallel to save time. Note the agent directive, a key feature of the new declarative syntax.

pipeline {
    agent any
    
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'mvn test'
                    }
                }
                stage('Integration Tests') {
                    steps {
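                        // assumes the POM maps the skipUnitTests property onto the Surefire plugin's skip flag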
                        sh 'mvn verify -DskipUnitTests'
                    }
                }
            }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/TEST-*.xml'
            archiveArtifacts 'target/*.jar'
        }
    }
}

Running tests in parallel requires multiple executor slots and enough memory for two Maven JVMs at once. If your VPS has only 2GB of RAM, this parallel block will push the build into an OutOfMemoryError. You need vertical scaling here.
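If you cannot scale up right away, at least stop the parallel stages from fighting over the same heap. A minimal sketch (the heap value is a placeholder you will need to tune; forked Surefire test JVMs take their memory from the plugin's argLine in the POM, not from this variable):

    // add to the pipeline {} block above
    environment {
        // caps the Maven JVM in every stage of this pipeline
        MAVEN_OPTS = '-Xmx512m'
    }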

Dockerizing the Build Agents

Dependency hell is real. One project needs Java 7, the other Java 8. Installing both on the same Jenkins master is a recipe for disaster. The solution in 2016 is Docker. We spin up ephemeral containers for each build and kill them afterwards.
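With the Docker Pipeline plugin installed, the scripted syntax lets every project pick its own toolchain. A sketch along these lines (the image tags are only examples; swap in whatever your projects actually need):

node {
    checkout scm
    // everything in this block runs inside a throwaway Java 8 container
    docker.image('maven:3.3.9-jdk-8').inside {
        sh 'mvn -B clean verify'
    }
    // the legacy module gets its own Java 7 container on the same host
    docker.image('maven:3.3.9-jdk-7').inside {
        sh 'mvn -B clean verify'
    }
}

The container is removed when the block exits, so no Java version ever leaks onto the master.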

However, running Docker inside a virtual machine requires specific kernel configurations. If you are on Ubuntu 16.04 LTS (Xenial), you should be using the overlay2 storage driver instead of the slower aufs or devicemapper.

Check your configuration in /etc/docker/daemon.json:

{
  "storage-driver": "overlay2",
  "dns": ["8.8.8.8", "8.8.4.4"]
}

Restart Docker to apply:

sudo systemctl restart docker
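Then confirm the driver actually took; if the running kernel lacks overlay support, the daemon will refuse to start with that setting:

docker info | grep -i 'storage driver'
# Storage Driver: overlay2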

Warning: Ensure your hosting provider gives you a full kernel with the modules Docker needs. On container-based virtualization such as OpenVZ you cannot load kernel modules at all, and many budget providers strip down their kernels, causing Docker to fail in mysterious ways. On a KVM instance you control the kernel yourself.
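A quick sanity check before blaming Docker itself (Xenial's stock 4.4 kernel is new enough for overlay2):

uname -r                          # expect 4.4 or newer
grep overlay /proc/filesystems    # should list "nodev overlay" once the module is available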

The Geographic Latency Factor: Why Norway?

We often ignore network latency in CI/CD, but transferring 500MB artifacts to your staging server in Oslo from a build server in Virginia adds up. Physics is stubborn. The round-trip time (RTT) affects every git clone, every docker push, and every SCP command.

Route                      Avg Latency   Throughput Impact
Oslo <-> Oslo (local)      < 2 ms        Maximum (1Gbps+)
Oslo <-> Frankfurt         ~25 ms        High
Oslo <-> US East           ~90 ms        Moderate drop

For Norwegian dev teams, keeping your build infrastructure local and connected via NIX (the Norwegian Internet Exchange) drastically shortens the "uploading artifacts" stage. Furthermore, with the uncertainty following last year's invalidation of Safe Harbor and the arrival of the Privacy Shield framework this year, keeping data within Norwegian or EEA borders is the simplest way to stay on the right side of Datatilsynet's (the Data Protection Authority's) expectations on data residency.
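You can put numbers on this from the build box itself; the hostname and artifact path below are placeholders for your own staging setup:

# round-trip time from the build server to staging
ping -c 10 staging.example.com

# rough real-world throughput for a typical artifact push
time scp target/app.jar deploy@staging.example.com:/tmp/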

Optimizing the JVM for Jenkins

Jenkins is a Java beast. Out of the box, it is terrible at memory management. If you are running Jenkins on a 4GB CoolVDS instance, do not let Java guess the heap size. It will guess wrong. Explicitly set your parameters in /etc/default/jenkins.

# /etc/default/jenkins

# Allocate 60% of RAM to Heap. Leave the rest for OS + Docker cache.
JAVA_ARGS="-Xmx2560m -Djava.awt.headless=true -XX:+UseG1GC"

We use the G1 garbage collector (-XX:+UseG1GC); its shorter pause times suit a long-running, interactive service like Jenkins far better than the throughput-oriented Parallel GC that Java 8 defaults to.
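After a restart, it is worth confirming the flags actually reached the Jenkins JVM rather than assuming the init script picked them up:

sudo systemctl restart jenkins
ps -o args= -C java | tr ' ' '\n' | grep -- '-Xmx'
# -Xmx2560m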

The Hardware Reality Check

You can tweak config files all day, but you cannot code your way out of slow hardware. In 2016, we are seeing a massive shift toward NVMe SSDs. They offer significantly higher IOPS (Input/Output Operations Per Second) compared to standard SATA SSDs.

When your CI pipeline is unzipping a 2GB artifact or compiling 10,000 source files, NVMe is the difference between a 2-minute job and a 10-minute job. At CoolVDS, we have standardized on NVMe storage for this exact reason. We don't upsell it as a "premium" feature; we provide it as the baseline because standard SSDs simply don't cut it for modern DevOps workloads anymore.
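Do not take any provider's word for it, ours included; benchmark the volume yourself. A quick random-write test with fio (assuming the package is installed; the file size and runtime are arbitrary), then look at the iops figure in the output:

fio --name=ci-disk-test --filename=/var/lib/jenkins/fio.test \
    --rw=randwrite --bs=4k --size=1G --direct=1 \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based

rm /var/lib/jenkins/fio.test

On NVMe you should see tens of thousands of IOPS; on contended SATA or network storage it can fall into the low hundreds, and that gap is your pipeline's missing minutes.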

Final Thoughts

Optimization is about removing friction. Every second your developers spend waiting for a build is money burning in the furnace. Switch to Jenkins pipelines, containerize your agents, and for the love of code, stop running your infrastructure on legacy hardware.

If you need a testbed that respects your need for low latency and high I/O, deploy a CoolVDS KVM instance in our Oslo datacenter. We offer pure NVMe storage and guaranteed CPU cycles. You handle the code; we’ll handle the heavy lifting.