Stop Waiting for Builds: Optimizing CI/CD Pipelines with Jenkins 2.0 and Docker in 2017

There is nothing that boils my blood more than hearing a developer say, "I'm deploying," followed by a twenty-minute silence where they stare blankly at a progress bar; it is the modern equivalent of compiling on a 486, a complete waste of human capital and electricity that we simply cannot afford in a competitive market. In the fast-paced ecosystem of 2017, where we are transitioning from monolithic monstrosities to microservices, your Continuous Integration and Continuous Deployment (CI/CD) pipeline is the heartbeat of your infrastructure, and if it is suffering from arrhythmia, your product updates will die on the operating table. I have audited too many startups in Oslo and Bergen where the bottleneck is not the code complexity, but anemic virtual machines running Jenkins on spinning rust hard drives that choke the moment `npm install` or `mvn package` tries to write thousands of small files to the disk simultaneously. We are going to fix this today by moving away from the "pet" build server mentality, embracing the new Jenkins 2.0 Pipeline-as-Code philosophy, and ruthlessly eliminating I/O wait times using modern virtualization techniques.

The Hardware Reality: Why I/O is the Silent Killer

Most Virtual Private Servers (VPS) sold in Europe are oversold garbage where the "guaranteed" CPU cycles are stolen by a noisy neighbor running a Bitcoin node, and the disk I/O is throttled to levels that would embarrass a USB 2.0 thumb drive. When you run a build, you are essentially launching a denial-of-service attack on your own filesystem: unpacking archives, compiling binaries, and generating artifacts create a massive spike in Input/Output Operations Per Second (IOPS). If your hosting provider is quietly landing those writes on a shared SATA array, your build agent spends 40% of its time in an `iowait` state, doing absolutely nothing but waiting for the physical platter to spin. In a recent audit for a fintech client preparing for the upcoming GDPR enforcement, we slashed build times from 14 minutes to 3 minutes simply by migrating their Jenkins master and agents to CoolVDS instances backed by NVMe storage; the difference isn't just speed, it's the reliability of knowing that `fsync` calls actually commit to disk without a 500ms latency penalty. You can verify whether your current provider is cheating you by running a simple diagnostic during a build.

$ iostat -dx 1

If `%util` is hitting 100% while your CPU is idling, you need to migrate immediately.
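
If you want a harder number than an `iostat` snapshot, run a short synthetic benchmark and see what the disk can actually sustain. The following `fio` job is a minimal sketch, not a rigorous methodology: install `fio` from your distribution's repositories first, and note that the job name, test file path, and 1GB size are arbitrary placeholders you should adjust (and clean up) for your own environment.

$ fio --name=ci-disk-check --filename=/tmp/ci-disk-check.dat \
      --rw=randwrite --bs=4k --size=1G --direct=1 \
      --ioengine=libaio --iodepth=32 --runtime=60 --group_reporting

A shared SATA array typically collapses into the hundreds of IOPS on a 4k random-write test like this, while a proper NVMe-backed instance reports tens of thousands. If your "SSD" plan lands closer to the former, you have found your bottleneck.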

Jenkins 2.0: Pipelines as Code or Go Home

If you are still clicking through the Jenkins UI to configure 'Freestyle' jobs in 2017, you are doing it wrong and creating a maintenance nightmare that cannot be version controlled or audited. Jenkins 2.0 introduced the Jenkinsfile, which allows you to define your entire build process as a Groovy script committed directly to your Git repository, ensuring that your build logic evolves alongside your application code. This shift is critical for teams working with Docker because it allows us to spin up ephemeral containers for each build stage, ensuring a clean environment every single time. No more "it works on the build server but not in production" because some junior admin manually installed a library six months ago. Here is how a proper, parallelized pipeline looks for a standard Node.js application, utilizing Docker agents to keep the host clean.

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'node:6.10-alpine'
                    args '-v /root/.npm:/root/.npm' // Cache the package layer
                }
            }
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit') {
                    agent { docker { image 'node:6.10-alpine' } }
                    steps { sh 'npm run test:unit' }
                }
                stage('Integration') {
                    agent { docker { image 'node:6.10-alpine' } }
                    steps { sh 'npm run test:integration' }
                }
            }
        }
    }
}

Pro Tip: Note the `args '-v /root/.npm:/root/.npm'` line. Even with fast NVMe storage, downloading the internet for every build is foolish. Map a host volume to the container's package cache to speed up dependency resolution by 10x.

Docker 1.13 and the "Noisy Neighbor" Defense

We are seeing a massive shift this year towards containerization, but running Docker in production requires a kernel that handles cgroups and namespaces efficiently. This is where the underlying virtualization technology of your VPS matters immensely; many budget providers use OpenVZ, which shares the host kernel and often lacks the necessary kernel modules (like `overlay2` storage driver support) required for modern Docker 1.13 performance. You end up hacking around with the `vfs` driver, which destroys disk space and performance. At CoolVDS, we utilize KVM (Kernel-based Virtual Machine) virtualization, which gives you a dedicated kernel. This is non-negotiable for running the Docker daemon (dockerd) correctly. You need to ensure your daemon is configured to use `overlay2` (available since kernel 4.0, or backported to RHEL/CentOS 7.2+) for efficient layer caching.

# /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
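
Before trusting that configuration, it is worth a quick sanity check of what you are actually running on. The commands below are a sketch that assumes a systemd-based distribution; apart from the daemon restart, they only read state.

$ systemd-detect-virt                      # 'kvm' is what you want; 'openvz' means a shared kernel
$ uname -r                                 # overlay2 wants 4.x, or the patched RHEL/CentOS 7.2+ kernel
$ sudo systemctl restart docker            # pick up /etc/docker/daemon.json
$ docker info | grep -i 'storage driver'   # should now report overlay2

If `docker info` still reports `vfs` or `devicemapper` after the restart, the kernel on that instance simply cannot give you overlay2, and no amount of daemon.json tuning will change that.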

Latency, NIX, and the Norwegian Context

Let's talk about geography, because the speed of light is the one law of physics we cannot deprecate. If your development team is sitting in Oslo or Trondheim, but your CI/CD artifacts are being pushed to an S3 bucket in `us-east-1` (Virginia), you are adding roughly 100ms to 150ms of latency per round trip. For a build process that involves uploading 500MB of Docker images or JAR files, that latency aggregates into minutes of wasted time. Furthermore, with the Datatilsynet (Norwegian Data Protection Authority) ramping up scrutiny ahead of the new European privacy regulations, keeping your data within the EEA, and specifically on Norwegian soil connected via NIX (Norwegian Internet Exchange), is a smart strategic move. When we configure deployment pipelines for Nordic clients, we set up local artifact repositories (like Nexus or Artifactory) running on a secondary CoolVDS instance within the same datacenter LAN. Transfer speeds jump from 5MB/s over the public internet to 1Gbps+ over the internal network.
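
You do not have to take the geography argument on faith; `curl` can break a single request down into DNS, connect, and total time. The two endpoints below are placeholders, so substitute the registry or artifact host your pipeline actually pushes to.

$ curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
      https://s3.amazonaws.com/your-artifact-bucket/
$ curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
      http://nexus.internal.example:8081/repository/releases/

Run each a few times: the connect time is the part dictated by the speed of light, and it is the one number you cannot tune away in configuration.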

Optimizing the Deployment Handshake

Finally, the actual deployment step often relies on `rsync` or `scp`. If you are deploying to a fleet of web servers, sequential copying is a bottleneck. We use Ansible 2.2 for this, but we tune the SSH connection settings aggressively to reuse connections rather than completing a full handshake for every file transfer. This is a configuration that separates the amateurs from the pros.

# ansible.cfg
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o Compression=yes
retries = 3

By enabling pipelining, we reduce the number of SSH operations required to execute a module, effectively cutting deployment time in half for complex playbooks. The one caveat is that pipelining requires `requiretty` to be disabled in /etc/sudoers on the managed hosts, which is exactly why Ansible ships with it turned off by default. Combine this with the low-latency peering provided by CoolVDS in the Nordic region, and your deployment feels instantaneous.
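
A simple before-and-after measurement makes the win concrete. The inventory and playbook names below are placeholders for whatever your project uses; setting `ANSIBLE_CONFIG` explicitly just guarantees Ansible reads the tuned ansible.cfg rather than a global one.

$ time ANSIBLE_CONFIG=./ansible.cfg ansible-playbook -i inventories/production deploy.yml

Run it once with pipelining commented out and once with it enabled; on playbooks with many small tasks the gap is dramatic, because each task no longer needs its own round of file transfers over SSH.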

Optimizing a CI/CD pipeline is not about one silver bullet; it is the accumulation of marginal gains across hardware I/O, network topology, and efficient configuration management. You can have the best Jenkinsfile in the world, but if it runs on a choked hard drive, you lose. Don't let slow infrastructure undermine your engineering culture. Test your pipeline on a high-performance KVM instance today.

Ready to stop waiting? Deploy a high-IOPS NVMe instance on CoolVDS in 55 seconds and watch your build times plummet.