Stop Watching Progress Bars: Optimizing Jenkins & Docker CI Pipelines in a Post-Safe Harbor World

The "Coffee Break" Build is Killing Your Margins

It is 2015, yet I still see senior developers staring at Jenkins console output like it is a mesmerizing campfire. If your Java build or npm install takes fifteen minutes, you are not just burning CPU cycles; you are burning salary. I have audited enough infrastructure across the Nordics to know the culprit usually isn't code complexity. It is disk I/O and network latency.

With the recent explosion of Docker (especially now that version 1.9 dropped yesterday with multi-host networking), the demand on storage subsystems has skyrocketed. Containers are fantastic, but they are I/O vampires. If you are running your CI/CD pipeline on cheap, spinning rust or overloaded OpenVZ containers, you are doing it wrong. Here is how we fix it, keeping the Datatilsynet happy and your developers coding.

The Bottleneck: IOPS and The Docker Daemon

Let’s look at what happens during a typical build. You pull a base image, you clone a repo, and then you run a dependency manager. Whether it is Maven downloading half the internet or npm creating a node_modules folder with 40,000 tiny files, your disk is getting hammered.

Run iostat -x 1 on your current build server during a deployment. If your %util is hitting 90-100% while CPU is idling, your storage is the bottleneck. Spinning HDDs—and even cheap, shared SSDs without proper isolation—cannot handle the random write patterns of building Docker images.
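
If sysstat is missing, grab it first. The device name below is just an example; point it at whatever actually backs your build workspace:

apt-get install -y sysstat
# -x gives extended per-device stats; watch %util while a build is running
iostat -x /dev/vda 1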

Why Virtualization Type Matters (KVM vs. OpenVZ)

Many hosting providers in Europe will sell you a "Cloud VPS" that is actually just an oversold OpenVZ container. For a CI/CD pipeline involving Docker, this is a nightmare for two reasons:

  • Kernel Versioning: Docker relies on specific kernel features (cgroups, namespaces). On OpenVZ, you are stuck with the host's kernel, often an ancient 2.6.32 RHEL6 relic.
  • Noisy Neighbors: In shared environments, if another user decides to compile the Linux kernel, your build time doubles.

This is why at CoolVDS, we strictly use KVM (Kernel-based Virtual Machine). You get your own kernel. You can enable the overlay storage driver for Docker without begging support for permission. It acts like a dedicated server, but with the flexibility of a VPS.
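
A thirty-second sanity check before you commit to any provider (module names vary by kernel; the comments flag the assumptions):

uname -r                      # overlay went mainline in 3.18; older kernels ship "overlayfs"
lscpu | grep -i hypervisor    # on proper KVM you should see "Hypervisor vendor: KVM"
modprobe overlay 2>/dev/null; grep overlay /proc/filesystems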

Configuration: The High-Performance Jenkins Slave

Don't run builds on your Jenkins master. That is a novice mistake. The master is for orchestration; slaves are for grunt work. Here is a battle-tested configuration for a dedicated build node running on Ubuntu 14.04 LTS.

First, ensure you are using the latest stable Docker engine. Do not use the distro packages; they are too old.

# Clean install of Docker on Ubuntu 14.04 (Trusty)
curl -sSL https://get.docker.com/ | sh

# Tune the daemon for performance. 
# We use the overlay driver instead of devicemapper for speed.
echo 'DOCKER_OPTS="-s overlay"' >> /etc/default/docker
service docker restart
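
One caveat: the stock Trusty 3.13 kernel predates the mainline overlay module (merged in 3.18). If the daemon refuses to start with -s overlay, the usual fix is Ubuntu's hardware-enablement kernel; linux-generic-lts-vivid (3.19) is what I reach for:

apt-get update
apt-get install -y linux-generic-lts-vivid
reboot    # the new kernel only takes effect after a reboot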

Next, we set up the Jenkins Swarm Client, which lets the node register itself with the master instead of being configured by hand in the UI. Note the heap size allocation; builds run out of memory before they run out of CPU.

# swarm-client.jar is the Swarm plugin's client (shipped as swarm-client-<version>-jar-with-dependencies.jar)
# -Xmx caps the agent JVM heap; builds run out of memory long before CPU
java -Xmx2g -jar swarm-client.jar \
  -fsroot /var/jenkins_home \
  -master http://jenkins-master.internal:8080 \
  -username buildbot \
  -password $API_KEY \
  -executors 4 \
  -labels "docker-ready nvme-fast"

Pro Tip: If you are using CoolVDS's NVMe storage tier, mount your workspace volume with the noatime flag in /etc/fstab. There is no need for the OS to write a timestamp every time a source file is read during compilation. This simple change can reduce I/O overhead by 10-15% on heavy read operations.
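
A minimal sketch of that fstab entry; the device and mount point are placeholders for your own layout:

# /etc/fstab -- noatime skips the access-time write on every source-file read
/dev/vdb1  /var/jenkins_home  ext4  defaults,noatime  0  2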

The Elephant in the Room: Safe Harbor is Dead

We need to talk about the legal landscape. Last month (October 2015), the European Court of Justice invalidated the Safe Harbor agreement (Schrems I). If you are piping your customer data or proprietary code through US-based build servers (like AWS US-East or generic CI SaaS platforms), you are now operating in a legal grey area.

Norwegian companies are particularly scrutinized by Datatilsynet. The solution is data sovereignty. By hosting your CI/CD infrastructure within Norway (or the EEA), you mitigate this risk instantly.

Latency to Oslo: The Hidden Efficiency Killer

Beyond compliance, physics is a factor. If your team is in Oslo or Bergen, pushing gigabytes of Docker images to a server in Virginia is painfully slow. RTT (Round Trip Time) matters.

Route                        Avg Latency   Throughput (1GB Upload)
Oslo -> US East (Virginia)   ~110 ms       Slow / Variable
Oslo -> Frankfurt            ~35 ms        Moderate
Oslo -> CoolVDS (Oslo DC)    < 5 ms        Line Speed

When you deploy to a local VPS Norway instance, `docker push` happens almost instantly. This tightens the feedback loop. A developer commits code, and the test results are back before they finish checking their email.
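
Easy to verify yourself. The registry hostname below is a stand-in for whatever you run internally:

ping -c 10 registry.internal                            # RTT first
time docker push registry.internal:5000/myapp:latest    # then the real transfer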

The Verdict: Speed is a Feature

You can optimize your `Dockerfile` all day—chaining commands, cleaning `apt` caches—but if the underlying hardware is thrashing, you are fighting a losing battle.
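
For reference, this is the kind of Dockerfile hygiene I mean (a minimal sketch; the package is illustrative):

FROM ubuntu:14.04
# One RUN, one layer: chain install and cleanup so the apt cache is never committed
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential && \
    rm -rf /var/lib/apt/lists/*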

We built the CoolVDS platform because we were tired of waiting for I/O. By combining KVM isolation with pure NVMe storage arrays, we provide the raw throughput required for modern, containerized CI/CD pipelines. It is not just about raw specs; it is about consistent performance under load.

Stop letting infrastructure dictate your release cadence. Spin up a high-performance KVM instance in our Oslo datacenter today, install Docker 1.9, and watch your build times drop.