CI/CD Pipeline Survival Guide: Optimizing Jenkins & Docker 1.10 Builds on Norwegian VPS

Stop Letting Disk I/O Kill Your Build Times

It is 3:00 AM on a Tuesday. I am staring at a Jenkins console output that has been stuck on Building workspace... for six minutes. The client, a fast-growing e-commerce shop in Oslo, is waiting for a critical hotfix. The code is fine. The tests pass locally. But the build server? It is choking.

If you are running Continuous Integration/Continuous Deployment (CI/CD) pipelines on standard, oversold virtual machines, you have likely hit this wall. You blame Maven. You blame npm. You blame the network. But nine times out of ten, in 2016, the culprit is Disk I/O Wait.

Building software is violently disk-intensive. Unpacking archives, compiling binaries, creating Docker images—it all hits the disk. On a noisy public cloud, your neighbor's database backup just stole your IOPS, and your deployment pipeline ground to a halt. Here is how we fix it, keeping compliance with the recent Safe Harbor invalidation in mind.

The Architecture of a Fast Pipeline

We are focusing on a stack running Jenkins (the current 1.6xx line) and Docker 1.10. This is the standard for a modern DevOps shop right now. The goal is to minimize the time from git push to production.

1. The Storage Driver Bottleneck

If you are using Docker on CentOS or RHEL, you are likely running the default devicemapper storage driver. Stop. It is slow and creates massive metadata overhead. With the release of Docker 1.10 last month, and kernel 3.18+ support, you should be moving to OverlayFS.

In our benchmarks, switching from Device Mapper to OverlayFS reduced image build times by nearly 40% on CoolVDS instances because it eliminates the block-level overhead. Here is how you force it in Ubuntu 14.04/16.04 (beta):

# /etc/default/docker
DOCKER_OPTS="--storage-driver=overlay --dns 8.8.8.8 --dns 8.8.4.4"

Restart the daemon. If you are not on a kernel that supports OverlayFS yet, at least configure devicemapper to use direct-lvm rather than loopback devices. Loopback is a performance death sentence.
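If you are stuck on devicemapper, the direct-lvm setup looks roughly like this. This is a sketch, not a ready-to-run script: the spare block device (/dev/vdc here) and the pool sizing are assumptions you must adapt to your own volume layout, and these commands destroy whatever is on that device.

```
# Sketch: dedicate a spare block device (/dev/vdc is an assumption) to a thin pool
pvcreate /dev/vdc
vgcreate docker /dev/vdc
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

# Then point the daemon at the pool in /etc/default/docker:
# DOCKER_OPTS="--storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool"
```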

2. Jenkins Workspace Hygiene

Jenkins is notorious for leaving garbage behind. Every build creates artifacts. If your JENKINS_HOME is on the same partition as your OS, you risk crashing the server when the disk fills up. Separate your concerns.
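Old workspaces are the usual disk hog. A minimal pruning sketch, wrapped as a function so you can cron it; the default path and the 14-day cutoff are assumptions, so match them to your own JENKINS_HOME and retention policy:

```shell
# Sketch: remove top-level workspace directories untouched for N+ days.
# The default root and cutoff are assumptions; adjust to your layout.
prune_workspaces() {
  local root="${1:-/var/lib/jenkins/workspace}"
  local days="${2:-14}"
  # -mindepth/-maxdepth 1: only per-job directories, never the root itself
  find "$root" -mindepth 1 -maxdepth 1 -type d -mtime +"$days" -exec rm -rf {} +
}
```

Run it from cron during off-hours, not mid-build, or a long-running job may lose its workspace underneath it.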

We mount a dedicated high-performance block volume for the workspace. In /etc/fstab, ensure you are using noatime to prevent the OS from writing metadata every time a file is merely read. This is a small tweak that adds up when compiling thousands of Java classes or PHP files.

/dev/vdb1    /var/lib/jenkins    ext4    defaults,noatime,nodiratime    0    2

Pro Tip: For temporary build artifacts that don't need to survive a reboot (like intermediate object files), use tmpfs (a RAM-backed filesystem). It is far faster than even the best SSD.

3. Memory Tuning for the JVM

Jenkins runs on Java. Java loves RAM. If your VPS has 4GB of RAM and you don't limit the heap, the OOM Killer (Out of Memory Killer) will eventually wake up and murder your Jenkins process during a heavy build.

Explicitly define your heap boundaries in /etc/default/jenkins. On a 4GB CoolVDS instance, I cap the heap at 2.5GB; the heap is not the JVM's whole footprint, so the remaining headroom covers JVM overhead, the OS, and Docker:

JAVA_ARGS="-Djava.awt.headless=true -Xmx2560m -XX:+UseConcMarkSweepGC"
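If you manage instances of several sizes, the heap figure can be derived instead of hard-coded. A minimal sketch; the reserve value is an assumption, so set it to whatever headroom you leave for the OS, Docker, and JVM overhead:

```shell
# Sketch: derive -Xmx from total RAM minus a fixed reserve (assumption: 1536 MB)
reserve_mb=1536
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
heap_mb=$(( total_kb / 1024 - reserve_mb ))
echo "JAVA_ARGS=\"-Djava.awt.headless=true -Xmx${heap_mb}m -XX:+UseConcMarkSweepGC\""
```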

The "Norwegian Advantage": Latency & Law

Performance isn't just about CPU cycles; it's about network topology. If your dev team is in Oslo or Bergen, hosting your CI server in Frankfurt or Virginia adds unnecessary latency to every git clone and SCP transfer.

Ping Matters

Thanks to peering at the Norwegian Internet Exchange (NIX), typical latency from a fiber connection in Oslo to a CoolVDS instance is sub-5ms. Compare that to 35ms+ to Central Europe.

Source        Destination                   Latency (Avg)
Oslo fiber    CoolVDS (Oslo DC)             < 3 ms
Oslo fiber    AWS (Ireland)                 ~45 ms
Oslo fiber    DigitalOcean (Amsterdam)      ~30 ms

The Data Sovereignty Headache

Since the ECJ invalidated the Safe Harbor agreement last October (2015), the legal ground for transferring data to the US is shaky. We are all waiting for the "Privacy Shield" details, but right now, uncertainty is high. The Norwegian Data Protection Authority (Datatilsynet) is watching closely.

If your CI/CD pipeline processes production databases with customer data (e.g., for staging tests), that data must be handled carefully. Hosting on CoolVDS keeps the data physically within Norway, simplifying your compliance posture immediately. You don't have to worry about where the bits are physically sitting.

Configuring a Robust Build Agent

Don't run builds on the master. The master is for orchestration; agents are for heavy lifting. Here is a quick recipe for a disposable Docker build agent that connects back to Jenkins.

Create a `Dockerfile` for your agent:

FROM java:8-jdk

# Install the Docker CLI (to drive the host daemon via the mounted socket)
# and sshd in a single layer
RUN apt-get update && \
    apt-get install -y docker.io openssh-server && \
    rm -rf /var/lib/apt/lists/* && \
    mkdir /var/run/sshd

# Create the Jenkins user and grant it access to the Docker socket
# (the docker group ID must match the host's for the socket mount to work)
RUN useradd -m -d /home/jenkins -s /bin/bash jenkins && \
    echo "jenkins:jenkins" | chpasswd && \
    usermod -aG docker jenkins

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Build it and run it, mapping the Docker socket so the container can spawn sibling containers (cleaner than Docker-in-Docker):

docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock --name jenkins-slave-1 my-jenkins-slave
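A quick smoke test before wiring the node into Jenkins. The image name and credentials match the Dockerfile above; the build-host address is an assumption:

```
# Build the agent image from the Dockerfile above
docker build -t my-jenkins-slave .

# After starting it with the run command above, verify SSH login and
# that the mounted socket can reach the host's Docker daemon
ssh -p 2222 jenkins@build-host.example.com docker ps
```

If `docker ps` fails with a permission error, the docker group ID inside the container does not match the host's; align the GIDs and rebuild.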

Why Infrastructure Choice Dictates Success

You can tune sysctl.conf until your fingers bleed, but you cannot software-patch bad hardware. In 2016, HDD-based VPS hosting is obsolete for CI/CD. The random read/write patterns of compiling code and building container layers will bring spinning rust to its knees.

This is where the distinction between "cheap" and "valuable" becomes clear. CoolVDS utilizes pure SSD storage arrays (and we are testing NVMe for future tiers) with KVM virtualization. KVM ensures true hardware isolation. Unlike OpenVZ, where a neighbor can starve your kernel resources, KVM gives you a dedicated slice of the pie.

When you trigger a build, you need 100% of that CPU core and 100% of that disk throughput now. Not when the neighbor is done processing their logs.

Final Thoughts

A slow pipeline is a culture killer. It encourages developers to commit less often because "the build takes forever." It delays hotfixes. It frustrates stakeholders.

Optimize your configs. Switch to OverlayFS. Keep your data in Norway to keep Datatilsynet happy. And stop running your critical infrastructure on hardware that belongs in a museum.

Need to slash your build times? Spin up a high-performance SSD instance on CoolVDS today and see the difference raw I/O power makes.