Stop Watching Progress Bars: Optimizing CI/CD Pipelines on KVM Infrastructure
There is nothing more demoralizing for a development team than pushing a commit and waiting 25 minutes for the green checkmark. If you multiply that wait time by the number of commits per day and the hourly rate of your senior backend engineers, you aren't just losing time. You are burning cash.
I recently audited a deployment pipeline for a media house in Oslo. Their build server was choking. Tests that should run in seconds were taking minutes. The culprit wasn't code quality or sloppy unit tests. It was the infrastructure.
Most "cloud" providers oversell their storage throughput. When your CI runner tries to compile binaries or run npm install on a project with 10,000 dependencies, you hit an I/O wall. Here is how to fix it using tools available today, in late 2016.
The Silent Killer: I/O Wait
When a build hangs, most sysadmins check CPU usage. They see the load average spiking and assume they need more cores. But look closer.
Run top and look at the %wa (iowait) value.
%Cpu(s): 12.5 us, 3.2 sy, 0.0 ni, 45.0 id, 39.2 wa, 0.0 hi, 0.1 si, 0.0 st
If your wa is hovering near 40% like in the example above, your CPU is not overloaded at all; it is sitting idle, waiting for the disk to finish reading or writing. In a CI/CD environment, where we constantly create and destroy Docker containers, extract tarballs, and compile code, I/O is the bottleneck far more often than raw compute.
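To see which device is actually saturated, iostat from the sysstat package breaks the wait down per disk. A minimal check (device names will differ on your system):
# Debian/Ubuntu: iostat ships in the sysstat package
apt-get install sysstat
# Extended per-device stats every 2 seconds, 5 samples.
# A device sitting near 100% in %util with a high await value
# is the disk your builds are queuing behind.
iostat -x 2 5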
The Fix: You need high IOPS (Input/Output Operations Per Second). Traditional spinning rust (HDD) tops out at roughly 100-200 IOPS, and cheap shared SATA SSD slices are frequently throttled to a few hundred. For a heavy Jenkins pipeline, you need NVMe. At CoolVDS, our benchmarks show NVMe drives pushing 10,000+ IOPS easily, obliterating that wait time.
Optimizing Jenkins for 2016 Workflows
Jenkins 2.0 (released earlier this year) introduced the Pipeline plugin as a standard, but the default Java configurations on many Linux distributions are still stuck in the past. If you are running Jenkins on Ubuntu 16.04 LTS, the default heap size is laughable.
Garbage collection pauses can freeze your build agent. Let's tune the JAVA_ARGS in /etc/default/jenkins to use the G1 garbage collector, which is far better suited to large heaps than the default collector.
# /etc/default/jenkins
# Allocate 4GB heap (adjust based on your VPS RAM)
JAVA_ARGS="-Xmx4096m"
# Use G1GC for better performance and reduced pauses
JAVA_ARGS="$JAVA_ARGS -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent"
# Relax the Content-Security-Policy so archived HTML reports (coverage, test results) still render
JAVA_ARGS="$JAVA_ARGS -Dhudson.model.DirectoryBrowserSupport.CSP=\"sandbox allow-scripts; default-src 'none'; img-src 'self' data:; style-src 'self' 'unsafe-inline';\""
After editing, restart the service:
systemctl restart jenkins
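To confirm the new flags actually took effect, inspect the running Jenkins JVM. This one-liner is just a quick sanity check, not part of the official setup:
# Print the Jenkins JVM command line and pick out the heap and GC flags
ps -o args= -C java | tr ' ' '\n' | grep -E 'Xmx|UseG1GC'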
Docker Caching Strategy
With Docker 1.12 becoming the standard, everyone is moving towards containerized builds. However, downloading the internet (all your Maven/NPM repos) inside a container every single time is inefficient.
Keep the containers ephemeral, but don't throw the dependency cache away with them. Map a host volume to the container's cache directory so the dependencies persist between builds.
# Bad: Re-downloading dependencies every time
docker run --rm -v "$(pwd)":/app -w /app maven:3.3.9 mvn clean install
# Good: Mounting the local .m2 repository into the container
docker run --rm \
  -v "$(pwd)":/app \
  -v /root/.m2:/root/.m2 \
  -w /app \
  maven:3.3.9 mvn clean install
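The same trick applies to Node projects. A sketch assuming the official node:6 image and npm's default cache location under /root/.npm (the host path is just an example, adjust to taste):
# Persist the npm cache between builds instead of re-downloading every package
docker run --rm \
  -v "$(pwd)":/app \
  -v /var/cache/ci-npm:/root/.npm \
  -w /app \
  node:6 npm install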
Pro Tip: If you are using `docker-compose` for integration tests, ensure you are cleaning up networks. Docker 1.10+ improved networking, but stale bridges can conflict. Always run `docker-compose down -v` after your test suite finishes.
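One way to make that cleanup unconditional is a shell trap in the build script, so the stack is torn down even when the test suite fails. A rough sketch, where run-integration-tests.sh stands in for whatever your suite actually is:
#!/bin/bash
set -e
# Always tear down containers, networks and volumes, even on failure
trap 'docker-compose down -v' EXIT
docker-compose up -d
./run-integration-tests.sh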
Network Latency: The Norway Advantage
We often ignore the speed of light. If your dev team is sitting in Oslo or Bergen, and your build server is in a massive datacenter in Virginia or Frankfurt, you are adding latency to every git push, every SSH session, and every artifact upload.
Connecting to a VPS via the Norwegian Internet Exchange (NIX) dramatically improves the responsiveness of interactive CLI tools. When you are debugging a failed build via SSH, 15ms latency vs 80ms latency is the difference between flow state and frustration.
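Measuring this takes thirty seconds: ping gives you the round trip, and mtr shows where on the path the time is lost. build.example.com below is a placeholder for your own CI host:
# Average round-trip time from your workstation to the build server
ping -c 10 build.example.com
# Per-hop latency and packet loss along the route
mtr --report --report-cycles 20 build.example.com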
| Feature | Standard Cloud VPS | CoolVDS (Norway) |
|---|---|---|
| Virtualization | Often OpenVZ (Shared Kernel) | KVM (Kernel-based Virtual Machine) |
| Storage | SATA SSD (Shared) | NVMe (High IOPS) |
| Data Sovereignty | Uncertain (US Patriot Act?) | Strictly Norway/EEA |
Data Sovereignty in a Post-Safe Harbor World
With the Safe Harbor agreement invalidated last year and its replacement, the Privacy Shield framework, still being rolled out, legal compliance is a headache for CTOs. Hosting your CI/CD pipeline (which contains your source code and, often, production database dumps) outside of your jurisdiction is a risk.
By keeping your build infrastructure on a VPS in Norway, you simplify compliance. Your intellectual property and data remain under Norwegian jurisdiction and EEA regulations. This is not just technical optimization; it is legal risk mitigation.
Benchmarking Your Current Provider
Don't take my word for it. Test your current disk speed. Install `fio` (Flexible I/O Tester) and run a random write test, which simulates a heavy build process.
apt-get install fio
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=512M --numjobs=1 --runtime=240 --group_reporting
If your queue-depth-1 random writes come in under 1,000 IOPS, you are either on spinning disks or your hosting provider is throttling you. On a CoolVDS NVMe instance, we routinely see numbers that make physical servers jealous.
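Dependency installs and Docker layer extraction hammer reads as well as writes, so it is worth running the mirror-image test too; only the rw mode changes from the command above:
# Same profile as the write test, but random 4k reads
fio --name=randread --ioengine=libaio --iodepth=1 --rw=randread --bs=4k --direct=1 --size=512M --numjobs=1 --runtime=240 --group_reporting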
Final Thoughts
Optimization is about removing friction. Friction comes from slow disks, network latency, and bad configuration. By switching to KVM-based virtualization where resources aren't oversold, and leveraging local connectivity in Norway, you give your developers the fastest feedback loop possible.
Stop waiting for the progress bar. Deploy a high-performance CoolVDS instance today and cut your build times in half.