Optimizing CI/CD Pipelines in 2016: Why Latency and I/O Are Killing Your Build Times
Let’s be honest: waiting for a build to finish is the modern equivalent of watching paint dry. But unlike painting, where the result is usually a nice wall, a slow build pipeline destroys developer flow, increases context-switching costs, and ultimately delays time-to-market. I've seen teams in Oslo sitting idle for 40 minutes because their shared CI runner is choking on a simple npm install.
With the release of Jenkins 2.0 earlier this year, the paradigm has shifted. We aren't just scripting tasks anymore; we are building Pipelines as Code. However, software advancements mean nothing if your underlying infrastructure is running on spinning rust or oversold CPU cores.
Here is how we optimize CI/CD pipelines for raw speed and reliability, keeping the specific constraints of the Norwegian market in mind.
The Bottleneck is Almost Always I/O
Most developers blame the compiler or the package manager when builds are slow. In my experience auditing infrastructure for tech firms across Scandinavia, the culprit is usually Disk I/O Wait. Continuous Integration is heavily I/O dependent. You are checking out Git repositories, extracting archives, writing thousands of small files (looking at you, node_modules), and building Docker images layer by layer.
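If you want a concrete sense of how many tiny files a typical JavaScript project drags in, a quick count on any checked-out workspace makes the point (the path is illustrative):

# Count the individual files the package manager just wrote to disk
find node_modules -type f | wc -l
# Each of those files is a separate metadata operation your storage has to absorb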
If you are running your CI runner on a budget VPS with standard SATA SSDs—or worse, HDD—you are capping your throughput. The queue depth explodes, and the CPU sits idle waiting for data.
Pro Tip: Check your I/O wait time during a heavy build. Run iostat -x 1 in a terminal. If your %iowait consistently creeps above 5-10%, your storage is the bottleneck, not your code.
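On Ubuntu, iostat ships with the sysstat package, so a minimal check during a build looks roughly like this:

# Install the sysstat tools if iostat is missing
sudo apt-get install -y sysstat
# Sample extended device statistics every second; watch the %iowait column
iostat -x 1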
This is where CoolVDS differs from the mass-market hosting providers. We use enterprise-grade NVMe storage. In our benchmarks, random read/write operations on NVMe can be 4x to 6x faster than standard SATA SSDs. For a Java Maven build or a Docker image construction, this translates to shaving minutes off every single commit.
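If you want to benchmark your own runner's storage before and after a migration, a quick fio run along these lines gives comparable random I/O numbers (the job parameters here are illustrative, not our exact benchmark setup):

# 4k random read/write mix, roughly what a CI workspace looks like to the disk
fio --name=ci-bench --rw=randrw --bs=4k --size=1G --numjobs=4 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting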
Adopting Jenkins 2.0 and the Jenkinsfile
If you are still configuring build steps in the Jenkins GUI, stop. It's unmaintainable and impossible to audit. The 2016 standard is the Jenkinsfile. This allows you to commit your build pipeline configuration alongside your code.
Here is a robust example of a Scripted Pipeline configuration that utilizes Docker for isolation. This ensures that your build environment is clean every time, avoiding the "it works on the CI server but not locally" nightmare.
Example: Scripted Jenkinsfile with Docker
node {
    def app

    stage('Checkout') {
        checkout scm
    }

    stage('Build') {
        // We use a specific Node image to ensure version consistency
        docker.image('node:6.9.2').inside {
            sh 'npm config set registry https://registry.npmjs.org/'
            sh 'npm install'
            sh 'npm run build'
        }
    }

    stage('Test') {
        docker.image('node:6.9.2').inside {
            sh 'npm test'
        }
    }

    stage('Package') {
        // Building the production Docker artifact
        app = docker.build("coolvds-app:${env.BUILD_ID}")
    }
}
Note the use of docker.image(...).inside. This mounts the workspace into the container, executes the shell commands, and exits. It is cleaner than managing multiple versions of Node or Java on the host OS.
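You can approximate what .inside does on your own machine with a plain docker run that mounts the current directory, which is handy when a CI failure needs local reproduction (the image tag matches the pipeline above):

# Mount the checkout into the same image the pipeline uses and run the tests
docker run --rm -it -v "$PWD":/workspace -w /workspace node:6.9.2 npm test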
Tuning the Host Kernel for Docker Performance
Running high-frequency Docker builds puts a strain on the Linux networking stack and the filesystem. Default Ubuntu 16.04 LTS settings are conservative. If you are managing your own CI runners on a CoolVDS instance, you need to tune sysctl to handle the rapid creation and destruction of network connections.
Edit your /etc/sysctl.conf to include these optimizations:
# Allow more connections to be handled
net.core.somaxconn = 4096
# Reuse closed sockets faster (TIME_WAIT state)
net.ipv4.tcp_tw_reuse = 1
# Increase port range for outgoing connections (critical for heavy parallel builds)
net.ipv4.ip_local_port_range = 1024 65535
# Increase max file descriptors
fs.file-max = 2097152
Apply these changes with sudo sysctl -p. Additionally, ensure your Docker daemon is using the correct storage driver. While AUFS has been the standard, we are seeing better performance and stability with OverlayFS on newer kernels.
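A quick way to confirm the settings took effect, and to see which storage driver the daemon is currently running, assuming a stock Ubuntu 16.04 host:

# Reload sysctl.conf and spot-check one of the new values
sudo sysctl -p
sysctl net.core.somaxconn        # should now report 4096
# See which storage driver Docker is using today
docker info | grep -i 'storage driver'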
Configuring Docker Daemon
Edit or create /etc/docker/daemon.json:
{
  "storage-driver": "overlay2",
  "dns": ["8.8.8.8", "8.8.4.4"]
}
Note: Ensure your kernel version is 4.0 or higher to use overlay2 safely. Our CoolVDS Ubuntu 16.04 images come with Kernel 4.4, fully supporting this configuration.
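Before switching, verify the kernel and keep in mind that changing the storage driver makes previously built images invisible to the daemon until they are rebuilt or re-pulled. A rough sequence, assuming systemd:

uname -r                         # should report 4.4.x on the stock 16.04 image
sudo systemctl restart docker    # pick up the new daemon.json
docker info | grep -i 'storage driver'   # should now show overlay2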
Data Sovereignty and Compliance in Norway
With the recent invalidation of the Safe Harbor agreement and the introduction of the Privacy Shield framework, data location matters more than ever. Furthermore, the upcoming GDPR regulation (set to be enforced in 2018) is already causing headaches for CTOs across Europe.
If your code contains sensitive data, PII (Personally Identifiable Information), or hardcoded secrets (which it shouldn't, but happens), hosting your CI/CD pipeline on US-controlled public clouds introduces legal ambiguity.
By hosting your GitLab Runner or Jenkins controller on a VPS in Norway, you ensure that your data remains within Norwegian jurisdiction, adhering to Datatilsynet guidelines. Latency is another factor. If your dev team is in Oslo or Bergen, pushing code to a server in Virginia (US-East) is physically slower than pushing to a server in Oslo. 100ms latency adds up when you are doing hundreds of git operations a day.
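It is easy to measure what that distance costs you. A rough sketch, using a placeholder hostname for your git remote:

# Round-trip time to the remote (replace git.example.no with your actual host)
ping -c 5 git.example.no
# Time a cheap remote operation; multiply by your team's daily push/fetch count
time git ls-remote --heads origin > /dev/null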
The "Noisy Neighbor" Problem in CI/CD
Automated tests are effectively a DDoS attack on your own infrastructure. They spike CPU usage to 100% instantly. In a shared hosting environment (Container-based or cheap VPS), your build performance fluctuates depending on what other customers are doing on that physical node. This leads to "flaky tests"—tests that fail only because the timeout was exceeded due to CPU steal time.
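Steal time is easy to spot with standard tools; vmstat ships with procps and mpstat with sysstat, so no exotic tooling is needed:

# 'st' (last column) is the percentage of time the hypervisor took your CPU away
vmstat 1 5
# Per-core view; a non-zero %steal on an otherwise idle runner is a red flag
mpstat -P ALL 1 3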
At CoolVDS, we utilize KVM (Kernel-based Virtual Machine) hardware virtualization. This provides strict isolation. Your RAM is your RAM. Your CPU cycles are reserved for you. This consistency is critical for CI/CD; a test suite should take the same amount of time at 2:00 PM as it does at 2:00 AM.
Automating the Agent Deployment
Don't configure your build agents manually. Use Ansible. Here is a snippet of a playbook we use to provision a Docker-ready build agent on a fresh CoolVDS instance.
---
- hosts: build_agents
  become: yes
  tasks:
    - name: Install required system packages
      apt:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - apt-transport-https
        - ca-certificates
        - curl
        - software-properties-common
    - name: Add Docker GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add Docker repository
      apt_repository:
        repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
        state: present
    - name: Install Docker CE
      apt:
        name: docker-ce
        state: present
    - name: Add jenkins user to docker group
      user:
        name: jenkins
        groups: docker
        append: yes
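Assuming the playbook is saved as build-agent.yml and your inventory lists the runners under a [build_agents] group (both names are placeholders), provisioning every agent is a one-liner:

# Run the playbook against the whole group in one pass
ansible-playbook -i hosts build-agent.yml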
Conclusion
In 2016, a slow pipeline is a choice. The tools exist—Jenkins 2.0, Docker, and Ansible—to build robust, self-healing automation. But software cannot overcome hardware limitations.
If you are tired of watching the spinner spin, it's time to upgrade your infrastructure. Move your pipelines to a provider that respects data sovereignty and understands high-performance I/O.
Ready to cut your build times in half? Deploy a high-performance NVMe instance on CoolVDS today and experience the difference strict KVM isolation makes.