
Scaling CI/CD in 2018: Crushing I/O Bottlenecks and Surviving GDPR

The Waiting Game is Over

It is April 2018. The snow in Oslo is finally melting, but the impending enforcement of GDPR on May 25th is freezing the blood of every CTO in Europe. While the legal team is drowning in paperwork, the engineering team faces a different crisis: pipeline fatigue.

There is nothing that kills developer momentum faster than a 20-minute build time. You push code, you wait, you get coffee, you forget what you were doing, and then the build fails because of a timeout. If your Continuous Integration (CI) pipeline feels like it is running on a dusty server in a basement in Drammen, it probably is.

In this analysis, we are going to look at why your builds are slow, how to fix them with the latest Docker patterns available this year, and why physical infrastructure location matters more than ever for Norwegian businesses.

The Hidden Bottleneck: It's Not CPU, It's Disk I/O

Most developers assume that compiling code is a CPU-intensive task. While true for C++ or heavy Java projects, the modern web stack (Node.js, PHP, Python) is actually I/O bound.

Consider npm install. It doesn't just download files; it writes tens of thousands of tiny files to disk. If you are running your Jenkins slaves or GitLab Runners on standard HDD VPS hosting or oversold cloud instances, your CPU is sitting idle while it waits for the disk to finish writing.

Pro Tip: Check your disk latency. If your iowait is consistently above 5% during builds, you need faster storage. This is why we standardized on NVMe storage for all CoolVDS instances. Random read/write speeds on NVMe are roughly 6x those of standard SATA SSDs; in a CI context, that alone can cut install times in half.
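A quick way to measure this during a build is iostat from the sysstat package; the sketch below assumes a Debian/Ubuntu host:

# Install sysstat for iostat
apt-get install -y sysstat

# Sample extended device statistics every second, five times, mid-build
iostat -x 1 5

# Watch %iowait in the avg-cpu block and r_await/w_await per device:
# double-digit await values (in ms) on the build volume mean the disk,
# not the CPU, is throttling your pipeline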

Configuring Docker for Performance

If you are still using the devicemapper storage driver with Docker in 2018, stop. It is slow and prone to corruption. The industry standard has shifted to overlay2.

Here is how you verify and configure your Docker daemon on Ubuntu 16.04 to ensure you aren't bottlenecking at the filesystem layer:

# Check current storage driver
docker info | grep 'Storage Driver'

# If it says devicemapper, update your daemon.json
cat <<EOF > /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

systemctl restart docker
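After the restart, confirm the daemon picked up the new driver. One caveat: images and containers created under devicemapper are not migrated, so expect to re-pull or rebuild them.

# Confirm the switch took effect
docker info | grep 'Storage Driver'
# Expected output: Storage Driver: overlay2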

The 2018 Game Changer: Multi-Stage Builds

Until recently, keeping Docker images small required complex shell scripts and multiple Dockerfiles (the "Builder Pattern"). Since Docker 17.05, we have Multi-Stage Builds. This is mandatory knowledge for any serious DevOps engineer today.

By compiling in one stage and copying only the artifacts to a lean runtime image, we reduce network transfer time and storage costs. This is critical when pushing to a registry.

# Dockerfile - optimized for 2018 workflows

# STAGE 1: The Build
FROM node:9-alpine as builder
WORKDIR /usr/src/app
COPY package*.json ./
# This layer is cached if package.json doesn't change
RUN npm install
COPY . .
RUN npm run build

# STAGE 2: The Production Image
FROM nginx:1.13-alpine
# Copy only the build output from the previous stage
COPY --from=builder /usr/src/app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

This approach often reduces image size from ~800 MB to ~20 MB. When you deploy to staging, the pull is near-instant, and faster deploys mean tighter feedback loops.
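You can verify the savings yourself. A minimal sketch, assuming the Dockerfile above sits in the current directory (the image name myapp is a placeholder):

# Build the final image; the builder stage layers are discarded
docker build -t myapp:latest .

# Build only the first stage when debugging the compile step (17.05+)
docker build --target builder -t myapp:build .

# Compare the two image sizes
docker images myapp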

Parallelization with Jenkins Pipelines

If you are still using the "Freestyle project" in Jenkins, you are living in 2015. The Jenkinsfile (Pipeline as Code) allows us to visualize stages and, crucially, run them in parallel.

Why run your backend tests after your frontend build? Run them simultaneously. This requires a VPS provider that doesn't steal CPU cycles. CoolVDS guarantees KVM isolation, meaning your reserved cores are actually yours.

pipeline {
    agent any
    stages {
        stage('Build & Test') {
            parallel {
                stage('Frontend Build') {
                    steps {
                        sh 'npm install && npm run build'
                    }
                }
                stage('Backend Tests') {
                    steps {
                        sh './mvnw test'
                    }
                }
                stage('Security Scan') {
                    steps {
                        sh './scripts/security-check.sh'
                    }
                }
            }
        }
    }
}
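Note that parallel stages in Declarative Pipeline require version 1.2 of the plugin or newer. They also only pay off if the agent has cores to spare; if you run a dedicated high-spec node, pin the pipeline to it with a label. The label nvme-runner below is hypothetical, whatever you assigned to that agent in Jenkins:

// Replace "agent any" with a label to target your dedicated build node
agent { label 'nvme-runner' }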

The GDPR Elephant in the Room

We cannot talk about infrastructure in 2018 without addressing the General Data Protection Regulation. Datatilsynet (the Norwegian Data Protection Authority) has been clear: you must know where your data lives.

Many US-based providers are relying on the "Privacy Shield" framework, but skepticism is high among European privacy advocates. While technically legal, storing data outside the EEA adds layers of compliance complexity that most pragmatic CTOs want to avoid.

Why Hosting in Norway Matters

Feature          | US/Global Cloud              | CoolVDS (Norway)
-----------------|------------------------------|---------------------
Data Residency   | Uncertain (replication?)     | Guaranteed Oslo
Latency to Oslo  | 20-40 ms (London/Frankfurt)  | < 2 ms (NIX peering)
Support          | Generic L1                   | Local experts

Hosting your CI/CD runners and staging environments locally ensures that if production data is ever used in a staging dump (which you shouldn't do, but let's be real, it happens), it never leaves Norwegian soil.

Infrastructure is Code, but Hardware is Physics

You can optimize your Jenkinsfile and tune your my.cnf all day, but you cannot code your way out of bad hardware. When multiple developers hit the CI server simultaneously, I/O wait times skyrocket on shared platforms.

We built CoolVDS on the premise that a VPS in Norway shouldn't mean "slow and expensive." By utilizing pure NVMe arrays and high-frequency RAM, we eliminate the physical bottlenecks that plague CI/CD pipelines.

Your Next Step: Stop watching the progress bar. Spin up a CoolVDS instance with Ubuntu 16.04 or the new 18.04 LTS (beta), install the Docker runner, and watch your build times drop by 40%.
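If GitLab is your CI of choice, registering a Docker-executor runner on a fresh instance takes two steps. A sketch, with the GitLab URL and registration token (found in your project's CI/CD settings) as placeholders:

# Install GitLab Runner from the official package repository
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
sudo apt-get install -y gitlab-runner

# Register a runner that executes jobs inside Docker containers
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_REGISTRATION_TOKEN \
  --executor docker \
  --docker-image docker:stable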