Stop Burning CPU: Optimizing CI/CD Pipelines for Speed and Sovereignty in 2020

Why Your Build Pipeline is Bleeding Money (And How to Patch It)

It is March 2020. The world is shifting rapidly toward remote workflows, and your developers are likely sitting at home, staring at a Jenkins progress bar that hasn't moved in eight minutes. If your CI/CD pipeline takes longer than the time it takes to brew a Chemex, you aren't just wasting time; you are actively destroying developer flow state.

I recently audited a setup for a mid-sized fintech based in Oslo. Their deployment script for a microservices cluster took 45 minutes. Forty-five. Their "solution" was to throw more vCPUs at the problem via AWS. It didn't work. The bottleneck wasn't raw compute; it was I/O latency and terrible cache management.

In this deep dive, we are going to fix this. We aren't talking about abstract "best practices." We are talking about overlay2 storage drivers, NIX (Norwegian Internet Exchange) latency, and why your choice of underlying hardware—specifically NVMe—is the single biggest factor in CI performance.

The Hidden Killer: Disk I/O Wait

CI/CD jobs are inherently I/O-intensive. npm install, composer update, docker build: these operations generate thousands of small random read/write operations. On standard SATA SSDs (or heaven forbid, spinning rust), your CPU spends half its life in iowait, waiting for the disk controller to catch up.
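
Before you buy anything, verify that you are actually I/O-bound. A quick sanity check is to watch %iowait and per-device utilization with iostat (from the sysstat package) while a build runs:

# Install sysstat if it's missing (Ubuntu/Debian)
sudo apt-get install -y sysstat

# Extended device stats, refreshed every second
iostat -x 1

If %iowait sits in double digits while %util on your disk pins near 100, extra vCPUs will not save you.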

The Fix: NVMe or Nothing.

At CoolVDS, we stopped provisioning SATA drives for our high-performance tiers back in 2018. The math is simple. A standard SATA III SSD caps out at roughly 600 MB/s. An NVMe drive over PCIe sustains 3,000 to 3,500 MB/s. When you have a GitLab Runner pulling artifacts and extracting Docker images, that throughput difference is the difference between a 2-minute build and a 12-minute build.

Pro Tip: If you are unsure whether your current VPS provider is throttling your I/O, run this fio command. (Note: --rwmixread only applies to a mixed workload, so the test uses randrw, a 75/25 read/write mix.) If your IOPS come in under 10k, move your infrastructure.
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75

Optimizing the Docker Layer Cache

Most pipelines I see are re-downloading the entire internet every time a developer pushes a commit. This is negligence. Docker caches layers based on the instruction itself and, for COPY and ADD, a checksum of the files being copied. If you copy your source code before installing dependencies, you invalidate the cache for the dependency installation step every time you change a single line of code.

Here is the incorrect way (what I usually see):

FROM node:12-alpine
WORKDIR /app
COPY . . 
# Cache is busted here every time code changes
RUN npm install
CMD ["npm", "start"]

Here is the optimized approach. We copy only the dependency manifests first. This allows Docker to reuse the cached layer for npm ci unless package.json or package-lock.json actually changes.

FROM node:12-alpine
WORKDIR /app

# Only copy dependency manifests
COPY package.json package-lock.json ./

# This layer is now cached until you change dependencies
RUN npm ci --quiet

# Now copy the rest of the code
COPY . .
CMD ["npm", "start"]
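
One caveat with that final COPY . .: without a .dockerignore file, it drags your local node_modules and .git history into the build context, slowing context uploads and busting the cache for no reason. A minimal .dockerignore for a setup like this (dist and *.log are assumptions; adjust to your project layout):

node_modules
.git
dist
*.log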

By implementing this change alone on a client's React build pipeline, we dropped build times from 6 minutes to 45 seconds for incremental changes. Combine this with CoolVDS's local NVMe storage, and the cache extraction is instantaneous.
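
If your runners are ephemeral and don't keep a local layer cache between jobs, you can still recover most of this win by seeding the cache from your registry. Here is a sketch using docker build's --cache-from flag, assuming you push images to the GitLab registry ($CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab's predefined CI variables):

# Warm the layer cache from the last published image (tolerate failure on first run)
docker pull $CI_REGISTRY_IMAGE:latest || true

# Reuse matching layers from the pulled image
docker build --cache-from $CI_REGISTRY_IMAGE:latest \
  -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .

docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA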

The "Norwegian" Advantage: Data Sovereignty & Latency

Latency isn't just a user-facing metric; it affects your deployment syncing. If your dev team is in Bergen or Oslo, and your runners are in US-East-1, you are fighting the speed of light. Every git fetch, every docker push, every artifact upload traverses the Atlantic.

By hosting your GitLab Runners or Jenkins agents on VPS Norway infrastructure, you leverage the NIX (Norwegian Internet Exchange). Ping times from local ISPs drop from ~100ms to ~3ms. For a large repository clone or artifact upload, that reduction in round-trip time adds up fast.
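
Don't take those numbers on faith; measure from the box your runner actually lives on. A quick check (gitlab.com is just an example endpoint, swap in your own Git host):

# Raw round-trip time to your Git host
ping -c 10 gitlab.com

# How much of a shallow clone is network-bound?
time git clone --depth 1 https://gitlab.com/gitlab-org/gitlab-runner.git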

GDPR and Datatilsynet Compliance

We need to address the elephant in the room. With the strict interpretation of GDPR by Datatilsynet, where you process your CI/CD artifacts matters. Does your code contain PII (Personally Identifiable Information) in test databases? Likely. If that data leaves the EEA during the build process, you are in a compliance gray area.

Keeping your build infrastructure on CoolVDS guarantees that your data stays within Norwegian borders, on servers physically located in Oslo, fully compliant with Norwegian privacy laws.

Configuration: Tuning the Runner

Let's look at a production-ready config.toml for a GitLab Runner. The default settings are too conservative for modern hardware. We also want the Docker daemon running the overlay2 storage driver (the default on Ubuntu 18.04 LTS), and we want to make sure the OOM killer isn't reaping our containers mid-build.

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  # Max jobs this runner handles at once; raise it (along with the
  # top-level `concurrent` setting) on multi-core CoolVDS instances
  limit = 4
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 0
    # Note: the storage driver is configured on the Docker daemon itself,
    # not in this file. See the daemon.json snippet below.
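
As noted in the config comment, the overlay2 driver belongs to the Docker daemon, not the runner. A minimal sketch, assuming the standard path of /etc/docker/daemon.json on Ubuntu 18.04:

{
  "storage-driver": "overlay2"
}

# Restart the daemon and confirm the driver took effect
sudo systemctl restart docker
docker info | grep "Storage Driver"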

A Note on RAM and Swap

Webpack and GCC are notoriously memory hungry. If your runner hits the OOM (Out of Memory) killer, the build fails instantly. On many budget VPS providers, swap is disabled or runs on slow HDDs. On CoolVDS, because we run on NVMe, we can configure a high-speed swap file that acts as a safety net. It's not as fast as RAM, but it prevents the build from crashing during a compile spike.
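
Creating that NVMe-backed swap file takes four commands. A minimal sketch (the 4G size is an assumption; scale it to your build's peak memory usage):

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab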

# Check your current swappiness (default is usually 60)
cat /proc/sys/vm/swappiness

# For a build server, we want to use RAM primarily, but allow swap to prevent crashes
# Add this to /etc/sysctl.conf
vm.swappiness=10
vm.vfs_cache_pressure=50
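
# Apply the new values without a reboot
sudo sysctl -p

# Verify
cat /proc/sys/vm/swappiness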

The Verdict: Hardware Defines Velocity

You can optimize your Dockerfiles and tweak your YAML configurations all day, but you cannot code your way out of slow hardware. In 2020, there is no excuse for running CI/CD on spinning disks or oversold virtual CPUs.

Your engineers cost you hundreds of kroner per hour. A proper high-performance VPS costs a fraction of that per month. Do the math.

If you are ready to stop waiting for builds and start shipping code, deploy a CoolVDS NVMe instance today. We are optimized for high-throughput workloads, localized for Norwegian peering, and ready for your heaviest pipelines.

Deploy your High-Performance Runner on CoolVDS (setup in as little as 55 seconds)