
Stop Waiting for Builds: Optimizing CI/CD Pipelines in a Post-Schrems II World


I recently watched a Senior Backend Engineer play ping-pong for 20 minutes. When I asked why he wasn't shipping the hotfix for the payment gateway, he pointed at his screen: Job #4459: Pending.... The runner queue was backed up, the disk I/O was choked, and the artifacts were crawling across the Atlantic wire.

In 2020, "it works on my machine" isn't the problem anymore. The problem is "it takes forever on the server."

If you are running Jenkins, GitLab CI, or drone.io on standard, oversold cloud instances, you are bleeding money, not just in server costs but in engineer salaries. As a sysadmin who has deployed everything from bare metal to Kubernetes clusters, I can tell you that optimization isn't about magic; it's about removing friction, specifically disk I/O and network latency.

The Hidden Killer: I/O Wait

CI/CD is arguably the most disk-intensive workload in your infrastructure. `npm install`, `docker build`, `maven package`—they all hammer the filesystem. If you are on a VPS sharing a spinning HDD with 50 other tenants, your CPU isn't the bottleneck. Your storage is.

Run this on your current CI runner during a build:

iostat -x 1 10

If your %iowait is consistently above 5-10%, your storage solution is failing you. In a recent migration for a Norwegian fintech client, we moved their GitLab Runners from a generic cloud provider to CoolVDS NVMe instances. The build time dropped from 14 minutes to 4 minutes. No code changes. Just raw I/O throughput.
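
Don't take a provider's word for it either; benchmark the disk yourself. A quick fio run from the runner's working directory gives you a number you can compare between hosts (the parameters below are illustrative, not a tuned benchmark):

# 4k random writes with direct I/O roughly mimic the small-file churn
# of npm installs and Docker layer extraction
fio --name=ci-disk-check --rw=randwrite --bs=4k --size=1G \
    --numjobs=4 --iodepth=32 --direct=1 --ioengine=libaio --group_reporting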

Here is the reality of virtualization in late 2020: NVMe is mandatory for CI. Accept nothing less.

Network Latency and the "Schrems II" Reality

Since the CJEU invalidated the Privacy Shield in July (Schrems II), relying on US-based hosting for European data has become a legal minefield. But beyond compliance, there is a performance argument. Why pull Docker base images from Virginia when your dev team is in Oslo?

Latency matters. A ping from Oslo to a US-East data center is ~90ms. To a local CoolVDS instance in Norway? ~2-5ms.
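
Measure it from the runner itself rather than trusting the brochure. The hostnames below are just the usual suspects for a Docker-heavy pipeline; swap in whatever your jobs actually pull from:

# Raw round-trip time to the registry
ping -c 20 registry-1.docker.io

# Per-hop latency and packet loss to your GitLab instance
mtr --report --report-cycles 20 gitlab.com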

Optimization Strategy: Local Registry Mirrors

Don't let your runners fetch `node:14-alpine` from Docker Hub every single time. It wastes bandwidth and hits rate limits. Set up a local pull-through cache.
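
If you want to see how hard the limit is already biting, Docker's documented check uses an anonymous token against a preview repository (requires curl and jq):

# Anonymous pulls are capped at 100 per 6 hours per IP as of November 2020
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit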

On your CoolVDS runner instance, configure the Docker daemon to use a registry mirror. Google's public mirror is a reasonable default; ideally you host the cache yourself, close to the runners (see the sketch after the config):

{
  "registry-mirrors": ["https://mirror.gcr.io"],
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Save this to /etc/docker/daemon.json, then restart the Docker daemon to apply it.
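
If you want the cache truly local (the "host yourself" option above), the stock registry image runs as a pull-through cache. The container name, port, and cache path below are illustrative:

# Apply the daemon.json change
sudo systemctl restart docker

# Run a local pull-through cache of Docker Hub
docker run -d --restart=always --name registry-mirror \
  -p 5000:5000 \
  -v /srv/registry-cache:/var/lib/registry \
  -e REGISTRY_PROXY_REMOTEURL="https://registry-1.docker.io" \
  registry:2

Then point `registry-mirrors` at that host (for example http://10.0.0.5:5000) and add it to `insecure-registries` unless you terminate TLS in front of it.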

Configuring GitLab Runner for Concurrency

Many teams deploy a runner and leave the defaults. This is a mistake. The default configuration often limits concurrency to 1, meaning your shiny 8-core server sits idle while jobs queue up.

Edit your /etc/gitlab-runner/config.toml to match the resources you actually have. If you are using a CoolVDS 8 vCPU / 16GB RAM instance, you can easily handle 4-6 parallel heavy builds.

concurrent = 6
check_interval = 0

[[runners]]
  name = "coolvds-norway-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

Pro Tip: Notice the privileged = true flag. This is often required for Docker-in-Docker (dind) builds. However, this introduces security risks. We recommend using KVM-based virtualization (standard on CoolVDS) rather than container-based VPS, because KVM provides a hard kernel boundary. If a build script goes rogue, it won't crash the host node.
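
After editing config.toml, a quick sanity check never hurts. The runner generally picks up config.toml changes on its own, but an explicit restart removes the guesswork:

# Confirm the registration token still works, then reload the new limits
sudo gitlab-runner verify
sudo gitlab-runner restart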

Pipeline Caching: The Low Hanging Fruit

Downloading dependencies is the second biggest time-sink. If you aren't caching your `node_modules` or `.m2` directory, you are doing it wrong.

Here is an optimized .gitlab-ci.yml snippet that uses lock files to generate cache keys, so you only re-download when dependencies actually change. One subtlety: `npm ci` wipes `node_modules` before every install, so we cache npm's download directory (`.npm`) instead of `node_modules`.

stages:
  - build
  - test

# npm ci deletes node_modules before installing, so cache npm's download
# directory and point npm at it with --cache.
cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm/

build_job:
  stage: build
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
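
The same pattern covers the `.m2` directory mentioned earlier. A minimal sketch for Maven projects (the key file and repository path follow the usual GitLab convention; adjust to your layout):

# Keep the local Maven repo inside the project dir so the runner's cache can archive it
variables:
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  key:
    files:
      - pom.xml
  paths:
    - .m2/repository/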

The Infrastructure Decision: Cloud vs. VPS

You might be tempted to use "serverless" runners or auto-scaling groups in a public cloud. That works, but the bill will shock you. Ephemeral compute is expensive.

For steady-state development teams, a dedicated, high-performance VPS is often 50% cheaper for the same compute power. With CoolVDS, you get dedicated resources. We don't steal your CPU cycles when a neighbor gets busy. This consistency is critical for debugging—if a test fails on our infrastructure, it's a code issue, not a "noisy neighbor" issue.

Comparison: Build Time & Cost

| Scenario | Infrastructure | Avg Build Time | Monthly Cost (Est.) |
|----------|----------------|----------------|---------------------|
| Baseline | Shared Hosting (HDD) | 14m 20s | $20 |
| Public Cloud | On-demand t3.large | 6m 45s | $70+ (w/ transfer) |
| Optimized | CoolVDS NVMe (4 vCPU) | 4m 10s | $40 |

Conclusion

Slow pipelines break developer flow. They encourage large, risky merges instead of small, frequent commits. By moving your CI infrastructure to Norway-based NVMe instances, you solve three problems at once: data sovereignty (Schrems II), network latency, and disk I/O bottlenecks.

Stop watching the "Pending" spinner. Give your team the infrastructure they deserve.

Ready to slash your build times? Deploy a high-performance Docker runner on CoolVDS today.