Reducing CI/CD Build Times by 60%: A Norwegian DevOps Guide to NVMe & Optimization

Stop Letting Disk I/O Kill Your Deployment Pipeline

It is April 2018. We are exactly one month away from the GDPR enforcement date (May 25th), and the panic in the Norwegian tech sector is palpable. Everyone is rushing to patch user consent forms, anonymize databases, and update privacy policies. But here is the problem: your CI/CD pipeline is too slow to keep up.

I recently audited a setup for a media firm in Oslo. Their developers were pushing code to a self-hosted GitLab instance, and every build took 25 minutes. Why? Because they were running heavy npm install and Docker image builds on a legacy VPS with standard spinning rust (HDD) storage. The CPU sat at 10% usage while iowait spiked to 80%. They were paying for cores they couldn't use because the drive heads couldn't seek fast enough.

If you are serious about DevOps, you need to understand where the bottleneck actually lives. In 2018, it’s rarely the CPU; it’s the storage. Here is how we optimized that pipeline down to 8 minutes, ensuring compliance and sanity.

The Hidden Enemy: Docker on HDDs

When you build a Docker image, the daemon performs heavy read/write operations. It extracts layers, writes to the overlay2 filesystem, and commits changes. If your VPS provider is overselling storage or running on SATA SSDs (or worse, spinning disks), your build queue will clog.
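
A quick way to confirm that storage, not CPU, is the culprit on your own runner (this assumes the sysstat package is installed for iostat):

# Which storage driver is the Docker daemon using?
docker info --format '{{.Driver}}'

# Watch per-device utilization while a build runs; %util pinned near 100%
# and a climbing await column, with the CPU mostly idle, means the disk
# is the bottleneck
iostat -x 2

If %util stays high during the build, faster storage will buy you more than extra cores.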

We migrated the client to a CoolVDS NVMe instance. NVMe (Non-Volatile Memory Express) interfaces directly with the PCIe bus, bypassing the AHCI controller bottleneck. The difference isn't subtle; it is violent.

Benchmark: Sequential Write (Docker Build Simulation)

Storage Type             Throughput     Latency    Result
Standard HDD (Shared)    80 MB/s        15ms+      Timeout Errors
SATA SSD                 450 MB/s       2ms        Acceptable
CoolVDS NVMe             2500+ MB/s     <0.1ms     Instant
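
To get comparable numbers on your own instance, a simple fio sequential-write job is enough. This is a minimal sketch; the job name, 4G test size, and 60-second runtime are arbitrary:

# Sequential 1M writes with direct I/O, bypassing the page cache
fio --name=seq-write-test \
    --rw=write --bs=1M --size=4G \
    --ioengine=libaio --direct=1 \
    --numjobs=1 --runtime=60 --group_reporting

# Remove the test file fio leaves behind
rm -f seq-write-test.0.0

Compare the bw= figure in the write summary against the table above.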

Optimization 1: Proper Docker Layer Caching in GitLab CI

Hardware solves the I/O wait, but configuration solves the logic. Many developers in 2018 are still not utilizing Docker layer caching correctly within GitLab CI runners. If you pull a fresh image every time, you are burning bandwidth and time.

Here is a battle-tested .gitlab-ci.yml configuration we used to leverage the overlay2 driver effectively. This assumes you are using the Docker-in-Docker (dind) service approach.

image: docker:17.12

services:
  - docker:dind

variables:
  # Talk to the Docker daemon running inside the dind service
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  # Tag branch builds with the ref slug; 'latest' is the cache source
  CONTAINER_TEST_IMAGE: registry.example.com/my-group/my-project:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: registry.example.com/my-group/my-project:latest

stages:
  - build
  - release

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    # Pull the latest image to use as a cache layer
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

release:
  stage: release
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    # Promote the tested image to 'latest' so the next build can cache from it
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master

Pro Tip: Ensure your runner is configured to use the overlay2 storage driver. In older versions of RHEL/CentOS, Docker defaulted to devicemapper, which is notoriously slow and space-inefficient. Check your /etc/docker/daemon.json.
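
A minimal /etc/docker/daemon.json that pins the driver looks like this:

{
  "storage-driver": "overlay2"
}

Restart the daemon (systemctl restart docker) after changing it, and be aware that switching storage drivers hides images and containers created under the old driver until you switch back.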

Optimization 2: Tuning the Kernel for Heavy Network Traffic

A CI/CD server acts as a traffic hub. It pulls code, pushes artifacts, and deploys to staging. Default Linux kernel settings are often too conservative for this bursty traffic, leading to exhausted ephemeral ports, overflowing SYN backlogs, and dropped connections.

On our CoolVDS instances, we apply the following sysctl tweaks to handle thousands of concurrent connections during a parallel deploy.

# /etc/sysctl.conf

# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65000

# Reuse connections in TIME_WAIT state
net.ipv4.tcp_tw_reuse = 1

# Increase max backlog for incoming connections
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096

# Protect against SYN flood attacks (crucial for public runners)
net.ipv4.tcp_syncookies = 1

Apply these with sysctl -p. If you are running high-concurrency Node.js tests, you might also need to bump the file descriptor limits in /etc/security/limits.conf.
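
A minimal sketch, assuming the runner executes jobs as the gitlab-runner user (adjust the user name to your setup):

# /etc/security/limits.conf
gitlab-runner  soft  nofile  65536
gitlab-runner  hard  nofile  65536

These PAM limits apply to login sessions; if the runner is started as a systemd service, set LimitNOFILE=65536 in its unit file instead.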

Optimization 3: Local Repository Mirroring

Relying on public repositories (Docker Hub, NPM, Maven Central) introduces external latency. If a transatlantic cable has hiccups, your build fails. For Norwegian companies, data sovereignty is also a concern under the upcoming GDPR rules. You want to know exactly where your binaries live.

We set up a local Nexus Repository Manager on a separate CoolVDS instance in the same datacenter (Oslo). This keeps traffic local to the Norwegian Internet Exchange (NIX), reducing latency from ~100ms (US East) to ~2ms.

Here is an Nginx snippet to secure that internal registry, enforcing SSL (Let's Encrypt) and basic auth:

server {
    listen 443 ssl http2;
    server_name nexus.internal.coolvds.net;

    ssl_certificate /etc/letsencrypt/live/nexus/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nexus/privkey.pem;

    # Allow large uploads for Docker images
    client_max_body_size 2G;

    location / {
        proxy_pass http://localhost:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto "https";
    }
}
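
With the proxy in place, point your build tools at it instead of the public registries. The repository path below uses a hypothetical Nexus proxy repository named npm-proxy; adjust it to whatever you created in Nexus:

# npm: resolve packages through the local Nexus proxy
npm config set registry https://nexus.internal.coolvds.net/repository/npm-proxy/

# Docker: add the mirror to /etc/docker/daemon.json and restart the daemon
# (assumes the Docker proxy repository's connector is reachable on this host)
#   { "registry-mirrors": ["https://nexus.internal.coolvds.net"] }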

The GDPR Angle: Why Location Matters

With the General Data Protection Regulation (GDPR) enforcement starting next month, Datatilsynet (The Norwegian Data Protection Authority) is going to be watching closely. If your CI/CD pipeline processes test databases containing real user data (a bad practice, but common), and that data leaves the EEA, you are at risk.

Hosting your GitLab runners and staging environments on CoolVDS ensures your data stays within Norwegian jurisdiction. We don't just offer VPS; we offer compliance-ready infrastructure. Our datacenters in Oslo are fully aligned with EU data protection requirements.

Automated Cleanup

A fast disk is useless if it is full. CI runners accumulate dangling images and stopped containers rapidly. Docker 1.13 introduced docker system prune back in early 2017, but automating it is key. Do not rely on manual cleanup.

Add this cron job to your runner:

# Run every night at 3 AM
0 3 * * * /usr/bin/docker system prune -af --filter "until=24h" > /var/log/docker-prune.log 2>&1

This command forcefully removes unused images, containers, and networks created more than 24 hours ago, preventing the "No space left on device" error that wakes you up at 4 AM.
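
Note that system prune leaves volumes alone by default. If your test jobs create anonymous volumes, a second, less frequent job with the --volumes flag (available since Docker 17.06.1) keeps those in check as well; skip it if any volumes hold state you care about:

# Run weekly, Sunday at 4 AM: also reclaim unused volumes
0 4 * * 0 /usr/bin/docker system prune -af --volumes > /var/log/docker-prune-volumes.log 2>&1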

Conclusion

Speed is a feature. If your developers wait 30 minutes to see if a test passed, they lose context. By moving to NVMe storage, optimizing Docker caching, and keeping traffic local to Norway, you reduce friction and risk.

Don't let legacy hardware be the reason you miss the GDPR deadline. Spin up a CoolVDS High-Frequency NVMe instance today and see your build times drop.