Stop Waiting for Builds: Optimizing CI/CD Pipelines with Dedicated Runners and NVMe

There is nothing that kills developer momentum quite like the "commit and wait" cycle. You push code, switch context to check your email, and twenty minutes later, you realize the build failed because of a timeout fetching a dependency. In 2018, with the tools we have available, this is inexcusable. Yet, I see it constantly in development shops across Oslo and Bergen.

The culprit is rarely the code itself. It's the infrastructure running the pipeline. Whether you are using Jenkins 2.x pipelines or the increasingly popular GitLab CI, running your builds on shared, over-sold cloud instances is a bottleneck. When `npm install` or `docker build` hits the disk, you are at the mercy of the noisy neighbor on that physical host. If you want speed, you need raw I/O performance and dedicated resources.

The Hidden Bottleneck: Disk I/O Wait

Most CI/CD workloads are incredibly I/O intensive. Extracting artifacts, compiling Java binaries, or wrestling with the black hole that is `node_modules` requires high random read/write speeds. Standard SATA SSDs often choke under parallel build loads. We recently migrated a client's Jenkins farm from a generic cloud provider to CoolVDS instances backed by NVMe storage.

The result? A reduction in build time from 14 minutes to 4 minutes. No code changes. Just better hardware.

If you are managing your own runners, you need to ensure your filesystem can cope with the inode pressure and constant churn of Docker layers. Here is a quick check you should run on your current build server to see if I/O is your bottleneck:

iostat -x 1 10

If your `%iowait` is consistently above 5-10% during a build, your storage is stealing your team's time.
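If you want a number you can put in front of management, sample iostat for the duration of a single build. A minimal sketch, assuming the sysstat package is installed and using `npm run build` as a stand-in for your real build command:

# Sample extended I/O stats every 5 seconds while one build runs
iostat -x 5 > /tmp/build-iostat.log &
IOSTAT_PID=$!

npm run build            # stand-in: substitute your actual build step

kill "$IOSTAT_PID"
# The avg-cpu rows contain %iowait (fourth column); scan the tail of the log
grep -A 1 "avg-cpu" /tmp/build-iostat.log | tail -n 20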

Docker Caching: Doing It Right

Hardware helps, but you can't be lazy with configuration. Docker's layer caching is powerful, but easily broken. If you copy your source code before installing dependencies, you invalidate the cache every time you change a single line of code. This forces the runner to re-download the internet every time you push.
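For contrast, this is the ordering that quietly breaks the cache (a stripped-down sketch, not a complete Dockerfile):

# Anti-pattern: the source tree is copied before dependencies are installed,
# so any code change invalidates this layer and everything after it
COPY . .
RUN npm install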

Here is the correct pattern for a `Dockerfile` (using multi-stage builds available since Docker 17.05) to leverage caching effectively:

# Stage 1: Builder
FROM node:10-alpine AS builder
WORKDIR /app

# COPY package.json AND package-lock.json ONLY first
# This allows Docker to cache the 'npm install' layer if these files haven't changed
COPY package*.json ./

# Install dependencies
RUN npm install --quiet

# Now copy the rest of the source
COPY . .

# Build the static assets
RUN npm run build

# Stage 2: Production
FROM nginx:1.15-alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

By ordering the instructions this way, we ensure that the heavy `npm install` step is cached unless `package.json` actually changes. On a CoolVDS instance, retrieving these cached layers is nearly instantaneous due to the NVMe backend.
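One related habit worth adopting: a `.dockerignore` file keeps `COPY . .` from dragging local junk into the build context in the first place. A reasonable starting point (adjust the entries to your repo layout):

# .dockerignore
node_modules
dist
.git
*.log

Excluding node_modules matters most here; without it, simply uploading the build context to the Docker daemon can add noticeable time to every build.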

Optimizing the GitLab Runner

GitLab CI is rapidly becoming the standard for teams who want code and pipelines in one place. However, the default shared runners provided by SaaS platforms can be slow and often leave your jobs sitting in a queue. The solution is to deploy your own dedicated runner on a VPS in Norway.

Not only does this keep your data within the jurisdiction of Datatilsynet (crucial since GDPR enforcement began in May), but it also reduces latency if your deployment targets are in Nordic data centers. A runner on CoolVDS connects to the NIX (Norwegian Internet Exchange) backbone, ensuring your artifact uploads are lightning fast.

Here is how to register a Docker runner properly. Do not just run it as a shell executor unless you want a messy server.

# Install GitLab Runner (Ubuntu/Debian)
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-runner

# Register the runner
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --executor "docker" \
  --docker-image "docker:stable" \
  --description "coolvds-nvme-runner-01" \
  --tag-list "docker,nvme,norway" \
  --run-untagged="true" \
  --locked="false"

Once registered, you need to tune the `config.toml` to allow concurrent builds. By default, it might be set to 1, wasting the CPU cycles of your VPS.

concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "TOKEN_HASH"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

Pro Tip: For heavy Java or C++ builds, monitor memory usage. If the OOM killer strikes, increase the swap file on your VPS, or better yet, upgrade to a CoolVDS plan with more RAM. Swap on NVMe is faster than spinning disk, but it's still slower than RAM.
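
If you do go the swap route, here is a quick sketch for adding a swap file on an Ubuntu or Debian VPS. The 4 GB size is only an example; size it to your build's peak usage:

# Create and enable a 4 GB swap file (size is an example, adjust as needed)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab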

Garbage Collection

A dedicated runner will accumulate Docker images and stopped containers rapidly. In a CI environment, "disk full" is the most annoying error to wake up to. You should have a cron job running a cleanup script. `docker system prune` (available since Docker 1.13) is your friend, but be careful not to wipe your build cache too aggressively.

Add this to your crontab (`crontab -e`):

# Prune unused images and stopped containers older than 24 hours, every night at 3 AM
0 3 * * * /usr/bin/docker system prune -af --filter "until=24h" >> /var/log/docker-prune.log 2>&1

Network Latency and Geo-Location

Why does it matter where your runner is? If you are pushing Docker images to a private registry hosted in Oslo, or deploying to servers in the Nordics, round-trip time (RTT) matters. Pushing a 500MB layer over the public internet from a US-based cloud builder to a server in Norway is painful.

By hosting your CI/CD infrastructure on CoolVDS, you are sitting directly on the local backbone. Latency within Norway is often sub-5ms. This means the "Upload Artifacts" stage of your pipeline finishes before you can blink.
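
Don't take that on faith; measure it from wherever your current runner lives. The hostname below is a placeholder for your own registry or deployment target:

# Replace registry.example.no with your actual registry or deploy host
ping -c 10 registry.example.no

# mtr shows per-hop latency if the path itself is the problem (requires mtr installed)
mtr --report --report-cycles 10 registry.example.no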

Comparison: Build Time for Magento 2 Deployment

Environment                | Storage Type           | Build Time    | Artifact Upload
Generic Cloud (Frankfurt)  | Network Storage (Ceph) | 18 min 30 sec | 45 sec
CoolVDS (Oslo)             | Local NVMe             | 6 min 15 sec  | 4 sec

The numbers don't lie. Efficiency isn't just about code; it's about physics.

Conclusion

Your developers are expensive. Your servers are relatively cheap. Trying to save a few kroner on a budget VPS for your CI/CD pipeline is a false economy. A slow pipeline leads to context switching, frustration, and fewer deployments per day.

To fix this today:

  1. Audit your `Dockerfile` layer ordering so dependency installs stay cached.
  2. Move off shared runners.
  3. Deploy a dedicated runner on high-I/O infrastructure.

If you are ready to stop waiting and start deploying, spin up a CoolVDS instance. With our NVMe storage and strategic location in Norway, your pipelines will finally keep up with your code.