Stop Waiting: Optimizing CI/CD Pipelines for Nordic Dev Teams (2018 Edition)

The 20-Minute Coffee Break is Killing Your Velocity

I tracked my team's idle time last month. We lost 43 hours staring at a spinning icon in GitLab CI. That is not a "culture" problem; it is an infrastructure failure. If you are deploying to production in Oslo or coordinating a distributed team across Scandinavia, relying on shared runners hosted in a US-East data center is architectural malpractice.

We are in late 2018. The "it works on my machine" excuse is dead, killed by Docker. But we have replaced it with a new demon: "It takes forever to build on the server."

The culprit is rarely CPU. It is almost always I/O wait and network latency. When you run `npm install` or `docker build`, you are hammering the disk with thousands of small write operations. Most cloud providers cap your IOPS unless you pay a premium. This guide tears down the standard pipeline setup and rebuilds it for raw speed, compliant with the new GDPR reality we've been living with since May.
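Before paying for faster disks, confirm the diagnosis. A minimal sketch, Linux-only: sample the cumulative `iowait` ticks from `/proc/stat` before and during a build; if the share jumps, you are disk-bound, not CPU-bound. (Field order per proc(5); `iostat -x 1` from the sysstat package gives a richer live view.)

```shell
#!/bin/sh
# Read cumulative CPU ticks from /proc/stat (Linux only).
# First line looks like: "cpu  user nice system idle iowait irq softirq ..."
read -r label user nice system idle iowait rest < /proc/stat
total=$((user + nice + system + idle + iowait))
share=$((100 * iowait / total))
echo "cumulative iowait share: ${share}%"
```

Run it once before `npm install` and once mid-install; a rising share means the CPU is sitting idle waiting for the disk.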

1. The Bottleneck: Shared vs. Dedicated Runners

Shared SaaS runners (like those provided by Travis CI or default GitLab.com runners) are convenient. They are also noisy neighbors: you share kernel resources with unknown workloads, and performance fluctuates wildly.

The fix? Self-hosted runners on a high-performance VPS.

By hosting your own GitLab Runner on a server in Norway (or close by in Europe), you gain two massive advantages:

  • Data Sovereignty: You control exactly where your source code and artifacts live, a crucial point for Datatilsynet compliance.
  • Cache Locality: A persistent runner keeps Docker layers and npm/maven caches hot.

Here is the `config.toml` adjustment you need to make immediately after installing the `gitlab-runner` binary. The global `concurrent` setting defaults to 1, which wastes your multi-core VPS potential.

concurrent = 4  # global ceiling on simultaneous jobs; defaults to 1

[[runners]]
  name = "nordic-build-01"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  limit = 4  # cap this particular runner at 4 concurrent jobs
  [runners.docker]
    tls_verify = false
    image = "docker:18.09"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0

Pro Tip: Note the /var/run/docker.sock volume mount. This allows "Docker-in-Docker" siblings, reusing the host's image cache. It is a security trade-off, but for private, trusted build servers, the speed gain is approximately 3x compared to the `dind` service approach.
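If you have not registered the runner yet, the non-interactive flags (as they existed around gitlab-runner 11.x — confirm against `gitlab-runner register --help` on your installed version) look roughly like this:

```shell
# One-time registration; token and URL are placeholders from the
# config.toml above - substitute your project's real values.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_TOKEN" \
  --executor "docker" \
  --docker-image "docker:18.09" \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock"
```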

2. I/O Optimization: The NVMe Factor

I recently migrated a Magento 2 build pipeline from a standard HDD VPS to a CoolVDS NVMe instance. The results were not subtle.

Task                       Standard SSD (network-attached)   CoolVDS local NVMe   Improvement
composer install           142s                              38s                  3.7x faster
docker build (no cache)    310s                              115s                 2.7x faster
MySQL import (2GB dump)    85s                               12s                  7x faster

Why the discrepancy? Network latency. Standard cloud storage (like EBS or Ceph blocks) requires data to traverse the network. When unpacking thousands of small files (node_modules, vendor folders), latency kills you. CoolVDS utilizes local NVMe storage, meaning the disk is directly attached to the PCIe bus of the hypervisor. Zero network hop.
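You can reproduce the small-file effect yourself. A crude, hypothetical micro-benchmark (use `fio` for anything rigorous): write a few thousand 4 KB files — roughly the shape of an unpacked node_modules tree — and time the loop on each storage tier.

```shell
#!/bin/sh
# Crude small-file write test: 2,000 x 4KB files, mimicking the
# write pattern of `npm install` / `composer install`. Numbers are
# directional only.
DIR=$(mktemp -d)
start=$(date +%s)
i=0
while [ "$i" -lt 2000 ]; do
  head -c 4096 /dev/zero > "$DIR/f$i"
  i=$((i + 1))
done
end=$(date +%s)
count=$(ls "$DIR" | wc -l | tr -d ' ')
echo "wrote $count files in $((end - start))s"
rm -rf "$DIR"
```

On network-attached storage, per-file round-trip latency dominates this loop; on local NVMe the same loop is usually bound by filesystem overhead instead.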

3. Docker Layer Caching Strategy

If you change one line of code, you should not be reinstalling dependencies. Yet, I see Dockerfiles structured like this every day:

# BAD PRACTICE
FROM node:10-alpine
WORKDIR /app
COPY . . 
RUN npm install
CMD ["npm", "start"]

In the example above, every time you change a source file, the `COPY . .` command invalidates the cache for the next step. `npm install` runs every single time. Here is the corrected, optimized version we use for production builds:

# OPTIMIZED (2018 Standard)
FROM node:10-alpine
WORKDIR /app

# Copy only dependency definitions first
COPY package.json package-lock.json ./

# Install dependencies. This layer is cached unless package.json changes.
RUN npm install --production

# Now copy source code
COPY . .

CMD ["npm", "start"]
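Layer caching only helps if the runner still holds the layers. On a fresh or autoscaled runner you can seed the cache from your registry with `docker build --cache-from`. A sketch of a `.gitlab-ci.yml` job — `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHA` are GitLab's built-in variables; the `latest` tag scheme is an assumption, adjust to your registry layout:

```yaml
build:
  stage: build
  script:
    # Pull the previous image if it exists; ignore failure on first run
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    # Reuse its layers when the Dockerfile steps match
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

With the docker.sock mount from section 1 this is often unnecessary, since the host's cache persists between jobs, but it saves autoscaled runners from cold starts.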

4. Accelerating Database Tests with tmpfs

Integration tests often require a real database. Spinning up MySQL on disk, writing tables, and tearing them down for every pipeline run is slow and wears out SSDs. Since test data is ephemeral, store it in RAM.

We use `tmpfs` mounts in Docker Compose for our CI environments. This backs the database's data directory with host RAM instead of disk.

version: '3.7'
services:
  db_test:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: test_db
    tmpfs:
      - /var/lib/mysql
    command: --innodb_flush_log_at_trx_commit=0 --skip-log-bin

The combination of `tmpfs` and `innodb_flush_log_at_trx_commit=0` (don't flush to disk on every commit) turns 10-minute test suites into 90-second sprints. Caution: Do not use these settings in production. If the power fails, you lose data. In CI, we don't care about persistence, only speed.
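If your suite runs through `.gitlab-ci.yml` services rather than Compose, the same flush flags carry over. A sketch using the extended `services:` syntax (supported in recent GitLab versions; verify against your instance):

```yaml
test:
  stage: test
  services:
    - name: mysql:5.7
      # Same relaxed-durability flags as the Compose example
      command: ["--innodb_flush_log_at_trx_commit=0", "--skip-log-bin"]
  variables:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: test_db
  script:
    - npm test
```

For the tmpfs part, GitLab Runner's Docker executor exposes a `[runners.docker.services_tmpfs]` table in `config.toml` (e.g. mapping "/var/lib/mysql" to "rw"), which puts the service's datadir in RAM — it is a relatively new option, so check the docs for your runner version.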

5. The Network: NIX and Latency

If your target users are in Oslo, Bergen, or Trondheim, your production servers should be connected to the NIX (Norwegian Internet Exchange) infrastructure. But this applies to your CI/CD pipeline too.

Pushing a 2GB Docker image from a build server in Frankfurt to a registry in Oslo involves crossing several borders and hops. By placing your Jenkins or GitLab runner on a CoolVDS instance in a Nordic datacenter, you utilize local peering. Transfer speeds often saturate the 1Gbps port, turning deployments into a "blink and you miss it" event.

Summary

Automation is worthless if it's slow. By shifting from shared, throttled resources to dedicated NVMe-backed instances and applying intelligent Docker caching, you reclaim hours of developer productivity every week.

We built CoolVDS because we were tired of "noisy neighbors" stealing our CPU cycles during critical builds. Our KVM instances give you the raw metal performance required for heavy CI/CD workloads without the dedicated server price tag.

Next Step: Don't let slow I/O kill your SEO or your team's morale. Deploy a test GitLab Runner on a CoolVDS instance today and watch your pipeline build times drop.