Stop Watching Progress Bars: Optimizing CI/CD IOPS for High-Velocity Teams

There is a specific kind of silence in a developer office that I despise. It’s the sound of three senior engineers standing in the kitchen, debating the quality of the new coffee machine, while their terminals display Building... [45%]. The "compiling" excuse, immortalized by XKCD, isn't funny anymore. It's burning your runway.

In 2019, the bottleneck in your Continuous Integration/Continuous Deployment (CI/CD) pipeline is rarely CPU. It's almost always Disk I/O. I've audited pipelines for FinTech startups in Oslo and established e-commerce giants in Trondheim. The pattern is identical: they run heavy containerized builds on budget VPS hosting with shared spinning rust (HDDs) or throttled SSDs. The result? iowait spikes to 80%, and your deployment time doubles.
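Don't take my word for it; measure. While a build is running, watch the `%iowait` column on the runner (a quick sketch; assumes the sysstat package is installed, and the ~20% threshold is a rule of thumb, not a standard):

```shell
# %iowait is the share of CPU time spent idle *because* the disk
# couldn't keep up. Sustained values above ~20% during a build mean
# your pipeline is I/O-bound, not CPU-bound.
iostat -x 5
```

If `%iowait` sits high while `%user` stays low, throwing more vCPUs at the runner will change nothing.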

Let's fix this. We are going to look at the storage drivers, RAM disks, and the infrastructure required to run a pipeline that keeps up with your commit history.

The Docker Storage Driver Trap

Most CI pipelines today rely on Docker. Whether you are using Jenkins, GitLab CI, or Drone, you are likely spinning up containers to run tests. By default, many older Linux distributions (CentOS 7, I'm looking at you) might still default to devicemapper in loop-lvm mode if not configured correctly. This is a performance death sentence.

You need to be using overlay2. It has been the preferred storage driver since Docker CE 18.09 on any distribution with a modern kernel (Ubuntu 18.04 LTS ships one out of the box). It allows page cache sharing between containers that use the same image layers, which devicemapper cannot do.

Check your current configuration:

docker info | grep Storage

If it doesn't say overlay2, you are wasting time. Here is how to force it in your /etc/docker/daemon.json:

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Restart Docker. Your image pulls and container creation times will drop significantly. But software configuration can only mask hardware deficiencies for so long.
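The restart-and-verify step looks like this (a sketch assuming a systemd-based distribution; note that switching storage drivers hides previously pulled images until you pull them again):

```shell
# Apply the new daemon.json. Existing containers built under the old
# driver will not be visible afterwards -- re-pull images as needed.
sudo systemctl restart docker

# Confirm the driver took effect; this should print: overlay2
docker info --format '{{.Driver}}'
```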

The Hardware Reality: Why NVMe Matters

When you run npm install, composer install, or mvn install inside a pipeline, you are performing thousands of small random read/write operations. Traditional SSDs (SATA interface) cap out around 600 MB/s and, more importantly, have limited IOPS (Input/Output Operations Per Second). Queues build up.

NVMe (Non-Volatile Memory Express) speaks directly to the PCIe bus. We are talking about 3000+ MB/s and massive IOPS capabilities. At CoolVDS, we don't upsell NVMe as a "premium" tier; we use it as the baseline standard because running a modern CI runner on anything less is negligence. If your current provider is sharing SATA throughput among 50 tenants, your build times are at the mercy of your neighbors.
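If you want to know what your provider actually delivers, a quick fio random-write test tells you more than any spec sheet (a sketch; fio must be installed, and the job parameters are illustrative):

```shell
# 4k random writes -- the access pattern that dominates npm/composer
# dependency installs. --direct=1 bypasses the page cache so you
# measure the disk, not your RAM.
fio --name=ci-iops-test --filename=/tmp/fio-test --size=1G \
    --rw=randwrite --bs=4k --direct=1 --iodepth=32 \
    --runtime=30 --time_based --group_reporting
```

On a crowded shared SATA tier, the reported IOPS often land in the low thousands; local NVMe typically reports an order of magnitude more. Run it at different times of day — on oversold hosts, the variance is as damning as the average.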

Hack: Ramdisks for Databases in CI

Your integration tests likely spin up a temporary MySQL or PostgreSQL database. These databases need to initialize, import schemas, run transactions, and then die. They do not need data persistence. Writing this to disk is wasteful.

Mount a tmpfs (RAM disk) for your database containers. This keeps the I/O entirely in memory.

If you are using docker-compose for your test harness:

version: '3.7'
services:
  db_test:
    image: postgres:11-alpine
    tmpfs:
      - /var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: secret

This single change reduced a Magento 2 test suite duration from 18 minutes to 6 minutes in a recent migration I handled.
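The same trick works without compose, and since the data is disposable anyway, you can also tell PostgreSQL to stop fsyncing entirely (safe only in CI, never in production). A sketch; the container name is illustrative:

```shell
# Throwaway Postgres with its data directory on a RAM-backed tmpfs.
# The trailing -c flag is passed through to the postgres server;
# fsync=off skips durability guarantees the test run doesn't need.
docker run -d --name ci-postgres \
  --tmpfs /var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:11-alpine -c fsync=off
```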

Optimization Strategy: Docker Layer Caching

Stop rebuilding everything. In 2019, multi-stage builds are stable. Use them. Structure your Dockerfile so that dependency installation happens before code copying. This utilizes the cache if package.json hasn't changed.

Bad:

COPY . .
RUN npm install

Good:

COPY package*.json ./
RUN npm install
COPY . .
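Multi-stage builds take this one step further: install and compile in a heavy toolchain image, then ship only the artifacts from a slim one. A hedged sketch for a Node.js app — the stage name, paths, and entrypoint are illustrative:

```dockerfile
# Stage 1: full toolchain for dependency install and build.
# The COPY package*.json / RUN npm install ordering preserves
# the layer cache exactly as described above.
FROM node:10 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: only the build output and runtime deps ship.
FROM node:10-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```

Smaller final images also mean faster pushes to your registry and faster pulls on deploy — the I/O savings compound at every stage of the pipeline.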

Pro Tip: If you are using GitLab CI, the distributed cache can be slow if it relies on uploading/downloading heavy archives to S3 buckets across the Atlantic. Keep your runners close to your cache. Hosting your GitLab Runner on a CoolVDS instance in Norway ensures low latency to the Norwegian Internet Exchange (NIX), keeping your artifact transfers inside the local high-speed backbone.

The "Norwegian" Context: GDPR and Latency

We operate in a post-GDPR world. The Datatilsynet (Norwegian Data Protection Authority) is vigilant. If your CI/CD pipeline processes production dumps for testing (a bad practice, but common), and that pipeline is hosted on a budget US VPS, you are risking non-compliance.

Data residency matters. Hosting your build infrastructure on CoolVDS ensures that your data stays within Norwegian jurisdiction, adhering to strict privacy standards while benefiting from the stability of our local power grid. Furthermore, latency matters for developers. SSH-ing into a build runner to debug a failure feels sluggish if the server is in Virginia. If your team is in Oslo, your servers should be too.

Sample GitLab CI Configuration for Performance

Here is a snippet of a tuned .gitlab-ci.yml utilizing Docker-in-Docker (dind) with overlay2 driver explicitly defined, suitable for a high-performance runner:

variables:
  DOCKER_DRIVER: overlay2
  # Disable TLS between the job container and the dind service
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:18.09-dind
    command: ["--storage-driver=overlay2"]

stages:
  - build
  - test

build_app:
  stage: build
  image: docker:18.09
  script:
    # --cache-from only works if the referenced image exists locally,
    # so pull it first (|| true tolerates the very first pipeline run)
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

Conclusion

Optimization is a game of millimeters, but infrastructure is the playing field. You can optimize your Dockerfile and tune your my.cnf until you are blue in the face, but if the underlying disk IOPS are capped, your pipeline will stall.

Don't let your infrastructure dictate your release cadence. Switch to a provider that understands the difference between "hosting" and "high-performance engineering."

Ready to cut your build times in half? Deploy a high-frequency NVMe instance on CoolVDS today and experience the difference raw power makes.