Stop Watching Progress Bars: Optimizing CI/CD Pipelines for Nordic Latency
There is nothing more soul-crushing in a DevOps engineer's day than watching a blinking cursor during npm install. It breaks your flow. It tempts you to check Twitter. It kills productivity.
If your CI/CD pipeline takes 20 minutes to deploy a hotfix, you don't have a pipeline; you have a queue. In 2018, with the tools we have available—Docker, GitLab CI, Jenkins 2.0—there is absolutely no excuse for sluggish deployments. Yet, I see teams in Oslo and Bergen struggling with build times that would embarrass a dial-up connection.
The culprit is rarely your code. It's usually your infrastructure. Specifically, it's Disk I/O and Network Latency.
I recently audited a setup for a media house in Trondheim. They were routing build artifacts through a budget VPS in Frankfurt, pushing to a production server in Norway. Their builds failed randomly. Why? Timeouts. We fixed it by bringing the infrastructure home. Here is how you optimize your pipeline for speed, stability, and the upcoming GDPR enforcement.
1. The I/O Bottleneck: Why Standard SSD Isn't Enough
Building Docker images is disk-intensive. You are extracting layers, writing file systems, and moving gigabytes of data. Most budget cloud providers oversell their SSD storage. When your neighbor on the physical host decides to re-index their Elasticsearch cluster, your build speed tanks.
You can diagnose this immediately on your build server using iotop.
sudo iotop -oPa
If you see your dockerd process sitting in I/O wait above 10%, your storage is too slow. This is where hardware matters. For our internal pipelines, we switched to CoolVDS instances because they expose NVMe storage directly to the guest. Under the heavy writes of a `docker build`, NVMe is not marginally faster than SATA SSD; it is often several times faster.
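Before blaming Docker, it is also worth a crude sequential-write sanity check with dd (the path and size here are arbitrary; run it on the disk your builds actually use):

# Rough sequential write test with the page cache bypassed (oflag=direct).
# Sustained throughput far below the provider's advertised figure is a red flag.
dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=1024 oflag=direct status=progress
rm /var/tmp/ddtest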
Pro Tip: If you are running your own GitLab Runner on a RAM-constrained instance, add some swap as a safety net. A 2 GB instance can choke during linking/compilation.
sudo fallocate -l 4G /swapfile && sudo chmod 600 /swapfile && sudo mkswap /swapfile && sudo swapon /swapfile
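To keep that swapfile across reboots, also add it to /etc/fstab, along these lines:

# Persist the swapfile across reboots and confirm it is active.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
sudo swapon --show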
2. Caching Strategies in GitLab CI
Stop downloading the internet every time you commit code. If you aren't caching your dependencies, you are burning money and time. However, caching in Docker is tricky. You need to layer your Dockerfile correctly so that changes to your source code don't invalidate your dependency installation layers.
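As a sketch for a Node.js project (the image tag and commands are illustrative), ordering the Dockerfile so the lockfile is copied before the source keeps the dependency layers cached across code-only changes:

# Sketch of the layer ordering; image tag and commands are illustrative.
FROM node:8.10.0

WORKDIR /app

# npm ci needs npm >= 5.7, which older node:8 images don't bundle.
RUN npm install -g npm@5.7.1

# Dependency manifests first: this layer and the npm ci below stay cached
# until package.json or package-lock.json change.
COPY package.json package-lock.json ./
RUN npm ci

# Source code last: editing application code only rebuilds from here down.
COPY . .
RUN npm run build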
But outside of the Dockerfile, your CI runner needs to preserve state. Here is a battle-tested .gitlab-ci.yml configuration we use for Node.js applications that leverages local caching effectively:
stages:
  - build
  - test
  - deploy

variables:
  # Keep npm cache inside the project to allow GitLab to cache it
  NPM_CONFIG_CACHE: "$CI_PROJECT_DIR/.npm"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/
    - node_modules/

build_app:
  image: node:8.10.0
  stage: build
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
Notice the use of npm ci (introduced recently in npm 5.7). It is significantly faster and more reliable than npm install for CI environments because it strictly follows the lockfile. One caveat: older node:8 images bundle npm 5.6, so check `npm --version` inside your build image and add an `npm install -g npm@5.7.1` step if it predates 5.7.
3. The "Local Loop" Advantage
Latency matters. If your dev team is in Norway, your customers are in Norway, and your production servers are in Norway, why is your CI runner in Virginia or Amsterdam?
Every artifact upload and Docker image push has to travel the wire, and TCP throughput drops as the round-trip time (RTT) grows. Let's look at typical RTTs from an Oslo office fiber connection:
| Destination | Latency (ms) | Impact on a 1 GB push |
|---|---|---|
| Oslo (CoolVDS / NIX) | ~2 | Negligible overhead |
| Frankfurt (AWS/DO) | ~25 | Noticeable lag |
| US East (N. Virginia) | ~95 | Painful |
Hosting your GitLab Runner on a local CoolVDS instance peering directly at NIX (Norwegian Internet Exchange) ensures that your "heavy lifting"—pushing and pulling Docker images—happens at LAN-like speeds.
To verify your connectivity to the NIX infrastructure, run a simple check:
mtr --report --report-cycles=10 nix.no
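The same check is worth running against whatever endpoint you actually push images to; a quick curl timing shows the connect and total times (replace the placeholder URL with your own registry):

# Connection setup time is dominated by RTT; registry.example.com is a placeholder.
curl -s -o /dev/null -w 'connect: %{time_connect}s  total: %{time_total}s\n' https://registry.example.com/v2/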
4. Configuring the Runner for Concurrency
The default installation of GitLab Runner is conservative. It processes one job at a time. If you have a team of five developers committing simultaneously, four of them are waiting.
You need to edit /etc/gitlab-runner/config.toml to increase concurrency. But be careful—this increases CPU and RAM load. This is why we prefer KVM virtualization (standard on CoolVDS) over OpenVZ; we need guaranteed CPU cycles that aren't stolen by other tenants.
concurrent = 4
check_interval = 0

[[runners]]
  name = "CoolVDS-Nor-Runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:17.12"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
Setting privileged = true is often necessary for Docker-in-Docker (dind) workflows, though it comes with security implications. Ensure your runner is on an isolated network segment.
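For reference, a non-interactive registration along these lines generates most of that [[runners]] block (the token and names are placeholders); the top-level `concurrent` value still has to be raised by hand in config.toml afterwards:

# Register the runner against GitLab; values below are placeholders.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --executor "docker" \
  --docker-image "docker:17.12" \
  --docker-privileged \
  --docker-volumes "/cache" \
  --description "CoolVDS-Nor-Runner-01"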
5. The GDPR Elephant in the Room
We are approaching May 25, 2018. The General Data Protection Regulation (GDPR) enforcement date is imminent. If your CI/CD process involves loading production database dumps, even partially sanitized ones, into a staging environment for testing, you are processing personal data.
If that processing happens on a runner hosted outside the EEA (European Economic Area), or on a US-controlled cloud provider, you are increasing your compliance burden. Data sovereignty is no longer just a "nice to have"—it's a legal shield.
By keeping your CI runners and staging environments on Norwegian soil, you simplify your data processing agreements (DPA) significantly. You also avoid the murky waters of US data transfers that have been unstable since the Safe Harbor invalidation.
6. Tuning the Host Kernel
Finally, a stock Linux kernel isn't tuned for the connection churn a busy CI runner generates. A few simple sysctl tweaks can improve network stability during those massive `git clone` or `docker push` operations.
sudo sysctl -w net.core.somaxconn=1024
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
Add these to /etc/sysctl.conf to make them persistent.
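One way to persist them is to append the settings and reload without a reboot:

# Append the settings to sysctl.conf and apply them immediately.
echo 'net.core.somaxconn=1024' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_tw_reuse=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p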
Conclusion
Optimization is about removing friction. Fast disks (NVMe), low latency (local geo-location), and smart caching configurations transform a frustration-filled deployment process into a competitive advantage.
Don't let your infrastructure be the bottleneck. If you need a runner that can handle concurrent Docker builds without choking on I/O, deploy a CoolVDS NVMe instance today. It takes 55 seconds to spin up, which is probably less time than you're currently waiting for your build to initialize.