The "Cloud" is Just Someone Else's Slow Computer
I recently watched a senior developer spend 45 minutes staring at a progress bar. The Jenkins job was stuck on npm install for the third time that day. We weren't compiling the Linux kernel; we were deploying a standard React frontend. The bottleneck wasn't the code. It wasn't the complexity. It was a noisy neighbor on the budget cloud instance hosting our build agent, stealing our I/O.
If you are managing infrastructure in 2020, you know the drill. We prioritize production uptime but treat our CI/CD pipelines like second-class citizens. We throw them onto shared vCPUs with spinning rust storage and wonder why deployment velocity crawls. In the Nordic market, where hourly rates for consultants are sky-high, waiting on a slow build pipeline is essentially burning kroner.
Let’s fix this. I’m going to show you how to optimize your CI/CD pipeline using dedicated resources, proper caching layers, and local infrastructure that respects the laws of physics.
The Bottleneck: I/O Wait and Network Latency
Most CI/CD jobs are I/O bound, not CPU bound. Pulling Docker images, extracting artifacts, and installing dependencies (looking at you, node_modules) requires massive random read/write speeds. If your VPS provider is overselling storage on a shared SAN, your build times will fluctuate wildly based on what other tenants are doing.
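Not sure whether your agent is I/O bound? Benchmark the random read/write path directly on the build host. Below is a minimal sketch using fio against the Docker data directory; the path, size, and runtime are assumptions, so adjust them to your setup.
# 60-second 4k random read/write test on the runner's build storage.
sudo apt-get install -y fio
sudo fio --name=ci-disk-test \
  --directory=/var/lib/docker \
  --rw=randrw --bs=4k --size=1G \
  --numjobs=4 --runtime=60 --time_based \
  --ioengine=libaio --direct=1 --group_reporting
# Run it a few times during the working day. On oversold shared storage
# the IOPS will swing wildly between runs; on dedicated NVMe they won't.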
Furthermore, if your target users and production servers are in Oslo, but your build agents are in a generic data center in Frankfurt or Virginia, you are fighting unnecessary latency. With the recent Schrems II ruling (July 2020) invalidating the Privacy Shield, keeping data—including build artifacts that might contain production configs—within Norwegian borders is no longer just about speed; it's about not getting fined by Datatilsynet.
Step 1: The Self-Hosted Runner Strategy
Forget shared runners provided by SaaS platforms. They are convenient but inconsistent. We need raw power. We deploy self-hosted GitLab Runners on CoolVDS instances running Ubuntu 20.04 LTS. Why? Because we get dedicated KVM resources and, crucially, local NVMe storage.
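Getting a runner online on a fresh Ubuntu 20.04 box takes two commands plus a registration call. The sketch below assumes gitlab.com and uses a placeholder project token; the executor and image match the config.toml that follows.
# Install GitLab Runner from the official apt repository.
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
sudo apt-get install -y gitlab-runner
# Register it non-interactively (the token below is a placeholder).
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_TOKEN" \
  --executor "docker" \
  --docker-image "docker:19.03.12" \
  --description "coolvds-norway-runner-01"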
Here is the config.toml optimization for a high-concurrency runner. Note the limit and request_concurrency settings, which prevent the runner from choking the host:
concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-norway-runner-01"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  # Cap how many jobs this runner takes and how many it requests at once,
  # so a burst of pipelines cannot choke the host.
  limit = 4
  request_concurrency = 1
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    [runners.cache.s3]
      ServerAddress = "minio.internal:9000"
      AccessKey = "minio-access-key"
      SecretKey = "minio-secret-key"
      BucketName = "runner-cache"
      Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
Pro Tip: Notice the MinIO cache configuration. We run a local MinIO instance on the same LAN as the runner to cache node_modules and build artifacts. This keeps traffic off the public internet and speeds up cache restoration/saving by roughly 300% compared to using S3 us-east-1.
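If you do not already have an object store on the LAN, a single-node MinIO container is enough for runner cache. A minimal sketch, assuming the same placeholder credentials as the config.toml above (current MinIO images read them from MINIO_ACCESS_KEY / MINIO_SECRET_KEY):
# Single-node MinIO for CI cache; credentials and data path are placeholders.
docker run -d --name minio --restart=always \
  -p 9000:9000 \
  -e MINIO_ACCESS_KEY="minio-access-key" \
  -e MINIO_SECRET_KEY="minio-secret-key" \
  -v /srv/minio/data:/data \
  minio/minio server /data
# Create the "runner-cache" bucket via the MinIO browser on port 9000
# (or with the mc client) before the first pipeline run.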
Step 2: Docker Layer Caching with BuildKit
Docker 18.09 shipped BuildKit, and by late 2020 it is stable enough for production use. It builds independent stages in parallel and has smarter cache handling, but you have to enable it explicitly.
Don't just run a bare docker build. Do this instead in your pipeline script:
export DOCKER_BUILDKIT=1
docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from registry.example.no/my-app:latest \
  -t registry.example.no/my-app:$CI_COMMIT_SHA \
  .
The BUILDKIT_INLINE_CACHE=1 build argument embeds cache metadata in the image you push, and --cache-from lets the next build reuse layers from that previously pushed image. This can shave minutes off a build when only the application code changed and the dependency layers did not.
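For --cache-from to have something to pull on the next run, the freshly built image needs to land in the registry under a predictable tag. One way to do it, sticking with the example registry above and a rolling :latest tag:
# Push the immutable SHA tag, then move :latest so the next pipeline
# has an image to pull inline cache metadata from.
docker push registry.example.no/my-app:$CI_COMMIT_SHA
docker tag registry.example.no/my-app:$CI_COMMIT_SHA registry.example.no/my-app:latest
docker push registry.example.no/my-app:latest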
Optimizing the Dockerfile
I still see senior engineers putting COPY . . at the top of their Dockerfiles. This busts the cache every time you change a README file. Structure matters:
# Multi-stage build for Go (Golang 1.15)
FROM golang:1.15-alpine AS builder
WORKDIR /app
# Copy only go.mod and go.sum first to leverage cache
COPY go.mod go.sum ./
RUN go mod download
# NOW copy the source code
COPY . .
# Static binary: CGO disabled, so the Alpine runtime stage needs no libc
RUN CGO_ENABLED=0 GOOS=linux go build -o main .
FROM alpine:3.12
WORKDIR /root/
COPY --from=builder /app/main .
CMD ["./main"]
Step 3: Network Topology and Latency
If you are deploying to a production cluster in Oslo, your build runner needs to be close. The round-trip time (RTT) matters when you are syncing gigabytes of data.
Here is a simplified comparison of average deploy times for a 2GB container image based on runner location relative to an Oslo production server:
| Runner Location | Network Latency to Oslo | Upload Time (Approx) |
|---|---|---|
| US East (Virginia) | ~95ms | 45s - 1m 20s |
| Central Europe (Frankfurt) | ~25ms | 20s - 35s |
| CoolVDS (Oslo) | < 2ms | 3s - 8s |
When you run 50 builds a day, that difference compounds. Using a Norway-based VPS like CoolVDS essentially puts your runner on the same local loop as your production targets.
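You can sanity-check your own numbers before moving anything. A rough sketch, with registry.example.no standing in for your registry or production host:
# Round-trip time from the runner to the target.
ping -c 10 registry.example.no
# Rough push throughput: time a real image push. Re-pushing identical
# layers is a near no-op, so use a fresh build for each measurement.
time docker push registry.example.no/my-app:latest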
Database Integration Testing
Real pipelines run integration tests against real databases, not mocks. This is where storage I/O kills performance. If you spin up a MySQL service in GitLab CI, it defaults to using the overlay filesystem, which is slow.
On a KVM instance, you can mount a RAM disk (tmpfs) for your ephemeral test databases. This makes database setup and teardown nearly instantaneous.
Add this to your docker-compose.test.yml:
version: '3.8'
services:
  db:
    image: mysql:8.0
    tmpfs:
      - /var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_db
    command: --innodb_flush_log_at_trx_commit=0 --sync_binlog=0
Warning: The flags --innodb_flush_log_at_trx_commit=0 and --sync_binlog=0 are dangerous for production because they sacrifice durability (the D in ACID) for speed. But for a CI job that lives for 3 minutes? They are perfect: InnoDB stops flushing the log to disk at every commit, and binary log syncing is left to the OS.
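One practical detail: even on tmpfs, MySQL needs a few seconds to initialize, so the test job should wait for it before running migrations. A simple sketch, assuming the service name and credentials from the compose file above:
# Start the database and poll until it accepts connections.
docker-compose -f docker-compose.test.yml up -d db
for i in $(seq 1 30); do
  docker-compose -f docker-compose.test.yml exec -T db \
    mysqladmin ping -h 127.0.0.1 -uroot -proot --silent && break
  sleep 2
done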
The Infrastructure Decision
We built CoolVDS on pure NVMe storage arrays specifically to solve the "noisy neighbor" I/O wait problem. When you are running heavy webpack compilations or linking C++ binaries, CPU stealing (steal time) from other tenants on budget hosts will cause random failures and timeouts.
By keeping your pipeline infrastructure local in Norway, you satisfy the legal department regarding GDPR data transfers and you satisfy the dev team by cutting feedback loops in half. Speed isn't just a metric; it's a developer happiness factor.
Don't let slow I/O kill your release cadence or your patience. Check your iowait metrics today. If they are spiking above 5%, it's time to move.
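A quick way to check, straight from the build agent (sar ships with the sysstat package on Ubuntu 20.04):
# Watch %iowait and %steal while a real build is running.
sudo apt-get install -y sysstat
sar -u 2 30
# Or without installing anything: the "wa" and "st" columns in vmstat.
vmstat 2 30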