Stop Watching Paint Dry: Optimizing CI/CD Build Times for Norwegian Dev Teams
It’s 4:00 PM on a Friday. You’ve just pushed a critical hotfix to the master branch. Now, you wait. And wait. And wait.
The Jenkins console output crawls line by line. PHPUnit is hanging. The rsync transfer to production is stalling. If this sounds familiar, your infrastructure is likely failing you. In the high-stakes world of systems administration, latency isn't just a network metric; it's a productivity killer. I've seen entire development teams in Oslo lose cumulative weeks of work simply staring at progress bars on underpowered staging servers.
The culprit is rarely the code. It's the underlying metal. Specifically, disk I/O and virtualization overhead.
The Hidden Bottleneck: I/O Wait
Most Virtual Private Server (VPS) offerings today still run on mechanical SAS drives in RAID arrays. When twenty other tenants on the same physical host all compile Java or run npm install simultaneously, the disk heads thrash. Your CPU isn't busy processing code; it's stuck in iowait.
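You don't have to guess whether you are I/O-bound. This sketch (Linux-specific; it assumes the standard /proc/stat layout, where the fifth numeric field on the aggregate cpu line is iowait) samples the counters one second apart and reports the share of time spent waiting on disk:

```shell
#!/bin/sh
# Sample /proc/stat twice, one second apart, and report the percentage of
# CPU time spent in iowait (the "wa" column in vmstat). Fields after the
# "cpu" label are: user nice system idle iowait ...
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
total=$(( (u2-u1) + (n2-n1) + (s2-s1) + (i2-i1) + (w2-w1) ))
iowait_pct=$(( 100 * (w2 - w1) / total ))
echo "iowait over the last second: ${iowait_pct}%"
```

If that number sits above 20-30% while your build runs, the disk is your bottleneck, not the CPU.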
I recently audited a pipeline for a media client in Bergen. Their build took 24 minutes. By moving the workspace to a KVM-backed instance with pure SSD storage and tweaking the filesystem, we dropped that to 4 minutes.
Optimization 1: Ramdisks for Build Artifacts
Continuous Integration generates massive amounts of temporary data—compiled classes, test databases, session files—that are deleted immediately after the build. Writing this to a physical disk, even an SSD, is wasteful.
If you are running Jenkins or TeamCity, mount your workspace or temporary build directories on tmpfs (RAM). This eliminates disk latency entirely for intermediate files.
Configuration: Add this to your /etc/fstab to create a 4GB RAM disk for build workspaces:
tmpfs /var/lib/jenkins/workspace tmpfs rw,size=4g,nr_inodes=200k,noatime,mode=1777 0 0
Note: Data here is volatile. Ensure your post-build actions archive artifacts to persistent storage immediately.
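If you want to see the gap before touching fstab, most Linux systems already mount a tmpfs at /dev/shm. A rough dd comparison against an on-disk path makes the point (here /tmp is assumed to be on disk; if your distribution mounts /tmp as tmpfs too, point the second run at a real disk path):

```shell
#!/bin/sh
# Rough write-throughput comparison: tmpfs (/dev/shm) vs. an on-disk path.
# 32 MB keeps this safe on small instances; conv=fdatasync forces the
# on-disk run to actually reach the storage before dd reports a speed.
dd if=/dev/zero of=/dev/shm/bench.tmp bs=1M count=32 conv=fdatasync 2>&1 | tail -n1
dd if=/dev/zero of=/tmp/bench.tmp bs=1M count=32 conv=fdatasync 2>&1 | tail -n1
rm -f /dev/shm/bench.tmp /tmp/bench.tmp
```

On a loaded host with spinning disks, the second figure is often an order of magnitude worse than the first.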
Optimization 2: The "Noisy Neighbor" Problem and KVM
Many budget hosts use OpenVZ or Virtuozzo. These are container-based virtualization technologies (similar to the new Docker trend, but for full OSes) in which every guest shares the host's kernel. If a neighbor gets hit with a DDoS, your own iptables can stall because the shared kernel's connection-tracking tables are exhausted.
At CoolVDS, we strictly use KVM (Kernel-based Virtual Machine). This provides hardware-level virtualization. Your RAM is yours. Your CPU cycles are reserved. In a CI/CD context, this consistency is vital: you cannot debug a race condition in your code if the timing variance actually comes from another tenant's load.
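You can check what you are actually running on. Where available, systemd-detect-virt distinguishes kvm from openvz, lxc, and friends; the hypervisor flag in /proc/cpuinfo is a coarser fallback for hardware-virtualized guests. A small sketch:

```shell
#!/bin/sh
# Report the virtualization layer this machine runs on.
if command -v systemd-detect-virt >/dev/null 2>&1; then
    # Prints e.g. "kvm", "openvz", "lxc", or "none".
    virt=$(systemd-detect-virt 2>/dev/null || true)
    [ -n "$virt" ] || virt="unknown"
elif grep -q '^flags.* hypervisor' /proc/cpuinfo 2>/dev/null; then
    virt="hardware-virtualized guest (hypervisor flag set)"
else
    virt="unknown (no systemd-detect-virt, no hypervisor flag)"
fi
echo "virtualization: $virt"
```

If it says openvz and your provider sold you a "dedicated" resource guarantee, ask some pointed questions.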
Scripting Atomic Deploys
FTP is dead. If you are still dragging and dropping files via FileZilla, stop. You are one connection drop away from a corrupted site.
In 2014, we use atomic deployments. Tools like Capistrano (Ruby) or Fabric (Python) are excellent, but you can achieve a robust pipeline with simple Bash and Git hooks. The goal is to swap a symlink pointing to the live directory only after the file transfer is 100% complete.
Here is a battle-tested post-receive hook I use for deploying PHP applications securely:
#!/bin/bash
# /var/repo/site.git/hooks/post-receive
TARGET="/var/www/production"
GIT_DIR="/var/repo/site.git"
BRANCH="master"

while read oldrev newrev ref
do
    if [[ $ref =~ .*/$BRANCH$ ]]; then
        echo "Starting deploy to $TARGET..."

        # Create a timestamped release directory
        RELEASE_DIR="/var/www/releases/$(date +%Y%m%d%H%M%S)"
        mkdir -p "$RELEASE_DIR"

        # Check out the pushed code into the new directory
        git --work-tree="$RELEASE_DIR" --git-dir="$GIT_DIR" checkout -f "$BRANCH"

        # Link shared assets (uploads, logs, credentials)
        ln -nfs /var/www/shared/uploads "$RELEASE_DIR/public/uploads"
        ln -nfs /var/www/shared/config/database.php "$RELEASE_DIR/config/database.php"

        # Atomic switch: build the symlink beside the target, then rename it
        # into place. A plain `ln -nfs` unlinks and re-creates in two steps;
        # `mv -T` is a single rename(), so visitors never see a half-switched site.
        ln -s "$RELEASE_DIR" "$TARGET.tmp"
        mv -T "$TARGET.tmp" "$TARGET"

        # Reload PHP-FPM to clear the opcode cache
        service php5-fpm reload

        echo "Deploy complete!"
    fi
done
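One thing the hook above doesn't do is clean up after itself: every push leaves a full checkout under /var/www/releases. A small companion function (a hypothetical helper, shown with the same paths as the hook) keeps the newest five releases and deletes the rest; the timestamped names sort chronologically, so a lexical sort is enough:

```shell
#!/bin/bash
# prune_releases DIR [KEEP] -- delete all but the newest KEEP (default 5)
# release directories. Relies on the YYYYmmddHHMMSS naming from the hook,
# which makes lexical order equal chronological order.
prune_releases() {
    local releases_dir="$1" keep="${2:-5}"
    # GNU head's negative count drops the last $keep (newest) entries,
    # leaving only the stale releases on the pipeline.
    ls -1 "$releases_dir" | sort | head -n -"$keep" | while read -r old; do
        echo "Pruning old release: $old"
        rm -rf "${releases_dir:?}/${old}"
    done
}

# Example invocation, matching the hook's layout:
# prune_releases /var/www/releases 5
```

Run it from a post-deploy step or a daily cron. Never point it at a directory holding anything but releases; the ${releases_dir:?} guard stops it from ever expanding to rm -rf /.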
Data Sovereignty: Why Norway Matters
Latency isn't just about disk speed; it's about physics. If your dev team is in Oslo and your server is in Texas, you are fighting the speed of light: every git push, every interactive SSH keystroke carries roughly 150 ms of round-trip delay. It adds up.
Furthermore, with the EU working to replace the Data Protection Directive and increasing scrutiny from Datatilsynet following the Snowden leaks, hosting data outside the EEA is becoming a legal minefield. The Safe Harbor agreement is looking increasingly shaky.
Pro Tip: Hosting within Norway (or minimal hops away) ensures your data remains under Norwegian jurisdiction. CoolVDS infrastructure peers directly at NIX (Norwegian Internet Exchange), ensuring your packets often never leave the country.
Database Tuning for CI
Automated tests often tear down and rebuild databases hundreds of times. Default MySQL configurations are tuned for data safety, not throwaway test speed. For your CI environment only, you can relax ACID compliance to gain massive speed improvements.
Edit your my.cnf (usually in /etc/mysql/) on the testing server:
[mysqld]
# DANGER: Do not use this on production!
# Flush to disk every second instead of every commit.
# If the server crashes, you lose 1 second of data (acceptable for tests).
innodb_flush_log_at_trx_commit = 2
# Disable disk sync for binlogs
sync_binlog = 0
# Allocate 50-70% of the machine's RAM to the buffer pool (1G is an example)
innodb_buffer_pool_size = 1G
This simple change can cut a heavy test suite's runtime in half, because the database no longer forces a flush to disk after every single INSERT.
The CoolVDS Advantage
We built CoolVDS because we were tired of "noisy neighbors" and opaque resource limits. We don't oversell our RAM. We use enterprise-grade SSDs that sustain high IOPS even under load.
If you are building the future of Norwegian tech, don't host it on the hardware of the past. Your time is too expensive to spend waiting for a progress bar.
Ready to speed up your pipeline? Deploy a high-performance SSD VPS with CoolVDS today and get your builds green in record time.