The "Coffee Break" Build Culture Must Die
It starts innocently. A developer pushes code to Subversion or Git. The Continuous Integration (CI) server picks it up. Three minutes pass. Then ten. Then twenty. Suddenly, your team is playing foosball or browsing Reddit because "the build is running." If you have a team of five developers waiting 20 minutes for a build, five times a day, you are burning over 8 hours of productivity daily. That is an entire salary wasted on waiting for progress bars.
As a sysadmin who has wrestled with Jenkins (and Hudson before it) since the early days, I can tell you that the culprit is rarely the CPU. It's the disk. CI/CD is an I/O punisher. It checks out thousands of small files, compiles object code, writes logs, creates JAR/WAR artifacts, and then deletes it all to start over. If you are running this on a standard spinning HDD, you are voluntarily choosing to fail.
The Anatomy of a Slow Pipeline
Let's get technical. When a build slows down, your first instinct might be to run top. You'll see Java consuming memory, but look closely at the wa (I/O wait) percentage. If that number creeps above 10-15%, your CPU is sitting idle, screaming for data from the disk.
In a recent audit for a client in Oslo, we found their Jenkins master was hosted on a budget VPS with shared storage. Their iostat looked like a horror movie:
$ iostat -x 1
avg-cpu: %user %nice %system %iowait %steal %idle
5.20 0.00 2.10 48.50 0.00 44.20
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
vda 0.00 12.00 55.00 82.00 3400.00 9500.00 94.16 8.50 65.20 7.10 97.20
Note the %iowait at 48.50% and %util at 97.20%. This server is effectively saturated. The disk heads are thrashing back and forth trying to write temporary build artifacts. No amount of JVM tuning will fix this physics problem.
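If you want to keep an eye on this number without staring at iostat, the same figure can be read straight out of /proc/stat. Here is a minimal sketch; the helper name and the 15% threshold are my own choices, not anything Jenkins or iostat ships:

```shell
# Hypothetical helper: cumulative CPU iowait share since boot, read from a
# /proc/stat-style file. The path is a parameter so it can be tested offline.
iowait_pct() {
    LC_ALL=C awk '/^cpu / {
        total = 0
        for (i = 2; i <= NF; i++) total += $i
        # field 6 is iowait: "cpu" user nice system idle iowait irq softirq ...
        printf "%.1f\n", 100 * $6 / total
    }' "${1:-/proc/stat}"
}

# Example: complain if this box has spent more than 15% of its life in iowait.
if [ "$(iowait_pct | cut -d. -f1)" -gt 15 ]; then
    echo "WARNING: disk-bound CI node" >&2
fi
```

Note this is the average since boot, so it understates short bursts; for a live view, iostat -x 1 remains the tool of choice.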
Strategy 1: Move Temporary Workspaces to RAM (tmpfs)
If you have the RAM to spare (and you should), move your heavy build directories to memory. In Linux, tmpfs acts like a mounted partition but resides in volatile memory. It is blazing fast.
You can mount a RAM disk specifically for your Jenkins workspace. Add this to your /etc/fstab to make it persistent across reboots:
tmpfs /var/lib/jenkins/workspace tmpfs defaults,size=4g,mode=1777 0 0
Then mount it:
$ mount -a
$ df -h | grep workspace
tmpfs 4.0G 0 4.0G 0% /var/lib/jenkins/workspace
Warning: Everything here vanishes on reboot. This is perfect for CI workspaces which should be ephemeral anyway, but do not put your Jenkins configuration or build history (artifacts) here.
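One failure mode worth guarding against: if the fstab entry has a typo or mount -a never ran after a reboot, Jenkins will happily write into the plain directory underneath the mount point, and you are back on slow disk without noticing. A small sanity check you might call before starting Jenkins; the helper name is mine, and it relies on GNU stat:

```shell
# Hypothetical guard: verify a directory is actually tmpfs-backed before
# letting Jenkins use it. GNU stat's "-f -c %T" prints the filesystem type.
is_tmpfs() {
    [ "$(stat -f -c %T "$1" 2>/dev/null)" = "tmpfs" ]
}

# Example: warn loudly if the workspace mount is missing.
if ! is_tmpfs /var/lib/jenkins/workspace; then
    echo "workspace is NOT on tmpfs -- check /etc/fstab and rerun mount -a" >&2
fi
```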
Strategy 2: The Master-Slave Architecture
Stop running builds on the Master. The Master should only handle scheduling, user management, and plugin logic. The heavy lifting should happen on Slaves. This separates the I/O load and ensures the UI remains responsive.
We configure our slaves using simple SSH connectivity. No complex agents required. Just a clean Linux box with Java installed.
- Create a dedicated user on the slave:

$ useradd -m -d /home/jenkins jenkins
$ mkdir -p /home/jenkins/.ssh
$ cat id_rsa.pub > /home/jenkins/.ssh/authorized_keys
$ chown -R jenkins:jenkins /home/jenkins/.ssh
$ chmod 700 /home/jenkins/.ssh
$ chmod 600 /home/jenkins/.ssh/authorized_keys

- Configure Node in Jenkins: Go to Manage Jenkins > Manage Nodes > New Node. Select "Dumb Slave" (yes, that is the official term). Set the Launch method to "Launch slave agents on Unix machines via SSH".
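For completeness, the keypair referenced above has to come from somewhere. A sketch of generating it on the master, under the user Jenkins runs as; the key path and the slave hostname at the end are placeholders of my own:

```shell
# Generate a dedicated, passphrase-less keypair for the CI master.
# -N "" means no passphrase, which unattended slave launches require.
mkdir -p "${HOME}/.ssh"
KEY="${HOME}/.ssh/jenkins_slave_key"
[ -f "$KEY" ] || ssh-keygen -t rsa -b 2048 -f "$KEY" -N "" -q

# The public half is what gets appended to authorized_keys on the slave:
cat "${KEY}.pub"

# Sanity check (placeholder hostname -- substitute your own slave):
#   ssh -i "$KEY" jenkins@build-slave-01 'java -version'
```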
Pro Tip: Use the "Labels" feature in Jenkins. Label one slave "linux-ssd" and another "legacy-db". You can then tie specific heavy jobs to the high-performance nodes in your job configuration.
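In the job's config.xml, that restriction shows up as a label assignment. Roughly, after ticking "Restrict where this project can be run" with the label linux-ssd, you should see something like this fragment (not a complete config):

```xml
<!-- Fragment of a freestyle job's config.xml: pin the job to the
     "linux-ssd" label so it only runs on the fast node(s). -->
<project>
  <assignedNode>linux-ssd</assignedNode>
  <canRoam>false</canRoam>
</project>
```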
The Hardware Reality: Why SSD is Non-Negotiable
In 2013, SSDs are no longer an expensive luxury; they are a requirement for professional development environments. The random read/write speeds of an SSD compared to a 15k RPM SAS drive are orders of magnitude higher.
At CoolVDS, we have completely phased out mechanical drives for our primary compute tiers. When you spin up a CoolVDS instance, you are sitting on enterprise-grade SSD arrays. For a Jenkins server, this translates to:
- Faster SCM Checkouts: Git operations dealing with thousands of small files complete instantly.
- Rapid Artifact Archiving: Copying WAR files to the archive directory doesn't block the build queue.
- Snappy UI: The Jenkins dashboard loads without timing out.
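A quick way to feel the difference on your own box: time a burst of small synced writes. Keep in mind dd only measures sequential throughput, so treat this as a smoke test rather than a benchmark; fio is the proper tool for random IOPS numbers:

```shell
# Write 16 MB in 4k blocks and force it to disk before dd reports the rate.
# conv=fdatasync keeps the number honest (no page-cache inflation).
dd if=/dev/zero of=/tmp/io-probe bs=4k count=4096 conv=fdatasync 2>&1 | tail -1
rm -f /tmp/io-probe
```

On a contended spinning-disk VPS this crawls; on an SSD-backed instance it should finish almost before you let go of the Enter key.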
Local Context: The Norwegian Advantage
If your development team is based in Oslo, Bergen, or Trondheim, latency matters. Pushing gigabytes of artifacts to a server in the US or even Frankfurt adds unnecessary delay to your pipeline. By hosting on CoolVDS within the Nordic region, you benefit from single-digit millisecond latency to the NIX (Norwegian Internet Exchange).
Furthermore, keeping your data within Norwegian borders simplifies compliance with Personopplysningsloven (Personal Data Act). While test data should be anonymized, we all know that production dumps sometimes find their way into CI environments. Hosting locally provides that extra layer of legal safety net that your CTO cares about.
Example: Automating Cleanup to Save Inodes
Even with SSDs, you can run out of inodes if you generate millions of small files. I run this simple cron job on all my CI nodes to keep the house clean:
#!/bin/bash
# Cleanup workspaces older than 3 days.
# -mindepth 1 stops find from ever matching (and deleting) the workspace
# root itself; -prune keeps it from descending into directories it just removed.
find /var/lib/jenkins/workspace -mindepth 1 -maxdepth 2 -type d -mtime +3 -prune -exec rm -rf {} \;
# Clear /tmp artifacts
find /tmp -name "junit*.xml" -mtime +1 -delete
echo "Cleanup complete at $(date)" >> /var/log/jenkins-cleanup.log
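Wired up via cron, for example a nightly run at 03:00. The script path below is simply wherever you saved the snippet above:

```
0 3 * * * /usr/local/bin/jenkins-cleanup.sh
```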
Conclusion
You don't need to rewrite your entire codebase to get faster builds. Start with the infrastructure. Move your I/O heavy operations to RAM where possible, distribute the load to slaves, and ensure the underlying metal is running on SSDs.
If you are tired of watching the progress bar crawl, it's time to upgrade. Deploy a CoolVDS SSD VPS today and cut your build times in half. Your developers will thank you, and your CFO will appreciate the efficiency.