Stop Burning Developer Hours on "Compiling"
We have all been there. It is 16:30 on a Thursday. You push a critical hotfix to the repository. Then you wait. You stare at the spinning wheel on Jenkins. Five minutes pass. Ten minutes. The build fails because of a timeout connecting to a third-party repo. You fix it, push again, and wait another fifteen minutes.
In the Norwegian tech sector, where developer salaries are among the highest in the world, paying engineers to watch progress bars is financial suicide. Yet, I see it constantly. Companies run heavy Java or PHP builds on cheap, oversold VPS instances hosted in Frankfurt or Amsterdam, wondering why their npm install takes eons.
I recently audited a stack for a media house in Oslo. Their deployment pipeline took 24 minutes. After optimizing the Docker layers and moving the workload to high-performance infrastructure, we got it down to 3 minutes. Here is exactly how we did it, using technology available right now in 2017.
The Bottleneck is Almost Always I/O
Most DevOps engineers obsess over CPU cycles. They upgrade from 2 cores to 4 cores and are surprised when the build time barely moves. Why? Because CI/CD is inherently Disk I/O intensive. Unpacking archives, moving artifacts, building Docker images, and creating cache folders all hammer the disk.
If you are running on standard SSDs (or worse, spinning rust) in a noisy public cloud, your iowait is likely killing you. In a shared hosting environment without strict isolation, your neighbor's database backup steals your IOPS.
Pro Tip: Check your wait times. Run `top` during a build. If the `wa` (wait) percentage is consistently over 10%, your CPU is sitting idle waiting for the disk to wake up. You need faster storage, not more processors.
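If you want a number rather than eyeballing `top`, the kernel exposes iowait in `/proc/stat`. A minimal sketch (Linux only; note this is the cumulative average since boot, not the live figure `top` shows — field positions are per the proc(5) man page):

```shell
# Cumulative iowait share since boot, read from /proc/stat (Linux only).
# On the "cpu" summary line, field 6 is iowait jiffies; see proc(5).
awk '/^cpu /{
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    printf "iowait: %.1f%% of CPU time\n", 100 * $6 / total
}' /proc/stat
```

For a per-build view, run it before and after the build and diff the raw jiffie counts, or use `iostat` from the sysstat package.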
Optimization 1: Docker Layer Caching (The Right Way)
Docker is standardizing how we ship code, but most Dockerfiles I see make terrible use of the build cache. Docker caches layers based on the instruction string and the files involved. If you copy your entire source tree before installing dependencies, you bust the cache on every single commit.
Here is the wrong way, which I see in 90% of repositories:
```dockerfile
FROM node:6.9
WORKDIR /app
COPY . /app
RUN npm install   # <-- This re-runs on every code change!
CMD ["npm", "start"]
```

Every time you change a README.md or a CSS file, Docker invalidates the cache for the `COPY` instruction, forcing `npm install` to run again. On a slow network, that is 5 minutes wasted. Do this instead:
```dockerfile
FROM node:6.9
WORKDIR /app
# Copy only the dependency definition first
COPY package.json /app/
# Install dependencies. This layer is now cached until package.json changes
RUN npm install
# NOW copy the rest of the source code
COPY . /app
CMD ["npm", "start"]
```

This simple change ensures that `npm install` only runs when you actually add a new library. For a client using this structure on a CoolVDS KVM instance, build times dropped by 70% immediately.
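A related, low-effort win: `COPY . /app` sends the whole build context to the Docker daemon, so a `.dockerignore` file keeps bulky or irrelevant paths out of the context entirely — they can no longer bust the cache or slow the upload. A minimal example (adjust the entries to your project):

```
# .dockerignore -- paths listed here never enter the build context
.git
node_modules
npm-debug.log
*.md
```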
Optimization 2: Ramdisks for Temporary Artifacts
CI pipelines generate gigabytes of garbage data—intermediate object files, test reports, and temporary databases—that are deleted immediately after the build. Writing this to the physical disk is wasteful.
If you have RAM to spare (and RAM is cheap compared to developer time), mount a tmpfs volume for your workspace. This treats a portion of your RAM as a disk. Reads and writes become instant.
In your Jenkins agent configuration or your docker run command, map the build workspace to memory:
```shell
docker run --tmpfs /app/build:rw,size=1G,mode=1777 my-build-image
```

Be careful: RAM is volatile. Only use this for data you can afford to lose if the server loses power. However, for the duration of a build, it is a massive speed boost.
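Outside of Docker you don't even need root to experiment: most Linux distributions already mount a tmpfs at `/dev/shm`. A rough sketch of parking throwaway build artifacts in RAM (the paths and the 16 MB artifact are illustrative — point your build tool's scratch directory wherever suits you):

```shell
# /dev/shm is a tmpfs on most Linux distros -- writes land in RAM.
SCRATCH="/dev/shm/ci-scratch-$$"
mkdir -p "$SCRATCH"

# Simulate dumping a temporary build artifact (16 MB) into RAM-backed storage
dd if=/dev/zero of="$SCRATCH/artifact.bin" bs=1M count=16 status=none

ls -lh "$SCRATCH/artifact.bin"
rm -rf "$SCRATCH"   # it vanishes on reboot anyway -- never keep real output here
```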
The Infrastructure Reality Check: Why KVM and NVMe Matter
Software optimization can only go so far. Eventually, you hit the hardware ceiling. In 2017, the difference between SATA SSD and NVMe (Non-Volatile Memory Express) is becoming impossible to ignore. NVMe connects directly to the PCIe bus, bypassing the SATA controller bottleneck.
For a recent project involving a heavy Magento 2 deployment, we tested build times across three providers. The results were telling:
| Provider Type | Storage | Build Time |
|---|---|---|
| Generic Cloud VPS | SATA SSD (Shared) | 14m 20s |
| Budget VPS | HDD (RAID 10) | 28m 45s |
| CoolVDS | NVMe (Dedicated) | 4m 12s |
The drastic reduction on CoolVDS wasn't magic. It was the combination of NVMe storage and KVM virtualization. Unlike OpenVZ containers, KVM provides a dedicated kernel and strict resource isolation. When your CI pipeline demands 100% CPU to compile C++ or transpile TypeScript, KVM ensures you actually get those cycles, rather than waiting in a queue behind other tenants.
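If you want to sanity-check a provider's storage yourself before committing to it, a crude sequential-write test with `dd` gives a first impression (fio is the proper benchmarking tool; `conv=fdatasync` makes dd wait for the flush so the page cache doesn't flatter the result):

```shell
# Crude sequential-write check. conv=fdatasync forces the data to actually
# hit the disk before dd reports its throughput figure.
# 64 MB is small; bump count for a steadier number on fast NVMe.
dd if=/dev/zero of=./ddtest.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f ./ddtest.bin
```

Run it a few times at different hours: on oversold shared storage the numbers swing wildly, which is exactly the noisy-neighbor problem described above.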
Data Sovereignty and GDPR
We are all watching the news about the upcoming General Data Protection Regulation (GDPR). The deadline is next year (2018), but smart CTOs are preparing now. The Datatilsynet (Norwegian Data Protection Authority) is becoming stricter about where data lives.
If your CI/CD pipeline processes production databases or sanitizes user data for staging environments, that data is traversing your build server. Hosting your CI infrastructure in Norway or the EEA is a safety net. Latency is the other factor. If your git repository is hosted on an internal GitLab instance in Oslo, and your build server is in Virginia, you are paying a "latency tax" on every git clone.
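One mitigation while you relocate infrastructure: shallow clones. Git's `--depth` option fetches only the most recent commits, shrinking both the bytes transferred and the round trips on every CI checkout. A self-contained sketch on a throwaway local repository (in a real pipeline you would point the clone at your GitLab URL instead):

```shell
# Demonstrate --depth on a throwaway local repo with three commits.
set -e
TMP=$(mktemp -d)
git init -q "$TMP/origin"
cd "$TMP/origin"
git config user.email ci@example.no
git config user.name ci
for i in 1 2 3; do
    echo "rev $i" > file.txt
    git add file.txt
    git commit -qm "commit $i"
done
cd "$TMP"
# file:// forces the transport code path, so --depth actually applies
git clone -q --depth 1 "file://$TMP/origin" shallow
git -C shallow rev-list --count HEAD   # prints 1: only the tip commit came over
```

Full history is wasted bytes for a checkout that gets deleted after the build.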
Example: Parallelizing Tests in Jenkins
Finally, stop running tests sequentially. Jenkins 2.0 Pipelines are powerful. Use the `parallel` step to utilize the multiple cores available on your CoolVDS instance.
```groovy
stage('Test') {
    parallel(
        "Unit Tests": {
            sh "mvn test -Dtest=UnitTests"
        },
        "Integration Tests": {
            sh "mvn test -Dtest=IntegrationTests"
        }
    )
}
```

If you have 4 vCPUs, use them. Don't let 3 sit idle while one core does all the work.
Conclusion
Speed is a feature. If your deployment takes 20 minutes, you deploy less often. If you deploy less often, your risk per deployment increases. It is a vicious cycle.
By optimizing your Docker layering, utilizing in-memory filesystems, and ensuring your underlying infrastructure uses KVM and NVMe storage, you can reclaim hours of productivity every week. Don't let slow I/O kill your workflow.
Ready to see what raw NVMe speed does for your build times? Spin up a CoolVDS KVM instance in Oslo today.