Building the Bulletproof CI/CD Pipeline: Jenkins, Git, and Atomic Deploys on Linux VPS
It is 2014. If you are still dragging files from your local machine to a production server using FileZilla on a Friday afternoon, you aren't a Systems Administrator. You are a liability. I have seen production servers crash because of a half-uploaded PHP file or a missing CSS asset during a manual sync. It’s messy, it’s unprofessional, and quite frankly, it’s unnecessary.
The standard today is Continuous Integration and Continuous Deployment (CI/CD). We aren't talking about the bleeding edge, unstable container experiments like that new Docker 1.0 release everyone is buzzing about. I’m talking about battle-tested, iron-clad infrastructure: Jenkins, Git, Bash, and KVM virtualization. This is how you sleep at night.
The Bottleneck is Always I/O
When you set up a Jenkins build server, you aren't just running code. You are thrashing the disk. Between git clone, npm install (for the Node.js folks using Grunt or Gulp), and compiling Java artifacts, your disk I/O is the single biggest factor in build time. I recently audited a setup for a client in Oslo where builds took 20 minutes. The CPU was idle 80% of the time. The culprit? High I/O Wait on a standard SATA hard drive.
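Don't take the CPU graph's word for it—measure. A quick check using iostat from the sysstat package, run while a build is in flight:
# Extended per-device stats, every 2 seconds, 5 samples
iostat -x 2 5
# If %iowait is high and %util on the build disk sits near 100,
# the disk is your bottleneck, not the CPU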
You need high IOPS. This is why at CoolVDS, we don't bother with spinning rust for our primary compute nodes. We utilize Enterprise SSDs (and are testing cutting-edge NVMe storage solutions) because when Jenkins triggers five concurrent builds, a standard HDD will choke, causing timeouts and failed deployments.
The "Atomic" Deployment Strategy
The most robust way to deploy code in 2014 isn't complex. It relies on the filesystem. We use a strategy called "Atomic Swapping" with symlinks. Your web server (Nginx or Apache) points to a symlink named current. You deploy your new code to a directory with a timestamp. Once the build passes, you switch the symlink. It takes milliseconds.
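For context, the web-server side of this is trivial. A minimal Nginx sketch—the server_name and document root here are placeholders, adjust to your app:
server {
    listen 80;
    server_name example.com;

    # "current" is a symlink; Nginx follows it on every request,
    # so swapping it switches releases with zero downtime
    root /var/www/production/current;
    index index.html;
}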
Here is the exact Bash script I use for my production environments. It does a shallow git clone (--depth 1), so we only transfer the tip of the branch instead of the full history, saving bandwidth—crucial if you are pushing to a remote data center. (If you build artifacts elsewhere, rsync is the equivalent trick: it transfers only the deltas.)
The Deployment Script (deploy.sh)
#!/bin/bash
# CI/CD Deployment Script v1.4
# Date: 2014-08-20
set -euo pipefail  # abort on any error; never swap the symlink after a failed build
DEPLOY_PATH="/var/www/production"
REPO_URL="git@github.com:company/project.git"
TIMESTAMP=$(date +%Y%m%d%H%M%S)
NEW_RELEASE="$DEPLOY_PATH/releases/$TIMESTAMP"
CURRENT_LINK="$DEPLOY_PATH/current"
echo "Starting deployment for release: $TIMESTAMP"
# 1. Shallow clone (assumes the Jenkins agent has its SSH key authorized)
mkdir -p "$NEW_RELEASE"
git clone --depth 1 "$REPO_URL" "$NEW_RELEASE"
# 2. Run build steps (e.g., Composer, Grunt)
cd "$NEW_RELEASE"
# composer install --no-dev --optimize-autoloader
# 3. Atomic symlink swap: build the new link beside the old one,
#    then rename over it in a single syscall
ln -sfn "$NEW_RELEASE" "$DEPLOY_PATH/next_release"
mv -Tf "$DEPLOY_PATH/next_release" "$CURRENT_LINK"
# 4. Cleanup old releases (keep last 5)
cd "$DEPLOY_PATH/releases"
ls -t | tail -n +6 | xargs -r rm -rf
echo "Deployment Complete. Symlink updated."
This script ensures that a user never hits a half-deployed site. The mv -Tf command boils down to a rename(2) syscall, which is atomic on Linux as long as source and target live on the same filesystem. It either happens, or it doesn't.
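The same mechanism gives you instant rollback: re-point the symlink at the previous release. A minimal sketch, assuming the releases/ layout from the script above (rollback.sh is a hypothetical companion script):
#!/bin/bash
# rollback.sh - re-point "current" at the second-newest release
set -euo pipefail
DEPLOY_PATH="/var/www/production"
PREVIOUS=$(ls -t "$DEPLOY_PATH/releases" | sed -n '2p')  # second line = previous release
[ -n "$PREVIOUS" ] || { echo "No previous release found"; exit 1; }
ln -sfn "$DEPLOY_PATH/releases/$PREVIOUS" "$DEPLOY_PATH/next_release"
mv -Tf "$DEPLOY_PATH/next_release" "$DEPLOY_PATH/current"
echo "Rolled back to $PREVIOUS"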
Optimizing the Environment
To run this efficiently, your VPS needs tuning. Default Linux distributions are often tuned for compatibility, not performance. If you are running high-traffic deployments on CoolVDS, you should tweak your sysctl.conf to handle the network stack better, especially to reduce latency for users connecting via the NIX (Norwegian Internet Exchange).
Network Stack Tuning
Add this to /etc/sysctl.conf to improve TCP handling during bursty traffic (common during deployment cache warm-ups):
# Increase system file descriptor limit
fs.file-max = 65535
# Improve TCP connection handling
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_tw_buckets = 400000
net.core.somaxconn = 1024
Reload with sysctl -p. This prevents your server from running out of sockets if you have a high volume of short-lived connections, which is typical for REST APIs.
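If you want proof before you tune, count the TIME_WAIT sockets during peak traffic:
# Socket state summary; look at the "timewait" figure
ss -s
# Or count them explicitly
netstat -ant | grep -c TIME_WAIT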
Database Considerations for CI
Your CI process likely runs integration tests. If you are using MySQL 5.5 or 5.6, the default configuration is woefully inadequate for a test runner that creates and destroys tables rapidly. You must ensure innodb_flush_log_at_trx_commit is set to 2 (or even 0 for pure testing environments) to avoid disk sync on every transaction. It’s risky for production data, but essential for build speed.
[mysqld]
# Optimize for write-heavy test suites
innodb_buffer_pool_size = 1G # Adjust based on your CoolVDS RAM
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
max_connections = 500
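With those settings in place, the fastest way to keep test runs isolated is to rebuild the schema before every build. A hedged sketch for a Jenkins pre-build shell step—the app_test database, the ci user, and schema.sql are all assumptions, and CI_DB_PASS is expected to be injected by Jenkins:
#!/bin/bash
# Jenkins pre-build step: recreate the test database from scratch
set -euo pipefail
# CI_DB_PASS is assumed to come from the Jenkins environment
mysql -u ci -p"$CI_DB_PASS" -e "DROP DATABASE IF EXISTS app_test; CREATE DATABASE app_test CHARACTER SET utf8;"
mysql -u ci -p"$CI_DB_PASS" app_test < schema.sql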
The Norwegian Context: Data Sovereignty
We are operating in a post-Snowden world. Trust is a currency. Since the Datatilsynet (Norwegian Data Protection Authority) enforces strict guidelines on handling personal data (Personopplysningsloven), hosting your CI/CD artifacts and production data inside Norway is not just a technical choice; it's a legal safeguard.
Pro Tip: Latency matters. If your dev team is in Oslo or Bergen, hosting on a server in Virginia adds 100ms+ to every SSH command. A CoolVDS instance in our Oslo data center provides <5ms latency to local ISPs. That difference compounds over the hundreds of SSH round trips a working day involves.
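Don't take my word for it—measure from your own office (the hostnames below are placeholders):
# Round-trip time to a candidate server
ping -c 5 vps.example.no
# What your deploy scripts actually feel: a full SSH round trip
time ssh deploy@vps.example.no true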