Stop FTPing Production: Architecting Git-Based Deployment Pipelines on Linux
It is 2013. If you are still dragging and dropping PHP files via FileZilla to your production server, you are a liability. There, I said it. You are one connection drop or one accidental overwrite away from taking down your entire e-commerce platform. In the high-stakes world of hosting—whether you are serving traffic to Oslo or mirroring data across the EU—manual intervention is the enemy of stability.
I have seen it happen. A senior developer, tired on a Friday afternoon, overwrites config.php with a local version pointing to localhost. The site goes dark. The client screams. The logs show nothing but a sudden cessation of database connectivity. This isn't a software bug; it's a process failure.
We need to talk about treating your infrastructure and deployment logic with the same rigor as your code. We call this "Infrastructure as Code" or Continuous Deployment. By making Git the single source of truth, we eliminate the "it works on my machine" syndrome.
The Architecture of Trust
A robust deployment pipeline in 2013 relies on three core components: Version Control (Git), an Automation Server (Jenkins), and a destination environment that respects resource isolation (CoolVDS KVM instances). We don't use OpenVZ for serious build servers because we can't afford resource contention when compiling assets.
Method 1: The "Poor Man's" Deploy (Git Hooks)
For smaller projects or simple static sites, you don't always need a heavy CI server. A bare Git repository with a post-receive hook is elegant and fast.
The Setup:
On your remote VPS (running CentOS 6 or Debian 7), initialize a bare repository:
mkdir -p /var/git/project.git
cd /var/git/project.git
git init --bare
Now, we create the magic hook. This script executes immediately after you push code to the server.
#!/bin/bash
# /var/git/project.git/hooks/post-receive
TARGET="/var/www/html/production"
GIT_DIR="/var/git/project.git"
BRANCH="master"

while read oldrev newrev ref
do
    if [[ $ref =~ .*/$BRANCH$ ]]; then
        echo "Master ref received. Deploying to production..."
        mkdir -p "$TARGET"
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"
        # Fix permissions (common pain point!)
        # www-data is the Debian convention; use apache:apache on CentOS.
        chown -R www-data:www-data "$TARGET"
        find "$TARGET" -type d -exec chmod 755 {} \;
        find "$TARGET" -type f -exec chmod 644 {} \;
        echo "Deployment complete."
    else
        echo "Ref $ref received. Doing nothing: only the $BRANCH branch may be deployed on this server."
    fi
done
Don't forget to make it executable: chmod +x hooks/post-receive. Now, a simple git push production master deploys your code instantly. No FTP client required.
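The laptop-side setup is a one-liner: add the bare repo as a remote named production. Here is a self-contained sketch of the whole round trip using a local path in place of the SSH URL (in real life the remote would be something like ssh://deploy@your-server/var/git/project.git — host and user are placeholders):

```shell
#!/bin/bash
# Demonstrates the push-to-deploy flow with a bare repo on local disk.
set -e
DEMO=$(mktemp -d)
git init --bare "$DEMO/project.git" >/dev/null 2>&1

git init "$DEMO/work" >/dev/null 2>&1
cd "$DEMO/work"
git symbolic-ref HEAD refs/heads/master     # ensure the branch is named master
git config user.email "you@example.com"
git config user.name "Deploy Demo"
echo "<?php // app" > index.php
git add index.php
git commit -m "Initial commit" >/dev/null

# The one-time setup on your laptop; the URL would normally start with ssh://
git remote add production "$DEMO/project.git"
git push production master 2>/dev/null
git remote -v
```

From then on, every deploy is just `git push production master`, and the hook on the server does the rest.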
Pro Tip: Always secure your SSH keys. If you are pushing from a laptop to a CoolVDS instance, disable password authentication in /etc/ssh/sshd_config by setting PasswordAuthentication no. Brute force bots scanning the Norwegian IP ranges never sleep. Use Fail2Ban. ALWAYS.
Method 2: Enterprise CI with Jenkins
When you have a team of developers, Git hooks become risky. You need tests. You need a build process. You need Jenkins. Jenkins (forked from Hudson) has matured significantly through the 1.5xx releases. It handles the "heavy lifting"—running PHPUnit, compiling Java, or minifying JS—before the code ever touches production.
The Pipeline
- Commit: Developer pushes to GitHub/Bitbucket.
- Build: Jenkins polls the repo (or receives a webhook).
- Test: Jenkins runs unit tests.
- Deploy: If tests pass, Jenkins rsyncs the build artifacts to the web server.
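The pass/fail mechanics of step 3 are worth spelling out: Jenkins marks an "Execute Shell" step FAILED the moment it exits non-zero, so with set -e the first failing command aborts the build before deploy ever runs. A minimal sketch (the phpunit invocation is stubbed here so the control flow itself is visible; $WORKSPACE is the variable Jenkins exports for real builds):

```shell
#!/bin/bash
# Sketch of a Jenkins "Execute Shell" test step.
set -e                                  # any non-zero exit fails the build
WORKSPACE="${WORKSPACE:-$(pwd)}"        # Jenkins sets this on real builds
cd "$WORKSPACE"

run_tests() {
    # Stand-in for: phpunit --configuration phpunit.xml
    true
}

if run_tests; then
    echo "Tests passed - proceeding to deploy step"
else
    echo "Tests failed - build aborted" >&2
    exit 1
fi
```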
Here is a robust rsync command you might use in a Jenkins "Execute Shell" build step. This ensures we only transfer changed files and maintain atomic swaps where possible.
#!/bin/bash
# Jenkins Build Step
SSH_USER="deploy"
SSH_HOST="192.168.10.55"   # Internal IP via CoolVDS private network
REMOTE_PATH="/var/www/vhosts/app_release"

# Exclude git metadata and tests to save bandwidth
rsync -avz --delete --exclude '.git*' --exclude 'tests' \
    -e "ssh -o StrictHostKeyChecking=no" \
    "$WORKSPACE/" "$SSH_USER@$SSH_HOST:$REMOTE_PATH/"

# Reload PHP-FPM to flush the APC opcode cache, then nginx for good measure
ssh "$SSH_USER@$SSH_HOST" "sudo /etc/init.d/php-fpm reload && sudo /etc/init.d/nginx reload"
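The "atomic swaps where possible" part deserves its own sketch. Instead of rsyncing over the live docroot, rsync into a timestamped release directory and flip a current symlink that nginx serves from; rename(2) is atomic, so visitors never see a half-copied tree. This local demo (paths are illustrative; mv -T is GNU coreutils) shows the flip:

```shell
#!/bin/bash
# Atomic release swap, demonstrated on local disk.
set -e
BASE=$(mktemp -d)                        # stands in for /var/www/vhosts
RELEASE="$BASE/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"
echo "v2" > "$RELEASE/index.php"         # pretend rsync just filled this

ln -s "$RELEASE" "$BASE/current.tmp"     # build the new symlink off to the side
mv -T "$BASE/current.tmp" "$BASE/current"  # atomic: one rename(2) call
readlink "$BASE/current"
```

Rollback becomes a single symlink flip back to the previous release directory.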
Database Migrations: The Hidden Killer
Code deployment is easy. Database schema changes are where seasoned admins cry. If your new code expects a column before the ALTER TABLE that adds it has completed, you have downtime.
In 2013, tools like Liquibase or simple SQL versioning scripts are essential. Never run schema changes manually. Script it.
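If Liquibase is overkill for your stack, even a tiny runner beats hand-typed ALTERs: number your migration files, apply them in order, and record which ones have run so re-deploys are idempotent. Here is a minimal sketch — the mysql call is stubbed out so the flow itself is testable, and the file names and state-file location are assumptions:

```shell
#!/bin/bash
# Minimal SQL-versioning runner: apply migrations/NNN_*.sql in order, once.
set -e
DIR=$(mktemp -d)
mkdir "$DIR/migrations"
echo "ALTER TABLE users ADD COLUMN locale VARCHAR(8);" > "$DIR/migrations/001_add_locale.sql"
echo "CREATE INDEX idx_locale ON users (locale);"      > "$DIR/migrations/002_index_locale.sql"
STATE="$DIR/.applied"
touch "$STATE"

for sql in "$DIR"/migrations/*.sql; do
    name=$(basename "$sql")
    if grep -qx "$name" "$STATE"; then
        continue                         # already applied on this host
    fi
    echo "Applying $name"
    # mysql -u deploy app_db < "$sql"    # the real call, stubbed for the demo
    echo "$name" >> "$STATE"
done
```

Run it from your deploy hook or a Jenkins step and a re-run applies nothing, which is exactly what you want.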
Furthermore, ensure your MySQL configuration is ready for the I/O hit during a backup or migration. On a standard HDD VPS, a heavy ALTER TABLE can saturate disk I/O and lock the table for minutes. This is why we advocate for SSD storage. Look at your innodb_buffer_pool_size in /etc/mysql/my.cnf. It should be 70-80% of your available RAM on a dedicated DB server.
[mysqld]
# Optimize for stability
innodb_buffer_pool_size = 4G
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1 # ACID compliance is mandatory
sync_binlog = 1
query_cache_type = 0 # Disable query cache for high write loads
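The 70-80% rule is easy to compute on the box itself by reading /proc/meminfo. A quick sanity-check one-liner (assumes a dedicated DB host, per the advice above):

```shell
#!/bin/bash
# Suggest ~75% of total RAM as a starting innodb_buffer_pool_size.
kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_mb=$(( kb * 75 / 100 / 1024 ))
echo "Suggested innodb_buffer_pool_size = ${pool_mb}M"
```

On a shared web-plus-DB box, scale that down aggressively or MySQL will fight PHP for memory.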
Why Infrastructure Matters
You can script the perfect deployment, but if the underlying virtualization is noisy, your builds will be inconsistent. This is the issue with cheap OpenVZ containers—"noisy neighbors" steal CPU cycles, causing your Jenkins build to time out randomly.
At CoolVDS, we use KVM (Kernel-based Virtual Machine). It provides true hardware virtualization. Your RAM is yours. Your CPU cores are reserved. When you run a heavy mvn install or a parallel make, you get the performance you paid for. This predictability is crucial for CI/CD pipelines.
Latency and Local Laws
If your target audience is in Norway, hosting in Frankfurt or London adds 20-30ms of latency. It doesn't sound like much, but for database calls inside a render loop, it stacks up. Hosting locally or on low-latency routes to Oslo ensures that your Git pushes and your user's requests feel instantaneous.
Furthermore, with the Data Inspectorate (Datatilsynet) becoming stricter about where user data lives, keeping your primary storage and backups within the EEA (and ideally consistent jurisdictions) minimizes legal headaches later. Don't just optimize for speed; optimize for compliance.
Final Thoughts
Automation is not a luxury. It is an insurance policy. By moving from FTP to Git-based workflows, you gain history, rollback capabilities, and sanity.
Is your build server struggling under load? Stop fighting with shared resources. Spin up a KVM-based SSD instance on CoolVDS today and watch your Jenkins builds fly.