Stop “SSH-ing” to Production: A Battle-Tested GitOps Workflow for Nordic Teams
It is 2021. If your deployment strategy still involves scp, a bash script named deploy_final_v2.sh, or manually editing Nginx configs on a live server over SSH, you are essentially juggling chainsaws. I have seen production databases corrupted because a developer, tired at the end of a long week, typed rm -rf in the wrong directory. I have seen "quick fixes" applied directly to a server evaporate the moment the instance rebooted, taking the service down for hours.
We need to stop treating servers like pets. In the post-Schrems II era, where data sovereignty in Europe is no longer optional, and with the complexity of microservices exploding, the only sane path forward is GitOps. This isn't just a buzzword; it's a survival strategy for any serious engineering team in Norway.
The Core Philosophy: Git is the Only Truth
The concept is simple: The state of your infrastructure and applications must be defined declaratively in Git. If it's not in Git, it doesn't exist. Your cluster synchronizes with the repo, not the other way around. This eliminates configuration drift—that silent killer where the live server configuration slowly diverges from your documentation until no one knows how the system actually works.
Pro Tip: In a GitOps workflow, access to the production environment (via kubectl or SSH) should be revoked for developers. CI/CD tools push to the registry, and the cluster pulls the changes. This is the ultimate security audit trail.
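What does "if it's not in Git, it doesn't exist" look like in practice? Here is a minimal sketch of a config repository layout. The directory names are illustrative, but they match the paths used in the steps below:

infra-repo/
├── k8s/
│   ├── base/                  # shared Deployment, Service, ConfigMap manifests
│   └── overlays/
│       ├── staging/           # per-environment kustomize patches
│       └── production/        # what Argo CD watches in Step 3
└── production-app.yaml        # the Argo CD Application definition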
The Stack: Choosing Tools for Stability (2021 Edition)
For this guide, I am assuming a standard stack that balances modernity with stability:
- Orchestration: Kubernetes (K3s for efficiency on VPS).
- GitOps Operator: Argo CD (v2.1).
- CI: GitLab CI (popular in Europe for its self-hosted capabilities).
- Infrastructure: CoolVDS NVMe KVM Instances.
Why Infrastructure Matters for GitOps
Many developers think GitOps is purely software. They are wrong. When you run an operator like Argo CD inside your cluster, it constantly polls your Git repositories and compares the desired state against the live state (reconciliation). This consumes CPU cycles and significant I/O.
On cheap, oversold VPS providers where CPU steal time is high, the reconciliation loop lags. You push code, and... nothing happens for 5 minutes. Or worse, the etcd database (the brain of Kubernetes) suffers from disk latency, causing the API server to timeout. We use CoolVDS for these workloads specifically because they guarantee KVM isolation and NVMe storage. Kubernetes creates a lot of small random writes; standard SSDs often choke under the load.
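You can measure both problems before trusting a node. A quick sketch: watch the "st" (steal) column in vmstat, then benchmark fsync latency with fio, using flags borrowed from the well-known etcd disk benchmark (the commonly cited target is a 99th-percentile fdatasync under 10 ms):

# CPU steal: "st" should sit at 0 on a properly isolated KVM host
vmstat 1 5

# fsync latency, the metric etcd actually cares about
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib --size=22m --bs=2300 --name=etcd-fsync-test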
Step 1: Preparing the Node (The "CoolVDS" Standard)
Before installing Kubernetes, we must tune the Linux kernel. Default Linux settings are optimized for desktop use or light serving, not for high-traffic container orchestration. I run these commands on every fresh CoolVDS instance intended for K8s.
Edit /etc/sysctl.conf to handle higher connection counts and packet forwarding:
# /etc/sysctl.conf optimizations for K8s nodes
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Increase file descriptors for high-load loggers/proxies
fs.file-max = 2097152
# Optimize swap (Keep it low for K8s stability)
vm.swappiness = 10
# Increase connection tracking table
net.netfilter.nf_conntrack_max = 131072
Note that the net.bridge.* keys require the br_netfilter kernel module; load it (and persist it across reboots) before applying, or sysctl -p will error out:
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
sudo sysctl -p
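With the kernel tuned, bootstrapping a single-node cluster is one command away. The official K3s installer bundles containerd and drops a kubeconfig at /etc/rancher/k3s/k3s.yaml:

curl -sfL https://get.k3s.io | sh -

# Verify the node reports Ready before moving on
sudo k3s kubectl get nodes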
Step 2: Installing the Operator
We will use Argo CD. It provides a visual dashboard that is invaluable for debugging why a sync failed. Assuming you have a Kubernetes cluster running (K3s is excellent for a single-node VPS setup), install Argo CD:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Wait for the pods to stabilize. This is where low latency matters. If your VPS is hosted in Frankfurt but your team and users are in Oslo, you are adding unnecessary round-trip time (RTT) to every request. Hosting on CoolVDS in the Nordics keeps your latency to the Norwegian Internet Exchange (NIX) minimal, making the dashboard snappy and CLI interactions instant.
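To log in to that dashboard, fetch the generated admin password (Argo CD v2.x stores it in a dedicated secret) and port-forward the server:

kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

# UI now reachable at https://localhost:8080
kubectl port-forward svc/argocd-server -n argocd 8080:443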
Step 3: Defining the Application
Instead of running kubectl apply -f my-app.yaml manually, we define an Application, a custom resource shipped with Argo CD, that tells Argo where to look.
Create a file named production-app.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-ecommerce-backend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:your-org/backend-service.git'
    targetRevision: HEAD
    path: k8s/overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Analysis of the Config:
- prune: true: This is critical. If you delete a file in Git, Argo will delete the resource in the cluster. Without this, you leave orphaned resources consuming RAM.
- selfHeal: true: If someone manually changes a setting in the cluster (e.g., bumping a replica count with kubectl), Argo CD immediately detects the drift and reverts it to the Git state. This enforces discipline.
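Registering the Application is the last imperative command you should need; from here on, every change flows through Git:

kubectl apply -n argocd -f production-app.yaml

# Watch the sync status
kubectl -n argocd get applications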
Step 4: The CI/CD Pipeline Integration
Your CI pipeline (Jenkins, GitLab CI, GitHub Actions) should not touch the cluster. Its job is to:
- Run tests.
- Build the Docker image.
- Push the image to a registry.
- Update the Git manifest with the new image tag.
Here is a snippet for a .gitlab-ci.yml file that updates the manifest using `kustomize` (a standard tool in 2021):
deploy_production:
  stage: deploy
  image: line/kubectl-kustomize:latest
  script:
    - cd k8s/overlays/production
    - kustomize edit set image my-app=registry.gitlab.com/org/app:$CI_COMMIT_SHA
    - git config user.email "ci-bot@coolvds.com"
    - git config user.name "CI Bot"
    - git add kustomization.yaml
    # "[skip ci]" stops this commit from re-triggering the pipeline
    - git commit -m "Update production image to $CI_COMMIT_SHA [skip ci]"
    # GitLab checks out a detached HEAD with a read-only token; pushing
    # requires a project access token (here assumed in $CI_PUSH_TOKEN)
    - git push "https://ci-bot:${CI_PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" HEAD:main
  only:
    - main
Once the change is pushed to the main branch, Argo CD detects it and syncs the cluster. No credentials for the production server are exposed in the CI runner. This greatly aids in GDPR and security compliance.
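One default worth knowing: Argo CD polls Git roughly every three minutes. If you want near-instant syncs, add a GitLab webhook pointing at Argo CD's /api/webhook endpoint (the hostname here is illustrative):

# GitLab project → Settings → Webhooks
# URL:     https://argocd.example.com/api/webhook
# Trigger: Push events on main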
The Nordic Compliance Factor: Schrems II & Data Residency
Since the Schrems II ruling last year, relying on US-based cloud providers for core infrastructure has become legally complex for Norwegian entities. If your GitOps pipeline syncs data or secrets to a server controlled by a US entity, you must navigate complex Transfer Impact Assessments (TIAs).
By hosting your Kubernetes nodes on CoolVDS, you ensure the compute and storage remain within the jurisdiction you expect. Furthermore, syncing from a local GitLab instance to a local CoolVDS instance ensures your proprietary code and configurations traverse fewer international borders.
Troubleshooting: When Things Go Wrong
GitOps is powerful, but it introduces new failure modes. A common one is the "CrashLoopBackOff" caused by resource exhaustion, which often bites teams migrating from bare metal to containers on shared virtualized infrastructure without adjusting their resource requests.
Check your resources block in your deployment YAML:
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"
If you don't set these, a single memory leak can take down the entire node. Set the limits too tightly, however, and the kernel's CFS quota will throttle your pods; on an oversold public cloud where noisy neighbors already steal cycles, that throttling becomes constant. This is where the underlying hardware of your VPS provider gets tested. CoolVDS guarantees the CPU cycles you pay for, reducing the likelihood of noisy-neighbor-induced throttling on your pods.
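When a pod does land in CrashLoopBackOff, check whether the kernel's OOM killer was the culprit before blaming your code (the pod name below is illustrative):

# "Reason: OOMKilled" under Last State means the memory limit was hit
kubectl describe pod nordic-ecommerce-backend-abc123 -n production

# Live usage vs. your requests/limits (needs metrics-server; bundled with K3s)
kubectl top pod -n production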
Conclusion
Moving to GitOps requires a shift in mindset. You stop being a firefighter and start being an architect. The initial setup takes time—configuring the repo, tuning the sysctl flags, setting up the operator—but the payoff is sleep. You sleep better knowing that your production environment exactly matches your Git repository.
For the infrastructure to back this up, you need reliability, low latency to the Nordic market, and strict data sovereignty. Don't let slow I/O kill your reconciliation loops.
Ready to build a pipeline that doesn't break? Spin up a high-performance KVM instance on CoolVDS today and install your first Argo CD controller.