Stop running kubectl apply manually. Seriously.
If you are still SSH-ing into your production server and running scripts to update your application, you are one typo away from a resume-generating event. I’ve been there. In 2018, I watched a senior engineer wipe a production database volume because he was exhausted, the terminal was lagging, and he thought he was in the staging environment. That data is gone. The client? Not happy.
In the Nordic market, where reliability is the currency and Datatilsynet (the Norwegian Data Protection Authority) watches data integrity like a hawk, manual operations are a compliance nightmare. You need an audit trail. You need rollback capabilities that take seconds, not hours.
Enter GitOps. But not the fluffy "marketing" version. We are talking about the battle-hardened implementation: ArgoCD pulling from a private Git repository, deploying to a KVM-based Kubernetes cluster running on fast bare-metal hardware.
The Architecture: Pull vs. Push
Most CI/CD pipelines start with a "Push" model (Jenkins, GitLab CI pushing directly to the cluster). This is a security risk. It requires your CI runner to hold the keys to your production kingdom (kubeconfig). If your CI gets compromised, your infrastructure is gone.
The "Pull" model reverses this. The cluster acts from the inside out. An agent (ArgoCD) sits inside your secure CoolVDS environment in Oslo, monitors the Git repository, and pulls changes when it detects a drift. No credentials leave your infrastructure.
Pro Tip: When hosting in Norway, latency matters. If your Git provider is GitHub (US-based) but your cluster is in Oslo, the sync latency is negligible; Git fetches are tiny. Your container registry, however, should sit close to your compute. Pulling a 2GB Docker image across the Atlantic during a scaling event is a recipe for timeouts.
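If you end up running K3s (as we do in Step 1), one low-effort way to keep pulls local is pointing containerd at a mirror hosted near the cluster. A minimal sketch of /etc/rancher/k3s/registries.yaml, where the mirror URL is a placeholder for whatever registry you run close to your compute:

# /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry-mirror.example.no"  # hypothetical mirror hosted near the cluster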
Step 1: The Foundation (Infrastructure)
You can't build a castle on a swamp. GitOps controllers like ArgoCD are surprisingly resource-intensive. They use Redis for caching and run continuous reconciliation loops. I’ve seen cheap, oversold VPS providers throttle the CPU during these loops, causing the sync to hang.
We use CoolVDS NVMe instances because KVM virtualization guarantees that our CPU cycles aren't stolen by a neighbor mining crypto. For a production control plane plus a GitOps controller, you need consistently low I/O wait times.
Bootstrapping the Cluster
Assuming you have a fresh Debian 11 or Ubuntu 22.04 node ready, the first step is to secure it before installing K3s or K8s:
ufw allow from 10.0.0.0/8 to any port 6443 # Only allow internal API access
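That single rule is the bare minimum. A slightly fuller lockdown, assuming SSH on the default port and an internal network on 10.0.0.0/8 (adjust to your own topology):

ufw default deny incoming                             # drop everything inbound by default
ufw default allow outgoing
ufw allow 22/tcp                                      # SSH for administration
ufw allow from 10.0.0.0/8 to any port 6443 proto tcp  # Kubernetes API, internal only
ufw enable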
Step 2: Installing ArgoCD
Let's get ArgoCD running. Do not use the default manifest for production; it lacks high availability (HA). Use the Helm chart.
First, add the repo:
helm repo add argo https://argoproj.github.io/argo-helm
Now, we configure the values.yaml. This is where most people fail: they leave Redis at its default, a single non-persistent pod. If that pod restarts, your cache is gone and the dashboard lags while it rebuilds.
# production-values.yaml
server:
  autoscaling:
    enabled: true
    minReplicas: 2
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    hosts:
      - argocd.your-cool-domain.no
repoServer:
  autoscaling:
    enabled: true
    minReplicas: 2
redis-ha:
  enabled: true
  # Critical for high-speed syncing
  haproxy:
    metrics:
      enabled: true
Deploy it:
helm install argocd argo/argo-cd -f production-values.yaml -n argocd --create-namespace
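Once the release settles, confirm the pods are healthy and grab the auto-generated admin password that ArgoCD stores in the argocd-initial-admin-secret:

kubectl get pods -n argocd
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d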
Step 3: Handling Secrets (The GDPR Headache)
You cannot commit .env files to Git. That’s rule #1. But in GitOps, Git is the source of truth. So how do you store database passwords without exposing them?
In 2023, the standard is Bitnami Sealed Secrets. It uses asymmetric encryption: you encrypt with a public key (safe to commit), and the controller inside your CoolVDS cluster decrypts with a private key that never leaves the cluster.
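The controller itself has to be running inside the cluster before anything can be decrypted. A minimal install sketch using the official Helm chart (the kube-system namespace is my habit, not a requirement):

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets -n kube-system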
Install the client-side tool:
brew install kubeseal
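To produce the encrypted manifest, you pipe a plain Secret (generated with --dry-run, so it never touches the cluster) through kubeseal. The password value below is obviously a placeholder, and the controller flags match the Helm release above:

kubectl create secret generic my-database-creds \
  --namespace production \
  --from-literal=password='REPLACE_ME' \
  --dry-run=client -o yaml \
  | kubeseal --controller-name sealed-secrets --controller-namespace kube-system --format yaml \
  > sealed-database-creds.yaml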
Here is how a SealedSecret looks compared to a regular Secret. This is safe to push to a public repo:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-database-creds
  namespace: production
spec:
  encryptedData:
    password: AgBy3...[LONG ENCRYPTED STRING]...==
  template:
    metadata:
      name: my-database-creds
    type: Opaque
This ensures that even if your GitHub repo is compromised, your database credentials remain secure. This is crucial for complying with European data sovereignty requirements.
Step 4: Defining the Application
Instead of clicking around the UI, we define our "App of Apps" pattern declaratively. This tells ArgoCD where to look.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-cluster
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:yourorg/infra-repo.git'
    targetRevision: HEAD
    path: envs/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Notice prune: true. This is the danger zone. If you remove a file from Git, ArgoCD deletes the resource in the cluster. This guarantees that your Git repo is exactly what is running. No zombie resources wasting your RAM.
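If there is a resource you never want ArgoCD to garbage-collect automatically, say a PVC full of customer data, you can opt it out per object with a sync-option annotation. The claim name here is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data  # hypothetical PVC holding data you cannot afford to lose
  annotations:
    argocd.argoproj.io/sync-options: Prune=false  # ArgoCD will skip this object when pruning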
Step 5: Resource Optimization
Kubernetes is ruthless about resources. If you don't define limits, a memory leak in one app can take down the whole node. On a VPS, even a high-performance one, you must set boundaries.
Check your current usage before committing limits:
kubectl top pods -n production --sort-by=memory
When defining your deployment in Git, always include the resources block. We optimize for the NVMe speeds available on CoolVDS by allowing higher ephemeral storage limits for cache-heavy apps.
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
    ephemeral-storage: "1Gi"
  limits:
    memory: "512Mi"
    cpu: "500m"
    ephemeral-storage: "4Gi"  # generous scratch space for cache-heavy apps on NVMe
Why Infrastructure Matters for GitOps
I once debugged a "broken" GitOps pipeline for three days. The issue wasn't the code. The issue was the cheap cloud provider's underlying storage: etcd latency was spiking above 40ms, causing the Kubernetes API to time out during large apply operations.
We migrated that workload to CoolVDS. The NVMe storage kept etcd write latency under 2ms, and the pipeline went from failing 30% of the time to a 100% success rate.
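If you suspect the same problem, measure it before you blame your manifests. This sketch assumes a kubeadm-style layout for the cert paths and metrics port; K3s keeps things elsewhere:

# Run etcd's built-in performance check from the control-plane node
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  check perf

# Or watch the fsync latency histogram that correlates with slow applies
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds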
Final Thoughts
GitOps isn't just a trend; it's the only way to manage modern complexity sanely. It provides the documentation, the history, and the disaster recovery mechanisms that enterprise clients demand.
But software is only as good as the hardware it runs on. You need a host that respects the raw I/O requirements of a modern Kubernetes control plane. Don't let slow I/O kill your deployment speed.
Ready to build a pipeline that doesn't break? Spin up a CoolVDS NVMe instance in Oslo today and start deploying with confidence.