Stop SSH-ing Into Production: A Battle-Tested GitOps Guide
If you are still running kubectl apply -f deployment.yaml from your local laptop, you are one typo away from a resume-generating event. I’ve seen it happen. A missing character in a namespace selector, and suddenly the production payment gateway is trying to connect to the staging database. It’s messy, it’s unprofessional, and frankly, in 2022, it’s inexcusable.
The solution isn't just "more scripts." The solution is GitOps. But not the fluffy marketing version—I'm talking about the raw, operational reality of managing infrastructure where Git is the only source of truth. If it’s not in the repo, it doesn’t exist.
In this guide, we are going to build a reconciliation loop that actually works, keeping Nordic data sovereignty in mind (hello, Datatilsynet) and leveraging high-performance infrastructure.
The Architecture: Why Latency Kills GitOps
GitOps relies on an operator (like ArgoCD or Flux) sitting inside your cluster, constantly polling your Git repository. It compares the desired state (Git) with the actual state (Cluster). When they drift, it fixes it.
Here is the hidden bottleneck: Network I/O and Disk I/O. If your control plane is running on a sluggish, over-sold VPS, your reconciliation loops lag. When you push a hotfix, you want the cluster to react now, not in 45 seconds after the CPU steal normalizes. This is why for our internal clusters, we strictly use CoolVDS instances. The NVMe storage ensures that etcd writes are near-instant, and the dedicated KVM resources mean the ArgoCD application controller never starves for CPU cycles.
Step 1: The Infrastructure Layer
Before we touch YAML, we need a cluster. For a robust staging environment or a lean production setup, K3s is the standard in late 2022. It cuts out the bloat.
Provision a CoolVDS instance (Ubuntu 20.04 or 22.04 LTS). Since we are targeting the Norwegian market, keeping the node in Oslo reduces latency for your local dev team and ensures data stays within the EEA, simplifying your GDPR/Schrems II compliance posture.
# On your CoolVDS node
curl -sfL https://get.k3s.io | sh -
# verify access
sudo k3s kubectl get node
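If you'd rather drive the cluster from your workstation than SSH into the node for every command, copy the kubeconfig K3s generates and point it at the node's address. A minimal sketch (the workstation file path and the node IP are placeholders):
# On the node: print the kubeconfig K3s wrote at install time
sudo cat /etc/rancher/k3s/k3s.yaml
# On your workstation: save it as e.g. ~/.kube/k3s-oslo.yaml, then edit the
# "server:" line, replacing 127.0.0.1 with the node's IP or DNS name
export KUBECONFIG=~/.kube/k3s-oslo.yaml
kubectl get node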
Pro Tip: Do not expose the Kubernetes API (port 6443) to the public internet. Use a VPN or restrict access via `ufw` to your office static IP. An API server listening on the open internet is an attack surface you do not need.
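If you go the `ufw` route, a rough baseline looks like this, assuming 203.0.113.10 stands in for your office IP (swap in your own, and keep SSH open before enabling or you will lock yourself out):
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp                                          # keep SSH reachable
sudo ufw allow from 203.0.113.10 to any port 6443 proto tcp    # API server, office only
sudo ufw enable
# Multi-node clusters also need the K3s inter-node ports (flannel VXLAN, kubelet) opened between nodes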
Step 2: Installing the Operator (ArgoCD)
We prefer ArgoCD for its visual dashboard—it makes explaining "application health" to stakeholders much easier. We will install it in a dedicated namespace.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
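Out of the box, ArgoCD polls the repository roughly every three minutes. If that undercuts the "react now" promise for you, the interval lives in the argocd-cm ConfigMap. The 60s below is just an example value, and restarting the repo-server and application controller makes sure it is picked up:
kubectl patch configmap argocd-cm -n argocd \
  --type merge -p '{"data":{"timeout.reconciliation":"60s"}}'
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart statefulset argocd-application-controller -n argocd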
Once the pods are running (check with kubectl get pods -n argocd), you need to access the UI. In a production environment, you'd put this behind an Ingress with an SSL certificate. For setup, port-forwarding works:
kubectl port-forward svc/argocd-server -n argocd 8080:443
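The username is admin, and the initial password is auto-generated at install time and stored in a secret. Fetch it like this (and delete the secret once you've changed the password):
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
# On macOS, use base64 -D if -d isn't accepted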
Step 3: The Git Repository Structure
Structure matters. Don't dump everything into the repository root. A standard separation for 2022 looks like this:
/apps
  /base
    /nginx
      deployment.yaml
      service.yaml
  /overlays
    /staging
      kustomization.yaml
    /production
      kustomization.yaml
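One thing the tree glosses over: the base directory needs its own kustomization.yaml as well, otherwise the ../../base/nginx reference in the overlays won't resolve. A minimal apps/base/nginx/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml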
We use Kustomize because it's built into `kubectl`. It avoids the complexity of Helm charts for internal services. Here is how your `staging/kustomization.yaml` should look to patch the replica count and resource limits:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/nginx
patchesStrategicMerge:
  - |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 2
      template:
        spec:
          containers:
            - name: nginx
              resources:
                requests:
                  cpu: "100m"
                  memory: "128Mi"
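Before handing the overlay to ArgoCD, render it locally; the mistakes you catch here are the ones that would otherwise surface as a degraded app in the dashboard. Kustomize is built into kubectl, so from the repo root:
# Print the fully rendered staging manifests
kubectl kustomize apps/overlays/staging
# Or diff against what's currently running in the cluster
kubectl diff -k apps/overlays/staging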
Step 4: Defining the ArgoCD Application
Now, we tell ArgoCD to watch our repo. This is the "glue" file. Apply this to your cluster to bootstrap the app.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: internal-docs-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/infra-repo.git
    targetRevision: HEAD
    path: apps/overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
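Save the manifest under any filename you like (internal-docs-staging.yaml below is just an example), apply it once, and from then on the repository drives everything:
kubectl apply -f internal-docs-staging.yaml
# Watch the sync and health status ArgoCD reports
kubectl get applications -n argocd
If the staging namespace doesn't exist yet, create it first or add CreateNamespace=true under syncOptions in the syncPolicy.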
Note the selfHeal: true flag. This is the magic. If someone manually deletes a deployment, ArgoCD detects the drift and recreates it immediately. This is why underlying stability is critical. If your VPS disk I/O is thrashing, this reconciliation loop slows down, leaving your app in a broken state longer than necessary.
The "Secret" Problem
You cannot commit passwords to Git. In 2022, the battle-tested standard is Sealed Secrets by Bitnami, or external secret stores like HashiCorp Vault.
For most mid-sized Norwegian setups, Sealed Secrets is perfect. It uses asymmetric encryption. You encrypt with a public key (safe to commit to Git), and the controller inside your CoolVDS cluster decrypts it with the private key (which never leaves the cluster).
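The controller half has to be running in the cluster before kubeseal can do anything useful. It ships as a plain manifest on the project's GitHub releases page; the version pinned below is illustrative, grab whatever is current:
# Install the sealed-secrets controller (lands in kube-system by default)
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.1/controller.yaml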
# Install client side (on your workstation)
brew install kubeseal
# Encrypt a secret
echo -n bar | kubectl create secret generic my-secret --dry-run=client --from-file=foo=/dev/stdin -o json > my-secret.json
kubeseal < my-secret.json > my-sealed-secret.json
Now you commit my-sealed-secret.json. Safe, compliant, and clean.
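Once ArgoCD syncs it, the controller unseals the resource into a regular Secret in its target namespace. Worth verifying the first time; the -n staging below assumes you sealed the secret for the staging namespace, since kubeseal binds a sealed secret to a specific namespace and name by default:
# The SealedSecret custom resource should be present...
kubectl get sealedsecret my-secret -n staging
# ...and the controller should have created the plain Secret from it
kubectl get secret my-secret -n staging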
Why Infrastructure Choice Impacts GitOps
GitOps generates a lot of "chatter." The cluster is constantly pulling git data, checking container registries, and writing status updates to etcd.
| Resource | Standard Cloud VPS | CoolVDS (NVMe KVM) | Impact on GitOps |
|---|---|---|---|
| Disk I/O | Shared (Noisy Neighbors) | Dedicated NVMe | Faster etcd writes = Faster sync |
| CPU | Steal Time common | Dedicated Threads | Reconciliation loops don't hang |
| Location | Frankfurt/London | Oslo | Lower latency for local registries |
When running CI/CD pipelines that build Docker images and push them to your registry, raw CPU power is the difference between a 2-minute build and a 10-minute build. We've seen build times drop by 40% simply by moving runners from generic cloud instances to CoolVDS Performance plans.
Conclusion
GitOps isn't just a trend; it's the operational maturity model for 2022. It gives you an audit trail for every change, automated recovery, and peace of mind. But remember, a robust software architecture cannot fix a fragile hardware foundation.
If you are building for the Nordic market, ensure your data stays local and your I/O stays fast. Don't let slow infrastructure kill your deployment velocity.
Ready to harden your pipeline? Spin up a CoolVDS NVMe instance in Oslo today and deploy your first ArgoCD cluster in under 5 minutes.