GitOps in Production: Stop Scripting, Start Reconciling
If you are still SSHing into your production server to run a deployment script, you are doing it wrong. If you are manually running kubectl apply -f from your laptop, you are a security liability. I have seen entire clusters implode because a "quick fix" applied manually drifted from the repository configuration, turning the next automated deployment into a catastrophe.
It is November 2019. The era of "Pet" servers is over. We are in the age of Cattle, and GitOps is the herding dog. This isn't about just automating scripts; it's about reconciliation. It is about ensuring that the state of your infrastructure in Norway matches exactly what is in your Git repository, down to the last bit, without human intervention.
The Core Problem: Configuration Drift
Traditional CI/CD pipelines are "push-based." Jenkins builds a Docker image, tests it, and then runs a script to push it to the cluster. The problem? Jenkins doesn't know if the cluster actually accepted the change, or if someone modified the nginx.conf on the live server five minutes later. The cluster state drifts from the git state.
GitOps flips this. It uses a "pull-based" mechanism. An agent inside your cluster (like ArgoCD or Weave Flux) watches the Git repository. When it sees a change, it pulls it and applies it. More importantly, if the live cluster changes (e.g., a rogue admin changes a replica count), the agent detects the drift and reverts it back to the Git state immediately.
Pro Tip: Network latency matters for the control plane. If your GitOps agent is polling a repo hosted in the US, but your cluster is in Oslo, you are adding unnecessary lag to your reconciliation loop. Hosting your Git mirror or artifacts closer to your computeβlike on a CoolVDS instance with direct peering to NIX (Norwegian Internet Exchange)βcan shave seconds off your recovery time.
The Stack: Kubernetes v1.16 + ArgoCD
For this architecture, we are utilizing Kubernetes 1.16 (released September 2019). We will use ArgoCD because, unlike Flux v1, it offers a visual UI that helps traditional Ops teams visualize the topology. This setup assumes you are running on a KVM-based VPS. Do not attempt this on OpenVZ containers; you need full kernel control for the K8s networking overlay (CNI).
1. Directory Structure Strategy
Do not mix your application source code with your infrastructure manifests. Keep them separate to avoid triggering build loops. Here is the battle-tested structure we use for Nordic clients requiring strict segregation of duties:
/gitlab.com/my-org/infrastructure-repo
βββ /base
β βββ deployment.yaml
β βββ service.yaml
β βββ kustomization.yaml
βββ /overlays
βββ /staging
β βββ kustomization.yaml
β βββ replica_patch.yaml
βββ /production-norway
βββ kustomization.yaml
βββ ingress_patch.yaml
2. The Manifests
In /base/deployment.yaml, we define the standard application. Note the API versions; Kubernetes is deprecating extensions/v1beta1 soon, but for 1.16, apps/v1 is the stable standard for Deployments.
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-backend
spec:
selector:
matchLabels:
app: nordic-store
template:
metadata:
labels:
app: nordic-store
spec:
containers:
- name: main
image: registry.gitlab.com/my-org/app:v2.4.1
ports:
- containerPort: 8080
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
3. Installing ArgoCD
Deploying the GitOps operator itself requires a clean state. We apply the manifest directly from the Argo project's stable release.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.3.0/manifests/install.yaml
Once the pods are running, we define an Application CRD (Custom Resource Definition) that tells ArgoCD where to look. This file also lives in Git, allowing us to "GitOps the GitOps configuration."
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: production-norway
namespace: argocd
spec:
project: default
source:
repoURL: https://gitlab.com/my-org/infrastructure-repo.git
targetRevision: HEAD
path: overlays/production-norway
destination:
server: https://kubernetes.default.svc
namespace: production
syncPolicy:
automated:
prune: true
selfHeal: true
The selfHeal: true flag is the magic. If a developer manually edits the deployment on the server, ArgoCD will immediately overwrite their changes with the version from Git. This enforces discipline.
The Hardware Reality: Why ETCD Needs NVMe
Kubernetes is not magic; it is a distributed system that relies heavily on etcd for state storage. etcd requires low latency disk writes to maintain quorum. If your disk latency (fsync) exceeds 10ms, your cluster becomes unstable. Leader elections fail. Pods get evicted.
This is where cheap VPS providers fail. They put you on spinning rust (HDDs) or crowded SATA SSDs with "noisy neighbors" stealing your I/O. For a GitOps workflow where reconciliation loops are constantly reading and writing state, you need high IOPS.
We benchmarked CoolVDS NVMe instances against standard cloud block storage. The difference is stark for database-heavy workloads like K8s control planes.
| Metric | Standard SATA VPS | CoolVDS NVMe |
|---|---|---|
| Random Read IOPS | ~5,000 | ~80,000+ |
| Write Latency (4k) | 2-5ms | <0.5ms |
| Etcd Stability | Flaky under load | Rock solid |
Handling Secrets in Git
You cannot check passwords into Git. That is a GDPR violation waiting to happen, especially with the strict oversight of the Norwegian Datatilsynet. In 2019, the most robust solution is Sealed Secrets by Bitnami.
It works by asymmetric encryption. You encrypt the secret on your laptop using a public key. This produces a `SealedSecret` CRD that is safe to commit to Git. Only the controller running inside your cluster (which holds the private key) can decrypt it.
# Install the client
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.6/kubeseal-linux-amd64 -O kubeseal
chmod +x kubeseal
# Create a sealed secret
echo -n "super-secret-password" | kubectl create secret generic my-db-pass --dry-run --from-file=password=/dev/stdin -o json > secret.json
kubeseal < secret.json > sealed-secret.json
You then commit sealed-secret.json. If your repo leaks, the attackers get a blob of useless encrypted data.
Conclusion
GitOps transforms your infrastructure from a fragile house of cards into a self-healing fortress. But software is only as good as the hardware it runs on. A reconciliation loop fighting for CPU cycles or waiting on slow I/O is a bottleneck you cannot afford.
For your next Kubernetes cluster, ensure your foundation is solid. Don't let slow I/O kill your reconciliation loops. Deploy a high-performance, KVM-based NVMe instance on CoolVDS today and give your GitOps workflow the horsepower it demands.