GitOps Architectures for 2020: Surviving the Shift to Immutable Infrastructure
If you are still running kubectl apply -f from your laptop in 2019, you are a liability. I’ve said it in boardrooms in Oslo, and I’ll say it here: manual intervention is the root cause of 80% of production outages. I recently audited a fintech setup in Bergen where a "quick fix" applied manually drifted from the git repo. When the autoscaler kicked in three weeks later during a traffic spike, the old configuration overwrote the manual fix. The result? Three hours of downtime and a very angry CTO.
The solution isn't "being more careful." The solution is GitOps. But implementing GitOps isn't just about installing Flux or ArgoCD; it is about architectural discipline, especially when dealing with strict Norwegian data compliance laws like GDPR and local Datatilsynet requirements.
The Core Principle: The Repo is the State
In a proper GitOps workflow, the Git repository is the single source of truth. Your cluster state must effectively be a mirror of your master branch. If it's not in Git, it doesn't exist. This sounds simple until you have to manage secrets, multi-environment promotions, and high-latency container registries.
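Drift is the silent killer here, and you don't need an operator to start catching it. kubectl has shipped a diff subcommand since 1.13 that compares the repo against live cluster state; a minimal manual audit, where manifests/ stands in for whatever directory your YAML actually lives in:

# Compare the manifests in git against what is actually running.
# Exit code 0 means no drift; 1 means the cluster has diverged.
kubectl diff -f manifests/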
Pro Tip: Separate your application source code from your infrastructure configuration. Do not keep your Kubernetes manifests in the same repo as your Go or Python code. It creates a noisy commit history and triggers unnecessary CI loops.
Directory Structure for Multi-Environment Clusters
Don't overcomplicate this. In late 2019, the most robust structure I see across successful Nordic dev teams uses Kustomize (built into kubectl itself since 1.14, via apply -k) to handle overlays. Here is the hierarchy you should be aiming for:
├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   ├── kustomization.yaml
    │   └── replicas.yaml
    └── prod
        ├── kustomization.yaml
        └── resources.yaml
By using this structure, you define the "meat" of your application once in base, and patch it for production in overlays/prod. This keeps your DR (Disaster Recovery) plan simple: if the cluster melts, re-apply the overlay.
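To make that concrete, here is a minimal sketch of the files involved. The deployment name (my-app) and the specific patch values are illustrative assumptions, not gospel:

# base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml

# overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- resources.yaml

# overlays/prod/resources.yaml -- prod-only patch: more replicas, hard limits
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # must match the name in base/deployment.yaml
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: my-app    # merged by name into the base container spec
        resources:
          limits:
            cpu: "1"
            memory: 512Mi

Rendering it is one command: kubectl apply -k overlays/prod on kubectl 1.14+, or kustomize build overlays/prod if you want to inspect the output before anything touches the cluster.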
The Pipeline: Pull vs. Push
Traditional CI/CD (Jenkins, GitLab CI) pushes changes to the cluster. This is a security risk. It requires your CI server to have cluster-admin credentials. If your CI server is compromised, your entire infrastructure is gone.
The GitOps "Pull" model reverses this. An operator inside the cluster (like Weave Flux) watches the git repo. When it sees a change, it pulls it down and applies it. No external credentials needed.
Implementing Flux (The 2019 Standard)
Here is how we bootstrap a secure reconciliation loop on a CoolVDS KVM instance running Kubernetes 1.16:
# Install Flux into the cluster. Note: the Helm chart moved from the
# old weaveworks repo to charts.fluxcd.io in mid-2019; use the new home.
helm repo add fluxcd https://charts.fluxcd.io
kubectl create namespace flux
helm upgrade -i flux fluxcd/flux \
  --set git.url=git@github.com:your-org/infra-repo \
  --set git.branch=master \
  --set git.pollInterval=1m \
  --namespace flux

# Retrieve the SSH public key to add to your GitHub/GitLab Deploy Keys
kubectl -n flux logs deployment/flux | grep identity.pub | cut -d '"' -f2
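Once the deploy key is registered, don't sit around waiting for the poll interval to prove the loop works. The fluxctl CLI can talk to the daemon directly; assuming the flux namespace from above:

# Force an immediate git-to-cluster reconciliation
fluxctl sync --k8s-fwd-ns flux
# List the workloads Flux is managing and the images they run
fluxctl list-workloads --k8s-fwd-ns flux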
Once this is running, Flux acts as the heartbeat of your infrastructure. However, a heartbeat is only as good as the body it pumps blood through. This is where infrastructure choice becomes critical.
Infrastructure Matters: The "Noisy Neighbor" Problem
GitOps relies on frequent reconciliation loops. The operator wakes up, checks Git, checks the Kubernetes API, calculates a diff, and applies changes. In a shared hosting environment with "burstable" CPU, your control plane performance fluctuates. If your etcd latency spikes because another user on the host node is mining crypto, your GitOps synchronization stalls.
This is why we deploy these architectures on CoolVDS. We use KVM virtualization, which creates a strict hardware boundary. Your CPU cycles are yours. When Flux wakes up to sync a critical security patch, the resources are there. Plus, with the NVMe storage standard on our plans, etcd write latency stays well under the 10ms fsync ceiling the etcd maintainers recommend, so leader elections stay stable and your reconciliation loops never stall on disk.
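Don't take that on faith; etcd exposes its fsync latency directly. A quick check, assuming a kubeadm-style control plane where etcd publishes metrics on 127.0.0.1:2381 (a kubeadm default, adjust for your topology):

# Rough disk/network benchmark, run on the etcd member itself
# (add the usual --cacert/--cert/--key flags on a secured cluster)
ETCDCTL_API=3 etcdctl check perf
# The histogram that matters: the 99th percentile should stay under 10ms
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds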
Comparison: Hosting for GitOps
| Feature | Standard VPS | CoolVDS NVMe |
|---|---|---|
| Virtualization | Container-based (OpenVZ/LXC) | Hardware-based (KVM) |
| Disk I/O | SATA SSD (Shared) | NVMe (High IOPS) |
| Kernel Access | Shared Kernel | Custom Kernel (Essential for specialized K8s CNIs) |
Solving the Secrets Paradox
You cannot store passwords in Git. It is the cardinal sin. Yet, you need them in the cluster. In 2019, the most robust solution for this is Bitnami Sealed Secrets. It uses asymmetric cryptography. You encrypt the secret with a public key (safe to store in Git), and only the controller inside the cluster (holding the private key) can decrypt it.
Here is the workflow:
- Install the controller in the cluster. The GitHub releases page for bitnami-labs/sealed-secrets ships a ready-made manifest (v0.9.x was current at the time of writing; pin whatever is current):
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.5/controller.yaml
- Install the kubeseal client:
brew install kubeseal
- Create a raw secret locally (never commit this file):
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecretPassword123 \
  --dry-run -o yaml > secret.yaml
- Seal it:
kubeseal < secret.yaml > sealed-secret.json
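What comes out the other side is just another Kubernetes manifest. A truncated sketch of the result (the ciphertext shown is an illustrative placeholder):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    password: AgBy3i4OJSWK...   # opaque blob, useless without the controller's private key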
Now, commit sealed-secret.json to your repo. Even if the repo is public, the payload is worthless to an attacker: decryption requires the private key, and that key never leaves the controller running on your CoolVDS instance.
The Norwegian Context: Latency and Jurisdiction
For my clients in Oslo and Stavanger, data sovereignty is paramount. Hosting your GitOps control plane on US-based clouds introduces legal headaches regarding the CLOUD Act. Furthermore, latency matters. If your CI/CD runner is in Virginia but your deployment target is in Oslo, you are adding unnecessary network overhead to every image pull and API call.
CoolVDS offers local European presence with optimized routing to the NIX (Norwegian Internet Exchange). This means your reconciliation loops are faster, and your data stays within a jurisdiction you understand.
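Numbers beat anecdotes, so measure the round trips yourself from wherever your runners actually live. A rough sketch with curl and kubectl, where the registry hostname is a placeholder:

# TLS connect time and time-to-first-byte against your container registry
curl -o /dev/null -s -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' \
  https://registry.example.com/v2/
# Round trip to the Kubernetes API server
time kubectl get --raw /healthz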
Conclusion: Don't Build on Sand
GitOps is the future of operations. It turns your infrastructure into code that is auditable, reversible, and testable. But code needs a compiler, and GitOps needs a robust platform. Do not let I/O wait times or stolen CPU cycles destabilize your production environment.
If you are ready to build a pipeline that survives the real world, start with the right foundation. Deploy a high-performance KVM instance on CoolVDS today and stop worrying about your infrastructure keeping up with your commits.