GitOps Workflows That Won't Break Production: A 2022 Guide for Nordic SREs
I still wake up in a cold sweat thinking about a Friday afternoon in 2019. We were managing a high-traffic e-commerce platform hosted in Oslo. A junior dev ran a manual `kubectl apply -f .` from their laptop. They didn't realize their local config pointed to the production cluster context, not staging. Three seconds later, the load balancers dropped. The culprit? A mismatched ingress controller version that wiped our routing rules.
If you are still SSH-ing into servers to pull git changes, or worse, manually applying YAML files to your cluster, you are a ticking time bomb. It is January 2022. The industry has moved on. We need determinism. We need audit trails. We need GitOps.
The Truth About "Source of Truth"
The core philosophy is simple: Git is the only source of truth. Not your laptop, not the CI server's temp folder, and definitely not the current state of etcd. If it isn't committed to the main branch, it doesn't exist.
But implementing this in the Nordic market brings unique challenges. With the fallout from Schrems II still shaking up the compliance landscape here in Norway, relying on US-managed control planes is risky. Running your GitOps operator (like ArgoCD) on sovereign infrastructure isn't just a performance tweak; it's a compliance necessity for handling sensitive user data under GDPR.
The Architecture: Pull vs. Push
In the old CI/CD "Push" model, Jenkins or GitLab CI had the keys to the castle (your cluster credentials) to deploy updates. That is a massive security vector. If your CI gets breached, your production environment is open.
We use the "Pull" model. The cluster reaches out to the git repository.
- Security: The cluster doesn't need to expose its credentials to an external CI tool.
- Consistency: The operator constantly compares the live state with Git. If someone manually changes a replica count, the operator reverts it instantly.
Pro Tip: Don't mix application source code and infrastructure manifests in the same repository. Use a "Config Repo" pattern. This prevents a CI loop where a commit triggers a build, which triggers a commit, which triggers a build...
Tooling Selection: ArgoCD on CoolVDS
For Kubernetes workloads in 2022, ArgoCD is the standard. Flux v2 is a strong contender, but Argo's UI provides the visibility that battle-hardened teams need when debugging a rollout at 2 AM.
Here is why infrastructure matters: Your GitOps operator is the heartbeat of your deployment. If it lags, your synchronization drifts. We run our control planes on CoolVDS NVMe instances. Why? Because etcd latency is the enemy of stability. When ArgoCD reconciles hundreds of applications, you need high IOPS. Standard spinning disks or noisy-neighbor cloud shared storage will cause reconciliation timeouts.
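If you want to check whether your disks are up to the job, etcd's upstream guidance suggests a simple fio test: the 99th-percentile fdatasync latency should stay below roughly 10 ms. A minimal sketch (the `/var/lib/etcd` path is an assumption; point it at whatever backs your etcd data directory):

```bash
# Mimic etcd's WAL write pattern: small sequential writes with an
# fdatasync after each (parameters follow the upstream etcd guidance)
mkdir -p /var/lib/etcd/fio-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd/fio-test --size=22m --bs=2300 \
    --name=etcd-wal-check
```

Watch the sync latency percentiles in the output; on healthy NVMe they typically sit well under a millisecond.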
Step 1: The Folder Structure
Keep it DRY (Don't Repeat Yourself) using Kustomize. Here is the structure we use for deployments serving the Norwegian market, separating environments by overlays.
```
├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays
    ├── staging
    │   ├── kustomization.yaml
    │   └── patch-replicas.yaml
    └── production
        ├── kustomization.yaml
        └── patch-resources.yaml
```
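For completeness, the `base/kustomization.yaml` does nothing more than enumerate the shared manifests (a minimal sketch of the file):

```yaml
# base/kustomization.yaml -- environment-agnostic resources only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
```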
Step 2: The Manifests
In your `base/deployment.yaml`, define the standard configuration. Note the resource limits: never deploy without them.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nordic-api
spec:
  selector:
    matchLabels:
      app: nordic-api
  template:
    metadata:
      labels:
        app: nordic-api
    spec:
      containers:
        - name: app
          image: registry.coolvds.com/nordic-api:v1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
Now, the `overlays/production/kustomization.yaml`. This is where we inject the configuration specific to our CoolVDS production environment, making full use of the vertical scaling headroom available there.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - patch-resources.yaml
nameSuffix: -prod
commonLabels:
  environment: production
  region: no-osl1
```
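The `patch-resources.yaml` it references is a strategic-merge patch against the base Deployment, matched by `metadata.name` and container name. The numbers below are illustrative; size them to your instance:

```yaml
# overlays/production/patch-resources.yaml
# Kustomize applies this patch before the -prod nameSuffix, so we
# target the base name. Values here are examples, not recommendations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nordic-api
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: app
          resources:
            limits:
              memory: "1Gi"
              cpu: "1000m"
```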
Step 3: The ArgoCD Application
This is the bridge. This manifest tells ArgoCD to watch your repo and sync it to the cluster.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-api-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:your-org/infra-config.git'
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
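After this Application is in place (apply it once by hand to bootstrap), the `argocd` CLI is the quickest sanity check. Assuming you are logged in to the API server:

```bash
# Inspect sync state, health, and the last-deployed revision
argocd app get nordic-api-prod

# Kick off reconciliation immediately instead of waiting for the next poll
argocd app sync nordic-api-prod
```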
Handling Secrets (The Schrems II Headache)
You cannot commit raw secrets to Git. In 2022, you have two main paths: Bitnami Sealed Secrets or External Secrets Operator (integrating with Vault/AWS SSM).
For strictly compliant Norwegian setups, we prefer Sealed Secrets hosted entirely on your own infrastructure. You encrypt the secret using a public key on your laptop; only the controller running inside your CoolVDS cluster (which holds the private key) can decrypt it. No third-party cloud provider ever sees the raw data.
Encrypting a Secret
```bash
# 1. Create a raw secret locally (dry-run)
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecureP@ssw0rd \
  --dry-run=client -o yaml > secret.yaml

# 2. Seal it using the public key fetched from the controller
kubeseal --format=yaml --cert=pub-cert.pem < secret.yaml > sealed-secret.yaml

# 3. Commit sealed-secret.yaml to Git. It is safe now.
```
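Once the controller unseals it, a regular Secret named `db-creds` exists in the namespace, and the app consumes it like any other. An illustrative excerpt from the container spec in `base/deployment.yaml`:

```yaml
# Excerpt: wiring the unsealed Secret into the app container
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-creds   # created by the sealed-secrets controller
        key: password    # matches the --from-literal key above
```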
Latency and Network Topology
Why host this in Norway? Physics. If your GitOps agents are running in Frankfurt but your cluster is in Oslo, you introduce latency into every reconciliation loop. More importantly, you rack up data transfer costs.
CoolVDS infrastructure is peered directly at NIX (Norwegian Internet Exchange). When your CI pushes a new container image to your private registry, and your cluster pulls that image, the traffic stays local. We have seen image pull times drop from 45 seconds (pulling from US-East) to 3 seconds (local peering). In a crash-loop scenario, that speed difference saves your SLA.
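You can verify this from any worker node (a quick sketch, assuming a containerd runtime with `crictl` installed; the image is the one from the manifests above):

```bash
# Time a cold image pull through the node's container runtime
time crictl pull registry.coolvds.com/nordic-api:v1.0.0
```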
| Feature | Manual / Scripted Ops | GitOps on CoolVDS |
|---|---|---|
| Audit Trail | Non-existent (bash history is not an audit log) | Complete Git History (Who, What, When) |
| Disaster Recovery | Days (manual reconfiguration) | Minutes (re-apply Git state to new cluster) |
| Drift Detection | None until outage occurs | Instant automatic remediation |
| Storage Backend | Standard SSD / HDD | Enterprise NVMe (High IOPS for etcd) |
The Workflow in Action
1. Developer pushes code to `app-repo`.
2. CI (GitHub Actions/GitLab) runs tests, builds the Docker image, and pushes it to the registry.
3. CI commits a tag update to `infra-repo/overlays/production/kustomization.yaml` (see the CI sketch after this list).
4. ArgoCD detects the change in `infra-repo`.
5. ArgoCD applies the new manifest to the Kubernetes cluster running on CoolVDS.
6. Kubernetes pulls the new image via local high-speed peering.
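The tag update in step 3 is typically two lines in your CI job. A minimal sketch against the repo layout above (the tag is whatever your build just produced):

```bash
# Run inside a checkout of infra-repo (sketch; adjust paths to your layout)
cd overlays/production
kustomize edit set image registry.coolvds.com/nordic-api:v1.1.0
git commit -am "chore: deploy nordic-api v1.1.0"
git push origin main
```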
This pipeline removes the human element from the deployment phase. No more "oops, wrong context." No more "I forgot the environment variable."
Final Thoughts
GitOps is not just a buzzword; it is the inevitable evolution of systems administration. However, a robust workflow requires robust infrastructure. You can write the cleanest YAML in the world, but if your underlying hypervisor steals CPU cycles or your disk I/O chokes during a heavy `kubectl apply`, your pipeline will fail.
We built CoolVDS to support exactly these kinds of rigorous, high-performance workflows. We provide the raw power; you bring the code.
Don't let legacy hosting bottleneck your modern deployment strategy. Deploy a high-performance KVM instance on CoolVDS today and build a control plane that actually scales.