
GitOps in the Trenches: Building Bulletproof Pipelines on Nordic Infrastructure (2023 Edition)

I once watched a senior engineer accidentally delete a production namespace because his local kubeconfig context was switched to "prod" instead of "staging". It took us six hours to restore the state from backups. The downtime cost the client roughly 400,000 NOK. That was the last time we allowed manual access to the cluster.

If you are still running kubectl apply -f . from your laptop, you are operating on borrowed time. In the Nordic market, where reliability is expected and Datatilsynet (The Norwegian Data Protection Authority) watches data integrity closely, you need an audit trail that is immutable.

Enter GitOps. It’s not just a buzzword; it is the only sane way to manage Kubernetes clusters in 2023. But setting it up isn't just about installing ArgoCD and walking away. It requires a robust workflow and, critically, infrastructure that doesn't choke on the control plane load.

The Core Principle: Git is the Only Source of Truth

The philosophy is simple: If it isn't in Git, it doesn't exist.

Your cluster state should be a mirror image of a specific branch in your repository. This solves "configuration drift": the phenomenon where a sysadmin manually tweaks a resource limit to put out a fire, forgets to document it, and then the next deployment wipes that fix out, causing a regression.
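
In practice, the only "deployment command" anyone runs is a commit. A typical production change looks roughly like this (a sketch; the branch name, image tag, and file path are illustrative and match the repo layout shown in Step 1):

# Nobody runs kubectl. A production change is a reviewed commit:
git checkout -b bump-my-app-to-v2.4.2
# edit apps/overlays/production/kustomization.yaml -> newTag: v2.4.2
git add apps/overlays/production/kustomization.yaml
git commit -m "prod: bump my-app to v2.4.2"
git push origin bump-my-app-to-v2.4.2
# Open a merge request; once merged, the GitOps controller reconciles the cluster.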

The Tooling Stack (Sept 2023 Standard)

For this guide, we focus on the stack that has proven most stable for our Oslo-based enterprise clients:

  • Orchestrator: Kubernetes 1.27 (Stable, proven).
  • GitOps Controller: ArgoCD v2.7 (installation sketch after this list).
  • Templating: Kustomize (Native to k8s, less complexity than Helm for simple diffs).
  • Infrastructure: CoolVDS High-Performance NVMe Instances (essential for etcd performance).
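
For reference, getting the controller running takes two commands. A minimal sketch, assuming a vanilla upstream install pinned to a v2.7 patch release (v2.7.14 shown here; substitute whichever tag you have validated):

# Install ArgoCD into its own namespace
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.14/manifests/install.yaml

# Retrieve the auto-generated admin password for the first login
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d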

Step 1: The Directory Structure

Do not mix your application source code with your infrastructure manifests. We separate them into App Repos and a Config Repo.

# Recommended Config Repo Structure
├── apps/
│   ├── base/
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   └── overlays/
│       ├── staging/
│       │   ├── kustomization.yaml
│       │   └── patch-replicas.yaml
│       └── production/
│           ├── kustomization.yaml
│           └── patch-resources.yaml
├── cluster-config/
│   ├── namespaces.yaml
│   └── rbac.yaml
└── argocd-apps/
    └── production-app.yaml
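
One detail the tree glosses over: for the ../../base reference in the overlays to resolve, base/ needs its own kustomization.yaml listing the shared manifests. A minimal sketch:

# apps/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml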

Using Kustomize allows us to keep a "base" definition and only patch what changes between environments. Here is a battle-tested production overlay kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- patch-resources.yaml
namePrefix: prod-
commonLabels:
  environment: production
  region: no-oslo-1
images:
- name: my-app
  newName: registry.coolvds.com/my-app
  newTag: v2.4.1
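
Before handing this to ArgoCD, render the overlay locally to confirm the patches, name prefix, and labels resolve as expected. Assuming kubectl 1.27 (which bundles Kustomize) and a shell in the repo root:

# Render the production overlay without touching the cluster
kubectl kustomize apps/overlays/production

# Or diff the rendered output against what is currently live
kubectl diff -k apps/overlays/production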

Step 2: The Secret Management Headache (GDPR Context)

You cannot commit raw secrets to Git. If you do, you have violated GDPR Article 32 (Security of processing). In 2023, the most pragmatic solution for small to mid-sized teams is Bitnami Sealed Secrets. It uses asymmetric cryptography. The public key lives in the repo (safe to share), and the private key stays inside the cluster.
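
The pub-cert.pem referenced below comes from the controller itself. Assuming the Sealed Secrets controller is already running with its defaults (sealed-secrets-controller in kube-system), export the certificate once; it is safe to commit next to your manifests:

# Export the controller's public certificate (safe to commit)
kubeseal --fetch-cert \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system \
  > pub-cert.pem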

Here is how you seal a secret before pushing:

# 1. Create a raw secret locally (never commit this)
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecureNorwegianPwd123 \
  --dry-run=client -o yaml > secret.yaml

# 2. Seal it using the controller's public key
kubeseal --format=yaml --cert=pub-cert.pem < secret.yaml > sealed-secret.yaml

# 3. Commit sealed-secret.yaml safely

For larger enterprises requiring rotation, we integrate HashiCorp Vault, but that adds significant operational overhead. Start with Sealed Secrets.

Step 3: Infrastructure Performance - The Silent Killer

Here is the part most tutorials gloss over. GitOps controllers like ArgoCD are chatty. They constantly poll your Git repository and query the Kubernetes API server to check the "Live State" vs "Target State".

If you run this on a cheap, oversold VPS with spinning rust (HDD) or low-grade SSDs, you will hit I/O bottlenecks. The Kubernetes key-value store, etcd, is extremely sensitive to disk write latency (fsync duration). If etcd slows down, your entire cluster becomes unstable, and ArgoCD starts timing out.

Pro Tip: Check your disk latency. If the 99th percentile of etcd_disk_wal_fsync_duration_seconds consistently exceeds 10ms, your cluster is unstable.

Storage Type       | Etcd Fsync Latency    | GitOps Sync Speed
Standard HDD VPS   | 40-100ms (Dangerous)  | Slow / Timeouts
Shared SSD         | 10-20ms (Acceptable)  | Average
CoolVDS NVMe       | < 2ms (Optimal)       | Instant
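
You do not have to take that table on faith. The standard etcd disk check is an fio run with fdatasync enabled; the number that matters is the fsync percentile, not raw throughput. A quick sketch, assuming fio is installed and roughly 25 MB of free space on the volume that will host etcd:

# Benchmark fsync latency the way etcd experiences it
mkdir -p test-data
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-data --size=22m --bs=2300 --name=etcd-disk-check
# Check the fsync/fdatasync percentiles in the output: the 99th percentile
# should stay well below 10ms.

# On a running cluster, the same signal from Prometheus:
# histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))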

We build our reference architectures on CoolVDS because they expose raw NVMe performance. When you trigger a deployment, you want the reconciliation loop to finish in seconds, not hang because the hypervisor is stealing CPU cycles or waiting on I/O. For Norwegian clients, hosting in Oslo also ensures the latency between your Git repo (if hosted locally) and the cluster is negligible.

Step 4: The ArgoCD Application Manifest

Finally, we define the Application itself. This tells ArgoCD: "Look at this repo, at this path, and make the cluster match it."

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:my-org/infra-config.git'
    targetRevision: HEAD
    path: apps/overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true

Note the selfHeal: true flag. This is the enforcer: if someone manually changes a service port on the cluster, ArgoCD immediately reverts it to what is defined in Git, keeping the live state strictly aligned with the repository.
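
The Application manifest itself is the one thing you apply by hand, exactly once, to bootstrap the loop; after that it lives in argocd-apps/ like everything else. A sketch, assuming the paths from the repo structure in Step 1 and a logged-in argocd CLI:

# One-time bootstrap: register the Application with ArgoCD
kubectl apply -n argocd -f argocd-apps/production-app.yaml

# From then on, inspect sync status and drift from the CLI
argocd app get nordic-payment-gateway
argocd app diff nordic-payment-gateway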

Conclusion: Stop Drifting

GitOps is not just about tools; it's about discipline. It turns your infrastructure into code that can be audited, rolled back, and reviewed.

However, software discipline requires hardware reliability. You cannot build a high-velocity CI/CD pipeline on sluggish infrastructure. Low-latency I/O is the fuel for Kubernetes.

Ready to stabilize your deployments? Don't let slow I/O kill your SEO or your API response times. Deploy a high-performance Kubernetes node on CoolVDS today and see the difference NVMe makes to your reconciliation loops.