
GitOps Workflows That Don't Suck: A Blueprint for Norwegian Infrastructure in 2024

If you are still running kubectl apply -f deployment.yaml from your laptop, you are a liability. There, I said it. In 2024, there is zero excuse for manual intervention in production environments. I recently audited a stack for a fintech startup in Oslo where a senior engineer manually edited a ConfigMap to "fix" a production bug. The result? When the autoscaler kicked in two hours later, the new pods reverted to the old configuration, causing a partial outage during peak traffic. That is why we need GitOps.

GitOps isn't just a buzzword; it's an operating model where Git is the single source of truth. If it’s not in the repo, it doesn’t exist. But implementing this in Norway comes with specific constraints: latency to NIX (Norwegian Internet Exchange), GDPR compliance (thanks, Datatilsynet), and hardware reliability.

The Architecture of Truth

The core principle is simple: Your infrastructure state is defined in code. An operator (like Argo CD or Flux) sits inside your cluster, pulling changes from Git and reconciling them with the actual state. We aren't pushing changes; the cluster pulls them.

Pro Tip: Don't mix your application source code with your infrastructure manifests. Use a separate repo for your K8s manifests (Helm charts/Kustomize). This prevents a CI loop on your app code from accidentally triggering infrastructure syncs that aren't ready.
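
One common pattern for bridging the two repos is a CI step in the app repo that bumps the image tag in the manifests repo, so the GitOps operator picks up the release on its next poll. A minimal sketch, assuming a Kustomize overlay; the repo, registry, and tag names are hypothetical:

# Run from the app repo's release pipeline (hypothetical names)
git clone git@github.com:my-org/infra-repo.git
cd infra-repo/apps/overlays/production-oslo
kustomize edit set image payment-service=registry.example.com/payment-service:v1.4.2
git commit -am "ci: bump payment-service to v1.4.2"
git push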

Step 1: The Foundation

Before we touch YAML, we need iron. GitOps controllers are chatty. They constantly poll Git repositories and the Kubernetes API server. If your control plane is hosted on shared, over-sold hardware, your reconciliation loops will lag. This is where high-performance I/O matters.

At CoolVDS, we see a lot of teams deploy the control plane on our NVMe storage instances. Low fsync latency keeps etcd commits fast, avoiding the leader-election churn and request timeouts that haunt clusters on cheap VPS providers. You also want the server physically located in Oslo or nearby to minimize the hop count to your local Git mirrors and registries.

Step 2: Installing the Operator

We will use Argo CD (v2.11 current as of mid-2024) because its UI provides immediate visual feedback, which is crucial when debugging sync waves.
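
Sync waves, for reference, are controlled by a single annotation that orders how Argo CD applies resources within a sync; lower waves go first. A minimal illustration (the ConfigMap name is hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: payment-config
  annotations:
    argocd.argoproj.io/sync-wave: "-1"  # applied before wave 0 resources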

First, create a dedicated namespace. Do not dump this in default.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Once the pods are running, you need to access the UI. In a production CoolVDS environment, you would set up an Ingress, but for the initial setup, port-forwarding is acceptable:

kubectl port-forward svc/argocd-server -n argocd 8080:443
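
The UI login requires the generated initial admin password, which Argo CD stores in a Secret:

kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d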

Step 3: The Directory Structure

Structure your infrastructure repository like a file system. A flat directory is a nightmare to manage after six months.

├── apps/
│   ├── base/
│   │   ├── guestbook/
│   │   └── payment-service/
│   └── overlays/
│       ├── production-oslo/
│       └── staging-frankfurt/
├── cluster-config/
│   ├── namespaces/
│   └── security/
└── charts/
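
With this layout, each overlay is a Kustomize entry point that references a base and layers environment-specific patches on top. A sketch of what production-oslo might contain; the patch file name is illustrative:

# apps/overlays/production-oslo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/payment-service
patches:
  - path: replica-count.yaml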

Step 4: Defining the Application

Here is where the magic happens. We define an Application CRD that tells Argo CD where to look and where to deploy. Note the syncPolicy. We want automation, but we also want safety.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-oslo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:my-org/infra-repo.git'
    targetRevision: HEAD
    path: apps/overlays/production-oslo
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

The selfHeal: true flag is the enforcer. If someone SSHs into your CoolVDS instance and deletes a service, Argo CD detects the drift and recreates it immediately. This is self-healing infrastructure.
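
You can verify this behavior from the CLI as well. With the argocd client logged in, using the app name from the manifest above:

# Show any divergence between Git and the live cluster
argocd app diff payment-service-oslo
# Trigger an immediate reconcile instead of waiting for the next poll
argocd app sync payment-service-oslo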

Handling Secrets Without Violating GDPR

You cannot commit raw secrets to Git. If you do, you are likely violating GDPR, specifically Article 32's requirements on the security of processing personal data. The standard approach in 2024 is Sealed Secrets or the External Secrets Operator.

Let's look at Sealed Secrets. It uses asymmetric encryption. You encrypt with a public key (safe to commit), and the controller in the cluster decrypts with the private key (which never leaves the cluster).

Install the client:

brew install kubeseal
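
The client alone isn't enough: the controller must also be running in the cluster, since it holds the private key. One way to install it (the v0.26.0 tag is illustrative; pin whatever release is current for you):

kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.26.0/controller.yaml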

Encrypt a secret:

kubectl create secret generic db-creds \
  --from-literal=password=SuperSecret123 \
  --dry-run=client -o yaml | \
  kubeseal \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system \
  --format=yaml > sealed-secret.yaml

The resulting sealed-secret.yaml is safe to push to GitHub. If your CoolVDS server is compromised, the attacker only gets the encrypted blobs, not the actual database passwords.
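
Once Argo CD syncs the SealedSecret, the controller unseals it into an ordinary Secret named db-creds in the target namespace, and workloads reference it as usual. A container spec fragment, for illustration:

env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-creds
        key: password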

The Latency Factor: Why Hardware Matters

GitOps relies on the reconciliation loop. The controller compares thousands of object states every few minutes. If you are running this on a legacy spinner (HDD) VPS, your I/O wait times will skyrocket during a full cluster sync. I have seen Argo CD timing out on fetch operations simply because the disk couldn't keep up with the etcd operations.
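
If you suspect the disk, measure it before blaming the controller. The fio recipe below follows the etcd project's published fsync benchmark pattern; the target directory is an assumption, point it at whatever backs your etcd data dir:

# mkdir -p /var/lib/etcd-bench first; fio needs the directory to exist
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-fsync-test

etcd's guidance is that the 99th percentile of fdatasync latency should stay below roughly 10 ms. NVMe clears that bar easily; spinning disks rarely do.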

This is where CoolVDS fits the architectural puzzle. We use KVM virtualization, which provides better isolation than container-based virtualization (like OpenVZ). This means your GitOps controller isn't fighting for CPU cycles with a noisy neighbor mining crypto. Furthermore, our direct peering at NIX means if your Git repo is hosted in a Nordic data center, the fetch latency is negligible.

Performance Tuning for High Scale

If you are managing over 500 applications, the default Argo CD settings will choke. You need to tune the controller and the repository server. These knobs live in the argocd-cmd-params-cm ConfigMap (not argocd-cm):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  reposerver.parallelism.limit: "10"
  controller.status.processors: "20"
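
Changes to argocd-cmd-params-cm only take effect after the affected components restart:

kubectl -n argocd rollout restart statefulset argocd-application-controller
kubectl -n argocd rollout restart deployment argocd-repo-server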

Comparison: Traditional VPS vs. CoolVDS for GitOps

| Feature | Generic Budget VPS | CoolVDS NVMe Instance |
| --- | --- | --- |
| Storage I/O | Shared HDD/SATA SSD (High Latency) | Dedicated NVMe (Instant Reconcile) |
| Virtualization | Often Container (LXC/OpenVZ) | KVM (Kernel-level Isolation) |
| Network | Standard Routing | Optimized for Nordic Low Latency |
| DDoS Protection | Basic / None | Always-On Mitigation |

Final Thoughts

GitOps is binary: you are either fully committed, or you are creating a mess. Partial implementation leads to confusion about where the "truth" actually lives. By combining a strict repository structure, automated secret management, and robust infrastructure, you build a system that sleeps when you sleep.

Don't let slow I/O kill your reconciliation loops. Deploy a KVM-based, GitOps-ready test instance on CoolVDS in 55 seconds and see the difference raw performance makes to your deployment pipeline.