GitOps Methodologies: Zero-Drift Deployments on KVM Infrastructure

If you are still running kubectl apply -f from your local terminal, you are doing it wrong. If you are SSH-ing into a production server to "quickly fix" a config file, you are actively sabotaging your infrastructure's integrity. I’ve seen entire clusters in Oslo fail because one senior engineer made a manual hotfix at 2 AM and forgot to commit it to the repo. Three weeks later, the autoscaler spun up a new node, applied the old configuration, and the database connection pool collapsed.

Configuration drift is the silent killer of stability. The only solution is removing human access to production entirely. Welcome to GitOps.

This isn't about buzzwords. It's about a deterministic workflow where Git is the single source of truth: if it's not in the repo, it doesn't exist. In this guide, we will build a production-grade GitOps pipeline with ArgoCD on Kubernetes 1.24 (which removes the deprecated Dockershim entirely), while keeping strict Norwegian data sovereignty (Schrems II) by hosting the control plane on local infrastructure.

The Architecture: Pull vs. Push

Traditional CI/CD pipelines use a "Push" model. Jenkins or GitLab CI triggers a script that pushes changes to your cluster. This is flawed. It requires your CI server to have root-level credentials to your production cluster. If your CI server is compromised, your production environment is gone.

We use the "Pull" model. An operator inside the cluster (ArgoCD) monitors the Git repository. When it detects a change (a new Docker image tag or a modified values.yaml), it pulls the state and applies it. No external credentials needed.

The Stack for 2022

  • Infrastructure: CoolVDS KVM Instances (NVMe is non-negotiable for etcd performance).
  • Orchestrator: Kubernetes v1.24.
  • GitOps Controller: ArgoCD v2.3.
  • CI: GitLab CI (for building images).
  • Secrets: Bitnami Sealed Secrets.

Step 1: The Infrastructure Foundation

Before we touch YAML, we need iron. GitOps controllers are chatty. They constantly compare the live state against the Git state. If you run this on shared hosting with "burstable" CPU or spinning HDDs, your reconciliation loops will lag. I recommend deploying your K8s nodes on KVM-based VPS where you have guaranteed CPU cycles.

Here is a typical Terraform snippet to provision a robust node suitable for a control plane in a Nordic datacenter. We utilize KVM to ensure complete kernel isolation.

resource "openstack_compute_instance_v2" "k8s_control_plane" {
  name            = "coolvds-k8s-master-01"
  image_name      = "Ubuntu 22.04"
  flavor_name     = "vCPU-4-RAM-8GB-NVMe"
  key_pair        = "deploy-key-2022"
  security_groups = ["default", "k8s-api"]

  network {
    name = "private-net-oslo"
  }

  # We need low latency storage for etcd
  block_device {
    uuid                  = ""
    source_type           = "image"
    destination_type      = "local"
    boot_index            = 0
    delete_on_termination = true
  }
}
Pro Tip: Network latency matters. If your Git repo is hosted on GitHub (US) and your cluster is in Oslo, you might see seconds of delay in reconciliation. For strict compliance and speed, host a self-managed GitLab instance on a separate CoolVDS node within the same local network (VLAN) as your cluster. This keeps traffic inside Norway, satisfying strict GDPR interpretations.
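
Bootstrapping the cluster itself is outside the scope of this guide, but remember that v1.24 removes Dockershim entirely, so kubeadm has to point at containerd explicitly. A minimal config sketch, assuming kubeadm and a flannel-compatible pod CIDR (both assumptions on my part):

# kubeadm-config.yaml -- apply with: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.1
networking:
  podSubnet: 10.244.0.0/16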

Step 2: Installing the Controller

Once your Kubernetes cluster is up, install ArgoCD. We create a dedicated namespace to keep things clean.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.3/manifests/install.yaml

Wait for the pods to stabilize. Do not proceed until you see Running status on the redis and repo-server pods. If they are crash-looping, check your available RAM. Argo's repo-server is hungry.
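
A quick sanity check before moving on; a short sketch, assuming you have the argocd CLI installed on your workstation:

# Watch the pods come up
kubectl -n argocd get pods -w

# Fetch the auto-generated admin password (ArgoCD v2.x stores it in a Secret)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

# Port-forward the API server and log in via the CLI
kubectl -n argocd port-forward svc/argocd-server 8080:443 &
argocd login localhost:8080 --username admin --insecure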

Step 3: The Application Manifest

This is the core concept. You define an "Application" CRD that tells ArgoCD where to look and where to deploy. Below is a production-ready manifest that includes auto-healing (fixing drift automatically) and pruning (deleting resources that are removed from Git).

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-gateway-oslo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.internal.coolvds.com:backend/payment-gateway.git'
    targetRevision: HEAD
    path: k8s/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true

Note the selfHeal: true. If someone manually changes a Service from ClusterIP to NodePort via kubectl, ArgoCD will immediately revert it. This enforces discipline.
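
How does a new image tag reach Git in the first place? The CI pipeline never talks to the cluster; it only builds, pushes, and commits. A minimal .gitlab-ci.yml sketch, assuming the production overlay uses kustomize and that GITOPS_DEPLOY_TOKEN is a CI variable with write access to the repo (both are assumptions, not shown above):

stages:
  - build
  - promote

build-image:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

update-manifest:
  stage: promote
  image:
    name: alpine/git
    entrypoint: [""]
  script:
    # Bump the image tag in the production overlay; ArgoCD picks up the commit and syncs
    - git clone "https://oauth2:${GITOPS_DEPLOY_TOKEN}@gitlab.internal.coolvds.com/backend/payment-gateway.git"
    - cd payment-gateway/k8s/overlays/production
    - sed -i "s|newTag:.*|newTag: \"$CI_COMMIT_SHORT_SHA\"|" kustomization.yaml
    - git config user.email "ci@gitlab.internal.coolvds.com"
    - git config user.name "gitlab-ci"
    - git commit -am "Deploy $CI_COMMIT_SHORT_SHA [skip ci]"
    - git push origin HEAD:main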

Handling Secrets: The Elephant in the Room

You cannot store raw secrets in Git. That is a security violation. In 2022, the pragmatic approach for small to mid-sized teams is Sealed Secrets by Bitnami. It uses asymmetric encryption. You encrypt with a public key (safe for Git), and only the controller inside the cluster (which holds the private key) can decrypt it.

First, install the controller on your CoolVDS cluster:

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets
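
If your workstation cannot always reach the cluster API, you can also export the controller's public certificate once and seal against it offline. A small sketch, assuming kubeseal is installed locally:

# Export the public sealing certificate (safe to share; it cannot decrypt anything)
kubeseal --controller-name=sealed-secrets-controller --controller-namespace=kube-system --fetch-cert > sealed-secrets-pub.pem

# Later, seal without any cluster access (secret.json is a plain Secret manifest produced with --dry-run, as in the next step)
kubeseal --cert sealed-secrets-pub.pem --format yaml < secret.json > sealed.yaml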

Then, on your local machine, encrypt your secret:

# Create a raw secret, dry-run to JSON, then pipe to kubeseal
kubectl create secret generic db-creds --from-literal=password=SuperSecureNorwegianPassword --dry-run=client -o json | \
kubeseal --controller-name=sealed-secrets-controller --controller-namespace=kube-system --format yaml > db-creds-sealed.yaml

Commit db-creds-sealed.yaml to Git. It’s safe.
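
Once synced, the controller unseals it into a regular Secret named db-creds, which your workloads consume like any other Secret. A minimal sketch, assuming a hypothetical payment-gateway Deployment in the payments namespace (image and names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-gateway
  namespace: payments
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payment-gateway
  template:
    metadata:
      labels:
        app: payment-gateway
    spec:
      containers:
        - name: app
          image: registry.example.com/backend/payment-gateway:1.4.2  # hypothetical image
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-creds  # the Secret created by the sealed-secrets controller
                  key: password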

Performance Tuning for KVM

Running Kubernetes on a VPS requires tuning. Kernel defaults are conservative, general-purpose settings, not tuned for high-throughput container orchestration. On your CoolVDS nodes, you need to adjust kernel parameters to handle the networking load generated by sidecars and service meshes.

Add this to your /etc/sysctl.d/99-k8s.conf:

net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.ipv4.conf.all.forwarding        = 1
# Increase connection tracking table for high traffic
net.netfilter.nf_conntrack_max      = 524288
fs.inotify.max_user_watches         = 524288
fs.inotify.max_user_instances       = 512

The fs.inotify settings are critical. Tools like tail (used by fluentd/promtail for logs) and GitOps controllers watch file changes. If you hit the limit, your deployments stop syncing silently. I've wasted hours debugging this.
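
Note that the net.bridge.* keys only exist once the br_netfilter module is loaded, so load it before applying. A short sketch:

# Load the bridge netfilter module now and on every boot
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf

# Re-read all sysctl drop-ins, including /etc/sysctl.d/99-k8s.conf
sudo sysctl --system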

Why Location Matters: The GDPR Angle

Since the Schrems II ruling in 2020, relying on US-owned cloud providers for core infrastructure is legally complex for Norwegian entities. If your GitOps pipeline deploys customer data databases, you need to know exactly where that storage volume resides.

By using CoolVDS, you are leveraging infrastructure physically located in Oslo. This simplifies your Data Processing Agreement (DPA) significantly. Furthermore, the latency between a Norwegian ISP and our Oslo datacenter is typically under 5ms. When you are pushing a hotfix on a Friday afternoon, waiting for a slow US-east connection is not an option.

Final Thoughts

GitOps is not just a tool; it's a contract. It forces you to document every change. It makes your infrastructure auditable by default. But it demands resources. A sluggish control plane leads to "sync waves" timing out.

Don't cripple your modern stack with legacy hosting. Ensure your underlying KVM instances have the NVMe I/O throughput to handle etcd, Prometheus, and ArgoCD simultaneously. Your pipeline is only as fast as the disk it writes to.
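
If you want to verify that before committing, a commonly cited fio check for etcd suitability measures fdatasync latency rather than raw throughput. A rough sketch, assuming fio is installed and the directory sits on the disk that will hold etcd data:

mkdir -p /var/lib/etcd-bench
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-io-check
# Check the fsync/fdatasync percentiles in the output: the 99th should stay well under 10ms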

Ready to lock down your production environment? Deploy your GitOps control plane on a high-performance CoolVDS instance today and stop the drift.