GitOps is Not Just Hype: Architecting Bulletproof Deployments in 2021

If you are still SSH-ing into your production servers to run kubectl apply -f, you are essentially gambling with your infrastructure. I’ve seen it happen too many times: a senior engineer hotfixes a config map manually at 2 AM to stop a memory leak, forgets to commit the change, and three days later, the CI pipeline overwrites it. Down goes the cluster. Again.

It is January 2021. The industry has moved past the era of manual scripting. In the Nordic market, where tolerance for downtime is practically zero and scrutiny from Datatilsynet (the Norwegian Data Protection Authority) over data integrity is intense, "GitOps" isn't a buzzword. It is a survival strategy. It is the only way to guarantee that the state declared in Git matches the state actually running in your cluster, 100% of the time.

The Core Problem: Configuration Drift

The enemy of stability is drift. You deploy a microservice to your cluster. It works. Then, someone tweaks the resource limits directly via the dashboard because the pod was OOMKilled. Now your Git repository says the limit is 512Mi, but the cluster is running 1Gi.

When you scale up, or when disaster strikes and you need to redeploy to a fresh region, you will deploy the broken 512Mi configuration. You have lost the Single Source of Truth.
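
You can catch drift before it bites. kubectl will diff the live cluster against whatever is in Git (a minimal check, assuming the Kustomize layout used later in this article):

# Compare live cluster state against the manifests in Git
git clone git@gitlab.com:nordic-corp/infra-manifests.git && cd infra-manifests
kubectl diff -k k8s/overlays/production
# Exit code 1 means the cluster has drifted from Git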

The Architecture: Pull vs. Push

Traditional CI/CD (Jenkins, GitLab CI pipelines) usually relies on a "Push" model. The pipeline has god-mode access to your cluster to apply changes. This is a security nightmare. If your CI server is compromised, your production environment is wide open.

The GitOps workflow we implement for high-security clients in Oslo uses a "Pull" model instead. We place an operator inside the cluster (such as ArgoCD or Flux), and the operator reaches out to the Git repo to check for updates. No external system ever holds write credentials for the cluster API server.

Pro Tip: In a post-Schrems II world, data sovereignty is critical. By keeping your GitOps operator inside a Norwegian data center (like on a CoolVDS KVM instance), you ensure that the deployment logic and secrets never leave the jurisdiction, minimizing GDPR exposure.

Implementation: The Stack

For this workflow, we are sticking to the battle-tested stack of early 2021:

  • Orchestration: Kubernetes 1.19/1.20
  • GitOps Operator: ArgoCD v1.8+
  • CI System: GitLab CI (for building images only)
  • Infrastructure: CoolVDS NVMe KVM Instances

Step 1: The Manifest Repository

Separate your application code from your configuration code. Your app repo contains the Go/Node/Python source; your config repo contains the Helm charts or Kustomize files. This separation prevents an infinite CI loop in which a configuration commit retriggers a full image rebuild, and it lets you roll back configuration without rolling back binaries.
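
A typical config-repo layout with Kustomize overlays looks like this (directory names are illustrative, but they match the paths used in the ArgoCD manifest below):

infra-manifests/
└── k8s/
    ├── base/                  # shared Deployment, Service, ConfigMap
    │   ├── deployment.yaml
    │   ├── service.yaml
    │   └── kustomization.yaml
    └── overlays/
        ├── staging/
        │   └── kustomization.yaml
        └── production/        # prod-only patches: resource limits, replicas
            └── kustomization.yaml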

Step 2: The Operator Configuration
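
Before the Application manifest means anything, the operator itself must be running in the cluster. A minimal bootstrap, assuming the upstream v1.8.1 release manifests (pin whichever patch version you have validated):

# Install ArgoCD into its own namespace
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.8.1/manifests/install.yaml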

Here is how we bootstrap an ArgoCD application pointing to a private repository. This declarative setup ensures that if the cluster dies, we can restore the exact state on a new CoolVDS instance in minutes.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:nordic-corp/infra-manifests.git'
    targetRevision: HEAD
    path: k8s/overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payment-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true

The selfHeal: true flag is the magic. If a rogue admin changes a setting manually in the cluster, ArgoCD detects the drift and immediately reverts it to match Git. It is ruthless automation.
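
You can watch it work. Introduce drift by hand and ArgoCD stamps it out within one reconciliation cycle (the deployment name here is illustrative):

# Introduce drift manually
kubectl -n payment-prod scale deployment payment-service --replicas=10

# Watch ArgoCD flag the app OutOfSync and revert it to the Git value
argocd app get payment-service-prod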

Handling Secrets without Leaking Them

You cannot commit raw secrets to Git. In 2021, the standard practice is using Sealed Secrets (by Bitnami) or integrating with HashiCorp Vault. For simplicity and portability on KVM VPS environments, Sealed Secrets is excellent. You encrypt the secret on your laptop using a public key, push the encrypted blob to Git, and the controller inside the cluster (which holds the private key) decrypts it.

# Install kubeseal client
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.15.0/kubeseal-linux-amd64 -O kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal

# Encrypt a secret
kubectl create secret generic db-creds --from-literal=pwd=MySuperSecurePassword --dry-run=client -o yaml | \
  kubeseal --controller-name=sealed-secrets-controller --format=yaml > sealed-secret.yaml
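
The resulting file is safe to commit. A SealedSecret looks roughly like this (the ciphertext and namespace below are placeholders, not real output):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: payment-prod
spec:
  encryptedData:
    # Decryptable only by the in-cluster controller holding the private key
    pwd: AgBy3i4OJSWK...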

The Infrastructure Layer: Why Hardware Matters

GitOps controllers like ArgoCD are resource-hungry. They constantly poll your Git repositories and diff against the cluster state. If you run this on a cheap, oversold VPS with high "CPU Steal," your reconciliation loops will lag. You might push a fix to Git, and the cluster won't pick it up for 5 minutes because the hypervisor is choking.
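
Not sure whether your current provider oversells? Steal time is visible from inside the guest:

# The 'st' column shows the percentage of CPU time withheld by the hypervisor.
# Consistently above 1-2% on a loaded node is a red flag.
vmstat 1 5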

This is why we architect on CoolVDS. We utilize KVM (Kernel-based Virtual Machine) virtualization, which provides strict hardware isolation, so your reconciliation loops get the CPU time they were promised. Our NVMe storage backend ensures that the constant etcd I/O required by Kubernetes does not become a bottleneck either.

Terraform for the Base Layer

Before Kubernetes, you need servers. Do not click buttons in a portal; use Terraform. Here is a snippet for provisioning a robust KVM node ready for Kubernetes workloads (the provider and resource names are placeholders; wire in your provider's actual plugin):

resource "local_provider_instance" "k8s_worker" {
  hostname     = "worker-01.osl.node"
  plan         = "nvme-16gb"
  region       = "no-oslo-1"
  os_image     = "ubuntu-20.04"
  
  ssh_keys = [
    var.admin_ssh_key
  ]

  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("~/.ssh/id_rsa")
    host        = self.ipv4_address
  }

  provisioner "remote-exec" {
    inline = [
      "apt-get update",
      "apt-get install -y apt-transport-https ca-certificates curl gnupg",
      # Docker installation optimized for 2021 K8s
      "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg",
      "echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu focal stable' | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null",
      "apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io"
    ]
  }
}
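
From there it is the standard cycle; the plan output doubles as a human review gate before any server changes:

terraform init     # download provider plugins
terraform plan     # preview the node to be created
terraform apply    # provision it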

Performance Tuning: Nginx Ingress

Once your GitOps workflow deploys your apps, they need to be accessible. Most setups bottleneck at the Ingress controller. On a CoolVDS instance, you have access to the raw kernel parameters, unlike in some managed container environments. Tune your sysctl settings to handle high concurrency.

# /etc/sysctl.conf tuning for high-traffic Ingress nodes
# Deeper accept queue for connection bursts
net.core.somaxconn = 32768
# Wider ephemeral port range for upstream connections
net.ipv4.ip_local_port_range = 1024 65000
# Reuse TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
# Raise the system-wide file descriptor ceiling
fs.file-max = 2097152

Apply these via Ansible or a DaemonSet, ensuring your worker nodes can handle the ingress traffic spikes typical of e-commerce launches.
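
For the DaemonSet route, a minimal sketch (image tags and namespace are assumptions; harden before production use):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-sysctl-tuner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ingress-sysctl-tuner
  template:
    metadata:
      labels:
        app: ingress-sysctl-tuner
    spec:
      hostNetwork: true        # net.* sysctls must land in the host network namespace
      initContainers:
      - name: tune
        image: busybox:1.33
        securityContext:
          privileged: true     # required to write kernel parameters
        command:
        - sh
        - -c
        - |
          sysctl -w net.core.somaxconn=32768
          sysctl -w net.ipv4.ip_local_port_range="1024 65000"
          sysctl -w net.ipv4.tcp_tw_reuse=1
          sysctl -w fs.file-max=2097152
      containers:
      - name: pause            # keeps the pod alive without consuming resources
        image: k8s.gcr.io/pause:3.2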

Summary: Audit Logs are Your Safety Net

In Norway, compliance is not optional. By forcing all changes through Git, your commit log becomes your audit trail. You can answer the auditor's question, "Who changed the firewall rules on Tuesday?", simply by running git log on the relevant path.
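
For example (the path follows the manifest layout from Step 1):

# Full change history, with author, timestamp, and diff, for the production overlay
git log -p -- k8s/overlays/production/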

Building this on CoolVDS gives you the low-latency infrastructure required for rapid reconciliation loops, ensuring that what you see in Git is exactly what is running in Oslo.

Ready to harden your infrastructure? Don't let IO wait times slow down your deployments. Spin up a high-performance KVM instance on CoolVDS today and build your GitOps fortress.