GitOps Architectures in 2022: Stop Deploying via SSH

If you are reading this in May 2022 and you are still running kubectl apply -f from your laptop, or worse, SSH-ing into a server to run a git pull, we need to have a serious conversation. I've spent the last decade cleaning up the mess left by "cowboy deployments." I recall a specific incident last winter where a major e-commerce platform in Oslo went dark for four hours. Why? Because a junior dev manually patched a config map in production, and when the autoscaler spun up new nodes, the fresh pods came up with the old configuration still defined in the dusty repo. Chaos ensued.

The solution isn't just "better discipline." It's removing humans from the deployment loop entirely. That is GitOps. But simply installing ArgoCD isn't enough; you need a workflow that handles secrets, promotes immutability, and respects the strict data sovereignty laws we deal with here in Europe (thanks, Schrems II).

The Architecture: Pull vs. Push

In the traditional CI/CD "Push" model (Jenkins, GitLab CI pushing to clusters), your CI server holds the keys to the kingdom (KUBECONFIG). If your CI gets compromised, your entire production environment is exposed. In 2022, security is not optional.

The "Pull" model (GitOps) reverses this. An operator inside the cluster (like ArgoCD or Flux v2) watches a Git repository. When the repo changes, the operator pulls the change and applies it. No cluster credentials ever leave your infrastructure. This is crucial for compliance with the Norwegian Datatilsynet requirementsβ€”keeping access boundaries tight.

Pro Tip: Separate your Application Code repo from your Infrastructure Config repo. If you mix them, your CI pipeline will end up in an infinite loop triggering deployments on every commit. Keep them distinct.
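
To keep the two repos talking to each other, the CI pipeline in the application repo should do exactly one write: bump the image tag in the config repo. Here is a rough sketch in GitLab CI syntax; the repo URL, branch, image, and the CONFIG_REPO_TOKEN variable are placeholders for whatever your setup uses.

# .gitlab-ci.yml in the application repo. Illustrative only: repo URL, branch
# and CONFIG_REPO_TOKEN are placeholders; the token needs write access.
update-config-repo:
  stage: deploy
  image: alpine/git
  script:
    # Clone the separate infrastructure config repo, never the app repo itself
    - git clone "https://gitlab-ci-token:${CONFIG_REPO_TOKEN}@git.coolvds.com/infra/config-repo.git"
    - cd config-repo/overlays/prod
    # Bump the image tag in the prod overlay (kustomize edit set image works too)
    - 'sed -i "s|newTag:.*|newTag: ${CI_COMMIT_SHORT_SHA}|" kustomization.yaml'
    - git config user.email "ci@coolvds.com"
    - git config user.name "CI Bot"
    - git commit -am "Deploy my-app ${CI_COMMIT_SHORT_SHA}"
    - git push origin main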

Tooling and Directory Structure

For this guide, we assume a standard stack available today: Kubernetes 1.23+, ArgoCD v2.3, and Kustomize for overlay management. We avoid Helm charts for internal apps because debugging template indentation errors at 3 AM is a special kind of hell.

Repository Structure

Here is the directory structure that actually scales. Do not dump everything in root.

├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── replica_patch.yaml
    └── prod/
        ├── kustomization.yaml
        └── resource_limits.yaml
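
For reference, the base/kustomization.yaml is boring on purpose: it just lists the shared manifests. A minimal version looks something like this:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml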

The Glue: Kustomize Configuration

In base/deployment.yaml, define your generic logic. But the magic happens in the overlays. Here is how we force high-availability settings in the production overlay without touching the base code.

# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- resource_limits.yaml
images:
- name: registry.coolvds.com/my-app
  newTag: v1.4.2

And the corresponding resource patch. Note the memory requests. Java applications will get OOMKilled if you don't set these correctly relative to the JVM heap.

# overlays/prod/resource_limits.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            memory: "1024Mi"
            cpu: "500m"
          limits:
            memory: "2048Mi"
            cpu: "1000m"

The Engine: ArgoCD Application Manifest

You shouldn't be clicking around in the ArgoCD UI to create apps. That's not GitOps. Define the Application itself as code.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.coolvds.com/infra/config-repo.git
    targetRevision: HEAD
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
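
Bootstrap that manifest once with kubectl (or wrap it in an app-of-apps pattern) and let the operator take over from there. The file name below is whatever you saved it as; the argocd CLI is handy for checking the result:

# One-time bootstrap of the Application object
kubectl apply -n argocd -f production-app.yaml

# Verify ArgoCD has picked it up and reports Synced/Healthy
argocd app get production-app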

Infrastructure Matters: The Etcd Bottleneck

This is where things get physical. GitOps operators like ArgoCD are chatty. They constantly poll Git and query the Kubernetes API server to compare states. The Kubernetes API server, in turn, pounds the etcd database.
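
Hardware aside (more on that in a second), one knob worth knowing is the reconciliation interval, which ArgoCD reads from the argocd-cm ConfigMap (the default is 180s). Here is a sketch of relaxing it, assuming a stock install in the argocd namespace; you trade a little responsiveness for less API and etcd churn:

# Poll Git every 5 minutes instead of the default 3
kubectl patch configmap argocd-cm -n argocd \
  --type merge -p '{"data":{"timeout.reconciliation":"300s"}}'

# The application controller reads this at startup, so restart it
kubectl rollout restart statefulset argocd-application-controller -n argocd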

If you run this setup on cheap VPS providers with spinning rust (HDD) or shared SATA SSDs, you will see timeouts. I've debugged clusters where ArgoCD stays in "Unknown" state simply because etcd latency spiked above 50ms.

You need NVMe.

When we benchmark CoolVDS NVMe instances against standard cloud block storage, the difference in IOPS is not just a number; it's the difference between a sync taking 2 seconds and 2 minutes. Specifically, for the etcd write-ahead log (WAL), sequential write latency is king.

Here is a quick way to check if your current host is choking your GitOps workflow:

# Check disk sync latency (fdatasync) from the directory etcd writes to.
# fio is in the standard repos (apt/dnf/apk install fio).
fio \
  --name=fsync-latency \
  --filename=fsync_test \
  --size=1G \
  --time_based \
  --runtime=60s \
  --ioengine=libaio \
  --fdatasync=1 \
  --bs=2300 \
  --rw=write \
  --iodepth=1
# Read the fsync/fdatasync latency percentiles in the output;
# 2300 bytes roughly matches etcd's typical WAL write size.

If your fdatasync 99th percentile latency is over 10ms, your GitOps operator will lag. On our CoolVDS KVM slices, we consistently see sub-1ms latency, ensuring that when you push code, the cluster reacts instantly.
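
To see whether etcd itself is suffering, skip the synthetic benchmark and look at its own instrumentation. The endpoint and cert paths below assume a kubeadm-style control plane; adjust them to your topology:

# etcd exposes WAL fsync latency as a Prometheus histogram (kubeadm listens on :2381)
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds

# Or run etcd's built-in performance check (cert paths assume a kubeadm layout)
ETCDCTL_API=3 etcdctl check perf \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key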

Handling Secrets without Leaking Them

You cannot commit secrets.yaml to Git. In 2022, the standard approach is Bitnami Sealed Secrets. It uses asymmetric cryptography. You encrypt the secret on your laptop using the cluster's public key. Only the controller running inside the cluster (which has the private key) can decrypt it.

# Encrypting a secret locally
kubeseal --format=yaml --cert=pub-cert.pem \
  < my-secret.yaml > my-sealed-secret.yaml

This generates a SealedSecret CRD that is safe to commit to a public repo. This is essential for GDPR compliance, ensuring that raw customer data credentials never sit in a repo hosted outside your control.
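
One prerequisite I glossed over: the pub-cert.pem used above has to come from the controller running in your cluster. kubeseal can fetch it for you; the controller name and namespace below assume a default installation, so adjust if you deployed it elsewhere.

# Fetch the controller's public cert (default install assumed)
kubeseal --fetch-cert \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system > pub-cert.pem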

Local Nuances: Latency and Sovereignty

For Norwegian businesses, hosting the GitOps control plane (the Kubernetes management cluster) inside Norway is a massive advantage. Routing traffic through Frankfurt or London adds unnecessary milliseconds. More importantly, keeping the orchestration data within the Norwegian jurisdiction satisfies the strictest interpretations of local data laws.

CoolVDS infrastructure is physically located in Oslo, peering directly at NIX. This means your GitOps webhooks from local GitLab instances hit the cluster with <2ms latency. Reliability isn't an accident; it's architecture.
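
You also don't have to rely on polling at all. ArgoCD accepts Git webhooks on its /api/webhook endpoint, so a local GitLab instance can poke it the moment a commit lands. Roughly, with the hostname and secret value as placeholders:

# ArgoCD side: store the shared webhook secret under the documented key
kubectl -n argocd patch secret argocd-secret \
  --type merge -p '{"stringData":{"webhook.gitlab.secret":"use-a-real-secret-here"}}'

# GitLab side: add a project webhook pointing at ArgoCD
#   URL:          https://argocd.example.no/api/webhook   (hostname is a placeholder)
#   Secret token: use-a-real-secret-here
#   Trigger:      Push events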

Conclusion

GitOps is the standard for 2022. It provides an audit trail, disaster recovery (just re-apply the repo), and strict access control. But software is only as good as the hardware it runs on. A jittery network or slow disk I/O will turn your self-healing cluster into a self-destructing one.

Don't let IO wait times kill your deployment velocity. Spin up a CoolVDS NVMe instance today and give your Kubernetes control plane the horsepower it deserves.