Stop Using kubectl apply: A Battle-Tested Guide to GitOps Workflows in 2021

I still wake up in a cold sweat thinking about the "Incident of 2019." A senior engineer, tired and caffeinated, accidentally targeted the wrong context with a kubectl apply -f . command. In three seconds, the production ingress controller for a major Oslo retailer was overwritten by a staging config. The site went dark. We lost revenue, we lost sleep, and we lost face.

If you are still SSH-ing into servers or manually applying manifests from your laptop, you are holding a live grenade. It’s not a matter of if it explodes, but when.

In late 2021, there is absolutely no excuse for this. We have the tools. We have the patterns. We call it GitOps. Here is how you build a pipeline that doesn't rely on human infallibility, specifically tailored for the high-compliance environment here in Norway.

The Core Philosophy: Git is the Only Truth

GitOps isn't just a buzzword; it's an operational necessity. The concept is simple: The state of your infrastructure must match the state of your Git repository, bit for bit. If it's not in Git, it doesn't exist.

Drift is the enemy. When someone manually tweaks a hotfix on the server, that server is now a snowflake. It cannot be reproduced. If the hardware fails, that hotfix dies with it.
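
Even before a controller is in place, you can detect drift by hand. A quick sketch, assuming the repo layout shown later in this article and a kubectl recent enough to build Kustomize overlays natively (v1.14+):

# Compare the live cluster against what Git says it should be.
# Any non-empty diff means someone has been hand-editing the cluster.
kubectl diff -k overlays/prod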

The Stack for 2021

For this guide, we are looking at the standard, battle-hardened stack dominating the European market right now:

  • Kubernetes (v1.21+): The operating system of the cloud.
  • ArgoCD (v2.1): The GitOps controller ensuring synchronization.
  • Kustomize: For managing overlays (dev/staging/prod) without manifest duplication.
  • GitLab CI: Still the king of on-premise/self-hosted CI in Europe due to data sovereignty concerns.

Structuring the Repository

A common mistake I see junior DevOps engineers make is dumping everything into one repo, or worse, mixing app source code with infrastructure manifests. Don't do that. Separation of concerns is vital for security and for keeping commit logs clean.

Here is the directory structure that has survived my last three audits:


. (infra-repo)
├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
├── overlays
│   ├── dev
│   │   ├── kustomization.yaml
│   │   └── patch-replicas.yaml
│   ├── staging
│   └── prod
│       ├── kustomization.yaml
│       └── patch-resources.yaml
└── README.md

In your base/kustomization.yaml, you define the common resources:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml

Then, in overlays/prod/kustomization.yaml, you lock it down specifically for the production environment:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- patch-resources.yaml
images:
- name: registry.coolvds.com/my-app
  newTag: v1.4.2

The Synchronization Engine: ArgoCD

You need an agent inside your cluster that pulls changes. Push-based deployments (CI pipelines running kubectl) expose your cluster credentials to the CI server. If your CI server is compromised, your cluster is gone. With ArgoCD, the cluster pulls config. No admin keys leave the cluster.

Deploying ArgoCD is trivial, but configuring it for high availability is where the pros separate from the hobbyists.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.1.7/manifests/ha/install.yaml

Pro Tip: Do not ignore the ha (High Availability) folder in the manifest path. The standard install is fine for testing, but in production you need the Redis HA setup and multiple repo-server replicas. I've seen single-instance Argo deployments choke during massive commit storms.
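
Once the HA manifests settle, verify that you actually got redundancy (a quick sanity check, not a full HA validation):

kubectl get deployments,statefulsets -n argocd
# You should see multiple argocd-repo-server replicas and a Redis HA
# StatefulSet, not the single-pod layout of the standard install.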

Defining the Application

Here is a declarative Application manifest. Apply this once, and let the controller take the wheel.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-ecommerce-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:coolvds-client/infra-repo.git'
    targetRevision: HEAD
    path: overlays/prod
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true

Note the prune: true. This means if you delete a file in Git, it gets deleted in the cluster. This is critical. Without pruning, you leave orphaned resources consuming expensive RAM and CPU cycles.
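
If automated pruning makes you nervous, preview what a sync would touch before trusting the automation. Assuming the argocd CLI is installed and logged in:

# Show exactly what differs between Git and the live cluster
argocd app diff nordic-ecommerce-prod

# Simulate the sync without changing anything
argocd app sync nordic-ecommerce-prod --dry-run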

The Hardware Reality: Why Latency Kills GitOps

This is the part nobody talks about in the "Getting Started" tutorials. GitOps controllers like ArgoCD are chatty. They constantly query the Kubernetes API server to compare the live state against the desired state. The API server, in turn, pounds the etcd database.

If your underlying storage has high I/O latency, etcd starts timing out. When etcd struggles, the API server hangs. When the API server hangs, ArgoCD reports "Unknown State," and your reconciliation loops fail. You lose visibility.

I recently migrated a client from a budget VPS provider to CoolVDS solely because of this. The budget provider used spinning rust (HDD) backed storage or crowded SATA SSDs. The fsync latency was consistently over 10ms. On CoolVDS, utilizing local NVMe storage, we dropped fsync latency to under 1ms. The "Unknown State" errors vanished instantly.
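
Don't take my word for it; measure your own disks. The fio job below is the de facto standard etcd disk benchmark (run it on the node itself, pointing --directory at a scratch dir on the volume that backs /var/lib/etcd; fio must be installed):

fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd/fio-test --size=22m --bs=2300 \
    --name=etcd-fsync-test
# Check the fsync/fdatasync percentiles in the output:
# etcd wants the 99th percentile under 10ms.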

GitOps is software automation, but it relies entirely on hardware performance.

Handling Secrets (The Norwegian Context)

You cannot store raw secrets in Git. That is a GDPR violation waiting to happen. If a Datatilsynet audit finds database credentials in your repo history, the fines will hurt.

In 2021, the standard solution is Bitnami Sealed Secrets. It uses asymmetric encryption: anyone can encrypt with the public certificate, but only the controller inside the cluster holds the private key to decrypt. That means you can commit the encrypted secret to Git safely.

1. Install the in-cluster controller:

kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml
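
You also need the kubeseal CLI on your workstation, matching the controller version. For v0.16.0 the releases ship prebuilt binaries (the asset name below is for Linux amd64; check the release page for your platform):

wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/kubeseal-linux-amd64
sudo install -m 755 kubeseal-linux-amd64 /usr/local/bin/kubeseal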

2. Seal a secret locally:

# Create a dry-run secret (namespace matters: sealing is scoped to it by default)
kubectl create secret generic db-creds -n production --from-literal=pwd=SuperSecure --dry-run=client -o yaml > plain-secret.yaml

# Seal it
kubeseal --format=yaml < plain-secret.yaml > sealed-secret.yaml

# Never commit the plaintext; delete it once sealed
rm plain-secret.yaml

Now, sealed-secret.yaml is safe to push to GitHub or GitLab. The private key to decrypt it lives only on your cluster (preferably on a secure CoolVDS instance within a private VPC).
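
One caveat: that private key is now a single point of failure. If the cluster dies without a backup, every sealed secret in Git becomes unreadable forever. Export the key and store it offline (the label below is the one the controller sets as of v0.16):

kubectl get secret -n kube-system \
  -l sealedsecrets.bitnami.com/sealed-secrets-key \
  -o yaml > sealed-secrets-master-key.yaml
# Keep this file OFFLINE (vault / password manager), never in Git.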

CI Runners & Data Residency

Since the Schrems II ruling last year, moving personal data out of the EEA is a legal minefield. When configuring your GitOps pipelines, ensure your CI runners are located in Norway, or at least within the EEA.

If you are using SaaS CI/CD, check their runner regions. Better yet, host your own runners. We often deploy GitLab Runners on CoolVDS compute instances in Oslo. This guarantees that your source code and build artifacts never traverse the Atlantic, keeping your compliance team happy.
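
Pinning jobs to those runners is a one-line affair in .gitlab-ci.yml. A sketch, assuming you registered the runners with an oslo tag and that kustomize is available in the runner environment (both are assumptions; the tag name is whatever you chose at registration):

validate-manifests:
  stage: test
  tags:
    - oslo          # only picked up by our self-hosted Oslo runners
  script:
    - kustomize build overlays/prod > /dev/null   # fail fast on broken manifests; ArgoCD does the deploy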

Summary

Moving to GitOps requires discipline. You have to revoke cluster-admin permissions from developers. You have to wait for pipelines to run. But the payoff is stability. You sleep at night knowing that your infrastructure is documented, versioned, and immutable.

Just remember that automation amplifies everything, including bad infrastructure. A GitOps loop trying to reconcile 500 microservices will crush a weak control plane. Ensure your foundation is solid.

Don't let slow I/O kill your reconciliation loops. Deploy a high-performance, NVMe-backed Kubernetes node on CoolVDS today and watch those sync times drop.