
Stop `kubectl apply`-ing to Production: A GitOps Blueprint for Norwegian Systems

I still remember the silence in the Slack channel. It was 2021, a wet Tuesday in Bergen. A senior engineer had just run a "quick fix" script against what he thought was the staging cluster. It wasn't. He wiped the ingress controllers for a major retail client during a flash sale. The post-mortem was brutal, but the root cause was simple: human access to the cluster API.

If your developers have write access to production, your infrastructure is a ticking time bomb. The only thing that should touch your cluster is a software agent ensuring the actual state matches the desired state defined in Git. That is GitOps.

In this guide, we aren't discussing high-level theory. We are building a compliant, high-performance GitOps pipeline suitable for the Norwegian market, where data protection oversight by Datatilsynet and latency to NIX (Norwegian Internet Exchange) both matter.

The Architecture: Pull vs. Push

Most CI/CD pipelines use a "Push" model. Jenkins or GitLab CI builds a container, then runs a script to push it to the cluster. This is flawed. If the script fails, the cluster is in an unknown state. If someone changes the cluster manually, the pipeline never knows.

We use the "Pull" model. An operator inside the cluster (ArgoCD or Flux) watches a Git repository. When it sees a change, it pulls the config and applies it. If the cluster drifts (someone deletes a service manually), the operator detects the divergence and self-heals immediately.
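
To make that concrete, here is what the self-heal loop looks like in practice once an operator with self-heal enabled is watching the namespace (names are illustrative and match the payment-service example defined later in this guide):

# A well-meaning "quick fix" deletes a live object:
kubectl delete service payment-service -n payment-prod

# Within the next reconciliation cycle the operator notices the divergence
# from Git and recreates it; watch it reappear without any human action:
kubectl get service payment-service -n payment-prod --watch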

Tool Selection: ArgoCD vs. Flux

As of May 2024, the choice usually lands on ArgoCD for its UI and visualization capabilities, though Flux is excellent for headless setups. Here is the breakdown:

| Feature          | ArgoCD                   | Flux v2                       |
|------------------|--------------------------|-------------------------------|
| UI/Visualization | Best-in-class dashboard  | Minimal/none (CLI focus)      |
| Multi-tenancy    | Strong (Projects, SSO)   | Good (uses K8s RBAC)          |
| Resource usage   | High (needs more RAM)    | Low (lightweight controllers) |
| Drift detection  | Instant & visual         | Periodic reconciliation       |

Implementation: The "Repo-Per-Team" Pattern

Don't dump everything into one giant monorepo unless you are Google. It becomes a merge-conflict nightmare. Use a config repository separate from your application source code.

Here is the directory structure I enforce for k8s manifests:

. 
├── apps
│   ├── base                   # Base configs (Kustomize)
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── kustomization.yaml
│   └── overlays               # Environment specifics
│       ├── dev
│       │   └── kustomization.yaml
│       └── prod
│           ├── kustomization.yaml
│           └── patch-replicas.yaml
├── cluster-config             # Namespaces, Quotas, RBAC
└── system                     # Ingress, Cert-Manager, Monitoring
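
As a rough sketch of how those files fit together (contents are illustrative, and the app name is a placeholder reused from the example later in this guide):

# apps/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml

# apps/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: patch-replicas.yaml

# apps/overlays/prod/patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service   # must match the name in base/deployment.yaml
spec:
  replicas: 3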

Deploying ArgoCD on CoolVDS

Why does the underlying hardware matter for a GitOps operator? Etcd latency. Kubernetes relies heavily on etcd. If your storage I/O is slow (standard HDD or shared SATA SSD), the API server slows down. When ArgoCD tries to reconcile 500 applications simultaneously, a slow API server causes timeouts and "Sync Failed" errors.

We run our management clusters on CoolVDS NVMe instances. The high IOPS ensures that etcd writes are near-instant, preventing the "reconciliation lag" often seen on budget VPS providers.
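
Don't take my word for it; measure the disk yourself. A fio fdatasync test along these lines is widely used to approximate etcd's write pattern (run it on the volume that will hold /var/lib/etcd; flags follow the commonly cited upstream guidance):

mkdir -p /tmp/etcd-disk-test
fio --rw=write --ioengine=sync --fdatasync=1 \
  --directory=/tmp/etcd-disk-test --size=22m --bs=2300 \
  --name=etcd-wal-test
# Check the fdatasync latency percentiles in the output; a 99th percentile
# under roughly 10ms is the usual target for a healthy etcd member.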

Installation (standard manifests):

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
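
Recent ArgoCD versions generate an initial admin password and store it in a Secret; retrieving it typically looks like this (the secret name below is the default for v2.x installs):

kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo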

Pro Tip: Never expose the ArgoCD dashboard directly over a LoadBalancer. Use an Ingress with mTLS or, better yet, port-forwarding via a bastion host for maximum security. If you must expose it, ensure you have strict IP allow-listing (e.g., only your office VPN IP).
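
For day-to-day access, port-forwarding is usually enough; something along these lines keeps the dashboard off the public internet entirely:

kubectl port-forward svc/argocd-server -n argocd 8080:443
# The dashboard is now reachable at https://localhost:8080 from your machine only.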

Defining the Application

Do not click around the UI to create apps. That defeats the purpose of "Everything as Code." Define your ArgoCD `Application` object in YAML.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:my-org/infra-config.git'
    targetRevision: HEAD
    path: apps/payment-service/overlays/prod
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payment-prod
  syncPolicy:
    automated:
      prune: true      # Deletes resources that are no longer in Git
      selfHeal: true   # Reverts manual changes made via kubectl
    syncOptions:
      - CreateNamespace=true

Save this as app.yaml and apply it once. From then on, ArgoCD owns the application: every change merged to the prod overlay is synced automatically, and anything changed by hand is reverted.
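
Once it's applied, you can sanity-check the sync without opening the UI. The app name matches the manifest above; the CLI variant assumes you have run `argocd login`:

# Via the ArgoCD CLI:
argocd app get payment-service-prod

# Or with plain kubectl against the Application CRD:
kubectl -n argocd get application payment-service-prod \
  -o jsonpath='{.status.sync.status}{"\n"}{.status.health.status}{"\n"}'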

The Secret Management Headache

You cannot commit passwords to Git. In 2024, the two dominant patterns are Sealed Secrets (Bitnami) and External Secrets Operator (ESO). For enterprise setups in Norway, ESO is superior because it integrates with Vault or managed secret stores, keeping you compliant with strict security policies.
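
For orientation, an ExternalSecret pulling a database password out of Vault looks roughly like this; the store name and key path are placeholders for whatever you configure in your own SecretStore:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-creds
  namespace: payment-prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend          # placeholder: a SecretStore configured separately
    kind: SecretStore
  target:
    name: db-creds               # the plain Kubernetes Secret ESO will create
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/payment/db   # placeholder Vault path
        property: password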

However, for smaller setups on CoolVDS, Sealed Secrets is a pragmatic, low-overhead choice. It uses asymmetric encryption: you encrypt locally with the cluster's public key, commit the "sealed" secret to Git, and the controller in the cluster (which holds the private key) decrypts it at deploy time.

Workflow:

  1. Install the kubeseal CLI.
  2. Fetch the public key from your controller.
  3. Seal your secret:
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecret123 \
  --dry-run=client -o yaml | \
  kubeseal --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system \
  --format=yaml > sealed-secret.yaml
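
The file you commit is roughly shaped like this (values and namespace are illustrative; by default a SealedSecret is bound to the name and namespace it was sealed for):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: payment-prod        # must match the namespace used when sealing
spec:
  encryptedData:
    password: AgBy3i4OJSWK+...   # truncated placeholder; the real value is a long blob

Commit sealed-secret.yaml alongside your other manifests; the in-cluster controller decrypts it into a regular Secret named db-creds that your Deployment can reference as usual.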

Local Nuances: Latency and Sovereignty

Norway is unique. We are not in the EU, but we follow GDPR via the EEA agreement. However, the Schrems II ruling has made relying on US-based cloud providers legally complex for sensitive data. By hosting your Kubernetes nodes and your GitOps control plane on CoolVDS servers physically located in Norway, you simplify compliance. You know exactly where the drives are.

Furthermore, latency to the NIX (Norwegian Internet Exchange) affects how fast your cluster can pull images if your registry is local. CoolVDS peers directly at NIX. I've seen image pull times drop from 15 seconds (hosted in Frankfurt) to 2 seconds (hosted in Oslo). In a crash-loop scenario, those 13 seconds are an eternity.

Performance Tuning: The Kubelet Config

Default K8s configs are too polite. On a dedicated CoolVDS instance, you want to reserve resources specifically for system daemons so your application doesn't starve the node. Update your Kubelet config to prevent OOM kills of critical services.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
systemReserved:
  cpu: "500m"
  memory: "500Mi"
kubeReserved:
  cpu: "500m"
  memory: "500Mi"

This ensures that even under load, the OS and the Kubernetes system daemons have a combined 1 GiB of RAM and one full core reserved (500Mi and 500m each), keeping the node stable enough for ArgoCD to perform its syncs.
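
How you ship this config depends on how the node was built. On a kubeadm-provisioned node, a rough manual approach (assuming the default config path) is:

# Merge the settings above into the kubelet's config file, then restart it:
sudo vi /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet

# Confirm the node stays Ready and that Allocatable shrank accordingly:
kubectl get nodes
kubectl describe node <node-name> | grep -A 7 Allocatable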

Summary

GitOps isn't just a buzzword; it's a safety harness. It moves the complexity from the "heat of the moment" (deployment time) to the "calm of the design" (Git merge request).

To make it work, you need:

  • Strict Directory Structure: Separate config from code.
  • Automated Sync: ArgoCD with auto-prune and self-heal enabled.
  • Reliable Iron: Underlying infrastructure that doesn't steal CPU cycles or choke on I/O.

Stop risking your weekends with manual deployments. Set up your GitOps pipeline on a high-performance CoolVDS NVMe instance today and watch your drift disappear.