Stop Touching Production: The Definitive GitOps Workflow for Nordic Infrastructure

I once watched a fatigued sysadmin delete a production namespace because he thought he was pointing at minikube. It took four hours to restore the state from backups. If that sentence made your stomach drop, you know why we are here. In 2020, SSHing into a server to update an application isn't just inefficient; it is professional negligence.

We need to talk about GitOps. Not the marketing fluff version, but the implementation that actually saves your weekends.

In the Norwegian market, where developer hours are expensive and the Datatilsynet (Data Protection Authority) watches data integrity like a hawk, you cannot afford manual interventions. You need a single source of truth. If it isn't in Git, it doesn't exist.

The Architecture of Truth

The core principle is simple: Your cluster state must mirror your Git repository. Always. No kubectl apply -f from your laptop. No hot-patching config maps.
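
The litmus test is simple: anything you would otherwise type by hand must be reproducible from the repo. Here is a quick, read-only sanity check — a sketch, assuming Helm 3 and the chart layout we build later in this article:

# Render the chart locally and diff it against the live cluster; nothing is applied
helm template ./production | kubectl diff -f -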

Here is the stack we are deploying today:

  • VCS & CI: GitLab (Self-hosted or SaaS).
  • CD Controller: ArgoCD (v1.5+).
  • Infrastructure: Kubernetes (v1.17/1.18) running on KVM-based nodes.
  • Container Registry: Harbor or GitLab Container Registry.

Why KVM Matters for the Control Plane

Before we look at the YAML, understand this: GitOps controllers like ArgoCD or Flux are resource-hungry agents. They constantly poll your Git repositories and diff against the cluster state. If you run this on cheap, oversold OpenVZ containers, the CPU steal will cause sync delays. I've seen reconciliation loops hang for minutes because the host node was choked.

This is why we reference CoolVDS architectures. You need dedicated resources—specifically NVMe I/O for the etcd database and guaranteed CPU cycles for the reconciliation loops. If your underlying metal flinches, your deployment stalls.
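
You do not have to take a provider's word for it. Steal time is visible from inside the guest; anything consistently above zero on a node that is supposed to have dedicated cores is a red flag. A quick sketch using standard Linux tooling:

# "st" (steal) is the last CPU column in vmstat output
vmstat 5 3

# Or read the raw counter: the 8th CPU field in /proc/stat is cumulative steal time (in jiffies)
awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat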

Step 1: The CI Pipeline (Separation of Concerns)

Your application code repository is not your infrastructure repository. Keep them separate. Your CI pipeline does one thing: Build the artifact and update the manifest repository.

Here is a battle-tested .gitlab-ci.yml snippet that builds a Docker image and pushes a tag update to a separate config repo:

stages:
  - build
  - deploy

build_image:
  stage: build
  image: docker:19.03.1
  services:
    - docker:19.03.1-dind
  script:
    # Use --password-stdin so the registry password never appears in the job log or process list
    - echo -n "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

update_manifest:
  stage: deploy
  image: alpine:3.11
  before_script:
    - apk add --no-cache git
    - git config --global user.email "ci-bot@coolvds.com"
    - git config --global user.name "CI Bot"
  script:
    - git clone "https://oauth2:${CD_TOKEN}@gitlab.com/my-org/infra-manifests.git"
    - cd infra-manifests
    # Bump the image tag in the Helm values file; this commit is what ArgoCD reacts to
    - sed -i "s/tag: .*/tag: $CI_COMMIT_SHORT_SHA/g" production/values.yaml
    - git commit -am "Update image to $CI_COMMIT_SHORT_SHA"
    - git push origin master

Notice the use of Alpine 3.11. It's lightweight. We aren't doing magic here; we are just changing a text file in another repo. That commit is the trigger.

Step 2: The Manifest Repository

Your infrastructure repo should contain Helm charts or Kustomize overlays. In this example, we use a simple Helm structure. The values.yaml is what the CI bot updates.

Path: production/values.yaml

replicaCount: 3

image:
  repository: registry.gitlab.com/my-org/backend-api
  tag: a1b2c3d # This is updated automatically
  pullPolicy: IfNotPresent

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi

# Pro-tip: Always set liveness probes to avoid zombie pods
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
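
For context, this is roughly how the chart's Deployment template consumes those values. The template below is an illustrative sketch, not the actual chart from the repo; the resource name and labels are assumptions:

# production/templates/deployment.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
        - name: backend-api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          livenessProbe:
{{ toYaml .Values.livenessProbe | indent 12 }}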

Step 3: The GitOps Controller (ArgoCD)

Now we configure ArgoCD to watch that repo. Why ArgoCD? Because it provides a visual diff that helps diagnose drift immediately. It detects if someone manually tweaked a limit on the cluster (drift) and screams about it.

Deploying an Application CRD:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-api-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://gitlab.com/my-org/infra-manifests.git'
    targetRevision: HEAD
    path: production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Pro Tip: Enable selfHeal: true with caution. If you have a misconfiguration in Git, ArgoCD will ruthlessly enforce it, potentially looping your deployment. Test in staging first.
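
When ArgoCD flags drift, the CLI shows you exactly what differs before you decide whether to sync or revert. A short sketch, assuming you are already authenticated with argocd login:

# Show the diff between the live cluster and Git (non-zero exit code when drift exists)
argocd app diff nordic-api-prod

# Force a sync manually, e.g. while automated sync is disabled in staging
argocd app sync nordic-api-prod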

Performance & Network Latency

GitOps relies on network performance. Your cluster needs to pull images from the registry and sync with Git. If you are hosting in Norway, you want your nodes peering directly at NIX (Norwegian Internet Exchange). High latency adds seconds to every image pull, which drags out your Mean Time To Recovery (MTTR).

When we benchmark CoolVDS instances against standard cloud providers, we consistently see lower latency to Oslo endpoints. This matters when you are scaling up 50 pods during a traffic spike.
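
Measure it instead of guessing. Two quick checks from a cluster node tell you most of the story; registry.gitlab.com here is just a stand-in for whichever registry you actually pull from:

# Round-trip latency to the registry endpoint
ping -c 10 registry.gitlab.com

# TLS connect time and time-to-first-byte for a registry API call
curl -o /dev/null -s -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' https://registry.gitlab.com/v2/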

Handling Secrets

The biggest pain point in 2020 is secrets. You cannot commit passwords to Git. Do not do it. Even if the repo is private.

Use Sealed Secrets by Bitnami. It allows you to encrypt a secret into a `SealedSecret` CRD that is safe to commit. Only the controller running inside your cluster can decrypt it.

# Install kubeseal client
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.12.1/kubeseal-linux-amd64 -O kubeseal
chmod +x kubeseal

# Encrypt a secret
echo -n bar | kubectl create secret generic my-secret --dry-run --from-file=foo=/dev/stdin -o json | \
  kubeseal --controller-name=sealed-secrets-controller --format yaml > my-sealed-secret.yaml
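
The client alone does nothing useful; the controller must run inside the cluster to hold the private key and decrypt. A minimal sketch, with the controller version pinned to match the client above:

# Install the in-cluster controller (release manifest matching the v0.12.1 client)
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.12.1/controller.yaml

# The SealedSecret is safe to commit; ArgoCD applies it like any other manifest
git add my-sealed-secret.yaml
git commit -m "Add sealed secret for backend-api"
git push origin master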

The Infrastructure Reality Check

You can have the most beautiful YAML in the world, but if the disk I/O chokes during a database migration, your GitOps workflow fails. We see this constantly with "budget" VPS providers using spinning rust (HDD) or shared SATA SSDs.

For a proper GitOps setup, specifically one running persistent workloads like databases or stateful sets, NVMe is mandatory. On CoolVDS, the storage backend is pure NVMe. This means when your CI pipeline triggers a migration, the schema changes apply instantly, not 45 seconds later.
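
Do not take storage marketing at face value either. A one-minute fio run on the volume that backs etcd or your database tells you what you actually bought; this writes a 1 GB test file, so point it at a scratch path:

# Random 4k writes with direct I/O -- roughly the pattern etcd and OLTP databases produce
fio --name=iotest --filename=/var/lib/iotest.file --size=1G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=16 --runtime=60 --time_based --group_reporting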

Comparison: Sync Time on Different Storage

Storage Type        | Docker Pull (500 MB) | Helm Upgrade Time
--------------------|----------------------|------------------
Standard HDD        | ~45s                 | ~12s
SATA SSD (Shared)   | ~15s                 | ~5s
CoolVDS NVMe        | ~4s                  | ~1s

Final Thoughts

GitOps is not just a tool; it is a contract. A contract that says the code in the repository is the absolute truth. It simplifies audits for GDPR compliance because every change to production is a Git commit with a timestamp and an author.
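
That audit trail costs nothing extra. When an auditor asks who changed production and when, the answer is a single command against the manifest repo:

# Every production change in the last 90 days: commit, author, ISO timestamp, message
git log --since="90 days ago" --pretty=format:'%h  %an  %aI  %s' -- production/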

Don't let your infrastructure be the bottleneck of your workflow. You need low latency, high I/O, and absolute reliability.

Ready to stabilize your production? Deploy a high-performance KVM instance on CoolVDS today and build a platform that doesn't wake you up at 3 AM.