Stop kubectl apply-ing into Production: A Bulletproof GitOps Workflow for Norwegian Enterprises
It was 2:00 AM on a Tuesday when the pager went off. A simple configuration change (increasing the replica count for a payment gateway microservice) had wiped out the ingress definitions, because someone had manually edited the live manifests three weeks earlier. The cluster state had drifted from the repo. The result? 404 errors for every customer trying to check out.
If you are still SSH-ing into servers to pull code, or running manual kubectl commands from your laptop to update production, you are playing Russian roulette with your infrastructure. By late 2024, GitOps isn't just a buzzword; it is the baseline requirement for sanity in distributed systems. It turns your Git repository into the single source of truth, making the desired state of your cluster declarative, versioned, and auditable.
But implementing GitOps isn't just about installing ArgoCD and calling it a day. It requires a rigorous workflow, especially here in Norway, where data sovereignty requirements enforced by Datatilsynet (the Norwegian Data Protection Authority) and latency to NIX (the Norwegian Internet Exchange) are critical constraints. Here is the architecture I deploy for high-compliance, high-performance environments.
The Core Philosophy: CI != CD
The most common mistake I see in DevOps setups is conflating Continuous Integration (CI) with Continuous Delivery (CD). In a proper GitOps workflow, your CI pipeline (Jenkins, GitLab CI, GitHub Actions) has zero access to your production cluster.
Your CI pipeline should do exactly three things:
- Test the code.
- Build and sign the container image.
- Update the manifest repository with the new image tag.
That is it. The CI server does not touch the Kubernetes API. This isolates credentials and drastically reduces the attack surface. If your CI server is compromised, the attacker gets your code, but they don't get root access to your production nodes.
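As a sketch of what step two might look like with Sigstore's cosign (the image path matches the registry used later in this post; GIT_SHA and the key file are placeholders your CI would provide):

```bash
# Build and push the image, then attach a signature to the pushed artifact
docker build -t "registry.coolvds.com/app:${GIT_SHA}" .
docker push "registry.coolvds.com/app:${GIT_SHA}"
# Sign with a pre-provisioned key pair (keyless/OIDC signing is also an option)
cosign sign --key cosign.key "registry.coolvds.com/app:${GIT_SHA}"
```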
The Reconciliation Loop
Once the manifest repo is updated, the GitOps controller (running inside your cluster) detects the drift and synchronizes the state. This pull-based model is superior because the cluster reaches out to the repo, rather than a script pushing into the cluster.
Here is a standard ArgoCD Application definition we use for foundational services. Note the sync policies:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:my-org/k8s-manifests.git'
    targetRevision: HEAD
    path: apps/payments/overlays/oslo-prod
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true
```
Handling Secrets without Violating GDPR
In the Nordics, we take privacy seriously. You cannot simply check base64-encoded secrets into a public, or even a private, Git repository. If you do, you are likely in breach of compliance requirements the moment that commit is pushed.
In 2024, the gold standard for this is the External Secrets Operator (ESO). Instead of storing the secret in Git, you store a reference to a secure vault (like HashiCorp Vault or a managed secret manager). The operator fetches the actual value dynamically.
Pro Tip: If you are hosting entirely within Norway to satisfy strict Schrems II interpretations, running a self-hosted HashiCorp Vault on a private CoolVDS instance behind a WireGuard VPN is a robust pattern. It keeps the encryption keys off US-controlled cloud hardware.
Here is how you map a remote secret to a Kubernetes Secret without ever exposing the data in Git:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
spec:
  refreshInterval: "1h"
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-creds-k8s
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: secret/data/production/db
        property: username
    - secretKey: password
      remoteRef:
        key: secret/data/production/db
        property: password
```
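The vault-backend store referenced above is defined once per cluster. Here is a minimal sketch, assuming a self-hosted Vault reachable on a private address (for example, over the WireGuard tunnel from the Pro Tip) and the Kubernetes auth method enabled on the Vault side; the address, mount path, and role name are placeholders:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      # Private Vault endpoint, only reachable inside the VPN
      server: "https://10.8.0.5:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "external-secrets"
```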
Infrastructure: The Invisible Bottleneck
GitOps itself is not computationally expensive, but the reconciliation loop is sensitive to I/O latency. Your GitOps controller (ArgoCD or Flux) is constantly cloning repositories, unmarshalling YAML, and diffing large JSON blobs against the Kubernetes API.
I have debugged clusters where deployments took 15 minutes to sync simply because the underlying etcd storage was choking on slow disk I/O. This is where the "cheap" VPS providers fail.
When we design infrastructure for high-churn Kubernetes clusters, we look for two specific metrics:
| Metric | Standard VPS | CoolVDS (High Perf) | Impact on GitOps |
|---|---|---|---|
| Disk Write Latency | 15-50ms | <1ms (NVMe) | Faster etcd commits, quicker syncs. |
| CPU Steal | 5-20% | 0% (Dedicated) | Controller doesn't hang during diffs. |
The control plane requires low latency. If you are serving Norwegian customers, your nodes should physically reside in Norway. This not only minimizes the round-trip time (RTT) between your load balancers and the NIX peering points, it also keeps your etcd cluster stable. A split-brain scenario caused by network latency is a nightmare you want to avoid.
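If you want to sanity-check a node's storage before trusting it with etcd, a quick fsync-latency test in the spirit of the upstream etcd guidance tells you most of what you need; the target directory here is just an example:

```bash
# Write small blocks with an fsync after each one, roughly mirroring etcd's WAL pattern
mkdir -p /var/lib/etcd-bench
fio --name=etcd-wal-test --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300
# Check the fdatasync percentiles in the output: the 99th percentile should sit
# in the low single-digit milliseconds on healthy NVMe storage.
```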
Structuring the Repo: Kustomize Overlays
Don't duplicate YAML. Use Kustomize to handle environment differences between your Staging (say, a smaller VPS) and Production (your high-availability cluster).
Directory Structure:
```text
├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays
    ├── staging
    │   ├── kustomization.yaml
    │   └── replica_patch.yaml
    └── production
        ├── kustomization.yaml
        └── resource_limits_patch.yaml
```
The production/kustomization.yaml allows you to enforce stricter resource limits and higher replica counts without touching the base logic.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - resource_limits_patch.yaml
images:
  - name: my-app
    newName: registry.coolvds.com/my-app
    newTag: v1.4.2
```
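For completeness, the patch file referenced above is just a fragment of the Deployment. A plausible resource_limits_patch.yaml, assuming the base Deployment is named payments and its container is called app (adjust the names to match your base manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "1"
              memory: "1Gi"
```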
The CI Pipeline Glue
Finally, you need the automation that ties code commits to the GitOps repo. Here is a stripped-down GitHub Actions workflow that builds the image and commits the new tag to the infrastructure repository; the registry and repo credentials are assumed to live in repository secrets. This cleanly separates application code from infrastructure config.
```yaml
name: Build and Update Manifests
on: [push]
jobs:
  build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and Push Docker Image
        run: |
          # Registry credentials are expected as repository secrets
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.coolvds.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker build -t registry.coolvds.com/app:${{ github.sha }} .
          docker push registry.coolvds.com/app:${{ github.sha }}
      - name: Update K8s Manifests
        run: |
          # A token with write access to the infra repo; a plain HTTPS clone cannot push
          git clone https://x-access-token:${{ secrets.INFRA_REPO_TOKEN }}@github.com/my-org/k8s-infra.git
          cd k8s-infra/overlays/production
          kustomize edit set image my-app=registry.coolvds.com/app:${{ github.sha }}
          git config user.name "ci-bot"
          git config user.email "ci-bot@coolvds.com"
          git commit -am "Bump image to ${{ github.sha }}"
          git push
```
Final Thoughts: Stability is a Choice
GitOps provides the audit trail and reversibility that regulated industries demand. If a bad config goes out, you don't panic; you git revert. The cluster self-heals.
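With the automated sync policy from earlier, the whole rollback procedure is a single revert in the manifest repo; ArgoCD reconciles the cluster on its next poll:

```bash
# Revert the offending commit in the manifest repository and push;
# the controller detects the new HEAD and restores the previous state.
git revert <bad-commit-sha> --no-edit
git push
```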
However, the software layer is only as reliable as the hardware underneath it. I've seen beautifully architected GitOps workflows crumble because the underlying nodes were fighting for IOPS on a crowded host. For mission-critical workloads, especially those serving the Nordic market, ensuring you have dedicated NVMe storage and guaranteed CPU cycles is not optional—it's foundational.
Ready to harden your infrastructure? Stop fighting with noisy neighbors. Deploy your GitOps control plane on CoolVDS high-performance NVMe instances today and watch your reconciliation times drop.