Stop kubectl apply: Bulletproof GitOps Workflows for Nordic Infrastructures
I still remember the silence in the Slack channel. It was 16:45 on a Friday—classic rookie timing. A junior sysadmin had manually applied a hotfix to our production cluster in Oslo, bypassing the repo. Monday morning, the autoscaler kicked in, the nodes rotated, and the manual change evaporated. The checkout service 404'd for twenty minutes. We lost transactions, but worse, we lost trust.
If you are SSHing into servers or running kubectl apply -f from your laptop in 2022, you are doing it wrong. Your infrastructure is fragile. Your state is drifting. You are one coffee spill away from an outage.
This is the era of GitOps. Git is the single source of truth; everything else is just a convergence loop. In this guide, we’re going to build a workflow that handles high-velocity deployments without sacrificing stability or compliance, specifically tailored for teams operating under strict EU/EEA regulations.
The Architecture of Truth
The core principle is simple: If it's not in Git, it doesn't exist.
In a proper GitOps setup, you push code to a repository. A CI pipeline (like GitHub Actions or GitLab CI) tests it and builds a container. Then, instead of pushing that container to the server, the pipeline updates a manifest repository. An agent inside your cluster (the CD part) sees the change and pulls the new state down.
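In practice, that "change" is usually a one-line edit to a deployment manifest in the manifest repo. A minimal sketch (the repo layout and tag are illustrative):

# infra-manifests/k8s/deployment.yaml (fragment)
spec:
  template:
    spec:
      containers:
        - name: app
          # CI rewrites only this line on every merge to main
          image: registry.coolvds.com/app:3f9c2e1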
Why Pull vs. Push?
Traditional CI/CD pushes changes. This requires giving your CI server root access to your cluster. That's a security nightmare. In a Pull-based GitOps model (using ArgoCD or Flux), the cluster reaches out to the repo. No inbound ports needed. No credentials leaking in Jenkins logs.
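To make the pull model concrete: the cluster only needs read access to the manifest repo. With the argocd CLI, registering the repo looks roughly like this (the deploy-key path is illustrative):

# Register the manifest repo with a read-only SSH deploy key.
# The cluster polls Git; nothing outside it ever holds kubectl credentials.
argocd repo add git@github.com:nordic-corp/infra-manifests.git \
  --ssh-private-key-path ~/.ssh/argocd_deploy_key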
The Tooling Stack (May 2022 Edition)
We are focusing on the industry-standard stack for Kubernetes right now:
- Orchestrator: Kubernetes v1.23+
- GitOps Controller: ArgoCD v2.3
- Packaging: Helm 3
- Secrets: Sealed Secrets (Bitnami) or Mozilla SOPS
Step 1: The Infrastructure Foundation
GitOps is heavy. The reconciliation loops burn CPU cycles around the clock. If your control plane is fighting for resources on a crowded host, synchronization lags: you push a fix and ArgoCD sits in the sync queue for five minutes because of CPU steal time.
For our clusters, we rely strictly on CoolVDS NVMe instances. Why? Because KVM virtualization guarantees that my neighbors' heavy loads don't impact my controller's ability to reconcile state. When I need to sync 50 microservices instantly, I need high IOPS, not shared-hosting promises.
Furthermore, for Norwegian clients, latency matters. Hosting your GitOps controller in a CoolVDS Oslo datacenter ensures that the latency between your registry, your repo, and your cluster is negligible. Low latency = faster reconciliation.
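A quick sanity check before blaming your controller: time the round-trip from a cluster node to your Git remote.

# Measure repo round-trip latency from a cluster node
time git ls-remote git@github.com:nordic-corp/infra-manifests.git HEAD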
Step 2: Configuring ArgoCD
Let's get ArgoCD running. Do not install this manually. Even your GitOps tool should be managed by... GitOps (eventually). For the bootstrap, we use Helm.
helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd --namespace argocd --create-namespace \
  --set 'server.extraArgs={--insecure}' \
  --set controller.resources.limits.cpu=1000m \
  --set controller.resources.limits.memory=2Gi
Note the resource limits. Don't starve your controller.
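If the --set flags get unwieldy, the same settings can live in a values file, which itself belongs in Git. A sketch, assuming the chart's standard value keys:

# argocd-values.yaml
server:
  extraArgs:
    - --insecure   # TLS is terminated at the ingress instead
controller:
  resources:
    limits:
      cpu: 1000m
      memory: 2Gi

Then: helm install argocd argo/argo-cd --namespace argocd --create-namespace -f argocd-values.yaml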
Defining the Application
Here is a declarative Application manifest. This tells ArgoCD: "Make the cluster look like this folder in this repo."
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:nordic-corp/infra-manifests.git'
    targetRevision: HEAD
    path: k8s/overlays/production-oslo
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Crucial setting: selfHeal: true. This is the magic. If someone manually changes a service port on the server, ArgoCD detects the drift and immediately reverts it to what is defined in Git. No more cowboy changes.
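You can watch self-heal do its job. A minimal drift test, assuming the app exposes a Service named payment-gateway (the name and port are illustrative):

# Simulate a cowboy change: patch the service port by hand
kubectl -n payments patch svc payment-gateway \
  --type=json -p='[{"op":"replace","path":"/spec/ports/0/port","value":8081}]'

# ArgoCD marks the app OutOfSync, then reverts the port on the next reconciliation
argocd app get payment-gateway | grep -E 'Sync Status|Health Status'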
Step 3: Handling Secrets (The GDPR Headache)
You cannot check plaintext passwords into Git. Leaked credentials that guard personal data put you in breach of GDPR's Article 32 security requirements, and hosting that repo on a US SaaS provider (GitHub, GitLab.com) adds Schrems II transfer questions on top. The Datatilsynet (Norwegian Data Protection Authority) does not look kindly on leaked PII credentials.
In 2022, the robust solution is Sealed Secrets. You encrypt the secret on your laptop using a public key from the cluster. It can only be decrypted by the controller running inside your cluster.
Workflow:
- Install the kubeseal CLI.
- Generate a standard Kubernetes secret locally (dry-run).
- Pipe it through kubeseal.
kubectl create secret generic db-creds \
  --from-literal=password='SuperSecureP@ssw0rd!' \
  --dry-run=client -o yaml | \
kubeseal --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system \
  --format yaml > db-creds-sealed.yaml
The resulting db-creds-sealed.yaml is safe to push to a public Git repo. Even if the NSA reads it, they can't decrypt it without the private key inside your CoolVDS-hosted cluster.
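For reference, the sealed output is itself a Kubernetes manifest, roughly this shape (the namespace and ciphertext are illustrative, and the ciphertext is truncated):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: payments
spec:
  encryptedData:
    password: AgBy8hC...   # only the controller's in-cluster private key can decrypt this
  template:
    metadata:
      name: db-creds
      namespace: payments

Commit that file and ArgoCD applies it like any other manifest; the controller unseals it into a regular Secret inside the cluster.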
Step 4: The CI Pipeline Integration
Your CI pipeline (Jenkins, GitHub Actions) should not touch `kubectl`. Its only job is to update the image tag in the manifest repository. Here is a cleaner way to do it using yq in a GitHub Action:
name: Deploy to Production
on:
  push:
    branches:
      - main
jobs:
  update-manifest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          repository: nordic-corp/infra-manifests
          token: ${{ secrets.PAT }}
      - name: Update Image Tag
        run: |
          # Use yq to update the specific deployment tag
          yq -i e '.spec.template.spec.containers[0].image = "registry.coolvds.com/app:${{ github.sha }}"' k8s/deployment.yaml
          git config user.name "CI Bot"
          git config user.email "ci@coolvds.com"
          git commit -am "bump image tag to ${{ github.sha }}"
          git push
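Since the Application above points at a kustomize overlay (k8s/overlays/production-oslo), an alternative to yq is letting kustomize rewrite the tag, assuming the overlay declares that image:

# Equivalent tag bump via kustomize instead of yq
cd k8s/overlays/production-oslo
kustomize edit set image "registry.coolvds.com/app:${GITHUB_SHA}"

Either way, the commit to the manifest repo is the deployment; ArgoCD does the rest.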
Infrastructure Performance & Compliance
GitOps automates the software, but the hardware must be reliable. We often see "managed Kubernetes" services from large US providers suffer from opaque network routing. Your traffic might bounce through Frankfurt before hitting Oslo.
Hosting your Kubernetes nodes on CoolVDS gives you full control over the networking stack (Linux bridges, iptables). You know exactly where your data resides—on NVMe storage physically located in the region you selected. This is critical for meeting Schrems II requirements regarding data transfer.
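The routing claim is easy to verify. From a node in Oslo, trace the path to your registry or repo and look for detours (the hostname is illustrative):

# Ten-cycle route report; watch for FRA/AMS hops that shouldn't be there
mtr --report --report-cycles 10 registry.coolvds.com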
Pro Tip: Monitor the etcd latency. GitOps controllers hammer the API server, which hammers etcd. If your disk write latency exceeds 10ms, your cluster stability will degrade. This is why we insist on local NVMe storage found on CoolVDS plans rather than network-attached block storage for etcd nodes.
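To qualify a disk before trusting it with etcd, the standard test is fio with fdatasync enabled, which mirrors how etcd writes its write-ahead log. A sketch (the benchmark directory is illustrative):

# Benchmark fdatasync latency the way etcd's WAL experiences it.
# You want the 99th percentile fsync latency under 10ms.
mkdir -p /var/lib/etcd-bench
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 \
    --name=etcd-wal-test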
Final Thoughts
The transition to GitOps isn't just about tools; it's about discipline. It moves the "deployment panic" from Friday evening to a pull request review on Tuesday morning.
However, automation amplifies bad infrastructure. If your underlying VPS has noisy neighbors or poor I/O, your automated pipelines will just fail faster and more frequently. Build your fortress on solid ground.
Ready to build a cluster that doesn't sleep when you do? Spin up a high-performance NVMe KVM instance on CoolVDS today and get your GitOps workflow running in under 5 minutes.