Stop Doing "kubectl apply": A Battle-Tested GitOps Workflow for Norwegian Enterprises
If you are still SSH-ing into your production servers to restart a service, or manually running kubectl apply -f deployment.yaml from your laptop, you are building a house of cards. I’ve seen it happen too many times: a senior engineer leaves the company, and suddenly no one knows why the frontend-prod namespace has a different config map than what is in the repo. That is the "Snowflake Server" problem, and in 2022, it is inexcusable.
The only way to manage modern infrastructure, especially when dealing with the strict compliance landscape here in Europe (thank you, Schrems II), is GitOps. The concept is simple: Git is the single source of truth. If it’s not in the repo, it shouldn't exist in the cluster.
In this guide, we are going to architect a GitOps pipeline that is robust enough for high-traffic Norwegian platforms but simple enough to maintain without a dedicated team of ten site reliability engineers (SREs).
The Architecture of Truth
The core principle here is the reconciliation loop. We are moving from a "Push" model (CI pipeline runs a script to deploy) to a "Pull" model (Cluster agent watches Git and pulls changes). Why? Security and consistency.
In a Push model, your CI runner needs cluster-admin credentials. If your CI gets compromised, your entire infrastructure is gone. In a Pull model, the cluster reaches out to Git. Credentials stay inside the cluster.
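As a concrete illustration of "credentials stay inside the cluster", a declarative ArgoCD repository credential is just a Kubernetes Secret carrying a special label, living only in the cluster. This is a minimal sketch; the secret name and the deploy key are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: infra-manifests-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:your-org/infra-manifests.git
  # Read-only deploy key; it never leaves the cluster
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----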
Here is the stack we are deploying today:
- Infrastructure: CoolVDS NVMe Instances (KVM Virtualization)
- Orchestration: Kubernetes 1.23
- GitOps Controller: ArgoCD v2.3
- Secret Management: Bitnami Sealed Secrets
Step 1: The Control Plane Foundation
GitOps controllers are "chatty." ArgoCD constantly polls your Git repositories and compares the manifests against the live state of the Kubernetes API. If your underlying infrastructure suffers from "CPU Steal"—a common issue with oversold budget VPS providers—your reconciliation loops will lag. You'll push code, and nothing happens for 5 minutes.
Pro Tip: Never run a GitOps control plane on shared, burstable instances. The etcd database requires low-latency disk writes. We use CoolVDS instances because the NVMe storage guarantees low I/O wait times, preventing the dreaded "CrashLoopBackOff" on the ArgoCD repo-server.
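Before committing to a node, it is worth a quick sanity check. CPU steal shows up in the st column of vmstat, and disk write latency can be spot-checked with ioping (assuming it is installed; the etcd data directory path may differ on your setup):

# CPU steal: watch the last column ("st"); a consistently high value means you share CPU with noisy neighbours
vmstat 1 5

# Write latency on the volume where etcd keeps its data (path may vary)
ioping -c 10 /var/lib/etcd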
Step 2: Bootstrapping ArgoCD
Let's get practical. Assuming you have a clean K8s cluster running on your nodes, install ArgoCD into a dedicated namespace.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
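Rather than polling by hand, you can block until the control-plane pods report ready (the timeout value is arbitrary):

kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s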
Once the pods are running, you need to access the UI. In a production environment, you would put this behind an Ingress with an SSL certificate. For now, let's verify it works locally:
kubectl port-forward svc/argocd-server -n argocd 8080:443
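For the production setup mentioned above, a minimal Ingress sketch looks like this. It assumes an NGINX ingress controller and cert-manager with a ClusterIssuer named letsencrypt-prod; both of those, and the hostname argocd.example.no, are placeholders for whatever you actually run:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # argocd-server terminates TLS itself, so talk HTTPS to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - argocd.example.no
      secretName: argocd-server-tls
  rules:
    - host: argocd.example.no
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443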
The initial admin password is auto-generated at install time and stored in a Kubernetes secret (it is no longer the name of the server pod, as it was in pre-2.0 releases). Grab it with this command:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
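With the password in hand, you can also log in through the argocd CLI against the port-forward; --insecure is only acceptable for this local check, and the angle-bracket placeholder is yours to replace:

argocd login localhost:8080 --username admin --password <initial-admin-password> --insecure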
Step 3: Defining the Application
This is where the magic happens. We don't deploy pods directly; we deploy an Application custom resource (defined by ArgoCD's CRDs) that tells ArgoCD where to find the actual manifests.
Create a file named production-app.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-api-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:your-org/infra-manifests.git'
    targetRevision: HEAD
    path: k8s/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Crucial Flags:
- prune: true: If you delete a file in Git, ArgoCD deletes the resource in the cluster. Without this, you get orphaned resources consuming RAM.
- selfHeal: true: If someone manually edits a deployment (e.g., kubectl edit deploy), ArgoCD immediately reverts the change. This forces discipline.
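Registering the application is a one-time push of this manifest; after that, ArgoCD pulls everything else. Yes, it is one last kubectl apply, used here only to bootstrap the thing that replaces kubectl apply:

kubectl apply -n argocd -f production-app.yaml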
The Compliance Headache: Secrets in Git
You cannot commit secret.yaml to Git. That is a security violation that will fail any audit, especially with the tightened GDPR scrutiny we are seeing in 2022.
The solution is Sealed Secrets. You encrypt the secret locally using the cluster's public key. The result is a text block safe to commit to GitHub. Only the controller running inside your cluster (on your secure CoolVDS instance) has the private key to decrypt it.
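The controller itself has to be running in the cluster before the workflow below will work. A minimal install sketch; the version in the URL is an example, so pin it to whatever the bitnami-labs/sealed-secrets releases page currently lists as stable:

# Example version; check the sealed-secrets releases page for the current controller.yaml
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.17.5/controller.yaml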
Workflow:
1. Install the kubeseal CLI.
2. Create a raw secret locally (dry-run):
   kubectl create secret generic db-creds --from-literal=password=SuperSecret123 --dry-run=client -o yaml > secret.yaml
3. Seal it:
   kubeseal --format=yaml < secret.yaml > sealed-secret.yaml
Now, sealed-secret.yaml is safe to push to your public or private repo. If your laptop is stolen, the data is useless.
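For reference, the committed file has roughly this shape. The encryptedData value is a long ciphertext blob, truncated here as a placeholder, and the production namespace is an assumption to match the Application above:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: production
spec:
  encryptedData:
    password: AgB3...  # ciphertext placeholder, truncated
  template:
    metadata:
      name: db-creds
      namespace: production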
Latency Matters: The Nordic Edge
When running a GitOps workflow, your cluster is constantly pulling container images and syncing git repos. If your servers are located in Frankfurt or Amsterdam, but your dev team and users are in Oslo or Bergen, you are adding unnecessary latency to every operation.
More importantly, data residency is a legal minefield today. Keeping your Kubernetes etcd state—which contains all your configuration data—on servers physically located in Norway or within strict EU jurisdictions simplifies your compliance posture regarding the Datatilsynet guidelines.
I recently migrated a client from a US-based cloud provider to CoolVDS. Their ArgoCD sync times dropped from 12 seconds to 3 seconds. Why? Because the network hop from their GitLab instance (also hosted in the EU) to the node was shorter, and the NVMe disk speed meant the controller could unmarshal the YAML manifests instantly.
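If you want to verify that on your own setup, a crude but effective check is timing the connection and full fetch from the node to your Git host; gitlab.example.com is a placeholder:

curl -o /dev/null -s -w 'connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' https://gitlab.example.com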