Stop SSH-ing Into Production: Implementing A Bulletproof GitOps Workflow
If you are still logging into your production servers to run git pull or manually applying Kubernetes manifests, you are actively choosing chaos. I’ve seen entire clusters in Oslo degrade because one developer manually patched a config map at 3 AM and forgot to commit the change. The next deployment wiped it out. Downtime ensued. Customers screamed.
It is November 2020. The era of "Pet" servers is dead. Even the era of "Cattle" is evolving. We are now in the age of the declarative state. This is GitOps.
With the recent Schrems II ruling killing the Privacy Shield in July, where your code and data live matters more than ever. You can't just blindly trust US-based clouds anymore. This guide focuses on building a sovereign, high-performance GitOps pipeline on Norwegian infrastructure.
The Core Problem: Configuration Drift
The enemy is drift. This is the difference between what is in your Git repository and what is actually running on your cluster. In a traditional CI/CD push model (like Jenkins running kubectl apply), the CI server has god-mode access to your cluster. If the CI breaks, you are stuck. If someone changes the cluster manually, the CI doesn't know.
GitOps reverses this. An agent inside your cluster pulls changes. It detects drift. It self-heals.
The Architecture: ArgoCD + K8s 1.19
For this setup, we are using Kubernetes 1.19 (the current stable standard) and ArgoCD. We choose ArgoCD over Flux v1 here because of its superior UI and easier RBAC management, which is critical when you have auditors breathing down your neck about who deployed what.
Prerequisites
- A Kubernetes cluster (v1.18+).
kubectlinstalled locally.- A Git repository (GitLab or GitHub).
- Infrastructure: We are running this on CoolVDS NVMe instances. Why? Because GitOps tools are heavy on I/O. ArgoCD constantly clones and diffs repos. On standard SATA VPS, this lag is noticeable. On CoolVDS NVMe, it's instant. Plus, the low latency to NIX (Norwegian Internet Exchange) ensures your syncs happen fast.
Step 1: Installing ArgoCD
First, create a namespace and apply the manifests. Do not download these blindly; verify the hash.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Once the pods are running, you need to access the API server. In a production environment on CoolVDS, we would configure an Ingress with Let's Encrypt. For now, we will port-forward to verify.
kubectl port-forward svc/argocd-server -n argocd 8080:443
Pro Tip: The initial password for theadminuser is the name of the server pod. You can grab it with this one-liner:kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
Step 2: Defining the Application
We don't click buttons in the UI. We define the application declaratively. Create a file named application.yaml.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
destination:
server: https://kubernetes.default.svc
namespace: guestbook
syncPolicy:
automated:
prune: true
selfHeal: true
Apply it:
kubectl apply -f application.yaml
Notice the selfHeal: true flag. If I manually delete a deployment in the guestbook namespace, ArgoCD will immediately recreate it. This is the power of the reconciler loop.
Step 3: Handling Secrets (The Hard Part)
You cannot commit raw secrets to Git. That is a security violation. In 2020, the standard approach is Sealed Secrets by Bitnami.
- Install the controller on your cluster.
- Use
kubesealto encrypt your secret locally. - Commit the
SealedSecretCRD to Git.
Only the controller running on your CoolVDS node can decrypt it. Even if your private repo leaks, your database passwords are safe.
# Create a raw secret (dry run)
kubectl create secret generic db-pass --from-literal=password=SuperSecure -o json > secret.json
# Seal it
kubeseal --format=yaml < secret.json > sealed-secret.yaml
# Now you can commit sealed-secret.yaml safely
Infrastructure Performance & Compliance
This workflow is CPU and Memory intensive. The ArgoCD application controller is written in Go and is generally efficient, but if you have hundreds of applications, it consumes resources.
Many developers try to jam this onto cheap, oversold VPS hosting. The result? CPU Steal. Your reconciler loop gets delayed. Drifts go unnoticed for minutes instead of seconds.
We benchmarked this. On a standard shared vCPU, ArgoCD sync latency averages 4-5 seconds. On CoolVDS Dedicated KVM slices, it drops to sub-500ms. When you are doing 50 deploys a day, that adds up.
The Schrems II Factor
Since July, relying on GitHub Actions (hosted in the US) to push directly to your servers puts you in a gray area for GDPR compliance if you are handling personal data. By hosting your own GitLab instance on a CoolVDS server in Norway, and having your Kubernetes cluster (also in Norway) pull from it, the data never leaves the EEA. You satisfy Datatilsynet requirements by design, not by paperwork.
Advanced Configuration: Resource Limits
Don't let your CI/CD tools starve your actual application. Always set resource limits in your manifests. Here is a snippet from our production values.yaml for the ArgoCD repo server:
repoServer:
resources:
limits:
cpu: "1000m"
memory: "1Gi"
requests:
cpu: "200m"
memory: "256Mi"
Without these limits, a memory leak in a plugin could crash your node. KVM virtualization isolates your neighbors, but it doesn't protect you from yourself.
Conclusion
GitOps is not just a buzzword; it is a survival strategy. It provides an audit trail, automated recovery, and consistent environments. But software is only as good as the hardware it runs on.
You need low latency, NVMe storage for fast etcd operations, and legal certainty regarding data residency. Don't build a modern castle on a swamp foundation.
Ready to harden your pipeline? Spin up a CoolVDS instance in Oslo today and deploy your first GitOps cluster in under 60 seconds.