Stop SSH-ing into Production: A Battle-Tested GitOps Workflow for 2020

If you are still SSH-ing into your production servers to run docker-compose up -d or, god forbid, editing Nginx configs via nano on a live node, you are the single point of failure. I’ve seen entire infrastructures crumble because a sysadmin made a "quick hotfix" at 2 AM and forgot to commit it to the repo.

The solution isn't just "better discipline"—humans fail. The solution is removing the human element from the deployment mechanics entirely. Enter GitOps.

In this guide, I’m going to walk you through a setup we recently deployed for a high-traffic fintech client here in Oslo. We utilized Kubernetes (k8s 1.18), ArgoCD, and GitLab CI to ensure that the state of the cluster always matches the state of the git repository. No drift. No surprises.

The Architecture: Pull vs. Push

Most CI/CD pipelines in 2020 are still "Push-based." Jenkins or GitLab CI builds a container and runs kubectl apply against your cluster. This is fine, until it isn't. The problem is security and visibility. To do this, your CI runner needs full admin credentials to your production cluster. That is a security nightmare waiting to happen.
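
For contrast, here is roughly what a push-based job looks like (a minimal .gitlab-ci.yml sketch; the job name and the KUBE_CONFIG file variable are illustrative):

# .gitlab-ci.yml -- the push model we are moving away from
deploy_production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # The runner holds cluster-admin credentials -- this is the problem
    - kubectl apply -f k8s/ --kubeconfig "$KUBE_CONFIG"
  only:
    - master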

We prefer the "Pull-based" GitOps approach. An agent inside the cluster (ArgoCD or Flux) watches the git repository. When it sees a change, it pulls it down and applies it. Your CI system never touches the cluster API.

Pro Tip: Data sovereignty is critical here in Norway. By hosting your git repositories and your Kubernetes nodes on CoolVDS NVMe instances in Oslo, you ensure that neither your source code nor your deployment logic ever leaves Norwegian jurisdiction, keeping Datatilsynet happy.

Step 1: The Infrastructure Foundation

GitOps is heavy on the control plane. If you run a Kubernetes master node on cheap, oversold VPS hosting, the etcd latency will kill you. GitOps controllers like ArgoCD constantly diff the cluster state against git. This requires high I/O throughput.
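
Before committing to a host, run a quick sanity check: the fio fdatasync test from the etcd documentation. Point it at a scratch directory on the disk that will hold etcd; the 99th-percentile fdatasync latency should stay well under 10ms:

# Measure fsync latency on the etcd data disk (parameters per the etcd docs)
mkdir -p /var/lib/etcd-bench
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-disk-check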

We use CoolVDS instances with dedicated CPU threads and NVMe storage. For a standard k8s cluster, we configure the kernel to handle high network loads. Here is a snippet from our base sysctl.conf applied via Ansible during node provisioning:

# /etc/sysctl.conf optimization for K8s nodes
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# Increase connection tracking for high-traffic load balancers
net.netfilter.nf_conntrack_max = 131072
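
One gotcha: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so load it first and then apply the settings:

# Load the bridge netfilter module and persist it across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf

# Apply the sysctl settings without rebooting
sysctl -p /etc/sysctl.conf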

Step 2: Installing ArgoCD

Assuming you have a k8s cluster running (we recommend kubeadm for bare-metal or VPS installs), install ArgoCD. Do not expose the ArgoCD dashboard to the public internet. Use port-forwarding or an internal VPN.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
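
To reach the dashboard safely, tunnel it to your workstation instead of creating an Ingress:

# Forward the ArgoCD API/UI to localhost only
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Then browse to https://localhost:8080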

Once the pods are running, retrieve the autogenerated admin password. In current releases it is simply the name of the argocd-server pod. Change it immediately.

# Get the initial password
kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
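
With the password in hand, log in via the argocd CLI and rotate it right away (localhost:8080 assumes the port-forward above is still running):

# Log in through the tunnel and set a real password
argocd login localhost:8080 --username admin --password <pod-name> --insecure
argocd account update-password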

Step 3: Defining the Application

Here is where the magic happens. Instead of writing a README that says "please install Redis," we define the infrastructure as code.

Create a file named application.yaml. This tells ArgoCD to watch a specific repo and path.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:your-org/infra-manifests.git'
    targetRevision: master
    path: k8s/overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
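
Register the application by applying this manifest. Fittingly, it should be the last manual kubectl apply you ever run against this cluster:

kubectl apply -n argocd -f application.yaml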

Note the selfHeal: true flag. If a junior dev manually deletes a Deployment via kubectl, ArgoCD will notice the drift and immediately recreate it. That is the power of declarative, self-healing infrastructure.
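
You can watch the heal loop in action. A quick sketch, assuming the production overlay contains a Deployment named payment-api (a hypothetical name):

# Simulate drift: delete a Deployment by hand
kubectl -n production delete deployment payment-api

# The app briefly reports OutOfSync, then ArgoCD recreates the Deployment
argocd app get nordic-payment-gateway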

Step 4: Handling Secrets (The Norwegian Context)

You cannot commit raw secrets to Git. In 2020, the best practice is Sealed Secrets by Bitnami or encrypting them with git-crypt. However, for enterprise compliance, we often see HashiCorp Vault.

For a simpler setup on CoolVDS, we use Sealed Secrets. You encrypt the secret locally, commit the encrypted string, and the controller decrypts it inside the cluster.

# Encrypting a secret locally
kubectl create secret generic db-creds --from-literal=password=SuperSecret123 --dry-run=client -o json | \
kubeseal --controller-name=sealed-secrets-controller --format yaml > sealed-secret.yaml
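
For kubeseal to work, the Sealed Secrets controller must already be running in the cluster. Something like this installs it; check the project's releases page for the current version tag rather than trusting the one pinned below:

# Install the Sealed Secrets controller (verify the latest release tag first)
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.12.4/controller.yaml

# The sealed-secret.yaml produced above is now safe to commit
git add sealed-secret.yaml && git commit -m "Add sealed db-creds"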

Performance Tuning for Low Latency

Since we are targeting users in Oslo and Northern Europe, network latency is paramount. A standard Kubernetes Ingress controller (like Nginx) also needs tuning to survive the frequent rolling restarts a GitOps sync cycle triggers without dropping client connections.

Inside your Nginx Ingress ConfigMap, ensure you tune the keepalive and buffer sizes. The default values are often too low for high-throughput microservices.

apiVersion: v1
kind: ConfigMap
metadata:
  # Name and namespace match the stock ingress-nginx manifests; adjust for your install
  name: nginx-configuration
  namespace: ingress-nginx
data:
  keep-alive: "60"
  upstream-keepalive-connections: "100"
  client-body-buffer-size: "64k"
  proxy-body-size: "10m"
  worker-processes: "4"
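
The controller watches this ConfigMap and reloads Nginx automatically. To verify the values landed in the rendered config (the deployment name varies by install method; nginx-ingress-controller is the assumption here):

# Dump the live nginx.conf and confirm the keepalive tuning took effect
kubectl -n ingress-nginx exec deploy/nginx-ingress-controller -- nginx -T | grep keepalive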

Why Infrastructure Matters for GitOps

GitOps creates a continuous feedback loop. Your cluster is constantly pulling data, verifying checksums, and reporting status. This generates a significant amount of background I/O and CPU interrupts.

We’ve benchmarked this. On standard shared hosting with "noisy neighbors," we often see ArgoCD sync operations time out because the CPU steal time spikes. This leaves your cluster in an inconsistent state.
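
You can spot this condition yourself: watch the st (steal) column, and treat anything consistently above a few percent as a red flag:

# Sample CPU stats every second for five seconds; the last column (st) is steal time
vmstat 1 5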

At CoolVDS, our KVM architecture ensures strict resource isolation. When you buy 4 vCPUs, you get 4 vCPUs. This stability is required for the control loops of Kubernetes and ArgoCD to function correctly without lag. Plus, with our datacenter located directly on the main fiber ring in Oslo, your latency to NIX (Norwegian Internet Exchange) is typically under 2ms.

Final Thoughts

Moving to GitOps isn't just a trend; it's a survival strategy for modern DevOps teams. It provides an audit trail for every change (via Git history), automatic disaster recovery (via selfHeal), and better security (no cluster credentials in CI).

Don't let legacy infrastructure bottleneck your modern workflow. You need compute that keeps up with your code.

Ready to build a cluster that doesn't sleep? Deploy a high-performance NVMe KVM instance on CoolVDS today and get your GitOps pipeline running in minutes.