Stop Trusting Manual Deployments: The GitOps Reality Check
If you are still SSHing into your production server to pull a git repo, or worse, running kubectl apply -f . from your laptop, you are operating on borrowed time. I have seen entire clusters in Oslo go dark because a developer's local kubectl version didn't match the server, or because a manual "quick fix" wasn't committed to the repository.
In 2020, with the complexity of microservices exploding, the "Push" model of CI/CD is showing its cracks. We need to move to a "Pull" model. We need GitOps. This isn't just a buzzword; it's the only way to manage infrastructure at scale without losing your mind, and it gives you the kind of audit trail that GDPR, and Datatilsynet, expect you to produce.
The Core Problem: Configuration Drift
The enemy of stability is drift. You define your state in Terraform or Kubernetes manifests, but then an emergency happens. You patch a ConfigMap manually at 02:00 and go back to bed. Two weeks later, someone re-applies the manifests from Git, your untracked fix is silently wiped out, and the service crashes.
GitOps enforces a strict rule: Git is the Single Source of Truth. If it's not in Git, it doesn't exist in the cluster.
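Before an operator is in place, you can at least make the drift visible. A quick manual check, assuming the manifests are checked out locally:

# Compare what Git says against what is actually running in the cluster
kubectl diff -f deployment.yaml
# An exit code of 1 means the live state has drifted from the manifest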
Pro Tip: When setting up your control plane, latency matters. If your GitOps operator (like ArgoCD) resides on a server in Frankfurt but manages a cluster in Oslo, network jitter can cause sync timeouts during massive apply operations. Host your control plane close to your workload. On CoolVDS, we see 1-2ms latency to NIX (Norwegian Internet Exchange), making sync operations instantaneous.
Architecture: The 2020 Standard Stack
For a robust setup available right now, I recommend the following stack. It balances open-source community support with enterprise reliability:
- VCS: GitLab (Self-hosted or SaaS)
- CI: GitLab CI (for building images/testing)
- CD/GitOps: ArgoCD v1.5+
- Infrastructure: KVM-based Virtualization (CoolVDS)
Step 1: The CI Pipeline (Build Only)
Your CI pipeline should no longer touch the production cluster. Its only job is to run tests, build the Docker image, push it to a registry, and update the manifest repo. This cleanly separates credentials: the CI runner never needs cluster-admin access.
Here is a stripped-down .gitlab-ci.yml example focusing on the image build:
stages:
  - build
  - update-manifest

build_image:
  stage: build
  image: docker:19.03.8
  services:
    - docker:19.03.8-dind
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

update_manifest:
  stage: update-manifest
  image: bitnami/git:2.26.0
  script:
    - git clone "https://oauth2:${GIT_ACCESS_TOKEN}@gitlab.com/my-org/k8s-manifests.git"
    - cd k8s-manifests
    # Replace the whole image line; '|' as delimiter avoids clashing with slashes in the registry path
    - sed -i "s|image:.*|image: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA|" deployment.yaml
    - git config user.email "ci-bot@coolvds.com"
    - git config user.name "CI Bot"
    - git commit -am "Update image to $CI_COMMIT_SHA"
    - git push origin master
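One caveat on that last job: if the manifest repository runs a pipeline of its own, add [skip ci] to the bot's commit message so the push doesn't trigger an endless build loop.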
Step 2: The ArgoCD Configuration
Inside your Kubernetes cluster (running on a performant VPS), ArgoCD watches the manifest repository. The moment the CI bot pushes the commit, ArgoCD detects the deviation and syncs the state.
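If the controller isn't running yet, the upstream install manifests are the quickest path. A minimal sketch using the project's stable manifest (in production, pin the ref to a release tag you have validated, e.g. a v1.5.x tag):

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml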
ArgoCD also needs a stable underlying OS; I prefer Ubuntu 18.04 LTS or the brand new 20.04 LTS for the host nodes. With the controller running, here is how you define the Application it should watch:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://gitlab.com/my-org/k8s-manifests.git'
    targetRevision: HEAD
    path: env/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
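Apply it like any other manifest and let the controller take over. Assuming you saved the definition above as application.yaml and have logged in with the argocd CLI, verification looks like this:

kubectl apply -f application.yaml
argocd app get nordic-payment-gateway    # shows sync status and health
argocd app sync nordic-payment-gateway   # force a sync instead of waiting for the next poll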
Handling Secrets (The Pain Point)
You cannot commit raw secrets to Git. This is the biggest hurdle for teams adopting GitOps in 2020. While HashiCorp Vault is powerful, it is often overkill for smaller deployments.
The pragmatic solution is Bitnami Sealed Secrets. It uses asymmetric encryption. You encrypt with a public key (safe for Git), and only the controller running inside your cluster can decrypt it.
Installation:
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.12.1/controller.yaml
Workflow:
- Create a standard secret locally: kubectl create secret generic db-pass --from-literal=password=SuperSecure -o yaml --dry-run > secret.yaml
- Seal it: kubeseal < secret.yaml > sealed-secret.json
- Commit sealed-secret.json to Git.
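Once the controller has decrypted it, your workloads consume it like any ordinary Secret. A minimal sketch of the container spec, assuming the db-pass secret from above and a hypothetical payment-gateway image:

# Excerpt from a Deployment's pod spec
containers:
  - name: payment-gateway                                     # hypothetical container name
    image: registry.gitlab.com/my-org/payment-gateway:1.0.0   # hypothetical image
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-pass        # the Secret created in-cluster by the Sealed Secrets controller
            key: password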
Why Infrastructure Choice Dictates GitOps Success
GitOps relies heavily on the control plane. The controller (ArgoCD or Flux) is constantly polling Git repositories and the Kubernetes API. If you run this on a cheap, oversold VPS where "CPU Steal" is high, your synchronization loops will lag. You might push code, and wait 5 minutes for the cluster to react.
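Not sure whether your current provider oversells? Steal time is visible from inside the guest. A quick check with standard Linux tools (mpstat assumes the sysstat package is installed):

vmstat 1 5          # watch the 'st' column
mpstat -P ALL 1 3   # '%steal' per core; anything consistently above 1-2% is a red flag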
This is where hardware isolation matters. We built CoolVDS on KVM (Kernel-based Virtual Machine) technology rather than container-based virtualization like OpenVZ. With KVM, your resources are hard-reserved. When your GitOps operator needs to compute a diff between 500 microservices and the git repo, the CPU cycles are there immediately.
Performance Comparison: Control Plane Latency
| Metric | Standard Container VPS | CoolVDS (KVM + NVMe) |
|---|---|---|
| Git Clone Time (Large Repo) | 12.5s | 2.1s |
| ArgoCD Sync Loop | Variable (spikes to 30s) | Consistent < 2s |
| I/O Wait | High (Noisy Neighbors) | Near Zero |
The Norwegian Context: Data Sovereignty
We are operating in an era of uncertainty regarding data transfers. With the Privacy Shield framework under legal scrutiny, keeping data within Norwegian borders is a strategic safety net. By hosting your GitOps control plane and your production data on servers physically located in Oslo, you reduce the scope of GDPR headaches significantly.
Furthermore, running your CI/CD pipelines locally means your intellectual property (source code) isn't traversing the Atlantic unnecessarily.
Final Configuration Checks
Before you commit to this workflow, ensure your Nginx Ingress controller is tuned to handle the websocket connections ArgoCD uses for its UI.
# Inside the nginx-ingress ConfigMap
data:
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
  use-forwarded-headers: "true"
  worker-processes: "4"   # tune to match your CoolVDS vCPU count
Don't let legacy deployment scripts hold your infrastructure back. The industry is moving to declarative specifications. Test your GitOps workflow on a platform that respects the performance requirements of modern orchestration.
Ready to eliminate configuration drift? Spin up a high-performance KVM instance on CoolVDS and deploy ArgoCD today.