Stop Using kubectl: Architecting Bulletproof GitOps Workflows in 2025
I still remember the silence on a Zoom call three years ago. A senior dev had just run a manual `kubectl apply -f .` targeting what he thought was the staging cluster. It wasn't. He wiped the ingress configurations for a major Norwegian e-commerce client during Black Week. That outage cost roughly 400,000 NOK per hour.
If you are SSH-ing into your servers to update containers or manually editing ConfigMaps in 2025, you aren't managing infrastructure; you're gambling. GitOps isn't just a buzzword; it is the only way to maintain sanity when managing distributed systems across hybrid clouds or robust VPS setups in Norway.
This guide cuts through the marketing fluff. We are looking at the exact architecture required to run a latency-sensitive, compliant GitOps pipeline using ArgoCD, Kustomize, and high-performance infrastructure.
The Core Principle: Git is the Only Truth
The state of your cluster must match the state of your Git repository. Period. If a node goes down or a datacenter in Oslo has a power blip, your recovery strategy shouldn't be "call the lead architect." It should be "re-sync the cluster."
However, latency matters here. A GitOps operator (like ArgoCD or Flux) is constantly polling your git repo and your internal Kubernetes API. If your control plane is hosted on sluggish spinning disks, that reconciliation loop lags. You get drift. Drift leads to outages.
Pro Tip: Your Kubernetes control plane (specifically `etcd`) requires extremely low disk write latency. If the 99th percentile of WAL `fsync` climbs above roughly 10ms, etcd starts missing heartbeats and triggering leader elections. This is why we standardize on NVMe storage at CoolVDS. Standard SSDs often choke under the I/O pressure of a busy GitOps controller.
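If you want an early warning before slow disks turn into drift and outages, you can alert on that fsync latency directly. Below is a minimal sketch, assuming you run the Prometheus Operator (e.g. kube-prometheus-stack) and already scrape etcd metrics; the rule name, namespace, and 10ms threshold are placeholders to adapt to your own setup.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: etcd-disk-latency
  namespace: monitoring
spec:
  groups:
    - name: etcd.disk
      rules:
        - alert: EtcdSlowFsync
          # 99th percentile of etcd WAL fsync duration over 5 minutes, per instance.
          expr: |
            histogram_quantile(0.99,
              sum by (le, instance) (rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))
            ) > 0.01
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "etcd fsync p99 above 10ms on {{ $labels.instance }}"
```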
Step 1: The Repository Structure
In 2025, the debate between Monorepo and Polyrepo is largely settled for infrastructure: use a Hybrid approach. Keep application code in separate repos, but keep your manifests in a dedicated infrastructure repo.
Here is the directory structure that scales without becoming a nightmare:
```
├── apps/
│   ├── base/
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   └── overlays/
│       ├── production-oslo/
│       │   ├── kustomization.yaml
│       │   └── patch-replicas.yaml
│       └── staging-trondheim/
│           └── kustomization.yaml
├── clusters/
│   ├── oslo-primary/
│   └── bergen-dr/
└── platform/
    ├── argocd/
    └── monitoring/
```
This structure leverages Kustomize to minimize code duplication. You define the `base` once, and overlay environment specifics (like replica counts or ingress domains) for your production nodes.
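To make that concrete, here is a minimal sketch of what the `production-oslo` overlay could contain. The deployment name `my-app` and the replica count are placeholders, and note that `base/` also needs its own `kustomization.yaml` listing `deployment.yaml` and `service.yaml`.

```yaml
# apps/overlays/production-oslo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # reuse the shared deployment.yaml and service.yaml
patches:
  - path: patch-replicas.yaml
---
# apps/overlays/production-oslo/patch-replicas.yaml
# Strategic merge patch: only the fields that differ from base are listed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # must match the name used in base/deployment.yaml
spec:
  replicas: 5
```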
Step 2: The Reconciliation Engine (ArgoCD)
We prefer ArgoCD over Flux for one specific reason: visibility. When you run managed hosting environments for clients who demand SLAs, being able to see the sync status visually is invaluable.
Here is a production-ready `Application` manifest. Note the sync policy: automated pruning is dangerous if you aren't confident in your manifests, but it is essential for true GitOps.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:my-org/infra-manifests.git'
    targetRevision: HEAD
    path: apps/overlays/production-oslo
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual kubectl changes back to the Git state
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true # prune only after the rest of the sync is applied
```
Handling Secrets in a GitOps World
NEVER commit secrets to Git. Not even encrypted ones if you can avoid it. In 2025, the standard is the External Secrets Operator (ESO). ESO fetches secrets from a secure vault (like HashiCorp Vault or a cloud provider) and injects them as native Kubernetes secrets.
Since data sovereignty is critical in Norway (thanks to stricter enforcement of GDPR and Schrems II), hosting a self-hosted Vault on a secure, private CoolVDS instance ensures your secrets never leave Norwegian soil.
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
spec:
  refreshInterval: "1h"
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-secret
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/production/db
        property: password
```
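The `vault-backend` store referenced above has to be defined somewhere. Here is a rough sketch of a `ClusterSecretStore` pointing at a self-hosted Vault using the Kubernetes auth method; the server URL, KV mount, role, and service account names are assumptions you will need to replace with your own.

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.internal.example.no:8200"  # placeholder: your private Vault endpoint
      path: "secret"       # KV mount point
      version: "v2"        # KV engine version (matches secret/data/... keys)
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "external-secrets"
          serviceAccountRef:
            name: "external-secrets"
            namespace: "external-secrets"
```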
Step 3: The CI/CD Handshake
GitOps handles the "CD" (Continuous Delivery). But you still need CI (Continuous Integration). Your CI pipeline should never touch the cluster directly. Instead, it should push a new container image and then commit a change to the *infrastructure repository*.
Here is a concise GitHub Actions snippet that updates the image tag in your manifest repo. This maintains the audit trail essential for compliance.
```yaml
name: Deploy to Production
on:
  push:
    tags:
      - 'v*'
jobs:
  update-manifests:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout Infra Repo
        uses: actions/checkout@v4
        with:
          repository: my-org/infra-manifests
          token: ${{ secrets.GIT_PAT }}
      - name: Update Image Tag
        run: |
          cd apps/overlays/production-oslo
          kustomize edit set image my-app=my-registry/app:${{ github.ref_name }}
      - name: Commit and Push
        run: |
          git config user.name "GitOps Bot"
          git config user.email "bot@coolvds.com"
          git commit -am "update image to ${{ github.ref_name }}"
          git push
```
Infrastructure Performance & The "Noisy Neighbor" Problem
You can have the best GitOps pipeline in the world, but it will fail if the underlying hardware is inconsistent. When ArgoCD attempts to apply hundreds of resources simultaneously, it creates a spike in CPU and I/O wait times.
On shared hosting or oversold VPS providers, this is where deployments hang. You'll see `kubectl` timeouts or "Context Deadline Exceeded" errors. This is usually due to "CPU steal", where the host hypervisor is throttling your VM to serve other tenants.
At CoolVDS, we configure our KVM hypervisors to prevent this. We use strict resource isolation. When you run a heavy, low-latency workload or a massive sync operation, the CPU cycles are yours, not shared.
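You don't have to take steal time on faith, either; it shows up in node_exporter metrics. Here is a minimal sketch of an alert on it, again assuming the Prometheus Operator is installed; the 10% threshold is an arbitrary starting point, not an official limit.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-steal
  namespace: monitoring
spec:
  groups:
    - name: node.steal
      rules:
        - alert: HighCpuSteal
          # Fraction of CPU time stolen by the hypervisor, averaged per node.
          expr: |
            avg by (instance) (
              rate(node_cpu_seconds_total{mode="steal"}[5m])
            ) > 0.10
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "CPU steal above 10% on {{ $labels.instance }}"
```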
| Feature | Generic Cloud VPS | CoolVDS Architecture |
|---|---|---|
| Storage Backend | Networked Ceph (High Latency) | Local NVMe RAID 10 |
| IOPS Consistency | Fluctuates | |