The Era of "ClickOps" is Dead: Implementing True GitOps in 2019
If you are still SSH-ing into your production servers to run kubectl apply -f, or worse, manually updating Docker image tags in a Jenkins UI, you are already creating technical debt. I remember a deployment last winter for a client in Oslo—a large fintech setup. Their "DevOps" process consisted of a developer running a shell script from their laptop. One bad network packet drop on the notoriously spotty coffee shop Wi-Fi, and the cluster state was left inconsistent. The rollback took four hours.
That is unacceptable. In 2019, the standard is GitOps. It is not just a buzzword from Weaveworks; it is the fundamental shift toward treating your infrastructure exactly like your application code. The premise is simple: Git is the single source of truth. If it is not in the repo, it does not exist in the cluster.
The Push vs. Pull Architecture
Most legacy CI/CD pipelines (Jenkins, Bamboo, GitLab CI) use a "Push" model. The CI server builds the container, pushes it to the registry, and then runs a command to push the deployment manifest to the Kubernetes cluster.
The Security Flaw: To do this, your CI server needs full administrative access (or at least high-level write access) to your Kubernetes API. If your Jenkins server is compromised, your entire production environment is exposed. Furthermore, the CI server doesn't know if the deployment actually succeeded inside the cluster, only that the command was sent.
The Pull model (GitOps) flips this. You install an operator inside your cluster (like Flux or the rising ArgoCD). This operator watches the Git repository. When it sees a change, it pulls it down and applies it. It also watches the container registry for new images. The cluster updates itself.
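In Flux 1.x you opt individual workloads into that automated image roll-out with annotations on the manifest that lives in Git. A minimal sketch (the workload name, labels, and registry hostname are placeholders; the annotation prefix assumes the flux.weave.works scheme used by Flux 1.x):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api                                 # placeholder workload name
  annotations:
    flux.weave.works/automated: "true"         # let Flux bump the image tag in Git for us
    flux.weave.works/tag.my-api: semver:~1.2   # only follow 1.2.x tags of this container
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: registry.example.com/my-api:1.2.0
When a matching new tag shows up in the registry, Flux first rewrites the manifest in Git and then applies it, so the audit trail never breaks.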
This is crucial for Norwegian enterprises dealing with GDPR and Datatilsynet requirements. You no longer give external CI tools credentials to your production environment.
Setting Up Flux on Kubernetes (v1.13+)
Let's look at a practical implementation using Weave Flux. This assumes you are running a standard Kubernetes cluster (Flux supports v1.11 or higher, though v1.13+ is what we recommend and run). At CoolVDS, our KVM-based infrastructure is optimized for the heavy I/O requirements of the etcd key-value store, which is often the bottleneck in these setups.
First, we create a namespace for Flux:
kubectl create ns flux
Now, we add the Flux chart repo (using Helm 2, as Helm 3 is still in early alpha and not production-ready yet):
helm repo add weaveworks https://weaveworks.github.io/flux
helm install --name flux \
  --set git.url=git@github.com:your-org/k8s-config \
  --namespace flux \
  weaveworks/flux
Once deployed, Flux generates an SSH key that you must add to your GitHub/GitLab repository as a Deploy Key with write access (if you want Flux to update image tags automatically).
fluxctl identity --k8s-fwd-ns flux
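With the deploy key in place you do not need to wait for the first polling cycle; fluxctl can trigger a reconciliation on demand (assuming your kubeconfig points at the same cluster):
# Force an immediate Git-to-cluster sync instead of waiting for the next poll
fluxctl sync --k8s-fwd-ns flux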
Pro Tip: Avoid the "Tiller" security nightmare. Helm 2 relies on Tiller, a server-side component that is usually installed with far more privileges than it needs. If you are on a CoolVDS managed instance, we recommend securing Tiller with mutual TLS (helm init --tiller-tls-verify) so that unauthorized workloads inside the cluster cannot talk to it.
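Roughly, that hardening looks like the sketch below. It assumes you have already generated a CA plus server and client certificates (the file names and the tiller service account are placeholders):
# Install Tiller with mutual TLS enforced; certificate paths are placeholders
helm init \
  --service-account tiller \
  --tiller-tls \
  --tiller-tls-verify \
  --tiller-tls-cert ./tiller.cert.pem \
  --tiller-tls-key ./tiller.key.pem \
  --tls-ca-cert ./ca.cert.pem
# Every client call must then present its own certificate, e.g.:
helm ls --tls --tls-cert ./helm.cert.pem --tls-key ./helm.key.pem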
Handling Secrets: The Elephant in the Room
You cannot store raw secrets (API keys, database passwords) in Git. That violates every security protocol known to man. Since we want everything in Git, we need a way to encrypt them.
In 2019, the most robust solution is Sealed Secrets by Bitnami. It uses asymmetric encryption. You encrypt the secret on your laptop using a public key. The controller inside the cluster (which holds the private key) decrypts it into a standard Kubernetes Secret.
Here is how you seal a secret:
# Create a raw secret (dry-run mode, don't actually create it)
kubectl create secret generic my-db-pass \
  --from-literal=password=SuperSecureNorwegianPassword123 \
  --dry-run -o json > my-secret.json
# Seal it
kubeseal < my-secret.json > my-sealed-secret.json
You can safely commit my-sealed-secret.json to your public GitHub repo. Even if someone steals it, they cannot decrypt it without the private key living deep inside your CoolVDS cluster.
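For reference, the sealed output looks roughly like this when rendered as YAML (the ciphertext is truncated and purely illustrative):
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-db-pass
  namespace: default
spec:
  encryptedData:
    password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...   # opaque ciphertext, safe to commit
Inside the cluster, the controller decrypts this into an ordinary Kubernetes Secret with the same name and namespace; the plaintext never touches Git.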
Infrastructure Performance Matters
GitOps is "chatty." Your cluster is constantly polling your Git repository and your container registry. On every sync cycle (every few minutes by default with Flux, and many teams tune the interval down), the controller compares the desired state in Git with the actual state of the cluster, which lives in etcd.
This puts significant load on your control plane. I have seen cheap VPS providers crumble under this load because of CPU steal, where the host node oversells CPU cycles and your VM waits in line for a physical core. When etcd latency spikes, your Kubernetes API starts timing out, and your GitOps sync fails.
| Feature | Standard VPS | CoolVDS NVMe KVM |
|---|---|---|
| Storage Backend | SATA SSD (Shared) | Enterprise NVMe (High IOPS) |
| Virtualization | OpenVZ (Container) | KVM (Hardware Virtualization) |
| etcd Write Latency | Variable (5-40 ms) | Consistent (<2 ms) |
For a reliable GitOps workflow, you need consistent I/O performance. We built CoolVDS on pure KVM with local NVMe specifically to handle the intense I/O patterns of modern orchestration tools. When your cluster attempts to reschedule 50 pods because you merged a Pull Request, you want that operation to happen instantly, not wait for the disk queue to clear.
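If you want to check what your disk can actually deliver before blaming the tooling, run an fsync-heavy benchmark on the node that hosts etcd. Here is a minimal sketch with fio, using the small-write-plus-fdatasync profile commonly recommended for etcd (the directory and sizes are illustrative; etcd's general guidance is to keep 99th-percentile fdatasync latency well under 10ms):
# Mimics etcd's WAL pattern: small sequential writes, each followed by fdatasync
mkdir -p /var/lib/etcd-bench   # use the same filesystem as your etcd data dir
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-bench
# Read the fsync/fdatasync latency percentiles in the output and watch the p99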
A Warning on Docker Registry Latency
In Norway, bandwidth costs can be high, but latency is usually excellent—if you peer correctly. If your GitOps pipeline is pulling images from Docker Hub (US West) every time a pod restarts, you are introducing massive delay.
We recommend hosting a private registry (like Harbor), or at minimum a pull-through cache built on the stock Docker registry, on a separate CoolVDS instance within the same datacenter (e.g., Oslo or Amsterdam) as your Kubernetes cluster. This ensures that image pulls traverse the local LAN or high-speed NIX peering points rather than the public internet.
# Example Docker Registry Config for caching
version: 0.1
proxy:
  remoteurl: https://registry-1.docker.io
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
Deploying a pull-through cache like this can cut image pulls from minutes to seconds on cold nodes.
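One caveat: the cache only helps if every node actually uses it. For the Docker engine that means adding the mirror to daemon.json; note that registry-mirrors only applies to Docker Hub images, and the hostname below is a placeholder for your cache instance:
# Point each node's Docker daemon at the local cache, then restart Docker
# (merge with any existing daemon.json settings instead of overwriting blindly)
cat <<'EOF' > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry-cache.internal:5000"]
}
EOF
systemctl restart docker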
Conclusion
GitOps is not the future; it is the present reality for high-velocity teams. It provides an audit trail for compliance, automated recovery for stability, and separates CI from CD for security. However, it increases the demand on your infrastructure's control plane.
Don't let your "modern" workflow be bottlenecked by legacy hosting. If you are ready to implement Flux or Jenkins X on bare-metal equivalent performance, spin up a high-performance KVM instance today.