Stop the kubectl apply Madness: Implementing GitOps Workflows in 2018
It’s 3:00 AM on a Friday. Your mobile lights up. The production cluster in Oslo is acting up—latency is spiking, and pods are crash-looping. You log in, groggy, and run kubectl get deployment. Nothing makes sense. The configuration in the live cluster doesn't match what's in your Git repository.
Why? Because three days ago, a junior dev manually tweaked a memory limit to "fix a quick bug" and never committed the change. Welcome to Configuration Drift, the silent killer of uptime.
In the Norwegian tech scene, where reliability is paramount and teams are lean, we cannot afford these manual inconsistencies. With Kubernetes 1.10 having just dropped this week, the ecosystem is maturing. It's time to stop treating our clusters like pets we manually feed, and start treating them like cattle managed by robots. It's time for GitOps.
The Problem: The "Push" Model is Broken
Most of us are still using a CI-driven "push" model: Jenkins or GitLab CI builds a container image, authenticates against your cluster with admin credentials (scary, right?), and runs a deployment command.
This has two massive flaws:
- Security: You are handing your CI server the "Keys to the Kingdom" (cluster admin credentials). If your Jenkins gets compromised, your entire infrastructure is gone.
- Drift: If someone changes the cluster state manually with kubectl, Git has no idea. Your "Source of Truth" is a lie.
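For reference, a typical push-style deploy job looks something like this sketch of a .gitlab-ci.yml stage (the variable names and the frontend Deployment are placeholders, and the runner is assumed to have kubectl available):

# Sketch of a push-style deploy job; TLS/CA configuration omitted for brevity.
deploy_production:
  stage: deploy
  script:
    - kubectl config set-cluster prod --server="$KUBE_SERVER"
    - kubectl config set-credentials ci --token="$KUBE_ADMIN_TOKEN"
    - kubectl config set-context prod --cluster=prod --user=ci
    - kubectl config use-context prod
    - kubectl set image deployment/frontend frontend="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  only:
    - master

Note the cluster-admin token sitting in a CI variable. That token is exactly the thing GitOps lets you delete.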
The Solution: The GitOps "Pull" Model
The term "GitOps" was coined recently by the folks at Weaveworks, but the concept is timeless. Instead of an external tool pushing changes, you run an agent inside your cluster. This agent monitors a Git repository. When it sees a change in Git, it applies it to the cluster.
If a developer manually changes a setting in the cluster? The agent spots the divergence from Git on its next sync and reverts it. Git is the only source of truth.
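Conceptually, the agent boils down to a loop like this (a simplification for illustration only; the real agent also handles image scanning and drift detection, and the paths here are placeholders):

# What a pull-based agent effectively does, stripped to the bone.
while true; do
  git -C /opt/k8s-config pull --quiet origin master
  kubectl apply -f /opt/k8s-config/clusters/oslo-prod/releases/
  sleep 300   # sync interval
done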
Why This Matters for GDPR (May 2018 Deadline)
We are less than two months away from the General Data Protection Regulation (GDPR) enforcement date of May 25th, and the Norwegian Datatilsynet is ramping up its supervisory work ahead of the deadline. By using a Pull model (GitOps), you remove the need to store production credentials in your CI/CD pipeline. This significantly reduces your attack surface and helps demonstrate "Privacy by Design" compliance.
Tutorial: Setting up Weave Flux on CoolVDS
Let's implement a basic GitOps flow using Weave Flux, the current standard for this workflow. We assume you are running a Kubernetes 1.9+ cluster on CoolVDS instances.
Pro Tip: We run our Kubernetes control planes on CoolVDS NVMe instances. Etcd (the brain of Kubernetes) is extremely sensitive to disk write latency. If fsync takes too long, your cluster leader election will fail. Do not try this on standard spinning rust VPS providers.
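Before trusting a node with etcd, it is worth measuring fsync latency yourself. A quick fio run that roughly mimics etcd's write-ahead-log pattern will tell you where you stand (the block size and file size below are approximations, not an official benchmark); you want the 99th-percentile fsync latency well under 10 ms:

# Write small blocks and fsync after each one, roughly how etcd's WAL behaves.
mkdir -p /var/lib/etcd-bench
fio --name=etcd-fsync-test --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300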
Step 1: The Repository Structure
Your infrastructure repo should look like this:
/clusters
  /oslo-prod
    /namespaces
    /releases
      frontend.yaml
      backend.yaml
      redis.yaml
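Each file under /releases is an ordinary Kubernetes manifest. A minimal frontend.yaml could look like this (the namespace, image and resource numbers are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: oslo-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: registry.example.com/frontend:1.4.2
        resources:
          limits:
            memory: 256Mi
            cpu: 500m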
Step 2: Installing Flux
We will install the agent with kubectl and use fluxctl to manage it afterwards. First, deploy the manifests into your cluster:
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flux/master/deploy/flux-account.yaml
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flux/master/deploy/flux-deployment.yaml
(Note: for anything beyond a demo, pin to a tagged release instead of master; we pull from master here for brevity.)
You need to configure the deployment arguments to point to your Git repo. Edit the deployment:
spec:
  containers:
  - name: flux
    image: quay.io/weaveworks/flux:1.2.0
    args:
    - --git-url=git@github.com:your-org/k8s-config.git
    - --git-branch=master
    - --git-path=clusters/oslo-prod
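After the pod restarts, tail its logs to confirm it can reach and clone your repository (this assumes the agent ended up in the flux namespace; adjust if yours lives elsewhere):

kubectl -n flux logs deployment/flux -f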
Step 3: Identity Management
Flux generates an SSH key to talk to your Git host. Retrieve it:
$ fluxctl identity --k8s-fwd-ns flux
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC...
Add this key as a "Deploy Key" in your GitHub or GitLab project with Write access. This allows Flux to not only read configs but also update image tags automatically when you release new software.
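From this point on, a release is just a commit. Assuming the placeholder layout and image tag from earlier, the workflow looks like this:

git clone git@github.com:your-org/k8s-config.git
cd k8s-config/clusters/oslo-prod/releases
sed -i 's|frontend:1.4.2|frontend:1.4.3|' frontend.yaml   # bump the image tag
git commit -am "Release frontend 1.4.3"
git push origin master
# Flux applies the change on its next sync; no kubectl, no cluster credentials.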
Performance Considerations: The "CoolVDS" Factor
Automating your infrastructure is useless if the underlying hardware is unstable. In a GitOps workflow, your cluster is constantly reconciling state: cloning the config repo, pulling container images, and writing to etcd. That keeps the nodes' disks and CPUs busy around the clock.
Many "cheap" VPS providers in Europe oversell their CPU and storage. You might see CPU "steal time" spike (the hypervisor giving your vCPU less time than it asks for), causing your Flux agent to lag or your API server to time out.
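You can check steal time from inside any guest with vmstat; the last column (st) shows the share of time the hypervisor took away from your VM (the numbers below are illustrative):

vmstat 5 3
# procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
#  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
#  1  0      0 812344  96120 904512    0    0     5    12  210  340  7  2 90  1  0

If "st" sits above a few percent for long stretches, a noisy neighbour is eating your CPU.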
At CoolVDS, we use KVM virtualization with dedicated resource guarantees. Our storage backend is pure NVMe, ensuring that the heavy I/O operations of Docker pulls and Etcd writes happen in microseconds, not milliseconds. When you are sleeping at night, you want to know that your "self-healing" infrastructure has the resources it needs to actually heal.
The Local Edge
Finally, consider latency. If your Dev team is in Oslo or Bergen, pushing code to a server in Frankfurt or Virginia adds unnecessary friction. Hosting your Kubernetes nodes on CoolVDS infrastructure in Norway ensures minimal latency for your management commands and, more importantly, keeps your customer data within the Norwegian legal framework—a massive plus for GDPR compliance.
Next Steps
Don't wait for the next 3 AM pager call. Start migrating your critical workloads to a GitOps model today.
Need a sandbox to test K8s v1.10? Deploy a high-performance CoolVDS NVMe instance in 55 seconds and see the difference dedicated hardware makes.