Stop kubectl apply-ing Your Production Into Oblivion
It is 3:00 AM. Your pager is screaming because the `payment-service` just 500'd. Why? Because a junior dev SSH'd into the cluster and hot-patched a config file three days ago, and the pod just restarted, reverting to the old, broken configuration defined in the deployment manifest. If this sounds familiar, your infrastructure is a house of cards.
We need to stop treating our clusters like pets. In 2018, with the rise of Kubernetes 1.10+ and the maturing container ecosystem, there is zero excuse for manual intervention. Enter GitOps.
I've managed infrastructure across Europe, from bare metal in Frankfurt to VPS clusters here in Norway. The only constant is that humans make mistakes. Git doesn't. Here is how to lock down your workflow, satisfy the Datatilsynet auditors, and keep your weekends free.
The Philosophy: "If It's Not in Git, It Doesn't Exist"
GitOps isn't just a buzzword Weaveworks coined last year; it's a survival strategy. The core premise is simple: Git is the single source of truth for both your application code AND your infrastructure.
Want to change a firewall rule? Pull Request. Scale up replicas? Pull Request. Update an SSL cert? Pull Request.
Pro Tip: This approach is your best friend for the new GDPR regulations that hit us in May. When an auditor asks "Who changed the database access levels on June 12th?", you don't hunt through bash history. You show them the Git commit hash, the diff, and the user who merged it.
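As a concrete example, that audit trail is just a git log away (the file path here is hypothetical):

```bash
# Who touched the database access manifest, and what exactly changed?
# rbac/db-access.yaml is a hypothetical path in your infra repo
git log --follow -p -- rbac/db-access.yaml

# Narrow it down to the day the auditor is asking about
git log --since="2018-06-12" --until="2018-06-13" --oneline -- rbac/db-access.yaml
```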
The 2018 GitOps Stack: GitLab CI + Kubernetes
While tools like Weave Flux are gaining traction for the "pull" model (where an operator inside the cluster pulls changes), the most robust method for most teams right now is the CI/CD "push" model using GitLab CI. It’s integrated, it keeps your registry close to your code, and it allows for granular control.
1. The Repository Structure
Don't mix your app source code with your infrastructure manifests. Split them up.
- app-repo: Source code, Dockerfile, unit tests.
- infra-repo: Helm charts, Kubernetes YAMLs, Terraform configs.
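There is no single blessed layout, but a minimal infra-repo split looks roughly like this (the names are illustrative):

```
infra-repo/
├── charts/
│   └── my-app/          # Helm chart for the application
├── manifests/
│   ├── staging/         # Environment-specific YAML
│   └── production/
├── terraform/           # Firewall rules, DNS, cloud resources
└── .gitlab-ci.yml       # The pipeline that actually applies changes
```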
2. The CI/CD Pipeline
Here is a battle-tested `.gitlab-ci.yml` snippet for building and pushing a Docker image, then triggering an update in the infra repo. It uses the Docker-in-Docker (`dind`) service, which is standard practice right now.
```yaml
image: docker:18.06

services:
  - docker:dind

stages:
  - build
  - push
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_IMAGE: registry.gitlab.com/$CI_PROJECT_PATH

build_push:
  stage: push
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker build -t $CONTAINER_IMAGE:$CI_COMMIT_SHA .
    - docker push $CONTAINER_IMAGE:$CI_COMMIT_SHA
    - docker tag $CONTAINER_IMAGE:$CI_COMMIT_SHA $CONTAINER_IMAGE:latest
    - docker push $CONTAINER_IMAGE:latest

trigger_infra_update:
  stage: deploy
  image: alpine:3.7
  script:
    - apk add --no-cache curl
    - curl -X POST -F token=$INFRA_REPO_TRIGGER_TOKEN -F ref=master "https://gitlab.com/api/v4/projects/$INFRA_PROJECT_ID/trigger/pipeline"
```
This pipeline creates an immutable artifact. It doesn't touch the cluster directly. It triggers the Infrastructure Repository to do the heavy lifting.
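On the receiving end, the infra repo is the only place that holds cluster credentials. Here is a minimal sketch of its deploy job, assuming the image tag arrives as a trigger variable (`IMAGE_TAG`), the kubeconfig sits in a file-type CI variable (`KUBE_CONFIG`), and you have an image bundling Helm 2 and kubectl (the image name below is hypothetical). It uses the `helm template` approach covered in the next section:

```yaml
# infra-repo/.gitlab-ci.yml (sketch)
deploy_production:
  stage: deploy
  image: registry.example.com/tools/helm-kubectl:2.11   # hypothetical helper image with helm 2 + kubectl
  only:
    - triggers                                          # run only when the app repo fires the trigger API
  script:
    - export KUBECONFIG=$KUBE_CONFIG
    - helm template ./charts/my-app --name release-1 --set image.tag=$IMAGE_TAG > manifest.yaml
    - kubectl apply -f manifest.yaml --prune -l app=my-app
```

To pass the tag across, add `-F "variables[IMAGE_TAG]=$CI_COMMIT_SHA"` to the curl call in the trigger job above.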
The Infrastructure Layer: Taming Tiller
If you are using Helm (and in 2018, that means Helm 2), you are dealing with Tiller. By default, Tiller runs inside your cluster with cluster-admin privileges. That is a massive security risk, especially in multi-tenant environments.
To mitigate this on your CoolVDS instances, lock Tiller down to localhost or put it behind TLS, and restrict who can talk to it. Even better, use `helm template` to generate raw YAMLs and apply them via `kubectl` in your pipeline, skipping Tiller entirely for production.
```bash
# Generate manifests without installing Tiller in prod
helm template ./charts/my-app --name release-1 --set image.tag=$CI_COMMIT_SHA > manifest.yaml

# Apply with pruning to remove deleted resources
kubectl apply -f manifest.yaml --prune -l app=my-app
```
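And if you must keep Tiller around (say, for staging), at least force mutual TLS so not every pod in the cluster can talk to it. A rough sketch using Helm 2's TLS flags; the certificate paths are placeholders you generate yourself:

```bash
# Re-initialise Tiller with mutual TLS (Helm 2)
helm init \
  --tiller-tls \
  --tiller-tls-cert tiller.cert.pem \
  --tiller-tls-key tiller.key.pem \
  --tiller-tls-verify \
  --tls-ca-cert ca.cert.pem

# Every helm command must now present a client certificate
helm ls --tls --tls-cert helm.cert.pem --tls-key helm.key.pem
```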
The Hardware Reality: Why IOPS Kill Deployments
Here is where many DevOps engineers fail. They build a beautiful GitOps pipeline, but they run it on cheap, oversold VPS hosting.
When you implement GitOps, you increase the frequency of deployments. You are constantly pulling Docker images, unpacking layers, and churning etcd data. If your disk I/O latency spikes, your Kubernetes API server starts timing out. I've seen etcd clusters collapse because the underlying storage couldn't keep up with the write-ahead log (WAL) sync.
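Before you blame Kubernetes, measure the disk. A quick way to approximate etcd's WAL pattern is fio with an fdatasync after every write (the directory and sizes below are just a reasonable starting point):

```bash
# Sequential small writes with fdatasync on each one, roughly what the etcd WAL does
mkdir -p /var/lib/etcd-bench
fio --name=etcd-wal --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300
```

Watch the fsync/fdatasync percentiles in the output; etcd's own guidance is that the 99th percentile should stay under 10ms.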
This is why we use CoolVDS for our K8s nodes. They provide genuine NVMe storage, not just SSD caching. When you are hitting the disk with thousands of IOPS during a rolling update, standard SATA SSDs choke. NVMe on KVM virtualization ensures that when your pipeline says "deploy," the infrastructure actually responds.
| Feature | Standard VPS | CoolVDS NVMe |
|---|---|---|
| IOPS | 5,000 - 10,000 | 20,000+ |
| Latency | 2-5ms | <0.5ms |
| Etcd Stability | Risk of leader election failure | Rock solid |
Secret Management: The Elephant in the Room
You cannot check `password: supersecret` into a Git repo, public or private. It violates every security standard, and under GDPR a leak of that data can translate directly into fines.
In 2018, the best practice is Bitnami Sealed Secrets. It uses asymmetric cryptography. You encrypt the secret on your laptop using a public key known to the cluster. The cluster (and only the cluster) can decrypt it using the private key stored in the controller.
```bash
# Install the client
brew install kubeseal

# Create a sealed secret
echo -n "supersecret" | kubectl create secret generic my-secret --dry-run --from-file=password=/dev/stdin -o json > secret.json
kubeseal < secret.json > sealed-secret.json

# Now you can safely git commit sealed-secret.json
git add sealed-secret.json
git commit -m "Add encrypted database credentials"
```
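Once the Sealed Secrets controller is running in the cluster, applying the committed file makes it decrypt and create a regular Secret (named `my-secret` above), which your workloads consume as usual. A small sketch of the consuming side, assuming a Deployment for the payment-service (the image path is hypothetical):

```yaml
# Inside the payment-service Deployment spec (sketch)
containers:
  - name: payment-service
    image: registry.gitlab.com/acme/payment-service:latest   # hypothetical image
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-secret     # created by the Sealed Secrets controller
            key: password       # the key we sealed earlier
```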
Network Latency and Geo-Location
For Norwegian clients, hosting location matters. If your GitOps pipeline is deploying to a server in Virginia, you are dealing with 100ms+ latency. If you are serving customers in Oslo or Bergen, you want your nodes peering at NIX (Norwegian Internet Exchange).
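You can sanity-check this from your laptop or CI runner with nothing fancier than curl timing against your cluster's API endpoint (the hostname below is a placeholder):

```bash
# Rough round-trip check against the Kubernetes API server
curl -k -o /dev/null -s -w "connect: %{time_connect}s  total: %{time_total}s\n" \
    https://k8s-api.example.no:6443/healthz
```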
CoolVDS data centers are located locally. This means your kubectl commands are snappy, and more importantly, your user data stays within the EEA, simplifying your data residency compliance strategy.
Conclusion
Manual operations are a liability. By moving to a GitOps workflow, you document every change automatically, revert failures instantly, and sleep better at night. But remember: software automation relies on hardware performance. Don't let your beautifully architected Kubernetes cluster fail because of noisy neighbors or slow disks.
Ready to harden your infrastructure? Spin up a high-performance NVMe KVM instance on CoolVDS today and start building a pipeline that actually scales.