GitOps Workflows for Kubernetes: Stop Manually Breaking Production
If you are still SSH-ing into your production servers to tweak an Nginx config, or running `kubectl apply -f` from your laptop, you are the single greatest risk to your infrastructure. I’ve seen it happen too many times: a "quick fix" applied manually at 11 PM creates configuration drift that brings the whole cluster down three days later when the autoscaler kicks in. It’s 2018. We have to stop treating infrastructure like a pet.
Enter GitOps. Coined by Weaveworks last year, it’s not just a buzzword; it’s the only sane way to manage complex distributed systems.
The Core Philosophy: Git is the Single Source of Truth
The concept is simple but ruthless. The state of your Git repository must equal the state of your cluster. If it's not in Git, it doesn't exist. If someone changes a setting on the live cluster, the GitOps agent should detect the drift and revert it (or at least scream about it).
For Norwegian businesses dealing with the fresh reality of GDPR (in force since this May), this auditability is mandatory. When Datatilsynet asks who changed the ingress rules exposing user data, you point to a Git commit hash, not a vague Slack message.
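That audit trail is one command away. Assuming your manifests live in the repository (the `k8s/ingress.yaml` path below is just an example), Git answers the "who, when, and what exactly" question on its own:

```bash
# Show every change to the ingress manifest: author, timestamp, and full diff
git log -p --follow -- k8s/ingress.yaml
```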
The Architecture: Push vs. Pull
There are two ways to handle this in late 2018:
- CI-Driven (Push): Your CI tool (Jenkins, GitLab CI) has credentials to your cluster and runs the deployment commands.
- Operator-Driven (Pull): An agent inside the cluster (like Weave Flux) pulls changes from Git and applies them.
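If you want to try the pull model, Flux gets installed into the cluster and pointed at your config repository. A minimal sketch, assuming the Weaveworks Helm chart and a hypothetical manifests repo at `git@gitlab.com:my-org/k8s-manifests.git`:

```bash
# Add the Weaveworks chart repo and install the Flux operator (Helm 2 syntax)
helm repo add weaveworks https://weaveworks.github.io/flux
helm install --name flux --namespace flux \
  --set git.url=git@gitlab.com:my-org/k8s-manifests.git \
  weaveworks/flux

# Flux generates an SSH key on startup; add the public key as a deploy key
# on the repository so the operator can pull (and tag) your manifests
kubectl -n flux logs deployment/flux | grep identity.pub | cut -d '"' -f2
```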
For most teams migrating to CoolVDS from legacy hosting, the CI-Driven approach is the easiest transition. Let’s look at a practical implementation using GitLab CI and Kubernetes.
Step 1: Containerize with Speed
Your workflow dies if your build pipeline is slow. High I/O wait times during `docker build` are the silent killer of developer productivity. This is why we insist on NVMe storage for our CoolVDS instances. Standard SSDs choke when twenty developers push commits simultaneously, causing the disk queue to spike.
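Not sure whether your builds are I/O bound? Watch the disk while a pipeline runs. A quick check using sysstat (package and device names vary by distro):

```bash
# Debian/Ubuntu: install sysstat, then print extended disk stats every second
apt-get install -y sysstat
iostat -x 1
# While `docker build` runs, watch await and %util for the device backing
# /var/lib/docker — sustained high values mean the disk is the bottleneck
```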
Here is a standard, optimized Dockerfile structure for a Node.js microservice. Notice the layer caching strategy:
```dockerfile
FROM node:10-alpine

WORKDIR /usr/src/app

# Copy package files first to leverage Docker layer caching
COPY package*.json ./

# Install dependencies (use ci for reproducible builds)
RUN npm ci --only=production

# Bundle app source
COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]
```
Step 2: Defining the Infrastructure
Don't write raw YAML files for every environment. Use Helm 2. Yes, Tiller (the server-side component of Helm) has security implications, so ensure you secure it properly with RBAC constraints, or better yet, run it only in a specific namespace.
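As a sketch of that namespace-scoped setup (the `staging` namespace and `tiller` names are placeholders), give Tiller a ServiceAccount bound to a Role that only covers one namespace, then initialise Helm against it:

```yaml
# rbac-tiller-staging.yaml — confine Tiller to the staging namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: staging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: staging
rules:
  - apiGroups: ["", "batch", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: staging
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: staging
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

```bash
kubectl apply -f rbac-tiller-staging.yaml
helm init --service-account tiller --tiller-namespace staging
```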
Here is a snippet of a `values.yaml` that we override per environment:
```yaml
replicaCount: 2

image:
  repository: registry.gitlab.com/my-org/my-app
  tag: stable
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: api.coolvds-client.no
      paths: ['/']

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
```
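For reference, this is roughly how the chart's deployment template consumes those values; it mirrors the default `helm create` scaffold, so adjust it to your own chart layout:

```yaml
# charts/my-app/templates/deployment.yaml (excerpt)
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 8080
          resources:
{{ toYaml .Values.resources | indent 12 }}
```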
Pro Tip: Always set resource limits. If you don't, a memory leak in one pod can trigger the OOMKiller on the host node, potentially taking down neighboring pods. On CoolVDS, we guarantee your allocated resources, but Linux kernel logic still applies inside your VM.
Step 3: The Pipeline (GitLab CI)
Here is where the magic happens. We configure `.gitlab-ci.yml` to build an image and only roll it out once the earlier stages succeed; slot your test jobs in between the two stages shown here. This example assumes you have a Kubernetes cluster connected to your GitLab instance and a `KUBE_CONFIG` variable set (more on that below the snippet).
```yaml
stages:
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2

build_image:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy_production:
  stage: deploy
  image: lwolf/helm-kubectl-docker:v1.11.1-v2.10.0
  script:
    - mkdir -p /root/.kube
    - echo "$KUBE_CONFIG" | base64 -d > /root/.kube/config
    - |
      helm upgrade --install my-app ./charts/my-app \
        --set image.tag=$CI_COMMIT_SHA \
        --namespace production \
        --wait
  only:
    - master
  environment:
    name: production
    url: https://api.coolvds-client.no
```
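The `$KUBE_CONFIG` variable is the one piece of manual setup: base64-encode a kubeconfig for a dedicated deploy user and store it as a protected CI/CD variable in GitLab (Settings → CI/CD → Variables). On a Linux workstation that is roughly:

```bash
# Encode without line wrapping, then paste the output into a protected
# GitLab CI/CD variable named KUBE_CONFIG
base64 -w0 ~/.kube/config
```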
Why Infrastructure Matters for GitOps
GitOps is automation-heavy. It involves constant pulling, pushing, building, and redeploying, and all of that generates significant network traffic and disk I/O.
If you are hosting your GitLab instance or your Jenkins master on a budget VPS with spinning disks or oversold CPU, your "automation" will become a bottleneck. You will wait 20 minutes for a pipeline that should take 2.
Latency affects reliability. For Norwegian clients, hosting your build infrastructure and production clusters in Oslo or nearby European hubs is critical. Round-trip times matter when you are executing hundreds of API calls to the Kubernetes master during a rolling update. CoolVDS provides the low-latency interconnects (peering at NIX) required to make these operations feel instantaneous.
The Security Implications (GDPR & Tiller)
Since we are using Helm 2, Tiller runs with high privileges inside your cluster. To stay compliant and secure:
- Enable TLS between Helm and Tiller.
- Limit Tiller to specific namespaces using `--tiller-namespace`.
- Ensure your CoolVDS instance is firewalled, allowing port 6443 (the Kubernetes API) access ONLY from your CI runner's IP address.
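A rough sketch of all three points; the certificate paths and the runner IP `203.0.113.10` are placeholders for your own values:

```bash
# Initialise Tiller with mutual TLS, scoped to one namespace (Helm 2 flags)
helm init --service-account tiller --tiller-namespace staging \
  --tiller-tls --tiller-tls-verify \
  --tiller-tls-cert tiller.cert.pem --tiller-tls-key tiller.key.pem \
  --tls-ca-cert ca.cert.pem

# Only the CI runner may reach the Kubernetes API server (ufw example)
ufw allow from 203.0.113.10 to any port 6443 proto tcp
ufw deny 6443/tcp
```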
Automation is not an excuse to be lazy with security. It is an opportunity to codify it. By defining your infrastructure in Git, you create an immutable history of your compliance.
Ready to move your pipelines to infrastructure that doesn't choke on `docker build`? Deploy a high-performance CoolVDS instance today and stop fighting your servers.