The Era of "Snowflake" Servers is Over
I still see it happen. A senior developer SSHs into a production node, runs a quick vim /etc/nginx/nginx.conf, restarts the service, and patches a critical bug. The boss is happy. The uptime is saved. But that server is now a snowflake. It is unique, undocumented, and impossible to reproduce automatically. In a week, when autoscaling kicks in and spins up a fresh node from the original image, that fix is gone. The site crashes. Panic ensues.
We have to stop treating servers like pets. In 2018, with the maturity of Kubernetes 1.11 and the rise of the GitOps methodology coined by Weaveworks, there is no excuse for manual intervention. The cluster state must match the Git state. Always.
This isn't just about "clean code." It's about survival. If you are running high-availability workloads in Norway—whether it's for fintech in Oslo or energy sector monitoring in Stavanger—you need audit trails. You need to know exactly who changed what, and when. That is what GitOps provides. And it starts with a relentless refusal to touch kubectl manually.
The Architecture: Push vs. Pull
In the current landscape, we see two dominant patterns for getting code from Git to the cluster:
- The Push Model (CI-driven): Your CI pipeline (Jenkins, GitLab CI) builds the container, pushes it to the registry, and then runs kubectl set image against the API server (a minimal example follows this list).
- The Pull Model (Operator-driven): An agent inside the cluster (like Weave Flux) monitors the Git repository. When it sees a change in the manifest, it pulls the new config and applies it.
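To make the Push model concrete, here is roughly what that imperative step looks like when run from a pipeline. This is a minimal sketch: the deployment and container names match the manifest shown later in this post, and the image tag is purely illustrative.

# Push model in one line: the CI job tells the API server which image to run
kubectl set image deployment/production-api api=registry.example.no/project/app:abc123f
# Optionally block until the rollout finishes so the pipeline fails loudly if it doesn't
kubectl rollout status deployment/production-api --timeout=120s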
For teams transitioning from traditional VPS setups, the Push model is often the easiest entry point. Let's look at a concrete implementation using GitLab CI, which has exploded in popularity this year thanks to its built-in container registry and native Kubernetes integration.
1. The Container Build
First, we need a deterministic build. We are using Docker 18.06. Multi-stage builds are mandatory now to keep image sizes down. Don't ship your build tools to production.
# Dockerfile
# Stage 1: build a static Go binary (no CGO, so it runs on bare Alpine)
FROM golang:1.10-alpine as builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Stage 2: minimal runtime image, no compiler or build tools shipped
FROM alpine:3.8
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/main .
CMD ["./main"]

2. The Pipeline Configuration
Here is a battle-tested .gitlab-ci.yml using the Docker-in-Docker pattern. This pipeline builds, tags with the commit SHA (essential for immutability), and updates the deployment manifest.
image: docker:stable

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  REGISTRY_URL: registry.example.no

stages:
  - build
  - deploy

build_image:
  stage: build
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY/project/app:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY/project/app:$CI_COMMIT_SHA

deploy_k8s:
  stage: deploy
  image: google/cloud-sdk:alpine
  script:
    - sed -i "s/latest/$CI_COMMIT_SHA/g" k8s/deployment.yaml
    - kubectl apply -f k8s/deployment.yaml
  only:
    - master

Note the use of sed. In a pure GitOps world (using the Pull model), instead of running kubectl apply, the CI would commit the change back to a separate "config" repository. An operator like Flux would then sync that change. But for many shops today, the pipeline above is a massive improvement over manual deploys.
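For reference, that commit-back step in the Pull-model world is itself just another CI job. Here is a minimal sketch, assuming a separate config repository and a CI variable holding a token with push access; the repository URL, token variable name, and file path are hypothetical:

# Hypothetical "pure GitOps" job: update the config repo and let Flux apply it
update_config_repo:
  stage: deploy
  image: alpine/git
  script:
    - git clone https://oauth2:${CONFIG_REPO_TOKEN}@gitlab.example.no/ops/k8s-config.git
    - cd k8s-config
    # Pin the manifest to the freshly built image tag
    - sed -i "s|image:.*|image: registry.example.no/project/app:$CI_COMMIT_SHA|" production/deployment.yaml
    - git config user.email "ci@example.no"
    - git config user.name "GitLab CI"
    - git commit -am "Deploy app:$CI_COMMIT_SHA"
    - git push origin master
  only:
    - master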
3. The Manifest
Your infrastructure code lives right next to your application code. This deployment.yaml is the source of truth.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-api
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: api
          image: registry.example.no/project/app:latest
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: "512Mi"
              cpu: "500m"

Pro Tip: Always define resource limits. Without them, a memory leak in one pod can push the node into memory pressure and let the Linux OOM Killer take down your database pod on the same machine. It is the classic noisy neighbor problem: virtualization isolates it at the VM boundary, but Kubernetes packs many pods onto one node and makes it worse if limits are not set.
The Hardware Reality: Why Your Host Matters
Kubernetes is not magic. It is a distributed system that relies heavily on etcd for state management. Etcd is extremely sensitive to disk write latency. If your disk fsync latency spikes, your cluster leader election fails, and your control plane goes down.
This is where generic "Cloud VPS" providers fail. They often put you on shared spinning disks or throttled SATA SSDs. When you run a GitOps workflow, you are constantly triggering deployments, creating new ReplicaSets, and churning containers. The I/O load on the master nodes is significant.
At CoolVDS, we built our infrastructure on NVMe storage from day one. We don't use Ceph over slow networks; we use local NVMe with RAID protection or high-performance SANs optimized for low latency.
Let's check your disk latency. The etcd documentation recommends keeping the 99th percentile of fdatasync under 10ms; if you are consistently above that, your K8s cluster is in danger.
# Run this FIO benchmark to test your etcd partition write speed.
# Point --directory at the filesystem that holds /var/lib/etcd and read the
# sync (fsync/fdatasync) percentiles in the output.
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest

On a standard CoolVDS instance, you will consistently see low latency numbers that keep etcd happy and your GitOps pipelines green.
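If the cluster is already running, etcd will tell you the same thing itself. A rough check, assuming a kubeadm-style layout where the metrics endpoint requires client certificates (paths and port depend entirely on your installation):

# Hypothetical check: read etcd's WAL fsync latency histogram from its metrics endpoint
curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/server.crt \
     --key /etc/kubernetes/pki/etcd/server.key \
     https://127.0.0.1:2379/metrics | grep wal_fsync_duration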
The Nordic Context: GDPR & Datatilsynet
We are five months past GDPR implementation (May 2018), and the dust hasn't settled. If you are a Norwegian business, storing your Git repositories and container images on US-controlled servers is becoming a legal headache. The definitions of "processor" and "controller" mean you must know exactly where your code, which often contains hardcoded secrets or config data, actually resides.
By hosting your GitLab instance and your Kubernetes cluster on CoolVDS, you ensure data sovereignty. Your data stays in Norway/Europe. You reduce latency to the NIX (Norwegian Internet Exchange), meaning your docker pull times are faster because you aren't traversing the Atlantic.
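If you want to put a number on that, time a pull of your own image from a node in Norway and from one abroad; the registry URL below is the placeholder used throughout this post:

# Rough, unscientific comparison of registry round-trip times
time docker pull registry.example.no/project/app:latest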
Conclusion
GitOps is the standard for modern infrastructure. It removes the human error element from deployments. But software best practices cannot fix hardware limitations. A robust CI/CD pipeline requires a robust underlying platform.
Don't let slow I/O kill your deployment velocity. Secure your infrastructure with the low latency and data sovereignty of CoolVDS.
Ready to stabilize your stack? Deploy a high-performance KVM instance in Oslo today.