GitOps on the Edge: Killing Configuration Drift Before the GDPR Deadline

It is 3:00 AM. The pager screams. Your primary load balancer is throwing 502s. Why? Because a junior developer manually SSH'd into the production server six hours ago to "fix a quick typo" in the Nginx config, restarted the service, and went to sleep. The config wasn't committed to the repo. It wasn't tested. It is the ghost in your machine.

If you are still managing your infrastructure by logging into servers, you are already dead in the water. With the General Data Protection Regulation (GDPR) enforcement date of May 25th barely a month away, the "Cowboy DevOps" era is officially over. We need audit trails. We need immutability. We need GitOps.

In this guide, I’m going to show you how to architect a GitOps workflow that leverages Kubernetes 1.10 and GitLab CI, deployed on raw, high-performance KVM instances. Why not managed Kubernetes? Because when etcd latency spikes, you want to see the disk I/O stats yourself, not wait for a support ticket.

The Core Problem: Configuration Drift

The enemy is entropy. You deploy a cluster. It works. Over time, manual interventions accumulate. The state of your infrastructure no longer matches your documentation or your code. This is Configuration Drift.

In a GitOps model, Git is the single source of truth. You do not use kubectl create from your laptop. You do not edit /etc/my.cnf with nano. You push code. An automated operator synchronizes that state to your cluster. If the server drifts, the operator forces it back. Ruthless consistency.
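
What does this look like day to day? Roughly the following (the repository name and file paths are illustrative, not prescriptive):

# The only road to production runs through Git.
git clone git@gitlab.example.com:platform/k8s-manifests.git
cd k8s-manifests

# Change the desired state, not the live cluster
vim k8s/deployment.yaml
git checkout -b fix/api-replicas
git commit -am "Scale core-api to 5 replicas for the campaign launch"
git push origin fix/api-replicas   # open a merge request; the pipeline applies it after review

# And when the auditor asks who changed what, and when:
git log --oneline -- k8s/deployment.yaml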

The Stack: 2018 Best Practices

We aren't reinventing the wheel; we are putting rims on it. Here is the stack that survives production loads:

  • Version Control: GitLab (Self-hosted or Cloud).
  • Orchestration: Kubernetes 1.10 (The stability release).
  • Infrastructure: CoolVDS KVM Instances (NVMe is non-negotiable here).
  • Sync Agent: Weave Flux or standard CI Pipelines.

Step 1: Provisioning the Metal

You cannot build a stable house on a swamp. Kubernetes, and specifically the etcd key-value store, is extremely sensitive to disk write latency. If your 99th-percentile fsync latency creeps above 10ms, leader elections start to flap and your API server goes down with them.

This is why we avoid noisy-neighbor container hosting and stick to KVM-based VPS with dedicated NVMe storage. At CoolVDS, our benchmarks consistently show write latency under 1ms, which is critical for the control plane.
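
Don't take my word for it; measure it. A quick fio fsync test (the file size and block size below mirror the benchmark commonly used for etcd; the target directory is arbitrary) tells you whether a disk is fit to host the control plane:

# Measure fsync latency on the volume that will hold /var/lib/etcd
apt-get update && apt-get install -y fio
mkdir -p /var/lib/etcd-bench

fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-fsync

# Look at the fsync/fdatasync percentiles in the output:
# the 99th percentile should stay well under 10ms (NVMe should land around 1ms).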

Here is how we provision the base infrastructure using Terraform (v0.11 syntax). This ensures even our servers are code:

resource "coolvds_instance" "k8s_master" {
  name             = "k8s-master-01"
  image            = "ubuntu-16.04-x64"
  region           = "no-oslo-1" # Norway Datacenter
  plan             = "nvme-8gb"  
  ssh_keys         = ["${var.my_ssh_key}"]
  
  # Network optimization for internal cluster comms
  private_networking = true
}

resource "coolvds_instance" "k8s_worker" {
  count            = 3
  name             = "k8s-worker-${count.index}"
  image            = "ubuntu-16.04-x64"
  region           = "no-oslo-1"
  plan             = "nvme-16gb"
  ssh_keys         = ["${var.my_ssh_key}"]
}
Pro Tip: Always enable private networking. In Norway, our local peering via NIX (Norwegian Internet Exchange) is fast, but your cluster internal traffic (Pod-to-Pod) should never traverse the public interface for security and latency reasons.
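
If you bootstrap the cluster with kubeadm, something like this keeps the control plane bound to the private interface (10.0.0.10 is a placeholder for the master's private IP, and the Pod CIDR assumes a Flannel-style overlay):

# On the master: advertise the API server on the private address only
kubeadm init \
  --apiserver-advertise-address=10.0.0.10 \
  --pod-network-cidr=10.244.0.0/16

# On each worker: join via the private address printed by kubeadm init, e.g.
# kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>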

Step 2: The Pipeline Architecture

We will use a "Push" based approach common in 2018. While the "Pull" method (like Weave Flux) is gaining traction, a robust CI pipeline is easier for most teams to adopt immediately.

Your .gitlab-ci.yml acts as the gatekeeper. Nothing touches the cluster unless it passes the pipeline.

stages:
  - build
  - test
  - deploy

build_image:
  stage: build
  image: docker:17.12
  services:
    - docker:dind
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

deploy_production:
  stage: deploy
  image:
    name: lachlanevenson/k8s-kubectl:v1.10.0
    entrypoint: [""]  # the image's default entrypoint is kubectl itself, which breaks CI scripts
  script:
    - mkdir -p ~/.kube
    - echo "$KUBE_CONFIG" > ~/.kube/config
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHA
  only:
    - master

Wait, look at that deploy_production script. It's imperative. It works, but it's not true GitOps yet. To make this fully declarative, we should apply manifests, not run commands.

Step 3: True Declarative Deployment

Instead of setting images, we template our Kubernetes manifests. The state in Git must match the cluster.

deployment.yaml template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-api
  labels:
    app: core-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: core-api
  template:
    metadata:
      labels:
        app: core-api
    spec:
      containers:
      - name: api-container
        image: registry.example.com/myapp:__VERSION__
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10

We then use `sed` or `envsubst` in our pipeline to replace __VERSION__ and apply the file. This creates an audit trail. If the Datatilsynet (Norwegian Data Protection Authority) comes knocking regarding GDPR Article 32 (Security of processing), you can show them exactly what was deployed, by whom, and when.
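
The deploy job's script then boils down to a few lines (assuming the manifest lives at k8s/deployment.yaml in the repository):

# Render the manifest with the immutable image tag for this commit,
# apply it, and refuse to go green until the rollout actually finishes.
sed "s|__VERSION__|${CI_COMMIT_SHA}|g" k8s/deployment.yaml > rendered.yaml
kubectl apply -f rendered.yaml
kubectl rollout status deployment/production-api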

Step 4: Tuning for the Metal

Since we are running on CoolVDS KVM instances, we have access to kernel parameters that managed containers hide from you. To handle high-throughput traffic without dropping packets, we tune the sysctl settings on the worker nodes.

Add this to your provisioning scripts or Ansible playbooks:

# /etc/sysctl.d/99-k8s-network.conf

# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65000

# Maximize the backlog of pending connections
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096

# Allow more open files
fs.file-max = 2097152
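
Reload the settings without a reboot and spot-check that they took effect:

# Apply everything under /etc/sysctl.d/ and verify a couple of values
sysctl --system
sysctl net.core.somaxconn fs.file-max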

The Database Dilemma: To K8s or Not?

A common question I get: "Should I run my MySQL database inside Kubernetes?"

In 2018? No. Unless you have a dedicated storage engineering team, run your database on a dedicated CoolVDS instance outside the cluster. Use the low-latency private network to connect.
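
Once the database instance is up, a quick sanity check from a worker node confirms the traffic really stays on the private link (the IP and the app user are placeholders):

# From a worker: latency and connectivity over the private network
ping -c 3 10.0.0.20
mysql -h 10.0.0.20 -u app -p -e "SELECT 1;"

# On the database host: mysqld should listen on the private address only
# (set bind-address = 10.0.0.20 in my.cnf), never on 0.0.0.0
ss -tlnp | grep 3306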

However, you must optimize that database instance. A standard install is not enough. Here is a snippet for my.cnf on a 16GB RAM CoolVDS instance to ensure you are utilizing the memory you are paying for:

[mysqld]
# Use 70-80% of RAM for InnoDB buffer pool
innodb_buffer_pool_size = 12G

# Log file size - critical for write-heavy workloads
innodb_log_file_size = 1G

# Flush method for NVMe SSDs
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000
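
Restart MySQL and verify the values actually landed:

systemctl restart mysql
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -e "SHOW VARIABLES LIKE 'innodb_flush_method';"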

Why Norway? Why Now?

With the GDPR deadline looming, data residency is not just a buzzword; it's a legal shield. Hosting your GitOps infrastructure and production data within Norway (an EEA member with strict privacy laws) provides a layer of compliance safety.

Furthermore, CoolVDS leverages the stable, green energy grid of the Nordics. But honestly, you care about the ping. From Oslo to London, we are seeing sub-20ms latency. That is fast enough for distributed teams across Europe to push code without lag.

Conclusion: Discipline Equals Freedom

GitOps is not a product you buy. It is a discipline. It requires you to stop touching the server manually. It forces documentation through code.

By combining this workflow with the raw power of CoolVDS NVMe instances, you get the best of both worlds: the flexibility of the cloud with the raw I/O performance of bare metal. Don't let your infrastructure drift into chaos.

Ready to lock down your stack? Spin up a high-performance KVM instance on CoolVDS in 55 seconds and start building your GitOps pipeline today.