Stop `kubectl apply`-ing: A Battle-Tested GitOps Workflow for Norwegian Enterprises

It has been exactly five days since the GDPR enforcement deadline hit on May 25th. If you are a SysAdmin or DevOps engineer in Oslo right now, you are likely exhausted. The panic to audit data flows is over, but the operational hangover has just begun. Here is the hard truth I tell every CTO who asks me why their cluster fell over at 3 AM: if you are still SSH-ing into production servers to make "quick fixes," or running kubectl apply -f from your laptop, you are not just inefficient. You are a compliance liability.

We need to talk about GitOps. Not as a buzzword, but as a survival mechanism. In the Nordic hosting market, where data sovereignty is now scrutinized by Datatilsynet (The Norwegian Data Protection Authority) with hawk-like precision, your infrastructure's state must be declarative, versioned, and auditable. If it isn't in Git, it doesn't exist.

The "Drift" Nightmare

I worked on a project last month migrating a legacy Magento stack to Docker containers. The team swore their infrastructure was automated. Yet, when we redeployed the staging environment, the payment gateway failed. Why? Because three months ago, a senior engineer manually tweaked an nginx.conf parameter to handle a traffic spike and never committed it to the repo. That is Configuration Drift. In a GitOps workflow, this is impossible because an operator inside the cluster pulls configuration from Git. If you change it manually, the operator reverts it. Ruthless? Yes. Necessary? Absolutely.
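
The reconcile loop behind that ruthlessness is simple to sketch. The toy script below is illustrative only — a real operator like Flux diffs desired state against the Kubernetes API server, not local files — but it shows the detect-and-revert behaviour in a dozen lines:

```shell
# Toy reconciler: desired state (from Git) vs. live state (the server).
# Illustrative only - Flux does this against the Kubernetes API, not files.
cat > desired.conf <<'EOF'
worker_connections 1024
EOF

# Simulate a "quick fix" someone made by hand on the server:
cat > live.conf <<'EOF'
worker_connections 4096
EOF

if ! diff -q desired.conf live.conf >/dev/null; then
  echo "drift detected: reverting live state to Git"
  cp desired.conf live.conf
fi

diff -q desired.conf live.conf >/dev/null && echo "converged"
```

Run this in a loop and manual edits simply stop sticking — which is exactly the point.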

The 2018 GitOps Reference Architecture

For a robust setup targeting European users, we need low latency and high consistency. We aren't using proprietary cloud managed services here; we are building a portable, vendor-agnostic stack. Here is the architecture we are deploying on CoolVDS KVM instances:

  • Source Control: GitLab (Self-hosted or SaaS).
  • CI: GitLab CI (for building images).
  • Orchestrator: Kubernetes 1.10.
  • CD / GitOps Operator: Weave Flux.
  • Infrastructure: CoolVDS NVMe KVM (vital for etcd performance).
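
Before wiring any of this together, settle on a layout for the config repo (component 4's source of truth). A minimal sketch — the directory names here are my own convention, not a Flux requirement, since Flux 1.x will sync whatever manifests it finds in the repo:

```text
k8s-config/
├── namespaces/
│   └── production.yaml
└── workloads/
    ├── deployment.yaml
    └── service.yaml
```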

1. The Build Pipeline (CI)

Your CI should only do one thing: build the artifact and push it to the registry. It should not touch the production cluster. That is a security risk. Here is a lean .gitlab-ci.yml example using Docker-in-Docker, which is standard practice right now.

stages:
  - build
  - release

variables:
  DOCKER_DRIVER: overlay2

build_image:
  stage: build
  image: docker:17.12
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

# Re-tag master builds with the master-<sha> scheme the deployment manifests use
release_image:
  stage: release
  image: docker:17.12
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:master-$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:master-$CI_COMMIT_SHA
  only:
    - master

2. The Manifests (The "Truth")

We separate our code repo from our config repo. In the config repo, we define the state of the application. For high-performance PHP applications (common in Norway), we often need to tune the `php-fpm` workers based on available RAM.
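
As an illustration of the kind of manifest that lives alongside the Deployment, here is a hedged sketch of a php-fpm tuning ConfigMap. The names, file path, and worker counts are assumptions, sized against the 1Gi memory limit used below; mount it wherever your image reads pool configuration from:

```yaml
# Illustrative ConfigMap - names and values are assumptions, not part of
# the stack above. Rule of thumb: pm.max_children ~= container memory
# limit / average worker RSS (here ~1Gi / ~80MB per worker ≈ 12).
apiVersion: v1
kind: ConfigMap
metadata:
  name: nordic-app-php-fpm
  namespace: production
data:
  zz-tuning.conf: |
    [www]
    pm = dynamic
    pm.max_children = 12
    pm.start_servers = 4
    pm.min_spare_servers = 2
    pm.max_spare_servers = 6
```

Because this lives in the config repo, resizing the pool is a reviewed, auditable commit — not an SSH session.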

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nordic-app
  namespace: production
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.app: "glob:master-*"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nordic-app
  template:
    metadata:
      labels:
        app: nordic-app
    spec:
      containers:
      - name: app
        image: registry.coolvds.com/my-group/nordic-app:master-a1b2c3d
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"

Notice the flux.weave.works/automated annotation? That tells Flux to watch the registry. When a new image lands, Flux updates this file in Git automatically and applies it to the cluster.

3. The Synchronization (The Operator)

We use Weave Flux running inside the cluster. It ensures that the cluster state matches the Git repo. If a developer tries to kubectl edit a deployment to change an environment variable, Flux wakes up, sees the drift, and overwrites the change with what is in Git. This creates a perfect audit trail for GDPR compliance.

To install Flux on your CoolVDS Kubernetes cluster, we use Helm 2 (secure Tiller with TLS before you do anything else):

helm repo add weaveworks https://weaveworks.github.io/flux
helm install --name flux \
  --set git.url=git@gitlab.com:my-org/k8s-config \
  --set git.branch=master \
  --namespace flux \
  weaveworks/flux

The Hardware Reality: Why I/O Matters

You can have the cleanest GitOps workflow in the world, but if your underlying infrastructure chokes on I/O, your cluster will destabilize. Kubernetes relies heavily on etcd for state management. Etcd requires extremely low write latency to maintain consensus.

Pro Tip: Do not run Kubernetes on budget VPS providers that use OpenVZ or standard HDD storage. The "noisy neighbor" effect and slow disk writes will cause etcd timeouts, leading to split-brain scenarios where your master nodes lose track of workers. I have seen production databases corrupt because the storage couldn't keep up with the journaling.
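
You can get a rough feel for this on any box with plain GNU dd — a stand-in for a proper fio benchmark, but useful because etcd fdatasync()s its write-ahead log on every commit, so small synchronized writes approximate its critical path:

```shell
# Rough sync-write latency probe (assumes GNU dd; oflag=dsync forces a
# synchronized write per block, similar to etcd's WAL behaviour).
dd if=/dev/zero of=./wal-probe bs=2k count=200 oflag=dsync 2> dd.out
tail -n1 dd.out   # throughput line; on HDD-backed VPS expect this to crawl
rm -f ./wal-probe
```

If the reported throughput is in the low single-digit MB/s range, etcd will struggle on that disk.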

This is where CoolVDS shines in our benchmarks. We utilize KVM (Kernel-based Virtual Machine) virtualization, which provides true hardware isolation. More importantly, the storage backend is pure NVMe. When running fio tests on a standard CoolVDS instance, we consistently see random write speeds that are 5-10x faster than traditional SSD VPS offerings in the region.

Data Sovereignty and Latency

With the new privacy laws, where your data physically sits is paramount. Hosting on US-owned hyperscalers adds a layer of legal complexity regarding the CLOUD Act vs. GDPR. By utilizing CoolVDS, your data resides in Datacenters within the EEA, adhering strictly to local regulations. Furthermore, for a user in Oslo or Bergen, the latency difference between a server in Frankfurt and a server in Norway is noticeable. We are talking about 30ms vs 3ms. In the e-commerce world, that latency directly correlates to conversion rates.

Setting Up the Workflow

To get this running on CoolVDS today:

  1. Provision Resources: Spin up 3x KVM instances (Ubuntu 16.04 or 18.04 LTS). Use the private network interface for cluster communication to reduce bandwidth costs and increase security.
  2. Bootstrap Cluster: Use kubeadm. It has matured significantly in v1.10.
  3. Secure Tiller: Since we are using Helm 2, create a ServiceAccount for Tiller and limit its scope.
  4. Deploy Flux: Point it to your config repository.

Once active, you never touch the server again. You push code to Git. GitLab builds it. Flux pulls it. The server updates itself. If a node fails, the KVM isolation ensures the others pick up the slack without resource contention.

Conclusion

The era of the "Cowboy SysAdmin" is over. Stability and compliance are the metrics we are judged by now. GitOps provides the rigor that modern European business standards demand, and high-performance hardware provides the reliability. Don't let slow I/O or configuration drift kill your uptime.

Ready to build a cluster that doesn't wake you up at night? Deploy a high-performance NVMe KVM instance on CoolVDS today and experience the stability of true hardware isolation.