Stop `kubectl apply`: Architecting a Bulletproof GitOps Workflow for Norwegian Enterprise

I still remember the silence in the Slack channel. It was 2019, and a junior engineer had just accidentally deleted the production namespace ingress controller because his local kubeconfig context wasn't where he thought it was. The store was down for forty minutes. That day, we banned manual cluster interaction.

If you are SSHing into your servers to pull code, or running `kubectl apply -f deployment.yaml` from your laptop, you are operating on borrowed time. In the Nordic market, where reliability is expected and Datatilsynet (the Norwegian Data Protection Authority) is watching, "it works on my machine" is not a valid defense strategy.

This guide breaks down a rigorous GitOps workflow suitable for April 2023 standards, leveraging ArgoCD, GitLab CI, and stable infrastructure. We aren't just automating; we are creating an audit trail that satisfies even the strictest GDPR compliance officers.

The Philosophy: The Repository is Truth

GitOps is simple in theory, hard in discipline. The entire state of your infrastructure and applications must be defined in Git. If it's not in Git, it doesn't exist. If someone changes a setting on the server manually, an automated agent should revert it immediately. This is called Self-Healing.

The Stack Selection (2023 Edition)

For this architecture, we are choosing:

  • GitLab CI: For Continuous Integration (Testing & Building).
  • ArgoCD: For Continuous Delivery (Syncing Git to Cluster).
  • CoolVDS NVMe Instances: To host the control plane and worker nodes with minimal latency to NIX (Norwegian Internet Exchange).

Step 1: Decoupling CI and CD

A common mistake is letting your CI tool (Jenkins, GitLab CI) touch your Kubernetes cluster directly. Don't do it. Giving your CI runner cluster-admin privileges is a massive security risk. Instead, your CI should only do two things:

  1. Build and push the Docker image.
  2. Commit a change to the Manifest Repository updating the image tag.

Here is a concise GitLab CI job that handles the image build. Note the use of caching to reduce build times on your VPS runners.

build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  script:
    # --password-stdin keeps the token out of process listings and job logs
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
    # Pull the previous image so --cache-from has layers to reuse (tolerate failure on the first run)
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

Step 2: The Manifest Update (The "Git" in GitOps)

Once the image is built, the CI pipeline needs to update the Kubernetes manifests. We use a separate repository for manifests to maintain a clean separation of concerns. This allows you to restrict write access to the production environment manifest repo to senior staff or automated bots only.
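For reference, here is roughly what the production overlay's `kustomization.yaml` might look like. The repository layout and image name are assumptions for illustration; the `images` transformer is the field the pipeline rewrites on every release:

```yaml
# overlays/production/kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: registry.gitlab.com/org/app   # image reference used in the base Deployment
    newTag: 3f2a9c1                     # rewritten by CI with the new commit SHA
```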

Below is a script often used in the `deploy` stage of the pipeline to update the Kustomize tag safely.

update_manifests:
  stage: deploy
  image: alpine:3.17
  before_script:
    - apk add --no-cache git yq
    - git config --global user.email "ci-bot@coolvds.com"
    - git config --global user.name "CoolVDS Bot"
  script:
    - git clone --depth 1 https://oauth2:${GIT_ACCESS_TOKEN}@gitlab.com/org/infra-repo.git
    - cd infra-repo/overlays/production
    # Point the Kustomize overlay at the freshly built image
    - yq e -i ".images[0].newTag = \"$CI_COMMIT_SHA\"" kustomization.yaml
    - git add kustomization.yaml
    - git commit -m "Bump image to $CI_COMMIT_SHA [skip ci]"
    - git push origin main
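One wrinkle with this job: if two pipelines finish close together, the second `git push` is rejected as non-fast-forward. A small retry helper (hypothetical, not part of the pipeline above) that rebases onto the remote branch and retries keeps the job from failing spuriously:

```shell
#!/bin/sh
# push_with_retry: retry a rejected push after rebasing onto the remote branch.
# Assumes the working directory is a clone with a commit ready to push.
push_with_retry() {
  attempts=0
  until git push origin main; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 3 ]; then
      echo "push failed after $attempts attempts" >&2
      return 1
    fi
    # Replay our bump commit on top of whatever landed in the meantime
    git pull --rebase origin main
  done
}
```

Three attempts is an arbitrary ceiling; the point is that the rebase-then-push loop resolves the common "two bots racing" case without human intervention.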

Step 3: ArgoCD Configuration and Sync Waves

Now that the state is in Git, ArgoCD takes over. It sits inside your cluster (ideally hosted on a high-performance CoolVDS instance to ensure the reconciliation loop is fast). It polls the Git repo and applies changes.

However, simply applying everything at once causes downtime. You need Sync Waves. This ensures your database schema migrations run before your deployment updates.
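Sync waves are driven by a single annotation, `argocd.argoproj.io/sync-wave`: lower-numbered waves sync, and must report healthy, before higher ones start. A sketch under assumed resource names — a migration Job in wave 0, the Deployment in wave 1:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    argocd.argoproj.io/sync-wave: "0"  # runs and must complete first
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.gitlab.com/org/app:latest
          command: ["./migrate", "up"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  annotations:
    argocd.argoproj.io/sync-wave: "1"  # only syncs after wave 0 is healthy
spec:
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: registry.gitlab.com/org/app:latest
```

For migrations that must re-run on every deploy, ArgoCD also supports resource hooks; a plain Job with a wave annotation is the simplest starting point.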

Here is a robust ArgoCD Application definition utilizing Sync Waves and automated pruning:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-ecommerce-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://gitlab.com/org/infra-repo.git'
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m

Pro Tip: Network latency kills GitOps sync speeds. If your git repository is hosted in Europe (e.g., GitLab EU) but your cluster is in the US, you are introducing unnecessary lag during the reconciliation loop. Hosting your Kubernetes nodes on CoolVDS in Norway ensures your infrastructure is legally compliant and physically close to the fiber backbones connecting Northern Europe.

Step 4: Secret Management with Sealed Secrets

You cannot commit `.env` files to Git. That is Security 101. In 2023, the standard for this is Bitnami Sealed Secrets. It uses asymmetric encryption: you encrypt the secret on your laptop using a public key, push the encrypted `SealedSecret` CRD to Git, and the controller in the cluster (which holds the private key) decrypts it.

The workflow looks like this:

# 1. Create a generic secret locally (dry-run)
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecret123 \
  --dry-run=client -o yaml > secret.yaml

# 2. Seal it using the public key fetched from the controller
kubeseal --format=yaml --cert=pub-cert.pem < secret.yaml > sealed-secret.yaml

# 3. Commit sealed-secret.yaml to Git

The resulting sealed-secret.yaml is safe to commit to public repositories. Even if compromised, it cannot be decrypted without the private key sitting safely inside your CoolVDS-hosted cluster.
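The committed file is an ordinary CRD instance. A sketch of what it looks like — the `encryptedData` payload here is a truncated placeholder, not real ciphertext:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: production
spec:
  encryptedData:
    password: AgBy3i4OJSWK...  # placeholder; real output is several hundred base64 characters
  template:
    metadata:
      name: db-creds
      namespace: production
```

Note that by default the ciphertext is bound to the secret's name and namespace, so a sealed secret copied out of one namespace cannot simply be replayed into another.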

Step 5: Infrastructure as Code (Terraform)

Before you even install Kubernetes, you need the servers. Terraform is the industry standard here. You should not be clicking buttons in a web portal to create your VPS instances.

Below is a Terraform configuration snippet to provision three worker nodes. Note that `count = 3` alone gives you copies, not high availability; to guarantee the instances land on distinct compute hosts, pair it with an anti-affinity server group.

resource "openstack_compute_instance_v2" "k8s_worker" {
  count           = 3
  name            = "k8s-worker-${count.index}"
  flavor_name     = "v2-highcpu-4gb"
  key_pair        = var.ssh_key
  security_groups = ["default", "k8s-node"]

  network {
    name = "private-net"
  }

  # Boot from a volume built off the Ubuntu 22.04 image; image_name is
  # omitted because this block_device already defines the boot source
  block_device {
    uuid                  = var.image_id
    source_type           = "image"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
    volume_size           = 80 # NVMe-backed storage
  }
}
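To make the anti-affinity explicit, the OpenStack provider offers server groups. A sketch: declare a group with the `anti-affinity` policy and reference it from the worker resource via a `scheduler_hints` block (resource names here are assumptions):

```hcl
# Spread the workers across distinct hypervisors
resource "openstack_compute_servergroup_v2" "k8s_workers" {
  name     = "k8s-workers"
  policies = ["anti-affinity"]
}

# Then, inside the openstack_compute_instance_v2.k8s_worker resource:
#
#   scheduler_hints {
#     group = openstack_compute_servergroup_v2.k8s_workers.id
#   }
```

With this in place, losing one physical host costs you at most one worker node.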

Why Infrastructure Choice Impacts GitOps

Many developers treat the underlying hardware as an abstraction, but in a GitOps workflow, I/O performance is critical. Your GitOps controller is constantly cloning repositories, unpacking Helm charts, and verifying state against etcd.

  • Latency matters: when your ArgoCD instance polls a repo, high latency delays deployment updates.
  • Disk I/O matters: etcd is extremely sensitive to disk write latency. If fsync takes too long, heartbeats are missed and leader elections churn.

We use CoolVDS for our reference architecture because they provide genuine KVM isolation with direct NVMe pass-through. Unlike container-based VPS solutions (OpenVZ/LXC) where "noisy neighbors" can steal your CPU cycles during a heavy build process, KVM guarantees the resources you pay for are actually yours. For a production Kubernetes cluster, this stability is non-negotiable.

Compliance and the "Schrems II" Reality

If you are operating in Norway or the EU, you are likely aware of the legal complexities regarding data transfer to US-owned cloud providers. By hosting your GitOps control plane and your production workloads on a Norwegian provider like CoolVDS, you simplify your GDPR compliance posture. Your data stays within the jurisdiction, governed by Norwegian law.

Final Thoughts

GitOps is more than a tool; it is a contract between your dev team and your infrastructure. It demands discipline, but the payoff is a system that documents itself and heals itself. Don't ruin a perfect GitOps workflow by running it on unstable, oversold hardware.

Ready to harden your pipeline? Spin up a KVM-based instance on CoolVDS today and experience the difference low-latency NVMe storage makes for your Kubernetes control plane.