GitOps in 2023: Architecting Zero-Trust Deployment Pipelines for Nordic Infrastructure

If you are still running kubectl apply -f deployment.yaml from your local laptop in August 2023, you are fundamentally risking your infrastructure's integrity. I used to be that guy. Then, during a routine Friday deployment for a fintech client in Oslo, a local network timeout left the cluster in a partial state. The API was up, the database migration hadn't run, and the rollback scripts failed because my local context was out of sync with the cluster state. We spent six hours manually patching manifests while the client called every fifteen minutes.

That is why we moved to GitOps. In the Nordic market, where the Datatilsynet (Norwegian Data Protection Authority) watches compliance like a hawk and latency to the Norwegian Internet Exchange (NIX) defines your user experience, you cannot afford "drift." GitOps isn't just a buzzword; it's the only practical way to guarantee that what is in your Git repository matches exactly what is running on your servers.

The Architecture of Truth: Pull vs. Push

The traditional CI/CD pipeline (Jenkins, GitLab CI) pushes changes to the cluster. This requires you to expose your Kubernetes API credentials to your CI runner. From a security standpoint, this is a nightmare. If your CI system is compromised, your entire production environment is exposed.

The GitOps "Pull" model reverses this. An operator inside the cluster (like ArgoCD or Flux) watches the Git repository and pulls changes inward. No inbound ports open. No cluster credentials leaving the perimeter. This architecture aligns well with the strict Schrems II rulings we deal with in Europe, keeping access control tightly scoped within the infrastructure boundary.

Feature            | Push-Based (Legacy)                       | Pull-Based (GitOps)
Security           | CI needs cluster-admin credentials        | Cluster only needs read-only Git access
Drift Detection    | None (checked only when a pipeline runs)  | Continuous and automatic
Disaster Recovery  | Re-run every pipeline (slow)              | Point a fresh cluster at the Git repo and let it reconcile (fast)
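
In practice, "read-only Git access" means handing the in-cluster operator a deploy key and nothing more. With ArgoCD this can itself be declarative; a minimal sketch (the repo URL matches the example later in this article, the key is obviously a placeholder) looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: infra-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository   # tells ArgoCD this Secret describes a repository
stringData:
  type: git
  url: git@github.com:your-org/infra-repo.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <read-only deploy key goes here - never cluster-admin credentials>
    -----END OPENSSH PRIVATE KEY-----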

The Repository Strategy: Separation of Concerns

Do not store your application source code and your Kubernetes manifests in the same repository. I repeat: separate them. Your application repo should contain code and a Dockerfile. Your infrastructure repo should contain Helm charts or Kustomize manifests. Why? Because you don't want a README.md change in your application repo to kick off a new image build and, with it, a production sync.

Here is the folder structure we enforce for high-availability setups hosted on CoolVDS NVMe instances:

/infrastructure-repo
├── /base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
├── /overlays
│   ├── /dev
│   │   ├── kustomization.yaml
│   │   └── patch-replicas.yaml
│   └── /prod
│       ├── kustomization.yaml
│       └── patch-resources.yaml

Pro Tip: When using Kustomize, always pin your image tags in the overlay kustomization.yaml. Never use :latest. It breaks the immutability principle of GitOps.
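
As a concrete sketch, the prod overlay's kustomization.yaml might pin the image like this (the image name, registry and tag are placeholders):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: patch-resources.yaml
images:
  - name: my-app                    # image name referenced in the base deployment
    newName: my-registry/my-app     # hypothetical registry path
    newTag: "3f9c2d1"               # pinned to a commit SHA - never :latest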

Implementing ArgoCD on Bare Metal KVM

We prefer ArgoCD for its visual dashboard and robust RBAC integration. However, the performance of the reconciliation loop depends heavily on the underlying storage I/O. ArgoCD (and Kubernetes itself via etcd) makes heavy use of disk writes to maintain state. If you are running this on cheap shared hosting with spinning rust (HDD) or throttled SSDs, you will see "Health Status: Unknown" errors simply because the controller can't write to the local database fast enough.

This is where infrastructure choice becomes critical. We deploy our GitOps control planes on CoolVDS instances because they guarantee NVMe storage with high IOPS. Unlike OpenVZ containers where a neighbor can steal your I/O, KVM virtualization ensures that the resources allocated to your GitOps operator are actually yours. This stability is non-negotiable when your deployment tool is managing the state of your entire business.
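
Before trusting a node with the control plane, it is worth measuring the write latency it can actually sustain. The check below is a minimal sketch based on the common etcd fio benchmark (the directory and sizes are illustrative):

# Measure fdatasync latency on the disk that will back etcd and the GitOps controller
fio --name=etcd-disk-check \
    --directory=/var/lib/etcd \
    --rw=write --ioengine=sync --fdatasync=1 \
    --bs=2300 --size=22m
# Watch the fsync/fdatasync percentiles in the output: the 99th percentile
# should stay well below 10ms on a healthy NVMe-backed node.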

The ArgoCD Application Manifest

This is the declarative definition of your deployment. You apply it once to the cluster running ArgoCD (or commit it to a bootstrap repo that ArgoCD watches) to spawn the application.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:your-org/infra-repo.git'
    targetRevision: HEAD
    path: overlays/prod
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payment-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Notice the selfHeal: true flag. This is the magic. If a junior dev manually deletes a Service or changes a ConfigMap via the CLI, ArgoCD detects the drift and immediately reverts it to the state defined in Git. It is ruthlessly consistent.
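
If you prefer the CLI to the dashboard, the same drift information is available there (this assumes the argocd client is installed and logged in to the API server):

# Show sync status, health and any out-of-sync resources
argocd app get nordic-payment-gateway

# Show the exact diff between Git and the live cluster state
argocd app diff nordic-payment-gateway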

Handling Secrets: The "Chicken and Egg" Problem

You cannot store raw secrets in Git. That is a firing offense. In 2023, the industry standard has coalesced around Sealed Secrets (by Bitnami) or the External Secrets Operator. For smaller teams, Sealed Secrets is the simpler option: there is no external vault or cloud secret store to operate.

It works by asymmetric encryption. You encrypt the secret locally using a public key fetched from the cluster. Only the controller running inside the cluster (on your secure CoolVDS node) has the private key to decrypt it.

Workflow:

  1. Install the client: brew install kubeseal

  2. Generate the raw secret (locally):

kubectl create secret generic db-creds \
  --from-literal=password=SuperSecureNorwegianPassword123! \
  --dry-run=client -o yaml > secret.yaml

  3. Seal the secret:

kubeseal --format=yaml < secret.yaml > sealed-secret.yaml

You can safely commit sealed-secret.yaml to your public repository.
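
For reference, the sealed output is just another Kubernetes manifest. An illustrative (and truncated) sealed-secret.yaml looks roughly like this; the ciphertext is a placeholder, not real output:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: payment-prod
spec:
  encryptedData:
    password: AgBy3i...        # placeholder ciphertext; only the in-cluster controller can decrypt it
  template:
    metadata:
      name: db-creds
      namespace: payment-prod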

The CI/CD Handoff: Updating the Image Tag

Your CI pipeline (GitHub Actions/GitLab CI) has one job: Test the code, build the container, push it to the registry, and then update the manifest repo. It does not touch the cluster.

Here is a robust GitHub Actions workflow step that handles the Git commit safely:

name: Build and Push

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Infra Repo
        uses: actions/checkout@v3
        with:
          repository: your-org/infra-repo
          token: ${{ secrets.GIT_PAT }}

      - name: Update Image Tag
        run: |
          cd overlays/prod
          kustomize edit set image my-app=my-registry/my-app:${{ github.sha }}
          git config user.name "CI Bot"
          git config user.email "ci@coolvds.com"
          git add kustomization.yaml
          git commit -m "Bump image tag to ${{ github.sha }}"
          git push origin main
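
The build half that feeds this step is the usual test-build-push sequence against the application repo. A minimal sketch of those preceding steps (the registry hostname and secret names are placeholders) could look like this:

      - name: Checkout App Repo
        uses: actions/checkout@v3

      - name: Log in to Registry
        uses: docker/login-action@v2
        with:
          registry: my-registry                    # placeholder registry hostname
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}

      - name: Build and Push Image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: my-registry/my-app:${{ github.sha }}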

Why Infrastructure Latency Matters for GitOps

GitOps is chatty. It constantly polls your Git repositories and your container registries. If your hosting provider has poor peering, your "time to sync" increases. Hosting in Norway, specifically on CoolVDS, gives you a distinct advantage if your Git repo (e.g., a self-hosted GitLab) and your cluster are in the same geographic region.

We benchmarked the latency from CoolVDS Oslo data center to major European endpoints. The low jitter ensures that the admission controllers don't time out during heavy update storms.

# Checking connectivity to NIX (Norwegian Internet Exchange)
ping -c 4 nix.no
# Result on CoolVDS: < 2ms avg latency

Infrastructure as Code (IaC) for the Base Layer

Before you can apply GitOps, you need the cluster itself. While you can click buttons in a UI, true professionals use Terraform. This ensures that the underlying VPS instances, firewalls, and networking are just as reproducible as the Kubernetes pods.

Below is a Terraform snippet to provision a robust KVM node suitable for a Kubernetes worker on CoolVDS. Note that we prioritize virtio drivers for disk and network to minimize virtualization overhead.

# Look up the base image referenced by the boot volume below
data "openstack_images_image_v2" "ubuntu" {
  name        = "Ubuntu 22.04"
  most_recent = true
}

resource "openstack_compute_instance_v2" "k8s_worker" {
  name            = "worker-node-01"
  image_name      = "Ubuntu 22.04"
  flavor_name     = "v2-highcpu-nvme"
  key_pair        = "deploy-key"
  security_groups = ["default", "k8s-worker"]

  network {
    name = "private-net"
  }

  block_device {
    uuid                  = data.openstack_images_image_v2.ubuntu.id
    source_type           = "image"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
    volume_size           = 80 # NVMe storage size
  }

  metadata = {
    role = "worker"
    environment = "production"
  }
}
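
Applying it is the standard Terraform cycle; keep the plan file and state under the same review discipline as your manifests (backend configuration omitted here):

terraform init
terraform plan -out=worker.tfplan
terraform apply worker.tfplan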

Conclusion

GitOps is the standard for modern delivery, but it exposes weaknesses in poor infrastructure. The constant reconciling, image pulling, and secret decryption require compute and storage that don't flinch under load. You can write the perfect ArgoCD manifest, but if your etcd latency spikes because of noisy neighbors on a cheap VPS, your deployment will stall.

For systems that demand stability and compliance within the Norwegian jurisdiction, the combination of GitOps workflows and high-performance hardware is unbeatable. Don't let slow I/O kill your reconciliation loop.

Ready to stabilize your pipeline? Deploy a high-performance KVM instance on CoolVDS today and experience the difference true NVMe throughput makes for your Kubernetes control plane.