GitOps Workflows in 2019: Building Bulletproof CI/CD Pipelines on Kubernetes

If you are still SSH-ing into your production servers to run kubectl apply -f or, god forbid, kubectl edit deployment, you are creating a ticking time bomb. I’ve seen it happen too many times. A "quick fix" at 2 AM becomes a configuration drift that brings down the entire cluster three weeks later during a traffic spike.

We need to talk about GitOps. Not because it's the buzzword of 2019, but because it is the only way to sleep soundly at night. The premise is simple: Git is the single source of truth. If it's not in the repo, it doesn't exist in the cluster.

In this guide, we’re going to build a pipeline that actually works, focusing on the Nordic market where data sovereignty (thanks, GDPR) and latency are critical. We will use Flux (v1), Kubernetes 1.13, and standard CI/CD practices.

The Architecture: Pull vs. Push

Most traditional CI/CD pipelines (Jenkins, CircleCI) use a Push model. The CI runner has the credentials to the cluster and pushes the changes. This is a security nightmare. If your CI server is compromised, your attackers have the keys to your production kingdom.

GitOps uses a Pull model. An operator inside your Kubernetes cluster watches the Git repository. When it sees a change, it pulls it and applies it. The cluster keys never leave the cluster.
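Conceptually, the operator's reconciliation loop is nothing more than this (a simplified sketch; Flux adds SSH auth, manifest caching, and garbage collection on top — the repo URL and paths here are placeholders):

```shell
#!/bin/sh
# Naive GitOps pull loop: what Flux does for you, minus auth, caching
# and pruning. REPO and CLONE_DIR are illustrative placeholders.
REPO="git@github.com:your-org/k8s-config-repo.git"
CLONE_DIR="/tmp/k8s-config"

while true; do
  if [ -d "$CLONE_DIR/.git" ]; then
    git -C "$CLONE_DIR" pull --ff-only
  else
    git clone "$REPO" "$CLONE_DIR"
  fi
  # Apply whatever is in the repo; the cluster converges toward Git.
  kubectl apply -f "$CLONE_DIR/workloads/"
  sleep 300   # Flux's default sync interval is five minutes
done
```

Note that the loop never needs inbound access: the only credentials involved are a read-only deploy key for the repo, held inside the cluster.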

Pro Tip: Network latency matters here. If your Git repository is hosted in the US (GitHub/GitLab.com) but your cluster is in Oslo, the sync delay is negligible. However, if your container registry is halfway across the world, image pulls will kill your rollout time. Host your registry close to your compute. On CoolVDS, we see image pull times drop by 60% when using local NVMe-backed registry mirrors.

Step 1: The Infrastructure Layer

Before we touch YAML, we need a cluster. But not all clusters are equal. GitOps operators like Flux are chatty. They constantly query the Kubernetes API server, which in turn hammers etcd. If your underlying storage is slow (spinning rust or standard SSDs with noisy neighbors), your API server latency spikes.

In 2019, running Kubernetes on a standard HDD VPS is negligence. You need NVMe. Here is why CoolVDS has become my reference architecture for this:

  • KVM Virtualization: No shared kernel limitations like OpenVZ. You can load custom kernel modules if needed.
  • NVMe Storage: etcd writes are synchronous. Slow disk = slow cluster.
  • Oslo Peering: Direct routing to NIX (Norwegian Internet Exchange).
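You can sanity-check whether a node's disk is fast enough for etcd before installing anything. The fio job below mimics etcd's write pattern (small writes with an fdatasync after each); etcd's own guidance is that 99th-percentile fdatasync latency should stay under roughly 10 ms:

```shell
# Benchmark fsync latency the way etcd experiences it.
# Requires fio; writes ~22 MB of test data into the target directory.
mkdir -p /var/lib/etcd-bench
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench \
    --size=22m --bs=2300 --name=etcd-fsync-test

# In the output, find the fsync/fdatasync percentile table:
# p99 under ~10ms is healthy for etcd; NVMe is typically well under 1ms.
rm -rf /var/lib/etcd-bench
```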

Setting up the Environment

Assuming you have a CoolVDS instance running Ubuntu 18.04 LTS, let's prep the K8s node. We'll use kubeadm for a bare-metal feel.

# Disable swap (Kubernetes 1.13 requirement)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Install Docker CE 18.06
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
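With Docker in place, bootstrapping the control plane takes only a few more commands. A sketch, assuming a single-node cluster with flannel as the CNI (the patch release and the flannel manifest URL are examples; pin to whatever is current for you):

```shell
# Add the Kubernetes apt repo and install the 1.13 tooling
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
    > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00

# Initialize the control plane; this CIDR matches flannel's default
kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for the current user
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config

# Install the CNI (flannel shown here)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```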

Step 2: Installing Flux (The Operator)

Flux is the engine that makes GitOps work. It runs inside your cluster and ensures the state matches your Git repo.

First, install the fluxctl binary locally:

sudo snap install fluxctl --classic

Now, deploy Flux to your cluster. We will create a dedicated namespace.

kubectl create ns flux

export GHUSER="your-github-user"
export REPO="k8s-config-repo"

fluxctl install \
--git-user=${GHUSER} \
--git-email=${GHUSER}@users.noreply.github.com \
--git-url=git@github.com:${GHUSER}/${REPO} \
--git-path=namespaces,workloads \
--namespace=flux | kubectl apply -f -

Once deployed, Flux will generate an SSH key. You need to add this to your GitHub/GitLab repository as a Deploy Key with write access (if you want Flux to update image tags automatically).

fluxctl identity --k8s-fwd-ns flux
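Once the public key is registered as a deploy key, you don't have to wait out the poll interval to see it work:

```shell
# Trigger a reconciliation immediately instead of waiting ~5 minutes
fluxctl sync --k8s-fwd-ns flux

# Tail the operator's logs to confirm the clone and apply succeeded
kubectl logs -n flux deployment/flux -f
```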

Step 3: Managing Secrets (The Hard Part)

You cannot commit raw passwords or API keys to Git. In 2019, the standard is Sealed Secrets by Bitnami or using Mozilla SOPS. Let's use Sealed Secrets because it integrates cleanly with the operator pattern.

The controller decrypts the secret only inside the cluster. The repo only contains the encrypted "sealed" secret.

# Install the Sealed Secrets controller
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.7.0/controller.yaml

# Client side usage
# 1. Create a regular secret locally (dry-run)
kubectl create secret generic db-creds --from-literal=pwd=MySuperSecurePwd --dry-run -o json > secret.json

# 2. Seal it
kubeseal --format=yaml < secret.json > sealed-secret.yaml

# 3. Commit sealed-secret.yaml to Git.
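After Flux applies the SealedSecret, the controller unseals it into an ordinary Secret inside the cluster. You can verify the round trip (the resource name matches the example above):

```shell
# The controller should have created a plain Secret from the SealedSecret
kubectl get sealedsecret db-creds
kubectl get secret db-creds

# Decode the value to confirm it unsealed correctly
kubectl get secret db-creds -o jsonpath='{.data.pwd}' | base64 --decode
```

Remember that sealed secrets are encrypted against this cluster's keypair: back up the controller's private key, or a cluster rebuild will leave your repo full of secrets nobody can open.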

Step 4: The CI/CD Pipeline Integration

Here is where the magic happens. Your CI (GitLab CI or Jenkins) builds the Docker image, pushes it to the registry, and then updates the Git config repo. Flux picks up the change.

If you are using GitLab CI, your .gitlab-ci.yml might look like this:

stages:
  - build
  - deploy

build_image:
  stage: build
  script:
    - docker build -t registry.coolvds.net/my-app:$CI_COMMIT_SHA .
    - docker push registry.coolvds.net/my-app:$CI_COMMIT_SHA

update_manifests:
  stage: deploy
  image: bitnami/git
  script:
    - git clone git@gitlab.com:my-org/k8s-config.git
    - cd k8s-config
    # The bitnami/git image does not ship yq; grab the v2 binary first
    - wget -qO yq https://github.com/mikefarah/yq/releases/download/2.4.0/yq_linux_amd64 && chmod +x yq
    # yq v2 syntax: write the new image tag into the deployment manifest
    - ./yq w -i workloads/deployment.yaml 'spec.template.spec.containers[0].image' registry.coolvds.net/my-app:$CI_COMMIT_SHA
    - git config user.email "ci-bot@coolvds.net"
    - git config user.name "CI Bot"
    - git add .
    - git commit -m "Bump image to $CI_COMMIT_SHA"
    - git push origin master
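There is also an alternative to having CI rewrite manifests at all: Flux v1 can watch the registry itself and commit the tag bump for you (this is why the deploy key needs write access). A sketch, assuming the workload lives in the default namespace:

```shell
# Let Flux bump the image tag automatically when a new one is pushed
fluxctl automate --k8s-fwd-ns flux --workload=default:deployment/my-app

# Optionally constrain which tags qualify, e.g. only semver 1.x releases
fluxctl policy --k8s-fwd-ns flux --workload=default:deployment/my-app \
    --tag-all='semver:~1'
```

Pick one mechanism, not both: CI-driven commits give you an explicit audit trail per pipeline run, while Flux automation reacts faster but hides the bump inside the operator.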

Performance & Compliance in Norway

Why do we care about the underlying metal? Because GitOps creates a loop. Git -> Flux -> API Server -> etcd -> Disk. If any link is slow, your "automated" platform feels sluggish.

Furthermore, we must address the elephant in the room: GDPR. Since the regulation took effect in May 2018, Datatilsynet (the Norwegian Data Protection Authority) has been rigorous about enforcement. If you are storing customer data in your databases, you need to know exactly where those bits live.

Feature          Global Public Cloud                        CoolVDS (Norway)
Data Location    Often opaque (Frankfurt? Ireland?)         Strictly Oslo, Norway
Latency to NIX   20-40 ms                                   < 2 ms
Storage I/O      Throttled IOPS (unless you pay premium)    Unthrottled NVMe

Common Pitfalls in 2019

1. Helm Tiller Security:
If you are using Helm 2, remember that Tiller runs with cluster-wide privileges. Either secure the connection to Tiller with mutual TLS, or, better yet, render charts locally with helm template and let Flux apply the raw YAML, removing Tiller from the cluster entirely.

2. CPU Stealing:
On cheap VPS providers, your "2 vCPUs" are often shared with 50 other neighbors. When you try to re-schedule pods during a deployment, the CPU wait times skyrocket. Monitor %st (steal time) in top. If it's consistently above 5%, move to a dedicated resource provider like CoolVDS immediately.
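You don't need a monitoring stack to check this. A quick sketch that samples /proc/stat twice, one second apart, and computes the steal percentage directly:

```shell
#!/bin/sh
# Sample the aggregate "cpu" line from /proc/stat twice and compute
# the percentage of CPU time stolen by the hypervisor in between.
# Field 9 of that line is the cumulative steal counter (in jiffies).
s1=$(head -n1 /proc/stat); sleep 1; s2=$(head -n1 /proc/stat)

steal1=$(echo "$s1" | awk '{print $9}')
steal2=$(echo "$s2" | awk '{print $9}')
total1=$(echo "$s1" | awk '{t=0; for(i=2;i<=NF;i++) t+=$i; print t}')
total2=$(echo "$s2" | awk '{t=0; for(i=2;i<=NF;i++) t+=$i; print t}')

pct=$(( (steal2 - steal1) * 100 / (total2 - total1) ))
echo "CPU steal over the last second: ${pct}%"
```

Run it a few times during a deployment; a one-off spike is noise, but a steady reading above 5% means your neighbors are eating your scheduler.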

Conclusion

GitOps transforms your infrastructure into something deterministic. If disaster strikes, you don't panic. You just re-apply the repository. But software reliability relies on hardware reliability.

Don't let slow I/O or high latency undermine your sophisticated Kubernetes stack. Build your GitOps foundation on infrastructure that respects your engineering standards.

Ready to stabilize your production environment? Spin up a high-performance NVMe KVM instance on CoolVDS today and see the difference raw I/O makes to your cluster convergence time.