GitOps Workflows in 2020: Stop Running kubectl apply From Your Laptop

If you are still SSHing into your production servers to pull a git repo, or worse, running kubectl apply -f . from your local terminal, you are creating a ticking time bomb. I have seen it happen too many times: a developer's laptop configuration differs slightly from the staging environment, a deployment script misses a flag, and suddenly the production cluster in Oslo goes dark because of a configuration drift that no one tracked.

It is April 2020. We have tools like ArgoCD and Flux. There is no excuse for manual intervention in the deployment process anymore. We need audit trails, we need rollback capabilities that take seconds, and we need to treat infrastructure state exactly like we treat application code.

The Architecture of a Pull-Based Pipeline

Traditional CI/CD pushes changes. Your Jenkins or GitLab CI runner builds a container, authenticates with your Kubernetes cluster, and pushes the new manifest. This is a security risk: it requires handing your CI system, which often runs outside your trusted perimeter, the keys to the kingdom (the cluster-admin role).

GitOps flips this. It uses a Pull mechanism. An agent inside your cluster (the controller) watches a Git repository. When the state in Git changes, the controller detects the drift and syncs the cluster to match the repo.

Pro Tip: Keeping the controller inside the cluster means you don't need to expose your Kubernetes API server to the public internet or your CI provider. This is critical for complying with strict Norwegian security standards where minimizing attack surface is mandatory.
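The controller's core job, detecting drift and converging on Git, can be illustrated with plain files. This is a toy sketch, not ArgoCD's actual implementation, and the paths are arbitrary:

```shell
# Toy model: "git" holds the desired state, "cluster" holds the live state.
mkdir -p /tmp/gitops-demo/git /tmp/gitops-demo/cluster
echo 'replicas: 3' > /tmp/gitops-demo/git/deployment.yaml
echo 'replicas: 1' > /tmp/gitops-demo/cluster/deployment.yaml  # manual drift

# Reconcile: if the live state differs from Git, sync it back.
if ! diff -rq /tmp/gitops-demo/git /tmp/gitops-demo/cluster >/dev/null; then
  cp -r /tmp/gitops-demo/git/. /tmp/gitops-demo/cluster/
  echo 'drift detected, cluster synced to Git'
fi
```

ArgoCD runs this compare-and-sync loop continuously (roughly every three minutes by default), against rendered manifests rather than raw files, but the principle is the same: Git is the source of truth, and the cluster converges toward it.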

The Stack

For this workflow, we are standardizing on:

  • Kubernetes 1.18: The latest stable release as of March 2020.
  • ArgoCD v1.5: For visualization and synchronization.
  • Kustomize: For configuration management (built into kubectl via the -k flag since 1.14).
  • CoolVDS NVMe Instances: To run the control plane without etcd latency bottlenecks.

Setting Up the GitOps Controller

Latency matters here. The controller constantly compares the live state of your cluster against the Git state. If your etcd performance is sluggish due to noisy neighbors on shared hosting, your reconciliation loops lag. This is why we deploy on CoolVDS KVM slices; the dedicated NVMe allocation ensures the etcd writes happen instantly.

Let's install ArgoCD into a dedicated namespace.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.5.2/manifests/install.yaml

Once the pods are running, you need to access the ArgoCD API server. In a production environment in Norway, you absolutely do not want to expose this dashboard to the open web without a VPN. For initial setup, however, we can port-forward:

kubectl port-forward svc/argocd-server -n argocd 8080:443

In ArgoCD v1.x, the auto-generated initial admin password is the name of the argocd-server pod. Grab it with:

kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
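If the pipeline looks cryptic: -o name prints resources as type/name, and cut keeps the part after the slash. With a made-up pod hash:

```shell
# `-o name` output looks like pod/argocd-server-<hash>; keep the part after '/'
echo 'pod/argocd-server-5f7b8c9d4-x2x7k' | cut -d'/' -f2
# prints: argocd-server-5f7b8c9d4-x2x7k
```

Log in with `argocd login localhost:8080 --username admin` using that value, then change it immediately with `argocd account update-password`. The pod name is visible to anyone who can list pods, so it is not a real secret.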

Structuring Your Repository for Kustomize

Do not dump raw YAML files into the root of your repo. You will regret it when you need to manage `staging`, `prod-oslo`, and `prod-trondheim` environments. Use Kustomize bases and overlays.

Here is a robust directory structure:

.
β”œβ”€β”€ base
β”‚   β”œβ”€β”€ deployment.yaml
β”‚   β”œβ”€β”€ service.yaml
β”‚   └── kustomization.yaml
└── overlays
    β”œβ”€β”€ staging
    β”‚   β”œβ”€β”€ kustomization.yaml
    β”‚   └── replica_count.yaml
    └── production
        β”œβ”€β”€ kustomization.yaml
        └── resource_limits.yaml
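The base kustomization.yaml just lists the shared resources, and each overlay patch states only what it overrides. A minimal sketch (the app name matches the examples below; the replica count is illustrative):

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
```

```yaml
# overlays/staging/replica_count.yaml -- merged over the base Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
```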

In your overlays/production/kustomization.yaml, you enforce the specific configurations required for high-traffic environments.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- resource_limits.yaml
images:
- name: my-app
  newName: registry.coolvds.com/my-app
  newTag: v1.0.4
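The resource_limits.yaml patch referenced above needs only the fields it overrides; Kustomize strategically merges it onto the base Deployment. A plausible version (the numbers are illustrative, not a recommendation):

```yaml
# overlays/production/resource_limits.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi
```

Run kustomize build overlays/production (or kubectl apply -k overlays/production) to see or apply the merged result.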

Defining the Application in ArgoCD

Now we tell the controller to watch our repo. You can do this via the CLI or the UI. The declarative way (which fits the philosophy) is to define an Application CRD.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-gateway-oslo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:nordic-dev/payments.git
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

The selfHeal: true flag is the magic. If someone manually changes a replica count via kubectl, ArgoCD detects the drift and immediately reverts it to the state defined in Git. This delivers on the "immutable infrastructure" promise.

Storage Performance and Audit Logs

With Datatilsynet (the Norwegian Data Protection Authority) watching more closely than ever, you need to prove who changed what, and when. In a GitOps workflow, your Git commit log is your audit log. "Who deployed v1.2?" Check the git log.
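Here is a throwaway demo of that audit trail in action. The author name and the v1.0.4 tag are illustrative, and git's pickaxe (-S) finds the commit that introduced a given string:

```shell
# Build a tiny demo repo with two "deploy" commits.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.name 'Kari Nordmann'
git config user.email 'kari@example.com'
mkdir -p overlays/production
echo 'newTag: v1.0.3' > overlays/production/kustomization.yaml
git add -A && git commit -qm 'deploy payment-gateway v1.0.3'
echo 'newTag: v1.0.4' > overlays/production/kustomization.yaml
git commit -qam 'deploy payment-gateway v1.0.4'

# "Who deployed v1.0.4?" -- the pickaxe answers in one command:
git log -S 'newTag: v1.0.4' --format='%an %ad %s' --date=short
```

The output is the author, date, and message of the commit that set the tag. No SSH sessions to audit, no shell histories to reconstruct.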

However, running a GitOps operator alongside your workload consumes resources. The constant polling and diffing require CPU cycles and I/O operations.

Resource            | Shared Hosting                  | CoolVDS (NVMe KVM)
IOPS (random 4k)    | Unpredictable (noisy neighbors) | Consistent, dedicated
Reconciliation time | > 45 seconds                    | < 5 seconds
etcd stability      | Risk of timeouts                | Solid

If your underlying storage is slow, the Kubernetes control plane struggles to persist state changes to etcd. This leads to API timeouts. We built CoolVDS on pure NVMe arrays precisely to handle the heavy I/O taxes levied by modern orchestration tools like Kubernetes 1.18 and ArgoCD.
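You can get a rough feel for sync-write latency on a node's disk with dd. This is only a sanity check (etcd's own documentation recommends fio for a proper benchmark), and the temp-dir default is just so the commands run anywhere; on a real node, point TESTDIR at the volume backing your etcd data dir, commonly /var/lib/etcd:

```shell
# Write 100 x 8k blocks, syncing each to disk, and show the throughput summary.
TESTDIR=${TESTDIR:-$(mktemp -d)}
dd if=/dev/zero of="$TESTDIR/fsync-test" bs=8k count=100 oflag=dsync 2>&1 | tail -n 1
rm -f "$TESTDIR/fsync-test"
```

On NVMe this finishes almost instantly; on oversold shared storage, the throughput line in dd's summary tells the story.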

Handling Secrets

You cannot check plaintext secrets into Git. In 2020, the best practice is using Bitnami's Sealed Secrets or integrating with HashiCorp Vault. For simplicity and self-containment, Sealed Secrets is excellent.

  1. Install the Sealed Secrets controller in the cluster and the kubeseal CLI locally.
  2. Encrypt your secret locally using the public key fetched from the controller.
  3. Commit the SealedSecret CRD to Git.
  4. The controller decrypts it inside the cluster using the private key.

# Create a raw secret (dry run)
kubectl create secret generic database-creds --from-literal=password=SuperSecret -o yaml --dry-run=client > secret.yaml

# Seal it
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml

# Now it is safe to git commit sealed-secret.yaml

Conclusion

Moving to GitOps isn't just a trend; it's a maturity milestone. It creates a firewall between your developers and your production environment. It ensures that the state of your infrastructure in Oslo matches your intent in the repository, byte for byte.

But software is only as good as the hardware it runs on. A jittery network or choked I/O can turn a self-healing cluster into a self-destructing one. Don't risk your production environment on oversold budget VPS providers.

Ready to build a rock-solid Kubernetes foundation? Deploy a high-performance NVMe KVM instance on CoolVDS today and get your GitOps pipeline running in minutes.