GitOps is Non-Negotiable: Mastering ArgoCD v1.0 on Kubernetes

If you are still SSH-ing into servers to pull code, or worse, running kubectl apply -f . from your laptop, you are doing it wrong. I've seen it a hundred times: a developer pushes a hotfix, the cluster state diverges from git, and suddenly no one knows what version is actually running in production. It is a recipe for disaster. The release of ArgoCD v1.0 just a few weeks ago marks a pivotal moment for those of us tired of fragile CI scripts pushing credentials into our clusters. It is time to invert the flow.

GitOps isn't just a buzzword coined by Weaveworks; it is the only sane way to manage distributed systems. The concept is simple: Git is the single source of truth. Your cluster pulls the state; you don't push it. This solves the security nightmare of giving Jenkins or GitLab CI admin access to your Kubernetes API. But to make this work, you need a reconciliation engine that doesn't sleep. That is where ArgoCD comes in, and frankly, where your underlying infrastructure starts to sweat.

The Architecture of a Pull-Based Deployment

In a traditional push model, your CI server builds the Docker image and then runs a command to update the deployment. This introduces a security risk: if your CI server is compromised, your production environment is wide open. In the GitOps model using ArgoCD, the CI server's job ends at pushing the image to the registry and updating the image tag in the Git repository manifest. That's it.
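To make that concrete, here is a sketch of what the tail end of such a CI job could look like. All names here (registry, file path, tag) are hypothetical, and the manifest is created inline as a stand-in for a file tracked in your config repo; the point is that the pipeline only edits Git, never the cluster.

```shell
#!/bin/sh
# Sketch of the tail end of a pull-model CI job. The pipeline never touches
# the Kubernetes API; it only bumps the image tag in the manifest repo.
set -eu

NEW_TAG="v1.4.2"   # normally injected by the CI system

# Stand-in for the deployment manifest tracked in the config repo.
mkdir -p deploy
cat > deploy/guestbook-deployment.yaml <<'EOF'
    spec:
      containers:
      - name: guestbook
        image: registry.example.com/guestbook:v1.4.1
EOF

# Bump the tag in place; ArgoCD will detect the new commit and sync.
sed -i "s|\(image: registry.example.com/guestbook:\).*|\1${NEW_TAG}|" \
  deploy/guestbook-deployment.yaml

grep 'image:' deploy/guestbook-deployment.yaml
# In the real job you would now: git add, git commit, git push.
```

After the push, the CI server's involvement is over. ArgoCD notices the new commit and does the rest from inside the cluster.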

ArgoCD, running inside your cluster, detects the change in the Git repo and synchronizes the cluster state. It matches the live state to the desired state defined in Git. If someone manually changes a service type from ClusterIP to LoadBalancer via the command line, ArgoCD sees the drift and (if configured) instantly reverts it. Strict consistency.

Prerequisites for this Setup

Before we touch the YAML, let's talk hardware. Running a Kubernetes control plane alongside a resource-hungry operator like ArgoCD requires stability. The reconciliation loop constantly queries the API server. If you are running this on cheap, oversold OpenVZ containers, you are going to see etcd timeouts. I've spent too many nights debugging "crash loop backoffs" that turned out to be disk I/O latency.

Pro Tip: For a production-grade Kubernetes cluster hosting ArgoCD, we strictly use KVM-based virtualization. We need the kernel isolation and, crucially, the NVMe storage speeds. At CoolVDS, our Oslo datacenter nodes are optimized for high IOPS specifically to handle the chatter of etcd and controller managers. Don't cheap out on IOPS.

Installing ArgoCD v1.0

Let's assume you have a Kubernetes v1.14+ cluster running. We will deploy ArgoCD into its own namespace. Since v1.0.0 is the fresh stable release, we will use that manifest.

# Create the namespace
kubectl create namespace argocd

# Apply the manifest directly from the Argo Project
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.0.0/manifests/install.yaml

This installs the custom resource definitions (CRDs), the controller, the repo server, and the API server. Watch the pods come up:

kubectl get pods -n argocd -w

Once they are running, you need to access the UI. By default, the API server is not exposed. For a quick internal test, use port-forwarding. For production, you'd want an Ingress or a Service of type LoadBalancer.

kubectl port-forward svc/argocd-server -n argocd 8080:443
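For the production route, a hypothetical Ingress for argocd-server might look like the following. The hostname and annotations are assumptions for an ingress-nginx setup; note that argocd-server terminates TLS itself, which is why SSL passthrough is enabled here.

```yaml
# Sketch only: exposing argocd-server via ingress-nginx with SSL passthrough.
# Hostname is a placeholder.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: argocd.example.com
    http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: 443
```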

The initial password is the name of the argocd-server pod. You can grab it with this one-liner:

kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2

Defining Your First Application

Now for the real work. We don't click buttons in the UI to deploy apps; we write code. We will define an Application CRD that tells ArgoCD where to look and where to deploy.

Create a file named guestbook-app.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Apply it:

kubectl apply -f guestbook-app.yaml

Notice the syncPolicy. I have enabled prune: true. This means if I delete a file in the Git repository, ArgoCD will delete the corresponding resource in the cluster, so nothing is left orphaned. selfHeal: true means if I manually mess with the live resources, ArgoCD will fight me and revert the changes. This is the discipline required for modern operations.
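You can watch selfHeal in action by poking the live state by hand. A quick experiment, assuming the guestbook app above is synced (guestbook-ui is the deployment name used in the example repo):

```shell
# Manually scale the guestbook deployment behind ArgoCD's back.
kubectl -n default scale deployment guestbook-ui --replicas=5

# Within the reconciliation interval, ArgoCD flags the drift and reverts
# the replica count to whatever Git declares. Watch it happen:
kubectl -n default get deployment guestbook-ui -w
```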

Handling Secrets and Compliance in Norway

Here is the tricky part. You cannot store raw Kubernetes Secrets in Git. That is a violation of basic security and, if you handle PII, a massive GDPR risk. In Norway, Datatilsynet (The Norwegian Data Protection Authority) is rightfully aggressive about data protection.

In 2019, the best practice is to use Sealed Secrets by Bitnami or git-crypt. Sealed Secrets lets you encrypt a secret on your laptop using the cluster's public key. Only the controller running in the cluster holds the private key, so only it can decrypt the secret.

The workflow looks like this:

  1. Developer creates a standard secret.yaml.
  2. Run kubeseal < secret.yaml > sealed-secret.json.
  3. Commit sealed-secret.json to Git.
  4. ArgoCD deploys the SealedSecret CRD.
  5. The SealedSecrets controller decrypts it into a native Kubernetes Secret.
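On the command line, the first three steps look roughly like this. The secret name and value are placeholders; kubeseal fetches the cluster's public cert from the controller by default (or you can supply one offline with --cert):

```shell
# 1. Create a standard Secret manifest locally (never committed).
kubectl create secret generic db-credentials \
  --from-literal=password='s3cr3t' \
  --dry-run -o yaml > secret.yaml

# 2. Encrypt it with the cluster's public key. Only the in-cluster
#    sealed-secrets controller can decrypt the result.
kubeseal < secret.yaml > sealed-secret.json

# 3. Commit only the encrypted artifact; delete the plaintext.
rm secret.yaml
git add sealed-secret.json
git commit -m "Add sealed db credentials"
```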

This keeps your Git repo clean and your auditors happy. Ensure your cluster is physically located in a jurisdiction you trust. We see many Norwegian enterprises moving workloads from US-based clouds back to local hosting to ensure data sovereignty. CoolVDS offers that precise geolocation assurance in our Oslo datacenter.

Performance Tuning the Reconciliation Loop

As you scale to hundreds of applications, ArgoCD can become heavy. It clones repositories and generates manifests constantly. You might need to tune the resource limits in the argocd-repo-server deployment.

resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"

However, software limits only help if the hardware delivers. High "Steal Time" on a CPU means your VPS neighbor is noisy, stealing your cycles. This delays reconciliation. If you are doing GitOps, you need guaranteed CPU cycles. We designed the CoolVDS KVM platform to minimize steal time, ensuring your GitOps controller reacts instantly, not 5 seconds later.
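You don't have to take steal time on faith; on a Linux guest you can read it straight out of /proc/stat. A minimal sketch (field 9 of the aggregate cpu line is steal, in clock ticks since boot; a steadily growing value means a noisy neighbor):

```shell
#!/bin/sh
# Read accumulated CPU steal time on a Linux guest from /proc/stat.
set -eu

steal_ticks=$(awk '/^cpu /{print $9}' /proc/stat)
total_ticks=$(awk '/^cpu /{t=0; for (i=2; i<=NF; i++) t+=$i; print t}' /proc/stat)

echo "steal ticks: ${steal_ticks}"
echo "total ticks: ${total_ticks}"
# For a live percentage, compare two samples taken a second apart,
# or just watch the "st" column in top or vmstat.
```

On a healthy KVM host the steal counter should barely move between samples.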

Conclusion

ArgoCD v1.0 is the tool we have been waiting for. It visualizes the complex graph of Kubernetes resources and enforces the state defined in Git. It removes the human error from deployments and secures your API server by removing CI access. But remember, a declarative tool is only as reliable as the infrastructure it runs on. Don't build a Ferrari engine and put it in a rusted chassis.

If you are ready to implement a robust GitOps pipeline, start with a foundation that respects your need for speed and data sovereignty. Spin up a high-performance KVM instance on CoolVDS today and see the difference low latency makes to your deployment pipelines.