Escape YAML Hell: Programmable Pipelines with Dagger CI (2025 Edition)

I have spent more hours than I care to admit debugging GitHub Actions by committing "fix ci", pushing, waiting four minutes, and watching a red X appear because of a stray indentation error in a YAML file. If you work in DevOps, you know this pain. It is the "push and pray" cycle.

By July 2025, we should be done with this. Writing complex logic in declarative YAML is a mistake. It is hard to test locally, impossible to debug interactively, and vendor-locked to the CI provider.

Enter Dagger. It lets you write your CI/CD pipelines as code (Go, Python, or TypeScript) rather than configuration. Every step runs in a container: if it runs on your laptop, it runs on the CI server. No surprises.

The Architecture of a Portable Pipeline

Dagger is not a CI provider. It is a CI engine. You still use Jenkins, GitLab CI, or GitHub Actions to trigger the workflow, but the heavy lifting is done by the Dagger engine via the BuildKit API. This means your pipeline is actually a client program that uses the Dagger SDK to construct a Directed Acyclic Graph (DAG) of container operations.
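In practice this means SDK calls are lazy: chaining methods only records nodes in the DAG, and nothing executes until you demand a concrete result. A minimal sketch of a Dagger Go module function illustrating this (the function name and image tag are illustrative):

```go
// Hello is a sketch of DAG laziness: the chained calls below only
// record operations; no container runs when they are evaluated.
func (m *Ci) Hello(ctx context.Context) (string, error) {
	c := dag.Container().
		From("alpine:3.20").
		WithExec([]string{"echo", "hello from the DAG"})

	// Execution happens only here, when a concrete result is demanded.
	return c.Stdout(ctx)
}
```

Because evaluation is deferred, the engine can parallelize independent branches of the graph and skip any node whose inputs are already cached.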

Why does this matter for your infrastructure? Because Dagger is heavy on containerization and caching.

Pro Tip: Dagger relies heavily on the BuildKit cache. If your hosting environment has slow I/O, your cache export/import will take longer than the build itself. This is where standard VPS providers fail. You need high-IOPS NVMe storage, like the standard configuration on CoolVDS KVM instances, to see the real speed benefits.

Implementing a Go Pipeline

Let's look at a real scenario. We want to build a Go application, run linting, and if successful, publish a container. Instead of a 50-line YAML file with scattered run: | shell scripts, we write a Go program.

Prerequisites available in 2025:

  • Docker Engine 24+
  • Go 1.23+
  • Dagger CLI

1. The Setup

Initialize a Dagger module in your project root:

dagger init --sdk=go ci

This creates a ci/ directory. We will modify ci/main.go to define our pipeline.

2. The Pipeline Code

Here is how a systems architect defines a build process. It is typed, testable, and compiled.

package main

import (
	"context"
	"dagger/ci/internal/dagger"
)

type Ci struct{}

// Build and Publish the application
func (m *Ci) Build(ctx context.Context, source *dagger.Directory) (string, error) {
	// Define the base image
	builder := dag.Container().
		From("golang:1.24-alpine").
		WithDirectory("/src", source).
		WithWorkdir("/src").
		WithExec([]string{"go", "build", "-o", "myapp", "main.go"})

	// Run the compiled binary to verify it starts (smoke test)
	_, err := builder.WithExec([]string{"./myapp", "--version"}).Sync(ctx)
	if err != nil {
		return "", err
	}

	// Create the final production image
	prod := dag.Container().
		From("alpine:latest").
		WithFile("/bin/myapp", builder.File("/src/myapp")).
		WithEntrypoint([]string{"/bin/myapp"})

	// Publish to registry (requires authentication set in env)
	addr, err := prod.Publish(ctx, "registry.example.com/my-org/myapp:latest")
	if err != nil {
		return "", err
	}

	return addr, nil
}

Notice what happened here. We aren't writing bash string interpolation. We are mounting directories and executing commands as function calls. If I make a syntax error, the Go compiler catches it before I push to git.
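The scenario above also calls for linting, which the Build function does not cover. As a sketch, a lint step can live in its own function, running golangci-lint in a dedicated container (the image tag is an assumption; pin whichever version you standardize on):

```go
// Lint runs golangci-lint against the source in its own container
// and returns the linter's stdout. The image tag is illustrative.
func (m *Ci) Lint(ctx context.Context, source *dagger.Directory) (string, error) {
	return dag.Container().
		From("golangci/golangci-lint:v1.64-alpine").
		WithDirectory("/src", source).
		WithWorkdir("/src").
		WithExec([]string{"golangci-lint", "run", "./..."}).
		Stdout(ctx)
}
```

Invoked the same way as the build: dagger call lint --source=. Because it is a separate node in the DAG, the engine can run it in parallel with the build rather than serially, as a YAML job list would.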

Running It Locally vs. Remote

To run this pipeline on your local machine:

dagger call build --source=.

It spins up the containers, mounts your local source, executes the build, and outputs the result. It takes 15 seconds. If it passes here, it passes on your production server: the execution environment is identical, container for container.

The Performance Bottleneck

When you move this to production, you might run it on a runner inside your infrastructure. This is where hardware matters. Dagger creates many intermediate container layers.

| Resource | Impact on Dagger CI                                               | CoolVDS Spec                |
|----------|-------------------------------------------------------------------|-----------------------------|
| Disk I/O | Critical. Layer extraction and cache hydration kill HDD performance. | Enterprise NVMe (high IOPS) |
| RAM      | High. Each pipeline step is a container in memory.                 | DDR4/DDR5 ECC RAM           |
| Kernel   | Needs modern OCI support (cgroups v2).                             | Custom ISO / latest Linux kernel |

I have seen pipelines time out simply because the shared VPS was stealing CPU cycles during the compression phase of the Docker export. Don't let cheap hosting bottleneck your deployment velocity.
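One mitigation that helps on any hardware: mount persistent cache volumes into your build containers so dependency downloads and compiler caches survive between runs. A sketch of a helper for the module above (the volume names are arbitrary):

```go
// CachedBuilder returns a Go build container with persistent cache
// volumes mounted, so modules and compiled packages are reused
// across pipeline runs instead of being re-downloaded and rebuilt.
func (m *Ci) CachedBuilder(source *dagger.Directory) *dagger.Container {
	return dag.Container().
		From("golang:1.24-alpine").
		// Named cache volumes persist in the Dagger engine between runs
		WithMountedCache("/go/pkg/mod", dag.CacheVolume("go-mod")).
		WithMountedCache("/root/.cache/go-build", dag.CacheVolume("go-build")).
		WithDirectory("/src", source).
		WithWorkdir("/src")
}
```

With warm caches, the expensive part of each run shifts from compilation to cache I/O, which is exactly why disk throughput dominates the table above.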

Data Sovereignty and The "Norwegian Cloud"

For my clients in Oslo and Stavanger, GDPR and local data residency are non-negotiable. Using SaaS-based CI runners often means your code—and potentially your secrets—are processed on servers owned by US entities, creating a gray area for strict compliance mandates (Schrems II implications).

Hosting your own Dagger runners on a VPS in Norway solves this. You control the stack. You control the data. CoolVDS offers data centers located in the Nordics, ensuring that when you run dag.Container().Publish(), the artifact moves from your secure environment directly to your registry, without taking a detour through a server farm in Virginia.
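Keeping credentials inside your own environment is also straightforward in the SDK: registry passwords can be passed as Dagger secrets instead of plain environment variables, so they never land in logs or image layers. A hedged sketch (the registry address, function name, and parameter names are placeholders):

```go
// PublishAuthed publishes a container image using a registry password
// supplied as a Dagger secret. Secret values are scrubbed from logs
// and never baked into layers.
func (m *Ci) PublishAuthed(
	ctx context.Context,
	image *dagger.Container,
	username string,
	password *dagger.Secret,
) (string, error) {
	return image.
		WithRegistryAuth("registry.example.com", username, password).
		Publish(ctx, "registry.example.com/my-org/myapp:latest")
}
```

From the CLI, the secret can be sourced from the runner's environment, e.g. --password=env:REGISTRY_PASSWORD, so the value stays on your own machine.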

Integration with GitHub Actions

You don't need to abandon your current CI provider. You just make it a dumb trigger. Here is your entire .github/workflows/pipeline.yml:

name: Dagger Pipeline

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Dagger CLI
        run: cd /usr/local && curl -L https://dl.dagger.io/dagger/install.sh | sh
      - name: Run Pipeline
        run: dagger call build --source=.
        env:
          DAGGER_CLOUD_TOKEN: ${{ secrets.DAGGER_CLOUD_TOKEN }}

Summary: The Shift Left

We are shifting the complexity left—from the CI configuration file to the application code itself. This reduces the feedback loop from minutes to seconds.

However, running heavy container orchestration requires iron. If you try to run concurrent Dagger pipelines on a budget VPS with 1 vCPU and shared storage, builds will stall and get OOM-killed. You need dedicated cores and NVMe throughput.

Your code deserves a clean build environment. Deploy a high-performance runner on CoolVDS today and stop waiting for the queue.