Turbopack vs. Webpack in 2025: Why Your CI/CD Pipeline is Still Too Slow

I stopped drinking coffee during build times. Not for health reasons, but because the builds got too fast. If you are still staring at a spinning cursor for 45 seconds just to see a CSS change reflected in your local environment, you are burning money. In the high-cost Norwegian development market, where hourly rates for senior frontend engineers rival the GDP of small nations, efficiency isn't a luxury. It is math.

By late 2025, the transition from Webpack to Turbopack has largely settled from "bleeding edge" to "industry standard" for the Next.js ecosystem. Yet, I still see teams in Oslo and Bergen struggling with sluggish pipelines. They blame the code. I blame the bundler—and the metal it runs on.

The Rust Architecture: Why It Matters

Webpack is JavaScript. Turbopack is Rust. That is the argument in a nutshell. JavaScript is single-threaded and JIT-compiled; Rust is a systems language that compiles ahead of time to machine code and parallelizes across every core you give it. When Vercel introduced Turbopack, they claimed updates up to 700x faster than Webpack. In real-world production apps (not Hello World demos), we are seeing closer to a 10x to 20x improvement in HMR (Hot Module Replacement) speeds. That is the difference between "instant" and "annoying."

However, Turbopack relies heavily on incremental computation. It caches function results. This means it is incredibly I/O and memory intensive during the initial "warm" phase. If your build server is running on a shared VPS with stolen CPU cycles and a spinning HDD (or cheap SATA SSD), you are bottlenecking the Rust compiler. You might as well put a Ferrari engine in a tractor.
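If you suspect the disk is the weak link, a crude but useful sanity check is to measure sequential write throughput with `dd` before blaming the bundler. (For proper IOPS numbers you would reach for `fio`, but `dd` ships everywhere.) A sketch, writing a scratch file to `/tmp`:

```shell
# Write 256 MB and force a flush to disk before dd reports throughput.
# A decent NVMe volume typically reports well above 500 MB/s here;
# a shared SATA SSD or network-backed disk often lands under 100 MB/s.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
ls -lh /tmp/ddtest
```

Run it in the same directory your build cache lives in, not just `/tmp`, since they may sit on different volumes.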

Configuring Next.js 15 for Turbopack

Assuming you are running Next.js 15 (the stable release as of mid-2025), enabling Turbopack for local development is trivial. But for production builds, we need to be specific.

// package.json
{
  "scripts": {
    "dev": "next dev --turbo",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  }
}

The --turbopack flag invokes the Rust-based compiler. But the real magic happens in how you structure your imports. Turbopack is strict: where Webpack often forgave circular dependencies and odd module resolutions, Turbopack fails hard and fast. This is a feature, not a bug.

Here is a typical next.config.js setup we use for high-performance projects hosted on CoolVDS:

// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,
  experimental: {
    // Optimize package imports for faster module resolution
    optimizePackageImports: ['@mantine/core', 'lucide-react'],
  },
  // Ensure we aren't leaking headers in production
  poweredByHeader: false,
};

module.exports = nextConfig;

The Hardware Bottleneck: CI/CD Reality Check

Here is the war story. Last month, a client in Trondheim complained that despite switching to Turbopack, their Jenkins pipeline was still taking 12 minutes to build and deploy. We logged into their build agent. It was a standard 2 vCPU cloud instance hosted in a massive German datacenter.

We ran htop during the build.
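You do not even need htop for the headline numbers; /proc exposes the two figures that matter on a shared host. This is generic Linux diagnostics, nothing Turbopack-specific: "steal" is time the hypervisor handed your supposedly dedicated CPU to another tenant, and a runaway context-switch rate on a 2 vCPU box is the scheduler thrashing:

```shell
# How many vCPUs does the scheduler actually see?
nproc

# Aggregate "cpu" line in /proc/stat: field 9 is steal time (jiffies
# the hypervisor ran other guests while this VM wanted the CPU).
awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat

# Total context switches since boot; sample this twice during a build
# to estimate the rate.
awk '/^ctxt/{print "context switches:", $2}' /proc/stat
```

On the client's 2 vCPU instance, steal time climbed steadily during every build; on dedicated cores it should stay near zero.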

Pro Tip: Turbopack parallelizes aggressively. If your build agent only has 2 vCPUs, the context switching overhead destroys the performance gains of Rust.

We migrated their build agent to a CoolVDS NVMe Optimized Instance (4 Dedicated vCores, 16GB RAM) located in our Oslo datacenter. The result? The build time dropped to 3 minutes. Why? Because Rust compilation demands high clock speeds and fast disk I/O to read/write the cache layers. CoolVDS uses enterprise-grade NVMe drives that handle the high IOPS of incremental builds without choking.

Dockerfile Optimization for Turbopack

When deploying to CoolVDS, we use a multi-stage build to keep the final image tiny while giving Turbopack the resources it needs during the build phase.

# syntax=docker/dockerfile:1

# Stage 1: Base
FROM node:22-alpine AS base

# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Stage 2: Deps
FROM base AS deps
COPY package.json package-lock.json* ./
RUN npm ci

# Stage 3: Builder
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Disable Next.js telemetry during build
ENV NEXT_TELEMETRY_DISABLED=1

# The heavy lifting happens here
RUN npm run build

# Stage 4: Runner
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

# Automatically leverage output traces to reduce image size
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000
ENV PORT=3000

CMD ["node", "server.js"]

Data Sovereignty and Latency

Technical performance isn't the only metric. Legal performance matters too. If you are handling Norwegian user data, GDPR and Schrems II compliance are non-negotiable. Many developers unknowingly send their source code and environment variables (secrets!) to build farms hosted in the US.

By hosting your GitLab Runner or Jenkins agent on CoolVDS, your code never leaves Norway. The data stays under the jurisdiction of the Norwegian Datatilsynet. Plus, if your production servers are also in Oslo, the rsync or scp artifact transfer is effectively instantaneous. We are talking sub-millisecond latency within the NIX (Norwegian Internet Exchange) infrastructure.

Feature         | Webpack 5   | Turbopack (2025)
----------------|-------------|-----------------
Language        | JavaScript  | Rust
Cold Start Time | 5-15s       | 0.5-2s
HMR Speed       | ~500ms      | ~15ms
Resource Usage  | High Memory | High CPU/IOPS

The Verdict

Turbopack is the engine, but CoolVDS is the fuel. You cannot expect next-generation tooling to perform on last-generation infrastructure. If you are serious about reducing TTM (Time to Market) and keeping your developers happy, you need to upgrade the underlying metal.

Don't let slow I/O kill your workflow. Spin up a high-frequency NVMe instance on CoolVDS today and watch your build times plummet. Your developers (and your CFO) will thank you.