Breaking the Monolith: Practical Microservices Architecture Patterns for 2014

It’s 3:00 AM on a Saturday. Your monolithic Java application just threw an OutOfMemoryError for the third time this month. Restarting the JVM takes 12 minutes. The marketing team is screaming because the checkout page is down, but the bug is actually deep inside the PDF invoice generation library. Because everything is bundled into one massive WAR file, the invoice bug took down the checkout.

If this sounds like your life, you are not alone. But there is a shift happening in our industry. Companies like Netflix and Amazon are talking about "Microservices"—breaking that monolith into small, decoupled components. With the recent release of Docker 1.0 just two months ago, this architecture is finally becoming accessible to the rest of us.

I’ve spent the last six months migrating a high-traffic e-commerce platform in Oslo from a legacy PHP monolith to a distributed architecture. It wasn’t pretty, and we broke a lot of things. Here is what we learned, and how you can avoid our scars.

The Container Revolution: Docker is Ready

Until recently, isolating services meant spinning up full Virtual Machines (VMs) using VMware or VirtualBox. That’s heavy. A full OS for a 5MB Python script? Wasteful.

Docker has changed the game. It wraps your application in a lightweight box using Linux kernel namespaces and cgroups (via its own libcontainer driver, which replaced LXC as the default back in Docker 0.9). We recently upgraded to Docker 1.1.2, and it is finally stable enough for production if you are careful.

Here is a basic Dockerfile we use for a Node.js microservice. Note that we pin versions explicitly; never trust the latest tag.

FROM ubuntu:14.04

# Install Node.js and npm
RUN apt-get update && apt-get install -y nodejs npm

# App directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json .
RUN npm install

# Bundle app source
COPY . .

EXPOSE 8080
CMD ["nodejs", "app.js"]

Pro Tip: Do not run Docker on OpenVZ VPS hosting. OpenVZ containers share the host's kernel, so you will hit kernel-module and cgroup conflicts constantly. You need KVM virtualization to run your own kernel and Docker daemon properly. This is why we deploy our containers on CoolVDS: their KVM instances handle the cgroups without fighting the host node.

Pattern 1: The API Gateway

When you split your app into ten services (User, Cart, Product, Shipping...), you cannot ask the frontend to track ten different IP addresses. You need a gatekeeper.

We rely heavily on Nginx as a reverse proxy. It sits in front of the swarm and routes traffic. It also handles SSL termination, which offloads CPU work from your application containers.

Here is a snippet from our nginx.conf used as an API Gateway:

http {
    upstream user_service {
        server 10.0.0.5:3000;
        server 10.0.0.6:3000;
    }

    upstream cart_service {
        server 10.0.0.7:4000;
    }

    server {
        listen 80;
        server_name api.coolshop.no;

        location /users/ {
            proxy_pass http://user_service;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location /cart/ {
            proxy_pass http://cart_service;
        }
    }
}
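The snippet above listens on plain HTTP. Terminating SSL at the gateway, as mentioned earlier, only takes a few extra lines; this is a sketch, and the certificate paths are hypothetical:

```nginx
server {
    listen 443 ssl;
    server_name api.coolshop.no;

    # Hypothetical paths; point these at your real cert and key
    ssl_certificate     /etc/nginx/ssl/api.coolshop.no.crt;
    ssl_certificate_key /etc/nginx/ssl/api.coolshop.no.key;

    location /users/ {
        proxy_pass http://user_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The backend containers then speak plain HTTP on the private network, and only Nginx pays the TLS handshake cost.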

Pattern 2: Asynchronous Messaging

HTTP is easy, but it's synchronous. If Service A calls Service B, and Service B is slow, Service A hangs. In a microservices architecture, this latency cascades. Suddenly, your entire platform is waiting on a slow email server.

The fix? RabbitMQ. Stop coupling your services with HTTP REST calls for everything. Fire an event and move on.

For example, when a user registers, the User Service shouldn't send the welcome email. It should publish a user_created message to the queue. The Email Service listens for that message and handles it when it can.

import pika

# Connect to local RabbitMQ instance
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='email_tasks')

channel.basic_publish(exchange='',
                      routing_key='email_tasks',
                      body='{"user_id": 42, "action": "welcome_email"}')

print(" [x] Sent 'Welcome Email Request'")
connection.close()
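On the other side of the queue, the Email Service runs a blocking consumer. Here is a sketch of what that looks like; the handler names are illustrative, and the pika wiring mirrors the publisher above (pika 0.9.x API):

```python
import json

def handle_task(body):
    """Parse a task message and decide which job to run."""
    task = json.loads(body)
    if task.get("action") == "welcome_email":
        return ("send_welcome", task["user_id"])
    return ("ignore", None)

def run_worker():
    # Same broker and queue name as the publisher above.
    import pika
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='email_tasks')

    def callback(ch, method, properties, body):
        job, user_id = handle_task(body)
        print(" [x] %s for user %s" % (job, user_id))

    # no_ack=True keeps the example short; use explicit acks in production
    # so a crashed worker does not silently eat messages.
    channel.basic_consume(callback, queue='email_tasks', no_ack=True)
    channel.start_consuming()  # blocks forever; call run_worker() to start
```

Because the User Service never waits on this worker, a slow SMTP server delays emails, not registrations.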

The Infrastructure Reality Check

Microservices introduce a new problem: Network Latency. In a monolith, a function call takes nanoseconds. In a distributed system, a network call takes milliseconds. If you have 50 calls to render a page, those milliseconds add up to a sluggish UI.

This is where hosting geography becomes paramount. If your customers are in Norway, hosting in Virginia (US-East) adds ~100ms of round-trip latency to every single request. For a microservice mesh, that is death.
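To make that concrete, here is the back-of-the-envelope math (the numbers are illustrative) for a page that fans out into 50 sequential internal calls:

```python
def page_render_ms(calls, rtt_ms, base_ms=50):
    """Total render time when internal requests run one after another."""
    return base_ms + calls * rtt_ms

# 50 sequential calls inside one datacenter (~1 ms round trip each)
same_dc = page_render_ms(50, 1)        # 100 ms: snappy
# The same page with services split between Oslo and US-East (~100 ms RTT)
cross_ocean = page_render_ms(50, 100)  # 5050 ms: five seconds of spinner
```

Batching chatty calls and keeping tightly coupled services on the same network segment is not optional at this scale.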

Data Sovereignty & The Datatilsynet

Post-Snowden, we are all more paranoid about where data lives. While Safe Harbor is currently the framework, the political climate suggests tighter controls are coming. Hosting data physically in Norway isn't just about latency; it's about compliance with the Personopplysningsloven (Personal Data Act). Your clients—especially if they are in Norwegian public sector or finance—will ask exactly where the physical drive sits.

Why Hardware Matters (SSD vs HDD)

With microservices, you are running many small databases (MySQL, Redis, MongoDB) in parallel. The I/O pattern becomes random and jagged. Spinning hard disks (HDDs) cannot keep up with the IOPS required by twenty Docker containers logging simultaneously.
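You can see the gap yourself with a crude random-write benchmark. This is an illustration, not a rigorous tool: it assumes a writable path and measures synced 4 KB writes at random offsets, the pattern a database commit produces.

```python
import os
import time
import random

def measure_random_write_iops(path, ops=200, block=4096):
    """Crude benchmark: synced 4 KB writes at random offsets in a test file."""
    size = 64 * 1024 * 1024  # 64 MB scratch file
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        os.ftruncate(fd, size)
        start = time.time()
        for _ in range(ops):
            os.lseek(fd, random.randrange(0, size - block), os.SEEK_SET)
            os.write(fd, b'\x00' * block)
            os.fsync(fd)  # force it to the platter/flash, like a DB commit
        elapsed = max(time.time() - start, 1e-6)
        return ops / elapsed
    finally:
        os.close(fd)
        os.remove(path)
```

A spinning disk typically manages on the order of 100 to 200 such operations per second because every write pays a seek; SSDs do thousands.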

You need Solid State Drives. Enterprise-grade NVMe hardware is just starting to appear (Intel launched the P3700 series this summer) with staggering throughput. NVMe is still rare in hosting, but at minimum you want enterprise-grade SATA SSDs with enough IOPS headroom that one busy database cannot starve the rest of your cluster: the classic "noisy neighbor" effect.

At CoolVDS, we have standardized on pure SSD arrays and KVM virtualization. We don't oversell CPU. If you buy 2 cores, you get 2 cores. This consistency is vital when you are trying to debug a race condition across three different nodes.

Conclusion

Microservices are not a silver bullet. They increase complexity in operations to decrease complexity in development. If you are going down this road, you need to automate everything using tools like Ansible or Chef, and you need infrastructure that doesn't flake out.
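As a taste of what that automation looks like, here is a minimal Ansible play. The host group and image name are hypothetical, and it assumes Ansible 1.x with the docker module (and docker-py installed on the target):

```yaml
---
- hosts: app_servers   # hypothetical inventory group
  sudo: yes
  tasks:
    - name: Run the user service container
      docker:
        image: coolshop/user-service:1.3.0   # hypothetical image, pinned tag
        name: user-service
        state: running
        ports:
          - "3000:8080"
```

One playbook run brings a fresh node from bare OS to serving traffic, which is the only sane way to manage ten services across a fleet.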

Don't build your future on sluggish legacy hardware. Test your architecture on a platform designed for modern workloads.

Ready to deploy your first Docker cluster? Spin up a high-performance KVM instance in Oslo on CoolVDS today.