Architecting Microservices in 2015: Patterns, Performance, and The Safe Harbor Fallout

Everyone wants to be Netflix. I get it. You read a whitepaper about how they deploy thousands of times a day, and suddenly your perfectly functional PHP monolith looks like a dinosaur. But here is the hard truth: Netflix has an army of engineers to manage the chaos of distributed systems. You have three guys and a staging environment that hasn't worked since August.

Microservices are not a silver bullet; they are a trade-off. You trade code complexity for operational complexity. In a monolith, a function call takes nanoseconds. In a microservice architecture, that same call becomes a network request and takes milliseconds: roughly six orders of magnitude slower. If you don't architect for that latency, and for the inevitable network failures, your application will crawl.

Today, we are looking at the two critical patterns you need to implement to survive the transition to microservices in late 2015: The API Gateway and Service Discovery. We will also address the elephant in the server room: the recent invalidation of the Safe Harbor agreement and why your choice of hosting location just became a legal issue.

The API Gateway Pattern (using Nginx)

Do not let your clients talk directly to your microservices. It is a security nightmare and a refactoring straitjacket. If you move a service, you break the client. Instead, use an API Gateway.

In 2015, Nginx is still the king here. HAProxy is excellent at pure load balancing, but Nginx's versatility makes it the default choice for the battle-hardened DevOps engineer. The gateway sits at the edge, handling SSL termination, logging, and routing.

Here is a battle-tested configuration for an API gateway routing traffic to a user service and an order service. Notice the upstream blocks—this is where we prepare for scale.

http {
    upstream user_service {
        # Two instances, balanced round-robin by default
        server 10.0.0.5:8080;
        server 10.0.0.6:8080;
        keepalive 64;  # reuse backend TCP connections (see tip below)
    }

    upstream order_service {
        server 10.0.0.7:9000;
    }

    server {
        listen 80;
        server_name api.yourdomain.no;

        location /users/ {
            proxy_pass http://user_service;
            # Upstream keepalive only works over HTTP/1.1 with a
            # cleared Connection header
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
        }

        location /orders/ {
            proxy_pass http://order_service;
        }
    }
}
Pro Tip: Always set keepalive in your upstream blocks. Without it, Nginx opens a new TCP connection for every request to your backend. On a high-traffic site, you will exhaust your ephemeral ports and hit latency spikes that look like ghosts in your monitoring graphs.
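
If you suspect you are already burning through ports, a crude check on the gateway box is to count sockets stuck in TIME_WAIT. The alarming threshold depends on your sysctl tuning, so treat this as a smoke test, not a diagnosis:

# Thousands of TIME_WAIT sockets toward your backends usually
# means keepalive is missing or sized too small
netstat -ant | grep -c TIME_WAIT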

Service Discovery: No More Hardcoded IPs

In the config above, I hardcoded IPs. That is fine for a static VPS setup, but we are moving toward containerization. With Docker gaining massive traction this year (especially with the 1.8 and 1.9 releases), IPs change every time you redeploy.
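
You can watch the problem happen on any Docker host. Here is a quick sketch (the container name web and image my-app are placeholders):

# Print the container's bridge IP under Docker 1.8/1.9
docker inspect -f '{{ .NetworkSettings.IPAddress }}' web

# Recreate it; with other containers churning on the same bridge,
# it will usually land on a new address
docker rm -f web
docker run -d --name web my-app
docker inspect -f '{{ .NetworkSettings.IPAddress }}' web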

You need Service Discovery. Right now, HashiCorp's Consul is the pragmatic choice: far less operational overhead than a full ZooKeeper ensemble, and more batteries included (health checks, a DNS interface) than raw etcd. That DNS-based discovery integrates beautifully with existing applications.

When a service boots up, it registers itself. When it dies, it deregisters. Your Nginx configuration can then use Consul's DNS interface to resolve user.service.consul instead of a static IP.
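
Here is a minimal sketch of the registration side, assuming your user service exposes a /health endpoint (the name, port, and path are placeholders). Drop a service definition like this into the Consul agent's config directory and restart the agent:

{
  "service": {
    "name": "user",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}

Any box running an agent can then resolve the service over Consul's DNS interface on port 8600:

dig @127.0.0.1 -p 8600 user.service.consul SRV

One caveat: stock Nginx resolves upstream hostnames once, at startup. To actually follow Consul's view of the world, either regenerate the Nginx config from a template (HashiCorp's consul-template does exactly this) or use a resolver directive with a variable in proxy_pass to force per-request resolution.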

The "Safe Harbor" Crisis: Why Location Matters

Last month (October 2015), the European Court of Justice invalidated the Safe Harbor agreement in the Schrems ruling (case C-362/14). If you are storing Norwegian user data on servers owned by US companies (like AWS or DigitalOcean), you are now in a legal grey area under the Data Protection Directive.

The Norwegian Data Protection Authority (Datatilsynet) is watching this closely. Latency isn't the only reason to host locally anymore; data sovereignty is paramount. Moving your infrastructure to a Norwegian provider ensures your data stays within the EEA and under Norwegian jurisdiction, insulating you from the legal fallout of US surveillance laws.

Infrastructure Requirements for Microservices

Microservices are noisy. They generate logs, they chatter over the network, and they require rapid I/O operations. A monolith might do one big database read; microservices might do fifty small ones to construct a single page.

1. IOPS are King

Standard SATA SSDs are good, but for database-heavy microservices, you need high IOPS. We have seen Docker registries grind to a halt on standard storage because of the sheer number of layer reads/writes.
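
Don't take a provider's word for it; benchmark. A quick fio run gives you a random-read IOPS baseline. This is a rough smoke test, not a rigorous benchmark, and it creates a 1 GB test file in the current directory:

# 4k random reads with direct I/O, bypassing the page cache
fio --name=randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --size=1g \
    --runtime=30 --time_based --group_reporting

As a rough rule, a healthy local SSD should report tens of thousands of read IOPS; low four digits means the storage is shared, throttled, or spinning.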

2. Low Latency Networking

If you split your app into 10 services, a single user request might traverse your internal network 20 times. If your internal latency is 1ms, you added 20ms of overhead. If it's 10ms, you added 200ms. Your app now feels sluggish.
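
Measure it instead of guessing. curl's timing variables give you a per-request breakdown against any internal endpoint (the IP is from the gateway config above; the /health path is a placeholder):

# time_connect = TCP handshake, time_total = full request, in seconds
curl -o /dev/null -s \
  -w 'connect=%{time_connect} total=%{time_total}\n' \
  http://10.0.0.7:9000/health

Run it in a loop across your private network and you will know exactly what each extra hop costs your users.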

Feature            | Budget VPS               | CoolVDS Architecture
-------------------|--------------------------|----------------------------------
Storage            | Shared SATA / HDD        | Pure SSD / NVMe tiers
Virtualization     | OpenVZ (oversold)        | KVM (kernel-based, dedicated RAM)
Internal Network   | Public internet routing  | Private 10Gbps LAN

The CoolVDS Implementation

This is why we architect CoolVDS the way we do. We use KVM (Kernel-based Virtual Machine) because it allows you to run your own kernel—essential for Docker support. Many budget providers still use OpenVZ, which shares the kernel and often lacks the modules required for modern containerization.
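
Before you commit to any provider, a thirty-second sanity check tells you whether the box can actually run Docker (on OpenVZ you will typically see a shared, ancient 2.6.32 kernel and no cgroup mounts):

# Docker 1.8+ wants a 3.10 or newer kernel
uname -r

# cgroups must be mounted for containers to work at all
mount | grep cgroup

# A usable storage backend (aufs or overlay) should be listed
# once its module is loaded
grep -E 'aufs|overlay' /proc/filesystems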

Furthermore, our data centers in Oslo connect directly to the NIX (Norwegian Internet Exchange). If your customers are in Norway, the latency is practically non-existent. You get the legal safety of Norwegian jurisdiction combined with the raw performance needed to handle the network overhead of a microservices architecture.

Don't let network latency or legal uncertainty kill your project. Spin up a KVM instance on CoolVDS today and build on a foundation that is ready for 2016.