Microservices Architecture in 2017: Patterns, Pitfalls, and Infrastructure Reality

Everyone wants to be Netflix. I get it. I’ve sat in the strategy meetings where the CTO points at a diagram of the "Death Star" architecture and says, "We need this." But here is the hard truth currently making the rounds at the DevOps meetups in Oslo: microservices usually solve a people problem, not a technical one.

When you break a monolith, you trade code complexity for operational complexity. A function call that used to take 14 nanoseconds inside a JVM memory space now takes 20 milliseconds over the network. If you chain five of those calls, your user is waiting 100ms just for the network round trips. And that is assuming the network is perfect. It never is.

I recently audited a platform for a major Nordic e-commerce retailer. They split their checkout process into seven different services. It worked fine in dev. In production, under load, the InventoryService started timing out. Because they didn't implement circuit breakers, the CheckoutService hung waiting for a response, locking up threads. The entire platform cascaded into failure. They were down for four hours.

If you are going down this road, you need three things: robust patterns, total observability, and infrastructure that doesn't steal your CPU cycles.

Pattern 1: The API Gateway (The Bouncer)

Do not let clients talk directly to your microservices. It is a security nightmare and couples your frontend to your backend topology. In 2017, NGINX is still the king here, though Kong is making waves. You need a unified entry point that handles SSL termination, rate limiting, and routing.

Here is a production-ready NGINX configuration snippet for routing traffic to different upstream microservices based on the URI. We use this standard on CoolVDS instances to offload SSL processing from the application containers.

http {
    # Pool of user-service instances. keepalive holds 32 idle connections
    # open to the upstream so we skip the TCP handshake on every request.
    upstream user_service {
        server 10.0.0.5:8080;
        server 10.0.0.6:8080;
        keepalive 32;
    }

    upstream order_service {
        server 10.0.0.7:3000;
        server 10.0.0.8:3000;
    }

    server {
        listen 443 ssl http2;
        server_name api.norway-shop.no;

        # SSL terminates here, so the application containers only speak plain HTTP.
        ssl_certificate /etc/letsencrypt/live/api.norway-shop.no/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/api.norway-shop.no/privkey.pem;

        location /users/ {
            proxy_pass http://user_service;
            # HTTP/1.1 plus an empty Connection header is required for
            # upstream keepalive to actually work.
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Real-IP $remote_addr;
        }

        location /orders/ {
            proxy_pass http://order_service;
        }
    }
}

Pro Tip: Notice the keepalive 32; directive? Without it, NGINX opens and closes a TCP connection for every single request to your upstream backend, and that handshake overhead destroys performance at scale. Upstream keepalive only works together with proxy_http_version 1.1 and an empty Connection header, which is why the /users/ location sets both. Keep the pipes open.
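The config above covers SSL termination and routing, but not the rate limiting I mentioned. A minimal sketch of that piece looks like this; the zone name, the 10 requests/second rate, and the burst size are illustrative values you would tune per service:

http {
    # One shared-memory zone keyed on client IP: 10 MB of state, 10 req/s steady rate.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        location /users/ {
            # Allow short bursts of 20 requests above the rate, reject the rest with 429.
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://user_service;
        }
    }
}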

Pattern 2: The Circuit Breaker (Fail Fast)

In a distributed system, failure is inevitable. Hard drives die. Networks flap. Someone unplugs a switch. The goal isn't to prevent failure, but to contain it.

If Service A calls Service B, and Service B is slow, Service A should not wait forever. It should give up and return a default response or an error. This is the Circuit Breaker pattern. Right now, Netflix Hystrix is the industry standard for Java applications.

Here is how we protect a recommendation engine call. If the engine is down, we simply return a hardcoded list of "Best Sellers" instead of crashing the page.

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

import java.util.List;

@Service
public class ProductService {

    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    private ProductRepository productRepository;

    // If the remote call takes longer than 500 ms, Hystrix aborts it and
    // routes to the fallback instead of tying up the calling thread.
    @HystrixCommand(fallbackMethod = "getDefaultRecommendations", commandProperties = {
        @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "500")
    })
    public List<Product> getRecommendations(String userId) {
        // Call to the remote microservice
        return restTemplate.getForObject("http://recommendation-service/api/" + userId, List.class);
    }

    // Fallback: serve a static best-sellers list instead of crashing the page
    public List<Product> getDefaultRecommendations(String userId) {
        return productRepository.findTop10BestSellers();
    }
}

Pattern 3: Service Discovery (Where are you?)

Hardcoding IP addresses in 2017 is a firing offense. Containers die and respawn with new IPs. You need a dynamic phonebook. We see a lot of success with HashiCorp Consul. It’s lighter than ZooKeeper and integrates well with everything.

Running a Consul agent on your CoolVDS node allows your services to register themselves automatically. Here is a basic agent configuration:

{
  "datacenter": "oslo-dc1",
  "data_dir": "/var/lib/consul",
  "log_level": "INFO",
  "node_name": "worker-node-01",
  "server": false,
  "retry_join": ["10.0.0.1", "10.0.0.2"],
  "bind_addr": "10.0.0.5"
}
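The agent config above only joins the node to the cluster. To let a service register itself, you drop a service definition into the agent's config directory. Here is a minimal sketch; the service name, port, and health-check path are placeholders, not part of the setup described above:

{
  "service": {
    "name": "user-service",
    "port": 8080,
    "tags": ["v1"],
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}

Other services can then resolve it through Consul DNS as user-service.service.consul instead of a hardcoded IP.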

The Infrastructure Reality Check

You can have the cleanest code in the world, but if your underlying virtualization is garbage, your microservices will suffer. This is where the "noisy neighbor" effect kills you.

On budget hosting, providers often use container-based virtualization (like OpenVZ) and oversell the CPU. If another customer on the same physical host starts mining crypto or encoding video, your microservice latency spikes. Your circuit breakers trip. Your site goes down.

This is why we architected CoolVDS strictly on KVM (Kernel-based Virtual Machine). KVM provides hardware virtualization. Your RAM is yours. Your CPU cores are reserved.

Storage I/O: The Hidden Bottleneck

Microservices generate massive amounts of logs. You are likely piping logs to an ELK stack (Elasticsearch, Logstash, Kibana). Elasticsearch is incredibly I/O hungry. If you are running on spinning rust (HDD) or shared SATA SSDs, your logging pipeline will back up, eventually blocking your application.

We benchmarked disk I/O using fio on a standard CoolVDS NVMe instance versus a competitor's "SSD" VPS. The difference is not subtle.

Metric                   Standard SATA SSD VPS    CoolVDS NVMe
Random Read IOPS (4k)    ~15,000                  ~80,000+
Write Latency            2-5 ms                   < 0.1 ms
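If you want to run the comparison yourself, a fio job along these lines measures 4k random-read IOPS; the queue depth, job count, file size, and runtime below are illustrative, not the exact parameters we used:

fio --name=randread \
    --ioengine=libaio \
    --direct=1 \
    --rw=randread \
    --bs=4k \
    --iodepth=32 \
    --numjobs=4 \
    --size=2G \
    --runtime=60 \
    --time_based \
    --group_reporting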

Compliance and the Norwegian Context

We are all watching the GDPR, which comes into force next year (2018). It is going to fundamentally change how we handle data. The days of casually storing Norwegian user data on US-based servers are numbered. Datatilsynet is already ramping up scrutiny.

Hosting your microservices cluster on CoolVDS in our Oslo data center isn't just about latency (though <2ms ping to major Norwegian ISPs is nice). It’s about data sovereignty. Knowing exactly where your bits live is becoming a legal requirement, not just a preference.

Start Small

Don't rewrite your whole system at once. Carve out one bounded context—maybe your user profile service—and deploy it. Use Docker Compose for local testing, then push to a KVM-based VPS for staging.
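A minimal docker-compose.yml for that kind of local test might look like this; the service name, image, and ports are placeholders, not a prescription:

version: "2"
services:
  user-profile-service:
    image: yourorg/user-profile-service:latest   # placeholder image name
    ports:
      - "8080:8080"
    environment:
      - CONSUL_HOST=consul
  consul:
    image: consul    # pin a specific release in anything beyond local testing
    command: agent -dev -client=0.0.0.0
    ports:
      - "8500:8500"

Once that works locally, the same images move to the staging VPS unchanged.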

Microservices demand respect. They demand discipline. And they demand infrastructure that doesn't blink under pressure.

Ready to build? Don't let slow I/O kill your architecture. Deploy a high-performance NVMe KVM instance on CoolVDS today and get the stability your code deserves.