
Surviving Microservices: A Practical Guide to Service Discovery and Load Balancing with Consul & HAProxy

The Monolith is Dead. Long Live the Network Nightmare.

We all read the Martin Fowler articles. We all drank the Kool-Aid. "Break up the monolith," they said. "It will be scalable," they said. Now you have twenty different Go binaries running inside Docker containers across five different nodes, and half of them don't know the IP address of the database. Welcome to 2015.

If you are manually updating /etc/hosts or, god forbid, hardcoding IP addresses in your application config files, you are doing it wrong. In a dynamic environment where containers spin up and die in seconds, static configuration is a death sentence. We need a dynamic fabric—a way for services to find each other without human intervention.

Let's look at how to solve this using tools that actually work in production today: Consul for service discovery and HAProxy for load balancing.

The Architecture: Smart Pipes, Dumb Endpoints

The goal is simple: Service A needs to talk to Service B. It shouldn't care if Service B is on Node 1 (192.168.1.10) or Node 2 (192.168.1.11). It should just hit a local proxy, and the proxy should route it.
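In practice, that means the peer's real address never appears in application config; only the local proxy does. A minimal sketch (the port and variable name are illustrative, not from any particular codebase):

# Before: brittle, breaks the moment the container moves
SERVICE_B_URL=http://192.168.1.10:8080

# After: the app always talks to the HAProxy on its own node;
# HAProxy decides which healthy backend actually gets the request
SERVICE_B_URL=http://127.0.0.1:80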

Here is the battle-tested stack we are seeing deployed across serious infrastructure in Norway right now:

  • Docker (1.6+): For isolation.
  • Consul (0.5): For service discovery and health checking.
  • Consul Template: To dynamically rewrite config files.
  • HAProxy (1.5): To route the traffic.

Step 1: The Consensus Store (Consul)

Forget Zookeeper unless you love the JVM eating all your RAM. HashiCorp's Consul is the modern choice. It speaks DNS, it exposes an HTTP API, and it uses the Raft consensus algorithm.
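Getting a service into the catalog is nothing more than a JSON file dropped into the agent's config directory (e.g. /etc/consul.d/webapp.json). A minimal sketch, assuming a webapp listening on port 8080 with a /health endpoint; the name, port and path are yours to choose:

{
  "service": {
    "name": "webapp",
    "tags": ["production"],
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}

Once the local agent has loaded it, anything on the node can find the service through DNS or the HTTP API:

dig @127.0.0.1 -p 8600 production.webapp.service.consul SRV
curl http://127.0.0.1:8500/v1/catalog/service/webapp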

Pro Tip: Raft is sensitive to disk latency. If you run your Consul agents on cheap, oversold OpenVZ hosting, the "noisy neighbors" stealing your CPU cycles will cause leader elections to time out. Your cluster will partition. We've seen this happen to dev teams in Oslo trying to save a few kroner on hosting. This is why CoolVDS only deploys on KVM with strict resource guarantees. We don't steal your CPU.

Step 2: Dynamic Load Balancing with HAProxy

We can't restart HAProxy every time a container starts. Instead, we use `consul-template`. This daemon watches the Consul cluster. When a new service registers, it rewrites the `haproxy.cfg` and reloads the service gracefully.
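That "graceful" reload is not magic. HAProxy 1.5 simply starts a new process with the fresh config and tells the old one to drain its in-flight connections before exiting. Roughly (the pid file path is whatever your init script uses):

# -sf hands the listed pids a soft-stop: finish existing connections, then exit
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)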

Here is a snippet of what your template file (`haproxy.ctmpl`) should look like:

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen http-in
    bind *:80{{range service "production.webapp"}}
    server {{.Node}} {{.Address}}:{{.Port}} check{{end}}

When you run this, `consul-template` watches the service catalog using Consul's blocking queries, so changes propagate in near real time instead of on a slow polling interval. Scale your webapp from 2 containers to 20 with Docker and the rendered `haproxy.cfg` follows within seconds, triggering a reload each time.
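Wiring it together is a single daemon invocation. A sketch, assuming the template lives in /etc/haproxy and your distro ships a standard HAProxy init script (adjust paths to taste):

consul-template \
  -consul 127.0.0.1:8500 \
  -template "/etc/haproxy/haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"

The third field of -template is the command to run after each render; on most distros `service haproxy reload` performs exactly the -sf handoff shown above.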

Performance: The Latency Killer

Why go through this trouble? Latency.

If your users are in Oslo or Bergen, routing traffic through a centralized load balancer in Frankfurt or London adds 30-50ms of round-trip time (RTT). By running your own service discovery mesh on nodes physically located in Norway (or close by), you keep internal traffic on the LAN.

However, running a mesh adds overhead. Every request hits a proxy. If your virtualization layer adds I/O wait time, your application feels sluggish regardless of how optimized your code is. We benchmarked CoolVDS KVM instances against standard shared VPS providers. On a high-throughput test (10k req/sec), the "budget" VPS choked on context switches. The difference shows up clearly in the numbers:

Metric                  Standard Shared VPS          CoolVDS KVM
CPU steal time (%st)    High (5-15%)                 Near zero
Disk I/O latency        Variable (spikes >100 ms)    Consistent (<2 ms)
Consul health checks    Flaky                        Stable
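You don't have to take our word for it; both numbers are easy to check on any Linux box (the disk test assumes the `ioping` package is installed):

# CPU steal: the "st" column is time the hypervisor gave your cycles to someone else
vmstat 1 5

# Disk latency: a handful of small requests exposes noisy storage quickly
ioping -c 10 /var/lib/consul    # point it at your Consul data dir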

Security and the "Datatilsynet" Factor

We are operating in a post-Snowden world. Privacy matters. When you architect a mesh, you are creating a lot of internal chatter. If you are spanning this across public networks, you must encrypt it.

While tools like Tinc or IPsec are common, they reduce throughput. The pragmatic approach for 2015 is keeping your service discovery traffic on a private network interface (VLAN). CoolVDS offers private networking between your instances. Use it. Do not expose your Consul admin interface (port 8500) to the public internet unless you want your topology mapped by scanners.
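In practice that means binding the agent to the private interface and firewalling the public one. A sketch, assuming 10.0.0.5 is this node's address on the private VLAN and eth0 faces the internet:

# Keep gossip (-bind) and the HTTP/DNS endpoints (-client) on the private address;
# add your usual -server / -join flags here
consul agent -data-dir /var/consul -bind 10.0.0.5 -client 10.0.0.5

# Belt and braces: drop Consul ports arriving on the public interface
iptables -A INPUT -i eth0 -p tcp --dport 8500 -j DROP
iptables -A INPUT -i eth0 -p tcp --dport 8300:8302 -j DROP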

Conclusion: Build for Failure

Hardware fails. Docker daemons crash. The network is unreliable. By implementing client-side load balancing with Consul and HAProxy, you ensure that when—not if—a node goes down, your traffic is automatically rerouted.

But software resilience cannot fix terrible hardware. You need a substrate that respects your need for consistent IOPS and CPU time. Don't let your infrastructure be the reason you get paged at 3 AM.

Ready to build a cluster that actually stays up? Deploy a high-performance KVM instance on CoolVDS today. We give you the raw power; you bring the code.