Stop Hardcoding IP Addresses: A Guide to Dynamic Service Discovery
It is 3:00 AM. Your primary backend API node just suffered a kernel panic. Your frontend web servers are still trying to talk to 192.168.1.55 because that IP is hardcoded in your Nginx upstream config. Your site is returning 502 Bad Gateway errors, and you are frantically SSH-ing into ten different boxes to update a config file. If this sounds familiar, your architecture is brittle.
We are seeing a massive shift in 2014. The monolithic application is being broken down into Service Oriented Architecture (SOA), or what the cool kids are starting to call "Microservices." But this introduces a new headache: how do services find each other when their IPs change dynamically?
You need a connectivity layer—a "mesh" of proxies—that handles this automatically. In this guide, we will build a robust service discovery mechanism using HAProxy 1.5 (which finally supports native SSL!) and HashiCorp's brand new tool, Consul.
The Architecture: The "Sidecar" Approach
Instead of a central load balancer that becomes a bottleneck, we will place a lightweight HAProxy instance on every application server. This local proxy handles outbound traffic to other services: your application talks to localhost, and HAProxy routes the request to the correct backend node.
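To make that concrete, here is a minimal before-and-after sketch (the port and URL path are placeholders):

# Brittle: the remote node's IP is baked into the caller's config
curl http://192.168.1.55:8080/status

# Sidecar: the app only ever dials its local HAProxy, which picks a healthy backend
curl http://127.0.0.1/status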
To make this work, we need three components:
- The Registry: Consul (to keep track of who is up and who is down).
- The Proxy: HAProxy 1.5 (to route the traffic).
- The Glue: Consul Template (to rewrite HAProxy configs automatically).
Pro Tip: Do not attempt this on OpenVZ or shared hosting. The Gossip protocol used by Consul (Serf) requires stable network latency and a reliable source of entropy. If you are serious about this, use CoolVDS KVM instances. The hardware isolation ensures that a noisy neighbor won't cause your health checks to time out. We've seen "stolen CPU" on budget hosts cause false positives in cluster elections. Don't risk it.
Step 1: Installing the Service Registry (Consul)
HashiCorp released Consul just a couple of months ago, and it blows ZooKeeper out of the water for this specific use case because it includes health checking and DNS out of the box. Let's install it on an Ubuntu 14.04 LTS node.
# Download Consul 0.3.0 (install unzip first if it is missing)
apt-get install -y unzip
cd /usr/local/bin
wget https://dl.bintray.com/mitchellh/consul/0.3.0_linux_amd64.zip
unzip 0.3.0_linux_amd64.zip
chmod +x consul
# Start the agent in server mode
consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -ui-dir /usr/share/consul/ui
In a production environment (like on your CoolVDS private network), you would run 3 to 5 servers for quorum. For this demo, one is enough. Once running, every service you deploy will register itself here via the HTTP API.
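On each application node the pattern is the same, but in client mode: run an agent, point it at one of the servers (the IP below is a placeholder), and let services register with their local agent. Because Consul also answers DNS queries on port 8600, you can later sanity-check a registration with dig (web-api is the service we register in the next steps):

# On every app node: run a client agent and join the cluster
consul agent -data-dir /tmp/consul -join 10.0.0.10

# Verify: Consul serves DNS for registered services on port 8600
dig @127.0.0.1 -p 8600 web-api.service.consul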
Step 2: Configuring HAProxy 1.5
HAProxy 1.5 is a beast. The new SSL termination capabilities mean we don't need Nginx in front of it anymore. But the magic lies in how we configure the backends. We want HAProxy to read from Consul.
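As a quick aside, that native SSL termination is a one-liner on the bind statement. A minimal sketch, assuming a combined certificate-plus-key PEM at a hypothetical path:

frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    default_backend app-backend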
Since HAProxy doesn't speak "Consul" natively yet, we use consul-template. This daemon watches the Consul registry. When a change occurs (e.g., a new API node comes online), it rewrites the haproxy.cfg and reloads the service seamlessly. No downtime. No 3:00 AM wake-up calls.
The Template File (haproxy.ctmpl)
Create a template that defines how your config should look:
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http-in
    bind *:80
    default_backend app-backend

backend app-backend
    balance roundrobin
    # This is where the magic happens.
    # Iterate through all services named "web-api" in Consul
    {{range service "web-api"}}
    server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
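For illustration, with two hypothetical nodes (app-node-1 and app-node-2) registered under the web-api service, consul-template would render the backend roughly like this:

backend app-backend
    balance roundrobin
    server app-node-1 10.0.0.11:8080 check
    server app-node-2 10.0.0.12:8080 check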
Step 3: Wiring It All Together
Now, run the template daemon. This will generate the actual /etc/haproxy/haproxy.cfg file and reload HAProxy whenever the service list changes.
consul-template \
    -consul 127.0.0.1:8500 \
    -template "/etc/haproxy/haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"
If you register a new service in Consul now:
curl -X PUT -d '{"ID": "web1", "Name": "web-api", "Port": 8080}' http://localhost:8500/v1/agent/service/register
You will see your HAProxy config update instantly. This is the future of infrastructure.
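The reverse is just as painless. Deregister the instance (or let a failing health check mark it critical) and it drops out of the backend on the next render:

# Remove web1 from the registry; HAProxy stops routing to it
curl -X PUT http://localhost:8500/v1/agent/service/deregister/web1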
Latency, Norway, and The Physical Reality
We often forget that "The Cloud" is just someone else's computer. When building a distributed system like this, network latency is the enemy. Consul relies on a Gossip protocol (SWIM) to detect failures. If your network is jittery, nodes will flap (mark each other as dead/alive repeatedly), causing your HAProxy to reload constantly.
This is where hosting location matters. If your users are in Oslo or Stavanger, hosting your infrastructure in Frankfurt or London adds unnecessary milliseconds. Worse, traversing public internet exchanges can introduce packet loss.
| Metric | Budget VPS (OpenVZ) | CoolVDS (KVM) |
|---|---|---|
| Disk I/O (Sequential) | ~80 MB/s (Shared) | ~450 MB/s (Dedicated NVMe) |
| Kernel Access | Shared Kernel | Dedicated Kernel (Tunable) |
| Consul Stability | Low (High Jitter) | High (Stable Latency) |
Furthermore, we must respect local data laws. While the EU Data Protection Directive (95/46/EC) sets the baseline, the Norwegian Personal Data Act (Personopplysningsloven), enforced by Datatilsynet, is strict about where data lives and how it is secured. By running your own service mesh on dedicated KVM slices within Norway, you maintain full control over your data transit. You aren't offloading SSL termination to a black-box US load balancer.
Conclusion
The days of manual configuration management are ending. Tools like Ansible and Puppet handle the setup, but Consul and HAProxy handle the runtime state. This architecture gives you self-healing infrastructure that lets you sleep through the night.
But remember: software cannot fix bad hardware. A distributed system on unreliable I/O is just a distributed failure. If you are ready to build a stack that can survive the Slashdot effect, you need the right foundation.
Stop fighting with latency. Deploy your Consul cluster on CoolVDS KVM instances today and see the difference dedicated resources make.