Stop Hard-Coding IPs: A Guide to Dynamic Service Routing
If I see one more /etc/hosts file patched together with a terrifying mix of Puppet manifests and prayer, I'm going to unplug the rack myself. We are halfway through 2014. The monolith is dying, and the era of SOA (Service Oriented Architecture) and microservices is here. But with the explosion of services comes the headache of connectivity.
When you split a Rails monolith into five services, you multiply your failure points. If Service A talks to Service B, and Service B moves to a new host after a hardware failure, Service A starts throwing connection errors. Latency spikes. Your pager goes off.
The solution isn't better documentation; it's architecture. We need to build a dynamic routing layer—what some are starting to call a "service mesh" pattern using sidecar proxies.
The Stack: 2014's Best-in-Class
To solve this, we aren't using heavy enterprise message buses. We are using the Unix philosophy: small, sharp tools that each do one job well.
- Service Registry: Consul (v0.3). Released recently by HashiCorp, it blows ZooKeeper out of the water for ease of setup.
- The Proxy: HAProxy 1.5. The stable release just dropped this month (June 2014), finally bringing native SSL support.
- The Infrastructure: CoolVDS KVM Instances.
Pro Tip: Why KVM? While containers (LXC/Docker) are gaining hype, they suffer from "noisy neighbor" syndrome on I/O. For a routing layer handling thousands of requests per second, you want the dedicated kernel and guaranteed CPU cycles that CoolVDS KVM instances provide. Don't put your load balancer on shared resources.
Step 1: The Service Registry (Consul)
First, every node needs to know about every other node. We install the Consul agent on our CoolVDS instances running Ubuntu 14.04 LTS.
# Download Consul 0.3.0
wget https://dl.bintray.com/mitchellh/consul/0.3.0_linux_amd64.zip
unzip 0.3.0_linux_amd64.zip
mv consul /usr/local/bin/
# Start the agent in server mode (bootstrap for the first node).
# -config-dir lets the agent pick up the service definitions we drop into /etc/consul.d below.
mkdir -p /etc/consul.d
consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -config-dir /etc/consul.d -node=agent-one
This creates a cluster using the gossip protocol. It's chatty, but it's fast. On a low-latency network like the one we have at CoolVDS (optimized for NIX peering here in Norway), a small cluster converges in well under a second.
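The remaining nodes run the agent in client mode and join the first one. A minimal sketch; the bootstrap node's private IP (10.0.0.10) is a placeholder for your own:
# On each additional CoolVDS instance (client mode: no -server flag)
mkdir -p /etc/consul.d
consul agent -data-dir /tmp/consul -config-dir /etc/consul.d -node=agent-two -join 10.0.0.10
# Verify cluster membership from any node
consul members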
We define our service in a simple JSON file in /etc/consul.d/web.json:
{
  "service": {
    "name": "web-frontend",
    "tags": ["rails"],
    "port": 80,
    "check": {
      "script": "curl localhost:80 >/dev/null 2>&1",
      "interval": "10s"
    }
  }
}
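Restart the agent (a SIGHUP also triggers a config reload) so it picks up the new definition, then confirm Consul actually sees the service. A quick sanity check against the agent's default HTTP (8500) and DNS (8600) ports; dig comes from the dnsutils package:
# HTTP API: list registered instances of web-frontend
curl -s http://127.0.0.1:8500/v1/catalog/service/web-frontend | python -m json.tool
# DNS interface: only instances passing their health check are returned
dig @127.0.0.1 -p 8600 web-frontend.service.consul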
Step 2: The Proxy (HAProxy 1.5)
Now, we need HAProxy to read this data. We don't want our application code to know where its upstream services live. The application should talk to localhost, and the local proxy routes the request. This is the "Sidecar" pattern.
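In practice, the consumer's configuration just points at the local proxy port (we will bind HAProxy to 127.0.0.1:9000 below) instead of a peer's address. A hypothetical example; the variable name and addresses are illustrative:
# Before: a brittle, hard-coded peer address baked into the app environment
# WEB_FRONTEND_URL=http://10.0.3.17:80
# After: always talk to the local HAProxy sidecar on the loopback interface
export WEB_FRONTEND_URL=http://127.0.0.1:9000
curl -s "$WEB_FRONTEND_URL/" >/dev/null && echo "routed through the sidecar"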
Install HAProxy 1.5 (Trusty's main repo still ships 1.4, so use the dedicated PPA):
apt-get install -y software-properties-common   # provides add-apt-repository on 14.04
add-apt-repository ppa:vbernat/haproxy-1.5
apt-get update
apt-get install -y haproxy
haproxy -v   # should report 1.5.x
Here is a robust haproxy.cfg optimized for high throughput. Note the `maxconn` settings—on a standard CoolVDS SSD plan, you can easily push this higher, but let's start safe.
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    maxconn 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend web-in
    bind 127.0.0.1:9000
    default_backend web-cluster
    # The web-cluster backend lives in /etc/haproxy/haproxy_backend.cfg and is
    # regenerated by the Consul watch in Step 3
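HAProxy has no include directive, but it happily accepts multiple -f flags. One way to wire that up, assuming the stock Ubuntu init script (which reads EXTRAOPTS from /etc/default/haproxy; check yours before relying on it), is to seed an empty backend and pass it as a second config file:
# Seed a minimal backend so the config parses before Consul populates it
printf 'backend web-cluster\n    balance roundrobin\n' > /etc/haproxy/haproxy_backend.cfg
# Load the generated file alongside the main config on every (re)start
echo 'EXTRAOPTS="-f /etc/haproxy/haproxy_backend.cfg"' >> /etc/default/haproxy
service haproxy restart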
Step 3: The Glue (Consul Watch)
Here is where the magic happens. We need HAProxy to be updated whenever Consul detects a change (a node dies or a new one spins up). Purpose-built templating tools for Consul haven't matured yet, so we will use a reliable consul watch with a shell script.
Create a script /usr/local/bin/update_haproxy.sh:
#!/bin/bash
# Query the Consul catalog for registered web-frontend instances (requires jq: apt-get install -y jq)
NODES=$(curl -s http://127.0.0.1:8500/v1/catalog/service/web-frontend | jq -r '.[].Address')
# Regenerate the backend that the frontend in haproxy.cfg points at.
# HAProxy's own "check" directive covers instances that are registered but unhealthy.
{
  echo "backend web-cluster"
  echo "    balance roundrobin"
  for ip in $NODES; do
    echo "    server $ip $ip:80 check"
  done
} > /etc/haproxy/haproxy_backend.cfg
# Reload HAProxy gracefully; existing connections are not dropped
service haproxy reload
Make the script executable (chmod +x /usr/local/bin/update_haproxy.sh), then run the watch:
consul watch -type=service -service=web-frontend /usr/local/bin/update_haproxy.sh
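The watch has to stay running, so don't leave it in a tmux session. On Ubuntu 14.04 a small Upstart job keeps it alive across crashes and reboots; the job name below is arbitrary:
# /etc/init/consul-watch-haproxy.conf
description "Regenerate HAProxy backends from Consul"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/local/bin/consul watch -type=service -service=web-frontend /usr/local/bin/update_haproxy.sh
Kick it off with start consul-watch-haproxy.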
Performance Considerations
This architecture introduces a "hop." Every request goes through localhost HAProxy before hitting the network. You might worry about latency.
In our benchmarks on CoolVDS SSD-backed instances, the loopback overhead is under 0.2 ms (a quick way to reproduce that measurement is shown after the table). The gains, however, are massive:
| Metric | Static DNS / hard-coded IPs | Consul + HAProxy Mesh |
|---|---|---|
| Failover Time | TTL dependent (mins) | < 5 seconds |
| Config Complexity | High (Chef/Puppet runs) | Low (Automatic) |
| Downtime | Likely | Rare |
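To verify the loopback cost on your own instance, compare a direct request against one routed through the sidecar. A rough sketch using ApacheBench (apt-get install apache2-utils); the backend address is a placeholder:
# Direct to a backend node (substitute a real address from `consul members`)
ab -n 5000 -c 25 http://10.0.0.11/ | grep "Time per request"
# Through the local HAProxy sidecar
ab -n 5000 -c 25 http://127.0.0.1:9000/ | grep "Time per request"
# The difference between the mean times is the proxy + loopback overhead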
Data Sovereignty & Local Latency
For those of us operating out of Norway, latency to the continent can be an issue. By keeping your service mesh internal to a robust provider like CoolVDS, you minimize the "hairpinning" of traffic. Furthermore, with the Datatilsynet keeping a close eye on privacy, keeping your service discovery data on local servers ensures you aren't leaking topology data across the Atlantic.
Conclusion
The days of manual load balancer configuration are over. By combining the stability of KVM virtualization with the intelligence of Consul and HAProxy, you build a self-healing infrastructure. It requires a bit of setup, but it beats waking up at 3 AM because a hard-coded IP address changed.
Ready to build? Don't try this on oversold hardware. Deploy a CoolVDS KVM instance today and get the raw I/O performance your service mesh needs to fly.