Surviving the Microservices Hangover: Building a Resilient Service Fabric
It is December 2015. If you are like most of the DevOps engineers I talk to in Oslo, you have spent the last year chopping your monolithic PHP or Java applications into microservices. You were promised agility, scalability, and "web scale" performance. Instead, you got a paging alert at 3:00 AM because Service A cannot find Service B since the Docker container restarted and got a new IP address.
Welcome to the microservices hangover.
To make matters worse, the European Court of Justice just invalidated the Safe Harbor agreement in October. If you are piping customer data through US-owned clouds without strict legal frameworks, you are walking a compliance tightrope. The "Pragmatic CTO" choice right now is not just about code; it is about where that code lives.
This guide is not about abstract theory. It is a battle-hardened implementation guide for building what I call a "Service Fabric"—a dynamic, self-healing network layer using Consul 0.6 and the brand-new HAProxy 1.6. And we are going to deploy it on infrastructure that respects your data sovereignty.
The Problem: Static Configs in a Dynamic World
In the old days of 2013, we defined upstream servers in Nginx manually:
upstream backend {
    server 192.168.1.10:80;
    server 192.168.1.11:80;
}
In a Dockerized environment (especially with the new Docker 1.9 networking), containers come and go. If you are manually updating nginx.conf every time a container dies, you are not doing DevOps; you are doing data entry.
The 2015 Solution: Consul + HAProxy
We need three components to solve this:
- Service Discovery (Consul): A distributed database that knows exactly which services are alive and where they are.
- The Load Balancer (HAProxy): The robust engine that routes traffic.
- The Glue (Consul Template): A daemon that watches Consul and rewrites the HAProxy config in real-time.
Step 1: Deploying the Consensus Layer (Consul)
Consul uses the Raft consensus protocol. It is extremely sensitive to disk latency. If your underlying storage stalls on I/O wait, the Raft peers will desync, a leader election will trigger, and your cluster will flap.
Pro Tip: Do not run a Consul server cluster on standard mechanical HDD VPS providers. The I/O wait will kill your consensus during high traffic. At CoolVDS, we use NVMe storage which virtually eliminates I/O wait, ensuring your Raft leader remains stable even under heavy write loads.
Here is a production-ready config.json for a Consul 0.6 server node running on CentOS 7. Save it as /etc/consul.d/config.json:
{
  "datacenter": "oslo-dc1",
  "data_dir": "/var/lib/consul",
  "log_level": "INFO",
  "node_name": "coolvds-node-01",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "10.0.0.5",
  "client_addr": "0.0.0.0",
  "retry_join": ["10.0.0.6", "10.0.0.7"],
  "ui": true
}
Start it up (the agent loads every JSON file in the config directory, so there is no need for a separate -config-file flag):
$ consul agent -config-dir=/etc/consul.d/
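None of this matters unless your services actually register themselves with Consul. Here is a minimal service definition sketch for the webapp we will load-balance later; the file name, port, and health-check endpoint are my assumptions, so adapt them to your containers. Drop it in /etc/consul.d/ on the node running the container and run "consul reload":

```json
{
  "service": {
    "name": "webapp",
    "tags": ["production"],
    "port": 8080,
    "check": {
      "script": "curl -sf http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

The "production" tag is what lets us query the service as "production.webapp" further down.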
Step 2: The Data Plane (HAProxy 1.6)
HAProxy 1.6 was released just a couple of months ago (October 2015), and it brought a game-changing feature: DNS Resolution. Before 1.6, you had to reload HAProxy every time an IP changed. Now, HAProxy can query Consul's DNS interface directly.
However, for maximum performance and complex routing rules, I still prefer the Consul Template method in production environments where millisecond latency counts.
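For completeness, here is what the pure-DNS approach looks like. Consul answers DNS queries on port 8600 by default; the backend name and port below are my assumptions, not gospel:

```
resolvers consul
    nameserver consul1 127.0.0.1:8600
    resolve_retries 3
    timeout retry 1s
    hold valid 10s

backend app-dns
    balance roundrobin
    server webapp production.webapp.service.consul:8080 check resolvers consul
```

The catch: HAProxy 1.6 only resolves A records, so the port stays hardcoded and a single server line tracks a single IP. That is exactly why I reach for Consul Template when I need the full node list.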
Create a template file /etc/haproxy/haproxy.ctmpl:
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http-in
    bind *:80
    default_backend app-backend

backend app-backend
    balance roundrobin{{range service "production.webapp"}}
    server {{.Node}} {{.Address}}:{{.Port}} check{{end}}
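To make the template concrete: assuming two healthy webapp instances are registered in Consul (node names and addresses here are hypothetical), consul-template renders the backend roughly as:

```
backend app-backend
    balance roundrobin
    server coolvds-node-02 10.0.0.20:8080 check
    server coolvds-node-03 10.0.0.21:8080 check
```

When an instance fails its health check, Consul drops it from the query result, the server line disappears on the next render, and HAProxy reloads. No human involved.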
Step 3: Automating the Glue
Now, run consul-template as a daemon. It will watch for changes in the "production.webapp" service. When a new Docker container registers itself with Consul, this tool will rewrite the HAProxy config and reload the service seamlessly.
$ consul-template \
-consul 127.0.0.1:8500 \
-template "/etc/haproxy/haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"
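Running that by hand is fine for a demo, but on CentOS 7 you will want systemd supervising it. A minimal unit sketch; the binary path is an assumption, so point ExecStart at wherever you installed consul-template:

```ini
# /etc/systemd/system/consul-template.service
[Unit]
Description=consul-template (renders HAProxy config from Consul)
After=network.target consul.service

[Service]
ExecStart=/usr/local/bin/consul-template \
  -consul 127.0.0.1:8500 \
  -template "/etc/haproxy/haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then: systemctl enable consul-template && systemctl start consul-template.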
Why Infrastructure Choice is a Security Feature
Technological implementation is only half the battle. The other half is where your bits actually live. Since the Schrems ruling invalidated Safe Harbor, many Norwegian companies are realizing that hosting on US-controlled public clouds puts them in a legal gray area under the Data Protection Directive.
Running this stack on CoolVDS solves two problems:
- Legal Compliance: Your data stays in Norway/Europe, governed by strict privacy laws, not the US Patriot Act.
- Performance Consistency: We do not oversell our CPU. When HAProxy needs to process 10,000 requests per second, the CPU cycles are there. No "noisy neighbor" effect stealing your cycles while your load balancer chokes.
The Verdict
Microservices are powerful, but without a proper service discovery layer, they are a liability. By combining the stability of Consul, the raw performance of HAProxy 1.6, and the dedicated I/O of CoolVDS NVMe instances, you build a system that heals itself.
Do not let legacy networking slow down your deployment pipeline. Spin up a CoolVDS instance today—provisioning takes less than 60 seconds—and start building a service fabric that actually works.