Stop Making Users Wait: Asynchronous Processing with RabbitMQ
It is 2013, and if your PHP application still handles image resizing, invoice generation, or third-party API calls synchronously during a user request, you are doing it wrong. I recently audited a Magento store based in Oslo that was taking 8 seconds to process a checkout. Why? Because the server was trying to talk to an ERP system in real-time before returning the "Success" page to the customer.
The server didn't melt, but the conversion rate did.
The solution isn't just "more RAM." The solution is decoupling architecture. Enter RabbitMQ. Unlike Redis (which is great, but often treats persistence as an afterthought), RabbitMQ is built on Erlang/OTP. It is designed for telecom-grade reliability. But it is also a beast to configure if you don't respect its hunger for resources.
The Architecture: Producer, Exchange, Queue, Consumer
The concept is simple. Your web application (the Producer) sends a message to an Exchange. The Exchange routes it to a Queue. A background worker (the Consumer) picks it up and does the heavy lifting. Your user gets an instant response, and the server processes the task when resources allow.
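Before touching the broker itself, the flow is worth seeing in one file. Here is a minimal, broker-free Python sketch where the standard library's queue.Queue stands in for RabbitMQ, just to make the producer/consumer handoff concrete — in production the producer would publish over AMQP (e.g. with php-amqplib from PHP) and the consumer would be a separate worker process:

```python
import queue
import threading
import time

# queue.Queue stands in for RabbitMQ here purely to illustrate the flow.
tasks = queue.Queue()
results = []

def handle_request(order_id):
    """The 'web request': enqueue the slow work and return immediately."""
    tasks.put({"task": "sync_to_erp", "order_id": order_id})
    return "Success"  # the user sees this instantly

def worker():
    """The 'consumer': drains the queue at its own pace."""
    while True:
        msg = tasks.get()
        if msg is None:      # sentinel: shut down cleanly
            break
        time.sleep(0.01)     # pretend this is the slow ERP call
        results.append(msg["order_id"])
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
print(handle_request(1001))  # returns before the ERP "call" finishes
print(handle_request(1002))
tasks.put(None)
t.join()
print(results)
```

The point of the sketch: handle_request returns as soon as the message is enqueued, and the 8-second ERP conversation happens on the worker's clock, not the customer's.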
However, running this in production requires more than just a one-line yum install. You need to tune the OS, specifically file descriptors and TCP buffers.
Step 1: The Installation (CentOS 6)
We are using CentOS 6.4 for this setup. Do not rely on the default repositories; they are often outdated. We want the latest stable 3.1.x branch.
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum install erlang
rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
rpm -Uvh http://www.rabbitmq.com/releases/rabbitmq-server/v3.1.3/rabbitmq-server-3.1.3-1.noarch.rpm
Step 2: Tuning the OS for High Throughput
Here is where most VPS hosting environments fail. RabbitMQ consumes file descriptors and sockets aggressively — one socket per client connection, plus file handles for persisted queues. The default Linux limit is often 1024. Hit it and the broker starts refusing connections or crashes outright, taking any unpersisted messages with it.
Check your current limits:
ulimit -n
If it returns 1024, you need to edit /etc/security/limits.conf immediately:
rabbitmq soft nofile 65536
rabbitmq hard nofile 65536
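One caveat worth knowing: limits.conf is applied by PAM at login time, and a SysV service started at boot does not always pass through PAM. On RPM installs the init script typically sources /etc/sysconfig/rabbitmq-server before starting the broker (check your own init script to confirm), so raising the limit there as well is a safe belt-and-braces move:

```shell
# /etc/sysconfig/rabbitmq-server — sourced by the init script before startup
ulimit -n 65536
```

Verify after a restart with rabbitmqctl status, which reports the file descriptor limit the running node actually got.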
Pro Tip: On a shared hosting environment or a cheap OpenVZ container, you often cannot change these kernel-level parameters. This is why we deploy RabbitMQ strictly on KVM-based virtualization, like the instances provided by CoolVDS. You need full control over the kernel to guarantee stability. OpenVZ "guarantees" are often marketing fiction.
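As for the TCP buffers mentioned earlier, they live in /etc/sysctl.conf. The values below are illustrative starting points for a busy gigabit link, not universal truths — apply them with sysctl -p and then measure:

```
# /etc/sysctl.conf — larger socket buffers for many concurrent AMQP connections
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```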
Step 3: Network Latency and The "Split Brain" Problem
If you are setting up a cluster for high availability (HA), network latency is your biggest enemy. Erlang's inter-node heartbeat is unforgiving: if latency spikes long enough for nodes to miss their net_ticktime window, the cluster partitions ("split brain") and you lose data consistency.
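Since the 3.1 release, the broker can at least react to partitions on its own via cluster_partition_handling in rabbitmq.config. A minimal sketch (which mode is right depends on whether you value consistency or availability more):

```erlang
[
  {rabbit, [
    %% pause_minority: the minority side of a partition pauses itself,
    %% sacrificing availability to protect consistency. The alternative,
    %% autoheal, does the opposite. Default is ignore.
    {cluster_partition_handling, pause_minority}
  ]}
].
```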
For Norwegian businesses, data sovereignty is also a legal concern under the Personopplysningsloven (Personal Data Act). If you are processing queue messages containing customer data (emails, addresses), routing that data through a server in Frankfurt or Amsterdam adds unnecessary latency and potential compliance headaches with Datatilsynet.
Keep your broker close to your web server. If your users are in Norway, your RabbitMQ instance should be in Oslo. The roundtrip time (RTT) from a CoolVDS instance in Oslo to NIX (Norwegian Internet Exchange) is typically under 2ms. Compare that to 30ms+ for continental Europe. That latency adds up when you are pushing thousands of messages per second.
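Back-of-envelope math shows why the RTT matters: if a publisher waits for a per-message broker confirm before sending the next message, the round trip alone caps throughput. The numbers below assume the RTTs quoted above and strictly serial publishing (batched confirms raise the ceiling, but the ratio between the two locations stays the same):

```python
# Upper bound on msgs/sec for a publisher that blocks on each confirm:
# one message per network round trip.
def max_confirmed_publish_rate(rtt_seconds):
    return 1.0 / rtt_seconds

oslo = max_confirmed_publish_rate(0.002)       # ~2 ms RTT within Oslo
frankfurt = max_confirmed_publish_rate(0.030)  # ~30 ms RTT to the continent
print(int(oslo), int(frankfurt))  # 500 vs 33 msgs/sec per serial publisher
```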
Step 4: Management Plugin
Don't fly blind. Enable the management plugin to see your queues draining in real-time.
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
You can now access the dashboard on port 15672. (Security Warning: iptables should lock this port down to your office IP only. Do not leave this open to the world.)
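A fragment for /etc/sysconfig/iptables along these lines does the job (203.0.113.10 is a placeholder from the documentation range — substitute your actual office IP, and adjust if your ruleset uses a custom chain):

```
# Allow the management UI only from the office, drop everyone else
-A INPUT -p tcp --dport 15672 -s 203.0.113.10 -j ACCEPT
-A INPUT -p tcp --dport 15672 -j DROP
```

Reload with service iptables restart and confirm from an outside host that the port is closed.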
Why Infrastructure Choice Matters
RabbitMQ is an I/O and memory-intensive application. When the message queue fills up, it starts paging to disk. If your underlying storage is a slow SATA drive shared with 50 other noisy neighbors, your "asynchronous" system effectively freezes.
This is the primary use case where I recommend CoolVDS over generic cloud providers. SSD (Solid State Drive) performance consistency is critical here. We aren't just talking about sequential read speeds; we are talking about IOPS (Input/Output Operations Per Second). When a queue spikes to 100,000 messages, you need storage that can commit that backlog to disk without stalling.
Configuration Checklist for Production:
- Erlang Cookie: Ensure /var/lib/rabbitmq/.erlang.cookie is identical on all clustered nodes.
- Memory High Watermark: Set vm_memory_high_watermark in rabbitmq.config (the default is 0.4; 0.6 is reasonable on a box dedicated to RabbitMQ) so publishers are throttled before the OS OOM killer steps in.
- Disk Space: Ensure you have ample space for persistent messages. If free disk space drops below the disk_free_limit threshold, RabbitMQ stops accepting messages.
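Pulling that checklist together, a minimal rabbitmq.config might look like the following. The thresholds are the recommendations from this article, not universal defaults; the mem_relative form of disk_free_limit is one of the accepted variants:

```erlang
[
  {rabbit, [
    %% Throttle publishers at 60% of RAM instead of the 40% default.
    {vm_memory_high_watermark, 0.6},
    %% Stop accepting messages when free disk falls below 1x RAM.
    {disk_free_limit, {mem_relative, 1.0}}
  ]}
].
```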
Decoupling your application is the only way to scale past a single server. But remember: a message broker is a critical component. Treat it with the respect it deserves—give it a proper KVM environment, tune your file descriptors, and keep the latency low.
Ready to decouple? Spin up a CentOS 6 instance on CoolVDS today. With our Oslo datacenter, your message latency is virtually non-existent.