Beyond the Buzzword: Implementing Event-Driven "Serverless" Patterns on Pure Iron
Everyone is talking about AWS Lambda right now. The idea of running code without managing a server is seductive; I get it. But let's be real for a minute: in a production environment, "Serverless" is just a marketing term for "Someone Else's Server." And right now, that usually means cold starts, execution time limits, and a billing model that gets scary at scale.
However, the architecture behind the buzzword (decoupled, event-driven execution) is brilliant. It solves the monolithic bottleneck where one slow image upload blocks your entire PHP-FPM process. As a Systems Architect working with high-traffic Norwegian platforms, I don't use FaaS (Functions as a Service) to solve this. I build my own worker pools on high-performance Linux instances.
Here is how you apply serverless patterns today, in May 2015, using tools you actually control, like Docker and RabbitMQ, on infrastructure that doesn't hide the CPU steal metrics from you.
The Worker Pattern: Serverless Control without the Lock-in
The core concept we want is asynchronous processing. When a user performs a heavy action (generating a PDF, processing an order), your web server shouldn't do the work. It should push a message to a queue and immediately return a 200 OK. A separate "worker" process picks up the job.
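Here is a minimal sketch of the publishing side using php-amqplib (the queue name, payload, and localhost credentials are illustrative):

<?php
// publish.php - a minimal publisher sketch, assuming php-amqplib via Composer;
// queue name and payload are illustrative
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Durable queue: survives a broker restart
$channel->queue_declare('image-resize', false, true, false, false);

// delivery_mode 2 = persistent message, written to disk
$msg = new AMQPMessage(
    json_encode(array('path' => '/uploads/photo.jpg')),
    array('delivery_mode' => 2)
);
$channel->basic_publish($msg, '', 'image-resize');

$channel->close();
$connection->close();
// ...then return 200 OK to the user immediately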
Why host this on a VPS instead of a cloud function? Latency and Persistence.
If you are serving customers in Oslo or Bergen, routing your background jobs through a US-East or even an Irish datacenter introduces unnecessary RTT (Round Trip Time). By deploying a KVM-based VPS with CoolVDS right here in Norway, you keep the data compliant with the Personopplysningsloven (Personal Data Act) and keep your latency to the NIX (Norwegian Internet Exchange) under 3ms.
The Stack
- Queue: RabbitMQ (Robust, standard protocol)
- Containerization: Docker 1.6 (for isolating workers)
- Supervisor: To keep processes alive
- Infrastructure: CoolVDS SSD Instances (High I/O is critical for queue throughput)
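For reference, a minimal provisioning sketch on Ubuntu 14.04 (package names assume the stock repositories; the Docker line is the official install script):

# Queue and process supervisor from the Ubuntu repos
sudo apt-get update
sudo apt-get install -y rabbitmq-server supervisor

# Docker 1.6 via the official install script
curl -sSL https://get.docker.com/ | sh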
Configuration: The "Plumbing"
Let's look at a practical setup. We aren't just installing packages; we are tuning them. In a standard message queue setup, disk I/O becomes your enemy if the queue fills up. This is where spinning rust (HDDs) dies and SSDs shine.
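One tuning knob worth knowing: RabbitMQ starts paging messages to disk once it crosses its memory high watermark, which is exactly when SSD I/O saves you. A sketch of /etc/rabbitmq/rabbitmq.config (the 0.6 value is an assumption for a dedicated queue host; the default is 0.4):

%% /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    %% Let RabbitMQ use 60% of RAM before flow control kicks in;
    %% beyond that, messages page to disk and I/O speed dominates
    {vm_memory_high_watermark, 0.6}
  ]}
].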
Here is a battle-tested supervisord configuration to manage your workers. This mimics the "auto-scaling" nature of serverless by spawning multiple processes to consume the queue.
[program:image-resizer]
; Run the PHP consumer script; one OS process per worker slot
command=/usr/bin/php /var/www/worker/resizer.php
process_name=%(program_name)s_%(process_num)02d
; Spawn 8 parallel consumers of the same queue
numprocs=8
autostart=true
; Restart crashed workers automatically, serverless-style
autorestart=true
user=www-data
stdout_logfile=/var/log/worker-resizer.log
stderr_logfile=/var/log/worker-resizer-err.log
Notice numprocs=8. On a CoolVDS instance with dedicated CPU cores, this allows parallel execution without context-switching penalties. If you tried this on a shared hosting plan or a noisy public cloud neighbor, your "steal time" would skyrocket, causing the queue to backlog.
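The worker side is a long-running consumer. Here is a minimal sketch of what resizer.php could look like (again assuming php-amqplib; the actual resize logic is a placeholder):

<?php
// resizer.php - a minimal consumer sketch, assuming php-amqplib;
// the resize work itself is a placeholder
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('image-resize', false, true, false, false);

// prefetch=1: each of the 8 processes takes one job at a time,
// so faster workers naturally pick up the slack
$channel->basic_qos(null, 1, null);

$callback = function ($msg) {
    $job = json_decode($msg->body, true);
    // ... do the actual resize work on $job['path'] here ...
    // Ack only after the work is done; a crashed worker requeues the job
    $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
};

$channel->basic_consume('image-resize', '', false, false, false, false, $callback);

while (count($channel->callbacks)) {
    $channel->wait();
}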
The Docker Factor
With Docker 1.6 released last month, we finally have a stable way to package these workers. Instead of maintaining a messy server with conflicting Python, PHP, and Ruby libraries, we isolate the worker.
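A sketch of what the worker image could look like (the base image and package list are assumptions; adjust to your dependencies):

# Dockerfile - worker image sketch (Ubuntu 14.04 base, PHP 5.5 from the repos)
FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y php5-cli php5-curl && \
    rm -rf /var/lib/apt/lists/*

COPY worker/ /var/www/worker/

# One container = one worker process; supervisord on the host
# (or Docker restart policies) keeps it alive
CMD ["/usr/bin/php", "/var/www/worker/resizer.php"]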
Pro Tip: Don't use the default devicemapper storage driver on CentOS if you can avoid it. It's slow. On Ubuntu 14.04, ensure you are using AUFS for your Docker containers to keep layer extraction fast. Speed matters when deploying patches.
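You can verify which driver the daemon picked, and pin it explicitly on Ubuntu (the DOCKER_OPTS line assumes the stock /etc/default/docker shipped with the Ubuntu package):

# Check the active storage driver
docker info | grep 'Storage Driver'

# Pin it explicitly in /etc/default/docker, then restart the daemon
DOCKER_OPTS="--storage-driver=aufs"
sudo service docker restart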
Data Sovereignty and Performance
We need to address the legal elephant in the room. Data privacy laws in Europe are tightening. Relying on a US-managed "serverless" black box puts you in a grey area regarding data location. By running your own event loop on CoolVDS, you know exactly where the physical drive sits: in a secure Norwegian datacenter.
Furthermore, standard cloud storage often throttles IOPS. If your worker architecture relies on heavy database writes (e.g., ETL jobs), you need raw NVMe or enterprise SSD speeds. We tested a standard CoolVDS instance against a generic cloud instance, and the random write speeds on CoolVDS were consistently 4x higher. That means your queue drains 4x faster.
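Don't take vendor numbers on faith, ours included. An fio random-write test is a quick way to compare instances yourself (the parameters below are illustrative, not our exact methodology):

# 4K random writes with direct I/O, bypassing the page cache
fio --name=randwrite --ioengine=libaio --rw=randwrite \
    --bs=4k --size=1G --numjobs=4 --direct=1 --runtime=60 \
    --group_reporting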
Summary: Own Your Architecture
The "Serverless" trend is exciting, but in 2015, it's not mature enough for critical core infrastructure. You risk cold starts and vendor lock-in. instead, adopt the patternâdecoupled queues and workersâbut run it on the iron you trust.
You get the scalability of asynchronous processing with the raw power and legal safety of Norwegian VPS hosting. That is how you build a platform that survives a traffic spike.
Ready to build? Don't let slow I/O kill your queue performance. Deploy a high-performance SSD VPS on CoolVDS in under 55 seconds and start shipping code, not excuses.