
Stop Wasting RAM: Transitioning from mod_php to PHP-FPM in 2010


Is Your Apache Process Bloat Killing Your Latency?

If you are running a high-traffic site in late 2010, you know the drill. You check top and see fifty Apache processes, each consuming 40MB of RAM, even when they are serving a 2KB static image. The machine starts swapping, I/O wait spikes, and your site crawls. This is the classic mod_php bloat problem: every Apache child carries the full PHP interpreter, even for requests that never touch PHP. Quite frankly, the traditional LAMP stack configuration is failing us.

For years, we relied on spawn-fcgi or unstable patches to get decent FastCGI performance. But with the release of PHP 5.3.3 earlier this year, the game changed. The FastCGI Process Manager (FPM) is finally part of the PHP core. If you are still using mod_php embedded inside Apache, you are voluntarily slowing down your application.

The Architecture Shift: Nginx + PHP-FPM

The philosophy is simple: let the web server handle the connections (the C10k problem) and let a specialized daemon handle the PHP execution. Nginx is proving to be far superior to Apache at handling concurrent connections because it is event-based, not process-based. It doesn't need to dedicate a heavyweight process or thread to every client.
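Wiring the two together is a few lines of Nginx configuration. The sketch below is a minimal server block, assuming FPM listens on the default 127.0.0.1:9000; the domain and document root are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;        # placeholder domain
    root /var/www/example;          # placeholder docroot

    # Static files are served straight off disk; no PHP worker is touched.
    location / {
        try_files $uri $uri/ /index.php;
    }

    # Anything ending in .php is handed to the FPM pool over FastCGI.
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

The key point is the split: the 2KB static image from the intro never costs you a 40MB PHP worker again.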

However, the real magic happens in the php-fpm.conf.

Tuning the Process Manager

The default FPM configuration shipped by most repositories (like the Dotdeb repos for Debian Lenny or EPEL for CentOS 5) is rarely optimized for production. The most critical directive is pm (process manager).

You have two main choices:

  • static: A fixed number of PHP child processes. If you have plenty of RAM, this is the fastest because there is no overhead in spawning processes.
  • dynamic: Scales processes based on demand. Good for shared environments, but risky if misconfigured.

Pro Tip: Do not trust the defaults. If you are running a VPS with 1GB of RAM, measure your average PHP process size (usually 20-30MB), subtract what the OS, Nginx, and MySQL need, and divide the remainder by that figure to cap your child processes. Otherwise the OOM (Out of Memory) killer will do the math for you.
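That arithmetic is worth writing down. Here is a back-of-the-envelope sizing script with illustrative numbers (a 2GB slice, ~512MB reserved for the OS, Nginx and MySQL, and a ~30MB average worker — substitute the RSS figures from ps on your own box):

```shell
# Rough sizing for pm.max_children. All numbers are assumptions;
# measure your real per-worker RSS before trusting the result.
TOTAL_MB=2048        # total RAM on the slice
RESERVED_MB=512      # OS + Nginx + MySQL headroom
AVG_WORKER_MB=30     # typical PHP-FPM worker resident size

AVAILABLE_MB=$((TOTAL_MB - RESERVED_MB))
MAX_CHILDREN=$((AVAILABLE_MB / AVG_WORKER_MB))

echo "pm.max_children = $MAX_CHILDREN"
```

With these inputs you land at 51 children, which is why the 2GB example configuration below caps pm.max_children at a conservative 50.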

Here is a battle-tested configuration for a mid-sized e-commerce site running on a 2GB RAM slice:

[www]
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500

Setting pm.max_requests is vital. PHP has a history of memory leaks. Restarting the process after 500 requests is a safety valve that keeps your memory usage flat over weeks of uptime.

The Secret Weapon: APC (Alternative PHP Cache)

You cannot talk about PHP performance in 2010 without mentioning APC. PHP is an interpreted language; it compiles your code into opcodes every single time a user requests a page. That is CPU suicide.

APC caches these opcodes in shared memory. I have seen Magento installations drop from 2-second generation times to 600ms just by enabling APC. Ensure you allocate enough shared memory in your php.ini:

apc.shm_size = 128M

If you see cache fragmentation climbing in your APC stats page, increase this value. Do not be stingy here.
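For reference, a fuller APC fragment might look like the following. This is a sketch, not gospel: the file path varies by distro (Debian drops it in /etc/php5/conf.d/, CentOS in /etc/php.d/), and some older APC builds expect shm_size as a bare number of megabytes rather than with the M suffix:

```ini
; Illustrative APC settings -- verify directive forms against your APC version
extension = apc.so
apc.enabled = 1
apc.shm_size = 128M   ; shared memory segment for cached opcodes
apc.ttl = 3600        ; let stale entries expire instead of fragmenting
apc.stat = 1          ; check file mtimes; consider 0 if you deploy atomically
```

Restart PHP-FPM after changing these, then watch the fragmentation graph in apc.php.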

Why Infrastructure Matters: The "Burstable" RAM Trap

Here is where many sysadmins fail. They optimize their configs perfectly but run on cheap, oversold OpenVZ containers where RAM is "burstable."

PHP-FPM, especially with APC enabled, requires dedicated, stable RAM. If your host overcommits memory and your neighbor launches a backup script, your PHP workers may fail to allocate memory, and Nginx starts returning 502 Bad Gateway errors.

This is why at CoolVDS we enforce strict resource isolation with Xen virtualization. When we allocate 2GB of RAM to your instance, it is locked to your kernel. We also use high-performance 15k RPM SAS drives (and increasingly, enterprise SSDs on select tiers) so that even if you do hit swap, you recover quickly. For a Norwegian business, reliability is a legal necessity under the Personopplysningsloven (Personal Data Act). You cannot claim "technical error" to the Datatilsynet if your server crashes due to cheap hosting.

Local Latency: The Oslo Factor

Finally, consider the network. If your target market is Norway, hosting in Germany or the US adds 30-100ms of latency per round trip. With modern web apps making dozens of AJAX calls, that latency compounds.

Peering at NIX (Norwegian Internet Exchange) is essential. CoolVDS ensures your packets take the shortest path to Telenor, NextGenTel, and Altibox users. We measure latency in single-digit milliseconds.

Implementation Plan

  1. Install nginx and php5-fpm from updated repositories.
  2. Configure your virtual host to pass .php files to 127.0.0.1:9000.
  3. Install php-apc.
  4. Stress test with ab (Apache Bench) to find your breaking point.
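For step 4, a typical ab invocation looks like this (the URL and numbers are illustrative; start low and ratchet concurrency up until errors appear):

```shell
# Fire 1,000 requests at 50 concurrent against a PHP page on the new stack.
ab -n 1000 -c 50 http://127.0.0.1/index.php
```

Watch "Requests per second" and "Failed requests" in the output, and tail the PHP-FPM log while the test runs: warnings about reaching max_children mean your pool sizing math needs revisiting.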

Don't let legacy configurations hold back your application. The tools are here, and they are stable. If you need a sandbox to test this stack without risking your production hardware, spin up a CoolVDS instance. You will feel the difference raw performance makes.
