
Supercharge WordPress, Part 2


In my last article I showed you how to set up CentOS, nginx, and MySQL to optimize your WordPress installation. That's a great start, but there's more you can do. Here we'll set up PHP-FPM (FastCGI Process Manager), WordPress itself, and Varnish Cache, a web application accelerator.

I use PHP-FPM as an alternative PHP FastCGI implementation because it offers some useful features for sites of any size, and especially busier sites. Some of the features I find especially useful in this implementation of PHP are:

    • Advanced process management with graceful stop and start

    • The ability to start workers with different uid, gid, chroot, or environment settings and different php.ini files (replaces safe_mode)

    • Stdout and stderr logging

    • Emergency restart in case of accidental opcode cache destruction

    • Accelerated upload support

    • Support for a "slowlog" that records all slow scripts

Unfortunately, PHP-FPM is not available in the default CentOS repository, and not even in EPEL, the Extra Packages for Enterprise Linux repository we set up earlier, so we have to add another repository, this one from the IUS Community Project. The goal of this project is to provide up-to-date and regularly maintained RPM packages for the latest upstream versions of PHP, Python, MySQL, and other common software for Red Hat Enterprise Linux (RHEL).

To add IUS as a repository on a 32-bit system, run this command as root from a terminal window:

rpm -ivh

If you have a 64-bit system, run:

rpm -ivh

After adding the repository, update the package list with the command yum update.

Now you can install PHP-FPM, package php53u-fpm-5.3.8-3.ius.el6.i686 to be exact, and the corresponding PHP extension to talk with MySQL with the command:

yum install php53u-fpm php53u-mysql

The defaults in the application's generic configuration file, /etc/php-fpm.conf, are good enough, but we'll tweak the pool configuration file. PHP-FPM can launch multiple pools of FastCGI processes listening on separate ports to meet the demands of multi-domain virtual hosting environments. The pools' configuration files are located in /etc/php-fpm.d/poolname.conf. The default pool is named www, and that's the only pool we'll use. The main options are:

Pool name The default is [www].

listen specifies the address on which to accept FastCGI requests. It's usually set to a localhost high port; 9000 is the default.

listen.allowed_clients Optionally specifies a list of IPv4 addresses of FastCGI clients that are allowed to connect. If you listen only on localhost you can safely skip this.

user and group Put your web server's user and group here. For us, this is nginx/nginx.

pm specifies the type of process manager. The options are static or dynamic; I use dynamic so I can start and stop more processes as needed.

pm.max_children sets the limit on the number of simultaneous requests that will be served. It controls the number of child processes created when pm is set to "static," and the maximum number of child processes created when pm is set to "dynamic." It's equivalent to the Apache MaxClients directive you may be familiar with if you use mpm_prefork. The best value for this variable depends on how much RAM your system has and how much RAM you allow each PHP process. If you've set 32MB as the RAM limit for PHP, a value of 20 here means the PHP-FPM processes together could use up to 640MB of RAM.

pm.start_servers defines the number of child processes created on startup.

pm.min_spare_servers is the desired minimum number of idle server processes. You want some spare server processes ready to answer clients, but not too many.

pm.max_spare_servers is the desired maximum number of idle server processes.

pm.max_requests controls the number of requests each child process should execute before respawning. This directive can be useful to work around memory leaks in third-party libraries. By default it's not activated.

request_terminate_timeout specifies the timeout in seconds for serving a single request, after which the worker process will be killed. By default it's not set, but I suggest putting a value here. Depending on the contents of your site it could be anything from 30 to 300; a fast WordPress site should be able to serve all requests within 30 seconds.

request_slowlog_timeout specifies a value in seconds (or minutes) that indicates a timeout for serving a single request, after which a PHP backtrace is to be dumped to the slowlog file. Set this to 1 or 2 seconds less than your request_terminate_timeout to see which PHP processes are slow and so are terminated, as long as the previous directive is activated as well.

slowlog defines the path to the log file where you can see slow requests.

By default PHP-FPM uses all the values defined in the file /etc/php.ini, but you can put in this file additional php.ini definitions that will be specific to this pool of workers. These definitions will override the values previously defined in the php.ini file. One useful setting for WordPress is php_admin_value[memory_limit], which specifies the maximum amount of memory that PHP can use in megabytes. Depending on the total RAM available on your server, this value can range from 32 to 512.
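The RAM arithmetic behind pm.max_children is worth making explicit before looking at concrete configurations. Here is a quick shell sketch of the sizing rule; the 256MB budget and 32MB per-child figures are assumptions for illustration, not recommendations:

```shell
# Rough PHP-FPM sizing: how many workers fit in the RAM you can spare?
# Both figures below are assumptions -- substitute your own numbers.
php_ram_mb=256      # RAM budget for all PHP-FPM workers combined
per_child_mb=32     # php memory_limit allowed per worker
max_children=$((php_ram_mb / per_child_mb))
echo "pm.max_children = $max_children"
```

Plug in your own budget; the example configurations that follow were derived with the same kind of reasoning, leaving room for MySQL and the web server on the same machine.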

Here are a few example configuration settings that serve as good starting points for servers of different memory sizes running both MySQL and a web server on the same machine.

Server with 512MB RAM:

listen =
listen.allowed_clients =
user = nginx
group = nginx
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 500
request_terminate_timeout = 30
request_slowlog_timeout = 28
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[memory_limit] = 32M

Server with 1GB RAM:

listen =
listen.allowed_clients =
user = nginx
group = nginx
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500
request_terminate_timeout = 45
request_slowlog_timeout = 40
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[memory_limit] = 48M

Server with 2GB RAM:

listen =
listen.allowed_clients =
user = nginx
group = nginx
pm = dynamic
pm.max_children = 40
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500
request_terminate_timeout = 60
request_slowlog_timeout = 40
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[memory_limit] = 64M

Once your site is running you can gather more information to refine your setup. Are your scripts terminating because they reach memory_limit? Increase the value a bit. Does your system freeze because it eats up all your memory? Lower pm.max_children and/or memory_limit. Also check the nginx and PHP-FPM logs and the slowlog; they can provide valuable information.

Alternative PHP Cache

I strongly suggest installing the Alternative PHP Cache (APC), an opcode cache that's easy to install and configure. Opcode caches save the compiled form of PHP scripts in shared memory to avoid the overhead of parsing and compiling the code every time a script runs. This saves RAM and reduces script execution time.

To install APC run yum install php53u-pecl-apc. You can find the program's configuration file at /etc/php.d/apc.ini. Among the parameters in it is apc.shm_size, which specifies how much memory to use for APC. Below is a sample configuration that I use on a 512MB Linux system:
apc.optimization = 0
apc.use_request_time = 1

For a full explanation of these settings, refer to the online documentation.
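Since the sample above doesn't show apc.shm_size, here is a minimal apc.ini sketch; the 64M value is an assumption for a 512MB server, not a measured recommendation:

```ini
; Shared memory reserved for the opcode cache. 64M is an assumed
; starting point for a 512MB server -- watch apc.php and adjust.
apc.shm_size = 64M
; Check file mtimes so edited scripts are picked up automatically.
apc.stat = 1
```

If apc.php shows the cache full or heavily fragmented, raise apc.shm_size; if most of it sits unused, lower it and give the memory back to PHP-FPM or MySQL.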

APC includes a web page where you can see the usage of the cache, including fragmentation and other useful information. Copy /usr/share/doc/php53u-pecl-apc-3.1.9/apc.php to your document root and point your browser to it.

One word of warning: if you don't periodically restart PHP-FPM, your cache can become fragmented, as you'll be able to see on that status page. To avoid this, periodically empty the cache. I use a cron job that once a day calls a page on my site, apc-clear.php, which empties the cache. The script also checks that the address invoking it is local, so it cannot be called from outside. Here is the content of apc-clear.php:

<?php
// Only allow requests from the local machine (IPv4 and IPv6 loopback).
if (in_array(@$_SERVER['REMOTE_ADDR'], array('127.0.0.1', '::1'))) {
    apc_clear_cache();  // empty the opcode cache
    echo json_encode(array('success' => true));
}

And this is my cron line:

0 6 * * * /usr/bin/curl --silent
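For reference, a complete crontab entry might look like the following; the URL is a placeholder for illustration, not the author's actual address:

```crontab
# Run once a day at 06:00 and discard the output; replace the URL with your own.
0 6 * * * /usr/bin/curl --silent http://www.example.com/apc-clear.php > /dev/null 2>&1
```

Because apc-clear.php checks REMOTE_ADDR, the curl call only works when cron runs on the web server itself.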


Now that you have the infrastructure set up, you can install WordPress easily by following the famous five-minute install guide. Also install the nginx compatibility plugin; it will make your life easier if you use shortlinks and permalinks.

After the initial install is a good time to run a first performance benchmark. Pingdom and WebPagetest can give you the total time to load your WordPress site and some additional evaluation.

One plugin in particular can improve the user experience of your site. W3 Total Cache (W3TC) caches every aspect of your site, reducing download times and providing transparent content delivery network (CDN) integration. Once installed via the admin section of your WordPress site, it adds a Performance section at the bottom of the left-hand menu in your control panel. W3TC has too many options to explain them all here, but let's look at the main features and how you can tweak settings to suit your site.

On the main W3TC page set

Page Cache: Enable
Page Cache Method: Disk Basic

For typical environments disk basic is best, or disk enhanced if you are in a shared environment. You can always test the various methods to see which one fits your site best; the right setting also depends on what kind of load you're trying to optimize your server for. Generally speaking, with opcode caching, pages are stored in RAM, so they're served faster, but the PHP interpreter is involved in serving requests, which can put load on the CPU. By contrast, with disk-based caching, there's no load on the PHP interpreter if a page is already cached, but disk is slower than RAM.

The defaults are fine for most other options, but change the following:

Minify: Enable
Minify Mode: Manual
Minify Cache Method: Opcode Alternative PHP Cache (APC)

Database Cache: Enable
Database Cache Method: Opcode Alternative PHP Cache (APC)

You might need to modify these settings, depending on how you use your site. For instance, on my site I use a plugin that updates a visit counter every time an article is read. If I enable the database cache, this plugin stops working. In a shared environment, database caching can slow everything down, so test it by enabling and disabling it before you go live.

Object caching increases performance for highly dynamic sites – ones that use the Object Cache API:

Object Cache: Enable
Object Cache Method: Opcode Alternative PHP Cache (APC)

Enable HTTP compression and add headers to reduce server load and decrease file load time:

Browser Cache: Enable

If you have a CDN you can set it up in the specified fields, after which theme files, media library attachments, CSS, and JavaScript files will load instantly for site visitors:

CDN: Enable
CDN Type: Select your CDN

Page Cache Settings

So much for the first page of settings. Now go into Page Cache Settings and look at Cache Preload. With this option, you can automatically fill the cache by using an XML sitemap. To do this, set these options:

Automatically prime the page cache: Enabled
Update interval: 3600
Pages per interval: 100
Sitemap URL:


Minify Settings

Minify is an application that speeds up your site by removing comments and line breaks from CSS, HTML, and JavaScript files, combining similar files, and compressing them. Minify reduces file sizes to make the site load faster, but configuring it is the hardest part of W3TC configuration: you have to specify the list of all your JavaScript and/or CSS files, and some of them will not work once minified, so it's a trial-and-error procedure. If you're uncomfortable looking at HTML source code and finding CSS and JavaScript files there, skip this section. You don't have to use Minify; Page Cache will work just fine. But you can use Minify to squeeze out every last bit of performance and site speed. To use it:

Enable JS minify settings, and in the section "JS file management" put all your JavaScript scripts. Enable CSS minify settings and in "CSS file management" put all your CSS files.

The defaults of the other W3TC tabs should be fine. You can now save all the settings. After saving, W3TC will tell you if something is wrong or not configured.

Now you have configured every aspect of W3TC. To activate it, you have to load some nginx rules. The Install tab lists all the rules you need to add to your nginx configuration to enable the options you chose. The easiest approach is to copy the rewrite rules on that page and paste them into a file. W3TC suggests a location; /etc/nginx/conf.d/w3tc.conf is a good choice. Then go back to your nginx configuration file and add this line just before the last }:

include /etc/nginx/conf.d/w3tc.conf;

Restart nginx with the command service nginx restart.
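To make the placement concrete, here is a skeletal server block with the include in place; the server name and document root are illustrative, not taken from the article:

```nginx
server {
    listen       80;
    server_name  www.example.com;      # illustrative
    root         /var/www/wordpress;   # illustrative

    # ... your existing location blocks ...

    # W3TC rewrite rules, pasted from the plugin's Install tab
    include /etc/nginx/conf.d/w3tc.conf;
}
```

Keeping the W3TC rules in their own file means the plugin can regenerate them without you having to touch the main virtual host configuration.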

Now benchmark your site again. The changes you made should be enough to give you a satisfying result – but if you want more, and you're not scared by more configuration, there's Varnish Cache, an open source, state-of-the-art web application accelerator.


Before you install Varnish, be aware that if you have a lot of dynamic pages that use cookies and forms, Varnish will be useless, or could even cause problems if you configure it badly. If your site is not very dynamic and you can remove most of your cookies, you'll get wonderful results.

To use Varnish, you must configure nginx to listen on port 8080 so that Varnish can listen on port 80 and forward any requests that aren't in its cache to the backend server. To do this, just change the listen directive of your virtual host in /etc/nginx/conf.d/yoursite.conf from 80 to 8080, then restart nginx.
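The nginx side of that change is a one-line edit; a sketch, with an illustrative server name:

```nginx
server {
    listen       8080;                 # was 80; Varnish now owns port 80
    server_name  www.example.com;      # illustrative
    # ... the rest of the virtual host stays unchanged ...
}
```

Visitors keep hitting port 80 as before; only Varnish talks to nginx on 8080.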

Varnish is available in the EPEL repository. Install it with the command yum install varnish.

The program's main configuration file is /etc/varnish/default.vcl. To make Varnish cache a WordPress website appropriately, clear all cookies exchanged between the server and the client except for those set for the WordPress admin. If you don't do this, logging in to the admin account will fail because the pages will be cached. This is an example of a VCL file for a WordPress site:

backend default {
    .host = "localhost";
    .port = "8080";
}

acl purge {
    # Allow purge requests from the local machine only.
    "localhost";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
    }

    # Remove cookies and query string for real static files
    if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
        unset req.http.cookie;
        set req.url = regsub(req.url, "\?.*$", "");
    }

    # Unset all cookies if not WordPress admin - otherwise login will fail
    if (!(req.url ~ "wp-(login|admin)")) {
        unset req.http.cookie;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_fetch {
    # Drop any cookies WordPress tries to send back to the client.
    if (!(req.url ~ "wp-(login|admin)")) {
        unset beresp.http.set-cookie;
    }
}

Varnish and W3TC

W3TC supports Varnish: it can purge the cache every time you post a new article. To set this up, on the main W3TC page just enable the option "Enable varnish cache purging" and enter the address of your Varnish server.

Explaining all the Varnish Control Language directives goes beyond what I can cover in this article, but in the Varnish documentation you can find a full explanation of the commands and many examples.

After you've gotten Varnish working, repeat the benchmarks. You should see another improvement.

WordPress setup and optimization is a complex topic. I hope this article gives you some material to work with. While I've presented some common setups that could be perfect for you, they may need to be changed, depending on your server hardware, software, or kind of service. Test the various options and repeat the benchmarks until you find the perfect setup for your site.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.


