Universal problem: the first request after ~25 seconds of inactivity is always slower (~1s) than subsequent requests (~0.1s)

Without looking at your box to see exactly what’s going on, here are some potential avenues of slowness:

Potential Causes

Apache

Apache is usually configured so that a single master httpd process is always running in the background. When a request comes in over the wire, the master spins up a new child httpd process to handle it. Once the request completes, that child process sits around for a while, and the master will pass additional requests off to it while it’s there.

After a certain period of inactivity, the child process shuts down, meaning the master has to allocate memory and spin up a new child when the next request comes in.

I use terms like “a while” and “certain period” because these are all configurable options for your server. You can manage the number of child processes that sit around using the StartServers directive. You might also want to consider looking at the MinSpareThreads and MaxSpareThreads directives, as they manage the “number of idle threads to handle request spikes.”
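For illustration only (the directive names are real, but the values below are placeholders rather than tuned recommendations), a worker-MPM block in httpd.conf might look something like this:

```apache
# Illustrative worker-MPM settings -- values are placeholders, not recommendations.
# On Apache 2.2, MaxRequestWorkers was called MaxClients.
<IfModule mpm_worker_module>
    StartServers          4
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestWorkers   150
</IfModule>
```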

More details on these directives are available in the manual.

PHP

Like Apache, PHP can be configured in a number of different ways depending on how you install it on your server. The two most popular setups for Apache are running PHP as a CGI script or as an Apache module (mod_php).

As a CGI script, PHP runs as a separate process. Like Apache, you’ll usually have an instance or two sitting around to handle requests, but after a period of inactivity they’ll shut down to free up memory on your system.

As an Apache module, PHP is loaded into Apache itself, so whenever you have a free instance of httpd you also have a PHP handler ready to go. But as described in the Apache section above, those extra instances will disappear unless you have Apache configured to keep them around and active.
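If you’re not sure which setup you’re running, PHP can tell you directly:

```php
<?php
// Prints the Server API PHP was invoked through:
// "apache2handler" => mod_php, "cgi-fcgi" => CGI/FastCGI, "cli" => command line.
echo php_sapi_name();
```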

There’s some more information on mod_php vs PHP as CGI here.

MySQL

The biggest hangup here, however, is MySQL. Like most other databases, MySQL stores its data on the disk but keeps the results of queries in an in-memory cache to make lookups a bit faster. What you are likely seeing is the result of MySQL deciding, after about 25 seconds, that you’ve gone away and don’t need the data again.

Caching basic queries is incredibly efficient for high-traffic sites, even those where the server uses an SSD for data persistence, because MySQL keeps results in memory and doesn’t have to perform any actual lookups to respond to requests. Unfortunately, you’re probably running into an issue where your cached entries go stale (or hit an internally-configured timeout threshold), so subsequent requests need to re-fetch the data from the filesystem.
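If you want to confirm whether the query cache is in play on your box, MySQL (prior to 8.0, which removed the query cache entirely) exposes both its configuration and its hit/miss counters:

```sql
-- Is the query cache enabled, and how big is it?
SHOW VARIABLES LIKE 'query_cache%';

-- How often is it being hit versus missed?
SHOW STATUS LIKE 'Qcache%';
```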

You can read more about the Query Cache in the MySQL documentation.

Mitigation

Server Configuration

If you’re not a sysadmin, I’d recommend contracting one for a couple of hours to take a look at your box and tweak settings. Installing a stack off the shelf is usually just fine (and a ~1s page load on a vanilla, uncached server isn’t anything to be ashamed of). However, default configurations are not a one-size-fits-all scenario. Having an expert on hand to configure your stack to make the best use of your hardware will be money well spent.
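As one concrete example of a default worth revisiting: on MySQL versions that still ship the query cache discussed above, two my.cnf settings control it (the values below are illustrative, not recommendations):

```ini
[mysqld]
# 0 = off, 1 = cache all cacheable queries, 2 = cache only on demand
query_cache_type = 1
# Total memory set aside for cached result sets
query_cache_size = 64M
```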

Front-End Caching

If your pages aren’t changing, you can use a front-end cache to store the rendered pages and serve them quickly. Batcache, for example, uses Memcached on the server to store rendered pages in memory. Subsequent requests grab the fully-prepared page out of Memcached rather than waiting for WordPress to re-fetch the data from the database and rebuild everything for you.
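The underlying pattern is straightforward. Here’s a minimal sketch only; Batcache itself hooks into WordPress via advanced-cache.php and handles cookies, variants, and expiry far more carefully:

```php
<?php
// Sketch of full-page caching with Memcached (illustrative, not Batcache's code).
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$key = 'page:' . md5($_SERVER['REQUEST_URI']);

if ($html = $cache->get($key)) {
    echo $html;   // Cache hit: serve the stored page, skip rendering entirely
    exit;
}

ob_start();
// ... WordPress renders the page as normal here ...
$html = ob_get_contents();
$cache->set($key, $html, 300);  // Keep the rendered page for 5 minutes
ob_end_flush();
```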

WP Super Cache does something similar, but instead stores the rendered page as a static HTML file, so visitors grab that file directly without WordPress parsing anything. The bottleneck for page loads then becomes the speed of your SSD plus the bandwidth supplied by your host.
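Under the hood, WP Super Cache’s mod_rewrite mode generates .htaccess rules roughly along these lines (heavily simplified; the real rules also check cookies, query strings, and HTTPS before serving the static copy):

```apache
RewriteEngine On
RewriteCond %{REQUEST_METHOD} GET
# Serve the pre-rendered file directly if one exists for this URL
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
RewriteRule ^ /wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
```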
