Performance tips for a large user base [closed]

You can use “W3 Total Cache”, which isn’t just a static file cache: it also uses opcode caching, memcached, and object caching to cut page load time. APC (or another opcode cache) would be a good addition to your server, as would a lightweight httpd in place of the heavier Apache.
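As a rough sketch of why object caching helps, here is how an expensive query can be stored in the WordPress object cache. With W3 Total Cache (or another persistent object-cache drop-in) backed by APC or memcached the value survives across requests; with stock WordPress it only lives for the current request. The key, group, and function names here are placeholders, not part of any plugin.

    <?php
    function example_get_popular_posts() {
        // Try the object cache first.
        $posts = wp_cache_get( 'popular_posts', 'example' );

        if ( false === $posts ) {
            // Expensive query only runs on a cache miss.
            $posts = get_posts( array(
                'numberposts' => 10,
                'orderby'     => 'comment_count',
                'order'       => 'DESC',
            ) );
            // Cache for 5 minutes.
            wp_cache_set( 'popular_posts', $posts, 'example', 300 );
        }

        return $posts;
    }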

Forcing GZIP on users is also generally a good idea: most clients that don’t advertise GZIP support can in fact decompress GZIP responses, because it’s often a firewall or proxy that strips the Accept-Encoding request header rather than the browser lacking support.
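A minimal sketch of the “forced” approach in PHP, assuming you really do want to compress even when no Accept-Encoding header arrives. Test carefully before relying on it: the small minority of clients that genuinely can’t decompress GZIP will see garbage.

    <?php
    // Buffer the whole page and gzip it on flush, regardless of what
    // the client claims to accept.
    ob_start( 'example_force_gzip' );

    function example_force_gzip( $html ) {
        return gzencode( $html, 6 ); // compression level 6 is a reasonable default
    }

    // Headers are queued before any output is flushed.
    header( 'Content-Encoding: gzip' );
    header( 'Vary: Accept-Encoding' );

    // ... render the page as usual; the buffer is compressed when it flushes.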

However, roughly 80% of page load time is generally spent on the front end, so that’s where you’ll want to focus. “W3 Total Cache” can concatenate your CSS and JavaScript as well as minify the files. It works best if your JavaScript and CSS are only loaded on the pages that actually need them (see the sketch below); most sites don’t do that, so the extra configuration it requires is mostly an annoyance. Minification also tends to break things, so I stick to concatenation only.
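Here is a hedged example of loading assets only where they’re needed, so a concatenation plugin has less to bundle. The handle names, file paths, and the 'contact' page slug are assumptions for illustration.

    <?php
    function example_enqueue_assets() {
        // Only load the contact form assets on the contact page.
        if ( is_page( 'contact' ) ) {
            wp_enqueue_style(
                'contact-form',
                get_stylesheet_directory_uri() . '/css/contact.css'
            );
            wp_enqueue_script(
                'contact-form',
                get_stylesheet_directory_uri() . '/js/contact.js',
                array( 'jquery' ),
                '1.0',
                true // load in the footer
            );
        }
    }
    add_action( 'wp_enqueue_scripts', 'example_enqueue_assets' );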

Serving static files from a cookieless domain will only save a few milliseconds, but for real savings in page load a CDN will save roughly 100 ms per item. Splitting the files across multiple domains also improves page load in older browsers, which limit the number of concurrent requests per domain.
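One simple way to point uploaded images at a CDN or cookieless hostname is to filter attachment URLs; a sketch follows. The hostnames are made up, and W3 Total Cache can do this rewriting for you from its CDN settings instead.

    <?php
    // Rewrite attachment URLs to a static/CDN hostname.
    function example_cdn_url( $url ) {
        return str_replace(
            'http://www.example.com',
            'http://static.example.com',
            $url
        );
    }
    add_filter( 'wp_get_attachment_url', 'example_cdn_url' );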

You may also want to look into http://smush.it to reduce image file sizes without quality loss. (https://github.com/icambridge/filesmush is a script for running local files through Smush.it; https://github.com/tylerhall/Autosmush does the same for images on S3.)

InnoDB should be used if your comments vastly outnumber your posts, since its row-level locking handles frequent concurrent writes better than MyISAM’s table-level locks. Otherwise MyISAM may actually be faster.
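A sketch of the conversion, assuming the default WordPress table prefix and using $wpdb so the right table name is picked up. Back up the database first and run it once (e.g. from WP-CLI or a throwaway admin page), not on every page load.

    <?php
    // Switch the comments table to InnoDB for row-level locking.
    global $wpdb;
    $wpdb->query( "ALTER TABLE {$wpdb->comments} ENGINE=InnoDB" );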
