Post meta information is automatically cached in memory for a standard WP_Query (and the main query), unless you specifically tell it not to by using the update_post_meta_cache parameter. Therefore, you should not be writing your own queries for this.
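For instance, the parameter is on by default; you only ever set it to false when you know you won't read any meta for the retrieved posts. A minimal sketch, assuming a standard WordPress install (the query args here are just placeholders):

```php
// The meta cache is primed by default; you don't need to ask for it.
$query = new WP_Query( array(
	'post_type'      => 'post',
	'posts_per_page' => 10,
) );

// Only set this to false when you are sure you will never read
// meta for these posts (e.g. a bare list of post IDs):
$light_query = new WP_Query( array(
	'post_type'              => 'post',
	'fields'                 => 'ids',
	'update_post_meta_cache' => false,
) );
```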
How the meta caching works for normal queries:
If the update_post_meta_cache parameter of the WP_Query is not set to false, then after the posts are retrieved from the DB, the update_post_caches() function is called, which in turn calls update_postmeta_cache().
The update_postmeta_cache() function is a wrapper for update_meta_cache(). It essentially issues a simple SELECT with the IDs of all the posts retrieved, which gets all the postmeta for all the posts in the query and saves that data in the object cache (using wp_cache_add()).
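Schematically, the chain described above looks roughly like this (simplified; the real core code also handles cache groups, serialization, and more):

```php
// WP_Query::get_posts()
//   → update_post_caches( $posts, ... )
//     → update_postmeta_cache( $post_ids )
//       → update_meta_cache( 'post', $post_ids )
//
// update_meta_cache() boils down to one query along these lines:
//   SELECT post_id, meta_key, meta_value
//   FROM wp_postmeta
//   WHERE post_id IN ( 1, 2, 3, ... )
//
// ...after which each post's meta is stored in the object cache:
wp_cache_add( $post_id, $meta_for_that_post, 'post_meta' );
```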
When you do something like get_post_custom(), it checks that object cache first, so it's not making extra queries to get the post meta at this point. If you've retrieved the post in a WP_Query, then the meta is already in memory and it gets it straight from there.
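So a loop like this, sketched under the assumption that update_post_meta_cache was left at its default, costs no extra meta queries ('some_key' is a hypothetical meta key):

```php
while ( $query->have_posts() ) {
	$query->the_post();
	// Both of these read from the in-memory object cache,
	// not the database:
	$all_meta = get_post_custom( get_the_ID() );
	$value    = get_post_meta( get_the_ID(), 'some_key', true );
}
wp_reset_postdata();
```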
The advantages here are many times greater than making a complex query, but the greatest advantage comes from using the object cache. If you use a persistent memory caching solution like XCache, memcached, or APC, and have a plugin that ties your object cache to it (W3 Total Cache, for example), then your whole object cache is stored in fast memory already. In that case, zero queries are necessary to retrieve your data; it's already in memory. Persistent object caching is awesome in many respects.
In other words, your custom query is probably loads and loads slower than using a proper query plus a simple persistent memory solution. Use the normal WP_Query. Save yourself some effort.
Additional: update_meta_cache() is smart, BTW. It won't retrieve meta information for posts that already have their meta information cached; it doesn't fetch the same meta twice, basically. Super efficient.
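Conceptually (this is a paraphrase, not the actual core source), the check looks like:

```php
$ids_to_fetch = array();
foreach ( $post_ids as $id ) {
	if ( false === wp_cache_get( $id, 'post_meta' ) ) {
		$ids_to_fetch[] = $id; // only posts with no cached meta hit the DB
	}
}
// If $ids_to_fetch is empty, no query runs at all.
```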
Additional additional: “Give as much work as possible to the database”… No, this is the web. Different rules apply. In general, you want to give as little work as possible to the database, if it's feasible. Databases are slow or poorly configured (if you didn't configure yours specifically, you can bet good money that this is true). Often they are shared among many sites and overloaded to some degree. Usually you have more web servers than database servers. In general, you want to get the data you need out of the DB as fast and simply as possible, then do the sorting out in web-server-side code. As a general principle, of course; different cases are different.