Is it safe to run wp-cron.php twice if the first instance takes too long?

How wp-cron.php behaves depends on how you execute it. If you execute it via an HTTP request with doing_wp_cron in the query string, it checks whether another wp-cron process has set a lock and exits if one has. This is how WordPress executes it by default. It is easily done from a crontab like so:

*/10 * * * * /usr/bin/wget -q -O - "http://www.example.com/wp-cron.php?doing_wp_cron=`date +\%s.\%N`" > /dev/null 2>&1
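
For reference, this mirrors what WordPress itself does on an ordinary page load. Below is a simplified paraphrase of spawn_cron() in wp-includes/cron.php, not the exact core source: the lock value goes into the doing_cron transient and into the query string of a non-blocking loopback request.

// Simplified paraphrase of spawn_cron(), not the exact core source.
$doing_wp_cron = sprintf( '%.22F', microtime( true ) );
set_transient( 'doing_cron', $doing_wp_cron );

wp_remote_post(
	site_url( 'wp-cron.php?doing_wp_cron=' . $doing_wp_cron ),
	array(
		'timeout'  => 0.01,  // fire and forget
		'blocking' => false,
	)
);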

But if you execute it directly with the PHP CLI, as you are doing, it behaves differently. It checks WP_CRON_LOCK_TIMEOUT, which defaults to sixty seconds. If the existing process has held the lock for longer than that, the new process claims the lock for itself and proceeds to run the scheduled jobs, starting over from the beginning of the queue.
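
The check itself is simple. Here is a compressed sketch of the relevant logic in wp-cron.php (paraphrased, not the exact core source):

// The lock is a microtime() timestamp kept in the 'doing_cron' transient.
$lock = (float) get_transient( 'doing_cron' );

// If the lock is younger than WP_CRON_LOCK_TIMEOUT (60 seconds unless
// overridden in wp-config.php), defer to the process that holds it.
if ( $lock + WP_CRON_LOCK_TIMEOUT > microtime( true ) ) {
	return;
}

// Otherwise the lock is considered stale: claim it and start running
// the scheduled jobs from the top.
$doing_wp_cron = sprintf( '%.22F', microtime( true ) );
set_transient( 'doing_cron', $doing_wp_cron );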

This is not a disaster, because wp-cron unschedules and reschedules jobs as it goes. Right before it runs a job, it unschedules it, which prevents the job from running twice[1]. And right after it finishes executing a job, it checks that it still owns the lock; if it does not, it exits. That is why you see the first process quit shortly after you start the second.
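
In code terms, the per-job loop in wp-cron.php looks roughly like this (again a compressed paraphrase, not the exact core source):

foreach ( $crons as $timestamp => $cronhooks ) {
	foreach ( $cronhooks as $hook => $events ) {
		foreach ( $events as $event ) {
			// Recurring events are rescheduled first, then this
			// occurrence is removed *before* it runs, so another
			// process won't normally pick it up too.
			if ( ! empty( $event['schedule'] ) ) {
				wp_reschedule_event( $timestamp, $event['schedule'], $hook, $event['args'] );
			}
			wp_unschedule_event( $timestamp, $hook, $event['args'] );

			// Run the job.
			do_action_ref_array( $hook, $event['args'] );

			// If another process claimed the lock while the job
			// ran, bail out rather than keep going.
			if ( _get_cron_lock() !== $doing_wp_cron ) {
				return;
			}
		}
	}
}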

So in your case, you have a set of scheduled jobs that sometimes take more than twenty minutes to run. Probably some jobs on the every10minutes schedule are eating up all that time. wp-cron starts and chugs away for twenty minutes. Then another instance starts, sees that the lock is older than the sixty-second timeout, and claims the lock for itself, starting over at the beginning of the queue. The first process unscheduled and rescheduled jobs as it went, but jobs on the every10minutes schedule are now ripe to be run again, so the second process starts them over. The first process finishes the job it was on, sees it no longer owns the lock, and quits. And because the scheduled posts are at the end of the queue, they never get posted.
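
For context, a schedule like that usually comes from a custom interval registered through the cron_schedules filter, something like the sketch below. The hook name my_slow_job is hypothetical; your plugin or theme will have its own.

// Hypothetical registration of an 'every10minutes' interval and a job
// on it; the names here are illustrative, not taken from your site.
add_filter( 'cron_schedules', function ( $schedules ) {
	$schedules['every10minutes'] = array(
		'interval' => 10 * MINUTE_IN_SECONDS,
		'display'  => 'Every 10 Minutes',
	);
	return $schedules;
} );

if ( ! wp_next_scheduled( 'my_slow_job' ) ) {
	wp_schedule_event( time(), 'every10minutes', 'my_slow_job' );
}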

Also, because wp-cron unschedules a job just before running it, if the process were to die between unscheduling the job and the job completing (by running out of memory, say), the job would simply be lost.
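
If you want to keep a second run from stealing the lock while a long batch is still going, the timeout mentioned above can be raised in wp-config.php. The values below are only assumed examples; size the timeout to your longest run.

// Assumed example: give a running wp-cron process a 30-minute lock
// window (1800 seconds) instead of the 60-second default.
define( 'WP_CRON_LOCK_TIMEOUT', 1800 );

// And since you already trigger wp-cron from system cron, the
// on-request trigger can be turned off so page loads don't spawn
// extra runs.
define( 'DISABLE_WP_CRON', true );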

  1. Most of the time. There is a race condition: if another process claims the lock after the first process checks it but before the first process unschedules the next job, that job will be run twice. On a bogged-down server, the likelihood of hitting that window is higher.
