The safest way to automate WordPress backups

(Partial answer as I’m familiar with AWS, not Google Drive.)

Having a WordPress DB stored somewhere on a cloud service is, in my opinion, no worse than hosting the site on a virtual or cloud server (virtualisation platforms all allow you to reset the server’s root password – albeit typically with a reboot – so your entire machine is already at risk if someone discovers your control panel logon).

Make sure you’ve followed basic security precautions for your server, and also consider the two-factor-auth plugin: even at its lowest security setting it will send a one-time password to the user’s email address when they log in, so an attacker would need to compromise the user’s email account as well to gain access to the WordPress dashboard.
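If you manage the site with WP-CLI, a two-factor plugin can be installed and activated from the shell (the plugin slug below is a guess on my part; check the exact slug in the WordPress.org plugin directory):

# install and activate a two-factor plugin (slug is an assumption, verify it first)
wp plugin install two-factor-authentication --activate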

Personally, many of my sites run on Linode. I use their backup service as a precaution, but it can only restore the entire drive, so it’s no help if you accidentally delete a single file or directory and want to recover it without rolling back everything else that has changed on the machine since the backup was taken.

backup2l is a free, simple and reliable command-line backup utility that lets you recover files individually. It uses well-known Linux tools like tar and gzip, and most distributions carry a package for it, so there’s no need to install it manually.
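On Debian or Ubuntu, for instance, installing it, kicking off a backup by hand and pulling a single file back out looks roughly like this (the WordPress path is just an example, and the exact restore pattern syntax is described in the backup2l man page):

# install from the distribution's repositories
apt-get install backup2l

# run a backup manually (normally the daily cron job does this)
backup2l -b

# restore files matching a pattern into the current directory
backup2l -r "*wp-config.php"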

By default it runs daily from cron, and because the backups are incremental you can configure it to run as often as you like.
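For context, the parts of /etc/backup2l.conf that control what gets backed up and where look something like this (paths and retention numbers are examples; on older backup2l versions SRCLIST is a plain string rather than a bash array):

# where backup2l writes its archives (this is the directory synced to S3 below)
BACKUP_DIR="/var/backups/localhost"
VOLNAME="all"

# what to back up: the WordPress docroot plus the SQL dumps produced by
# the PRE_BACKUP hook shown below (paths are examples)
SRCLIST=(/etc /var/www /var/backups/mysql /root)

# skip files that don't need backing up
SKIPCOND=(-path "*.nobackup*" -o -name "*.tmp")

# how many differential levels and how many backups to keep per level
MAX_LEVEL=3
MAX_PER_LEVEL=8
MAX_FULL=2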

I use the following code in my /etc/backup2l.conf to automatically dump the SQL databases before the backup runs:

# This user-defined bash function is executed before a backup is made
PRE_BACKUP ()
{
    # e. g., shut down some mail/db servers if their files are to be backup'ed

    # On a Debian system, the following statements dump a machine-readable list of
    # all installed packages to a file.

    echo "  writing dpkg selections to /root/.dpkg-selections.log..."
    dpkg --get-selections | diff - /root/.dpkg-selections.log > /dev/null || dpkg --get-selections > /root/.dpkg-selections.log

    echo " dumping databases"
    for i in /var/lib/mysql/*/; do
        name=`basename $i`

        # get username + password
        user=$(grep user /etc/mysql/debian.cnf | awk '{print $3}' | head -n 1)
        pass=$(grep pass /etc/mysql/debian.cnf | awk '{print $3}' | head -n 1)

        # do the dump
        mysqldump --user="$user" --password="$pass" --ignore-table=mysql.event $name | gzip > /var/backups/mysql/$name.gz
    done

}
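To pull a database back from one of these dumps (the database name “wordpress” is an example), something along these lines works; --defaults-file reuses the same maintenance credentials as the dump:

# restore a single database from its dump; the database must already exist
gunzip -c /var/backups/mysql/wordpress.gz | mysql --defaults-file=/etc/mysql/debian.cnf wordpress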

Then in POST_BACKUP, I upload the backup files to an Amazon S3 bucket with s3cmd:

# This user-defined bash function is executed after a backup is made
POST_BACKUP ()
{
    # e. g., restart some mail/db server if its files are to be backup'ed

    # sync the finished archive to S3; --delete-removed also removes files
    # from the bucket that backup2l has purged locally
    /usr/bin/s3cmd sync -c /home/myhomedir/.s3cfg --delete-removed /var/backups/localhost/ s3://my-bucket-name/
}
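The .s3cfg referenced above holds the access key and secret of an IAM user that only needs access to this bucket. It can be created interactively and sanity-checked like so (paths and bucket name match the examples above):

# write the access key / secret into the config file used by POST_BACKUP
s3cmd --configure -c /home/myhomedir/.s3cfg

# confirm the credentials can see the bucket
s3cmd -c /home/myhomedir/.s3cfg ls s3://my-bucket-name/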

AWS S3 buckets have “Versioning”, which means that if someone gained access to your server and used the IAM credentials there to delete the backup files from your bucket, you’d still be able to get them back.
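Versioning is switched on per bucket; a quick way to do that is from the AWS CLI, using an administrative profile rather than the server’s own credentials (bucket name as in the example above):

# turn on versioning for the backup bucket
aws s3api put-bucket-versioning \
    --bucket my-bucket-name \
    --versioning-configuration Status=Enabled

# deleted or overwritten objects then remain recoverable as older versions
aws s3api list-object-versions --bucket my-bucket-name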