Backup MySQL to Amazon S3

This is a simple way to back up your MySQL databases to Amazon S3 as a nightly backup - all of this is done on your server :-)

Install s3cmd

This is for CentOS 5.6; see http://s3tools.org/repositories for other systems such as Ubuntu.

# Install s3cmd
cd /etc/yum.repos.d/
wget http://s3tools.org/repo/CentOS_5/s3tools.repo
yum install s3cmd
# Setup s3cmd
s3cmd --configure
    # You'll need to enter your AWS access key and secret key here; everything else is optional and can be left at the defaults :-)
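
If you're on Debian or Ubuntu rather than CentOS, s3cmd is in the standard package repositories on most releases, so the install step is simply the following (the configure step is the same):

# Install s3cmd from the distro repositories (Debian/Ubuntu)
sudo apt-get install s3cmd
# Then configure it in the same way
s3cmd --configure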

Add your script

Upload a copy of s3mysqlbackup.sh (shown at the bottom of this page - it will need some tweaks for your setup), make it executable, and test it:

# Add the executable bit
chmod +x s3mysqlbackup.sh
# Run the script to make sure it's all tickety boo
./s3mysqlbackup.sh
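
To double-check that the dumps actually landed in the bucket, you can list it with s3cmd afterwards - s3://bucketname here is the same placeholder bucket name used in the script below:

# List today's backup folder in the bucket
s3cmd ls s3://bucketname/$(date +"%Y-%m-%d")/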

Run it every night with CRON

Assuming the backup script is stored in /var/www/s3mysqlbackup.sh, we need to add a cron job to run it automatically:

# Edit the crontab
env EDITOR=nano crontab -e
    # Add the following lines:
    # Run the database backup script at 3am
    0 3 * * * bash /var/www/s3mysqlbackup.sh >/dev/null 2>&1
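
If you'd rather keep a record of each run instead of throwing the output away with /dev/null, a crontab line like this works just as well - the log path /var/log/s3mysqlbackup.log is only an example, point it wherever you like:

    # Run the database backup script at 3am and append its output to a log
    0 3 * * * bash /var/www/s3mysqlbackup.sh >> /var/log/s3mysqlbackup.log 2>&1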

Don't expose the script!

If for some reason you put this script in a public folder (not sure why you would do this), you should add the following to your .htaccess or httpd.conf file to prevent public access to the files:

### Deny public access to shell files
<Files *.sh>
    Order allow,deny
    Deny from all
</Files>
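
The Order/Deny syntax above is for Apache 2.2 (the version that ships with CentOS 5.x); if you're serving the folder with Apache 2.4 or nginx instead, the equivalent rules look roughly like this:

### Deny public access to shell files (Apache 2.4)
<Files "*.sh">
    Require all denied
</Files>

### Deny public access to shell files (nginx, inside the relevant server block)
location ~ \.sh$ {
    deny all;
}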
s3mysqlbackup.sh

#!/bin/bash

# Basic variables
mysqlpass="ROOTPASSWORD"
bucket="s3://bucketname"

# Timestamps
datestamp=$(date +"%Y-%m-%d")
timestamp=$(date +"%H-%M-%S")

# List all the databases (skip the header row and the system schemas)
databases=$(mysql -u root -p"$mysqlpass" -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|mysql\)")

# Loop the databases
for db in $databases; do
    # Define our filenames - it makes sense to keep all the dumps grouped together on S3 so we can see a day's backups in a single folder :-)
    filename="$datestamp-$timestamp-$db.sql.gz"
    tmpfile="/tmp/$filename"
    object="$bucket/$datestamp/$filename"

    # Dump and compress the database, upload it, then clean up the temporary file
    echo -e "Dumping \e[0;34m$db\e[0m to \e[0;35m$tmpfile\e[0m..."
    mysqldump -u root -p"$mysqlpass" --force --opt --databases "$db" | gzip -c > "$tmpfile"
    echo -e "Moving \e[0;34m$tmpfile\e[0m to \e[0;35m$object\e[0m..."
    s3cmd put "$tmpfile" "$object"
    echo -e "Removing \e[1;31m$tmpfile\e[0m"
    rm -f "$tmpfile"
    echo -e "\e[1;32mDatabase: $db backed up successfully to S3\e[0m"
done
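
Restoring a backup

To restore one of these backups, reverse the pipeline: pull the object down with s3cmd and feed the decompressed dump back into MySQL. Because the dump was made with --databases, it already contains the CREATE DATABASE and USE statements. The object path below is purely illustrative - use whatever s3cmd ls shows you:

# Fetch a dump from S3 (example path) and load it back into MySQL
s3cmd get s3://bucketname/2015-06-01/2015-06-01-03-00-00-mydb.sql.gz /tmp/restore.sql.gz
gunzip < /tmp/restore.sql.gz | mysql -u root -p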