@oodavid
Created March 26, 2012 21:21
Revisions

  1. oodavid revised this gist Mar 26, 2012. No changes.
  2. oodavid revised this gist Mar 26, 2012. 1 addition and 1 deletion in README.md: parallelised the gunzip step with `xargs -n 1 -P 10`.
  3. oodavid revised this gist Mar 26, 2012. 1 addition and 1 deletion in README.md: swapped `awk '{print $4}'` for `grep -o "s3://.*\.sql\.gz"` when listing snapshots, so S3 keys containing spaces are handled.
  4. oodavid revised this gist Mar 26, 2012. 2 additions and 2 deletions in README.md: changed the example `s3folder` from `YYYY-MM-DD` to the timestamped format used by the backup script (e.g. `1332796733 - Monday 26 March 2012 @ 2218`).
  5. oodavid created this gist Mar 26, 2012. 31 additions in README.md.
    31 changes: 31 additions & 0 deletions README.md
    Original file line number Diff line number Diff line change
    @@ -0,0 +1,31 @@
    # Restore MySQL from Amazon S3

    This is a hands-on way to pull a set of MySQL dumps down from Amazon S3 and restore your databases from them

    ***Sister Document - [Backup MySQL to Amazon S3](https://gist.github.com/2206527) - read that first***

    ## 1 - Set your MySQL password and S3 bucket, make a temp dir, get a list of snapshots

    # Set our variables
    export mysqlpass="ROOTPASSWORD"
    export bucket="s3://bucketname"
    # Get a list of the snapshot dates, newest at the bottom
    s3cmd ls "$bucket" | sort
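Because the backup folders are prefixed with a Unix timestamp, a plain `sort` leaves the newest listing line last, which makes it easy to grab the latest snapshot folder automatically. A minimal sketch — the listing lines and `bucketname` below are simulated stand-ins, since real output comes from `s3cmd ls`:

```shell
# Simulated `s3cmd ls` output; the exact column layout is an assumption
# and "bucketname" is a placeholder
listing='DIR   s3://bucketname/1332700000 - Sunday 25 March 2012 @ 1130/
DIR   s3://bucketname/1332796733 - Monday 26 March 2012 @ 2218/'

# The leading Unix timestamp means lexical sort == chronological sort,
# so the newest folder is the last line
newest=$(printf '%s\n' "$listing" | sort | tail -n 1 | sed 's|.*s3://bucketname/||; s|/$||')
echo "$newest"   # 1332796733 - Monday 26 March 2012 @ 2218
```

You could then `export s3folder="$newest"` rather than typing the folder name by hand.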

    ## 2 - Download all the snapshots in the S3 folder, extract them, run them through mysql

    # Set the name of the directory you want to use (ie "1332796733 - Monday 26 March 2012 @ 2218")
    export s3folder="1234567890 - DAY DD MONTH YYYY @ HHMM"
    # Create a temporary directory and change to it
    cd $(mktemp -d)
    # Download all the snapshots
    s3cmd ls "$bucket"/"$s3folder"/ | grep -o "s3://.*\.sql\.gz" | xargs -n 1 -P 10 -I {} s3cmd get {} .
    # Unzip all the files
    ls | grep "\.sql\.gz$" | xargs -n 1 -P 10 gunzip
    # Run each file through mysql
    find . -name '*.sql' | awk '{ print "source",$0 }' | mysql -u root -p"$mysqlpass" --batch
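The revision history shows `awk '{print $4}'` being swapped for `grep -o` in the download step; the reason is that the snapshot folder names contain spaces, which awk's whitespace splitting truncates. A quick demonstration on a simulated listing line (the bucket and file names are made up):

```shell
# A fake `s3cmd ls` line -- note the spaces inside the S3 key
line='2012-03-26 22:18      1234   s3://bucketname/1332796733 - Monday 26 March 2012 @ 2218/mydb.sql.gz'

# awk splits on whitespace, so $4 stops at the first space in the key
printf '%s\n' "$line" | awk '{print $4}'
# -> s3://bucketname/1332796733

# grep -o keeps the whole key, up to the .sql.gz suffix
printf '%s\n' "$line" | grep -o "s3://.*\.sql\.gz"
# -> s3://bucketname/1332796733 - Monday 26 March 2012 @ 2218/mydb.sql.gz
```

The same spaces are why `$bucket` and `$s3folder` should be quoted: unquoted, the shell would word-split them.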

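The unzip and restore plumbing can be dry-run locally without touching S3 or MySQL; the sketch below decompresses dummy dumps in a scratch directory and prints the `source` statements that would be piped into `mysql` (the file names are made up):

```shell
# Work in a throwaway directory, as the guide does with mktemp
cd "$(mktemp -d)"

# Two dummy gzipped dumps standing in for real snapshots
printf 'CREATE TABLE a (id INT);\n' | gzip > mydb.sql.gz
printf 'CREATE TABLE b (id INT);\n' | gzip > otherdb.sql.gz

# Decompress up to 10 archives in parallel
ls | grep "\.sql\.gz$" | xargs -n 1 -P 10 gunzip

# Build the statements that would be fed to mysql's --batch mode
find . -name '*.sql' | awk '{ print "source",$0 }'
# -> source ./mydb.sql
#    source ./otherdb.sql   (order may vary)
```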
    ## 3 - Check it all went smoothly

    # Login to MySQL and have a poke
    mysql -u root -p"$mysqlpass"