10

I want to transfer a MySQL dump, compressed, to S3.

I tried:

mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put s3://bucket/sql/databases.sql.gz

but then I get:

ERROR: Not enough paramters for command 'put'

How can I do this (in one line)?

zensys
  • 331

5 Answers

22

This is possible with s3cmd 1.5+ (link):
$ mysqldump ... | s3cmd put - s3://bucket/file-name.sql
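Applied to the original question, the whole thing fits on one line (a sketch assuming s3cmd 1.5+ and that the bucket already exists; the credentials and bucket name are the ones from the question):

$ mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put - s3://bucket/sql/databases.sql.gz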

daharon
  • 436
3

This now appears to be possible. With s3cmd v1.6.1:

curl -v "http://remote-server/file.mp4" | 
s3cmd [-c .s3cfg-aws] put - [-P] s3://my-bucket/[folder/]filename.mp4
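For example, with the optional bracketed parts filled in (the config file, bucket and file names below are only placeholders):

curl -v "http://remote-server/file.mp4" | s3cmd -c .s3cfg-aws put -P - s3://my-bucket/folder/filename.mp4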
1

To start with, you are missing the actual file you want to back up.

s3cmd put /backup_dir/somefile.sql.gz s3://bucket/sql/

s3cmd put takes two basic arguments: the file, and the bucket to back up to.

Secondly, I can't take credit for the following, but it basically does what you want with an intermediate script. Create a bak.sh file with the following, and that shell script will be runnable via bash; a cron entry to schedule it is sketched after the script. (Credit: http://www.wong101.com/tech-cloud/configure-s3cmd-cron-automated-mysql-backup)

S3BUCKET="<bucketname>"

# Array of Databases
DBS=("<db1>" "<db2>" "<db3>" "<db4>")

for i in "${DBS[@]}"
do
    DBNAME=$i
    FILE=$DBNAME-`date "+%Y%m%d-%H%M"`.sql.gz
    mysqldump -u[uname] -p[password] "$DBNAME" | gzip -9 > "/home/$FILE"
    #location of s3cmd may vary, modify if needed
    /usr/bin/s3cmd --config /root/.s3cfg put /home/$FILE s3://$S3BUCKET/DBBackup-$DBNAME/ >> /var/log/mysqlback.log
    sleep 5
    rm /home/$FILE

done
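To schedule it, make the script executable and add a cron entry (a sketch; the /root/bak.sh path and the 2:30 am run time are only placeholders, adjust to taste):

chmod +x /root/bak.sh
# crontab -e
30 2 * * * /bin/bash /root/bak.sh >> /var/log/mysqlback.log 2>&1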
1

You could try using - to indicate to s3cmd that it should read stdin for the source parameter; it may work. Failing that, you could do it with an intermediate step:

mysqldump -u root -ppassword --all-databases | gzip -9 > tmp.file && s3cmd put tmp.file s3://bucket/sql/databases.sql.gz ... && rm tmp.file

This will compress the output to tmp.file; if that succeeds (&&), it will put the file to S3, and if that also succeeds, it will delete the temporary file.
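A slightly more careful variant of the intermediate step uses mktemp, so the temporary file gets a unique name and is cleaned up whether or not the upload succeeds (a sketch using the same credentials and bucket as the question):

TMP=$(mktemp) &&
mysqldump -u root -ppassword --all-databases | gzip -9 > "$TMP" &&
s3cmd put "$TMP" s3://bucket/sql/databases.sql.gz
rm -f "$TMP"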

user9517
  • 117,122
0

Create mysql.sh to stream the mysqldump directly to S3 via a pipeline. No local files will be stored.

#!/bin/bash

#/home/ec2-user/s3/mysql.sh

#logging
    LOG_ROOT="/var/www/html/laravel/storage/logs/laravel-$(date +%Y-%m-%d).log"

# Basic variables

# Database credentials
USER=root
PASSWORD=password
HOST=localhost
DB_NAME=dbname

#Backup_Directory_Locations

S3BUCKET=s3://bucket/database/$(date +%a).sql.gz

# Timestamp (sortable AND readable)
TSTAMP=$(date +"%Y-%m-%d %H:%M:%S")

# Dump of MySQL database into S3
echo "[$TSTAMP] MYSQL_Dump.WARNING: MySQL Dump for DB_NAME started from cron s3/mysql.sh" >> "$LOG_ROOT"
echo "Stack trace:" >> "$LOG_ROOT"
echo "#wil be stored as s3://bucket/database/$(date +%a).sql.gz" >> "$LOG_ROOT"

mysqldump --host=$HOST --user=$USER --password=$PASSWORD  $DB_NAME  --routines  --single-transaction | gzip -9 | aws s3 cp - $S3BUCKET

TSTAMPEND=$(date +"%Y-%m-%d %H:%M:%S")
echo "[$TSTAMPEND] MYSQL_Dump.INFO: MySQL Dump finished." >> "$LOG_ROOT"
echo "Stack trace:" >> "$LOG_ROOT"
echo "#Backup stored at s3://bucket/database/$(date +%a).sql.gz" >> "$LOG_ROOT"

Create a cron job for the daily backup

#crontab -e
0 0 * * *  bash /home/ec2-user/s3/mysql.sh >/dev/null 2>&1
# runs daily at midnight
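To confirm the backup actually landed, you can list the key and check the log (the bucket path and log location are the ones used in the script above):

aws s3 ls s3://bucket/database/
tail /var/www/html/laravel/storage/logs/laravel-$(date +%Y-%m-%d).log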