
To do backups, I created a script that builds an archive of all the folders I need to back up, sends it to S3 (through s3cmd) and then deletes it once the upload has completed.

I'm looking for a way to avoid having to create the archive and then delete it, because I don't have enough space to temporarily store the archive. Is that possible?

Here's my script:

# Build a space-separated list of all databases except the internal schemas
DBLIST=`mysql -uMYSQL_USERNAME -pMYSQL_PASSWORD --events -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uMYSQL_USERNAME -pMYSQL_PASSWORD --single-transaction --routines --triggers"
BACKUP_DEST="/home/backup/db"

# Dump and gzip every database in parallel, then wait for all dumps to finish
for DB in `echo "${DBLIST}"`
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip -f > ${BACKUP_DEST}/${DB}.sql.gz &
done
wait

# Bundle the dumps into a dated archive, upload it to S3, then clean up locally
tar -czvf /home/backup/db2/`date +\%G-\%m-\%d`_db.tar.gz ${BACKUP_DEST}
s3cmd --reduced-redundancy put -r /home/backup/db2/ s3://MY-S3-BUCKET/ --no-encrypt
find /home/backup -type f -delete

On a side note, I'd bet it isn't best practice to store usernames/passwords in plain text in a crontab file. How can I solve this?

Thanks in advance :)

1 Answer


It looks like s3cmd can accept input from stdin, at least according to the resolution of this bug on 2/6/2014. If your s3cmd is newer than that, you should be able to do:

tar -czvf - ${BACKUP_DEST} | s3cmd --reduced-redundancy put - s3://MY-S3-BUCKET/`date +\%G-\%m-\%d`_db.tar.gz --no-encrypt

Most utilities use - as a filename to indicate writing to stdout or reading from stdin. That will eliminate having the .tar.gz file on your drive.
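In the context of the script above, the last three lines could then collapse into a streaming upload plus a smaller cleanup. This is just a sketch, assuming the per-database dumps still land in ${BACKUP_DEST}; you can check whether your install is recent enough with s3cmd --version.

# Stream the archive straight to S3 instead of writing it under /home/backup/db2 first
tar -czvf - ${BACKUP_DEST} | s3cmd --reduced-redundancy put - s3://MY-S3-BUCKET/`date +\%G-\%m-\%d`_db.tar.gz --no-encrypt

# Only the per-database .sql.gz files are left to remove
find ${BACKUP_DEST} -type f -delete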

As far as passwords/keys/etc. go, it looks like you can specify a configuration file for s3cmd with -c FILENAME; presumably you'd use the output generated by adding --dump-config to a complete s3cmd command line to create that file. You'd still need to protect the file, though. Likewise, MySQL has its ~/.my.cnf file (see here for an example) where you can store connection information.
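For instance (a sketch only; the file paths and placeholder keys below are mine, not taken from your setup):

# One-off: write the s3cmd settings to a root-only config, then call s3cmd with -c /root/.s3cfg-backup
s3cmd --access_key=YOUR_KEY --secret_key=YOUR_SECRET --dump-config > /root/.s3cfg-backup
chmod 600 /root/.s3cfg-backup

# /root/.my.cnf -- mysql and mysqldump read this automatically, so the -u/-p options can be dropped
[client]
user=MYSQL_USERNAME
password=MYSQL_PASSWORD

Remember to chmod 600 the .my.cnf as well; the crontab entry then only needs to run the script, with no credentials on the command line.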

Also, since you are already gzipping the individual database dumps, I suspect that gzipping the tar again won't compress the data much further and will just make the whole process take longer. Consider using -cvf - and a .tar extension for the filename instead.
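Putting that together with the streaming upload, the final command might look like this (same assumptions as above):

# The dumps are already gzipped, so skip -z and store a plain .tar
tar -cvf - ${BACKUP_DEST} | s3cmd --reduced-redundancy put - s3://MY-S3-BUCKET/`date +\%G-\%m-\%d`_db.tar --no-encrypt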

DerfK