
I'm now using this in cron to back up all my databases into a single .sql.gz file:

0 0     * * *   root    mysqldump -u root -pPASSWORD --all-databases | gzip > /home/backup/db/`date +\%Y-\%m-\%d`_db.sql.gz

I'd like to have one .tar.gz file containing a separate archive for each database I have. Is this possible?

Ward

3 Answers


Something like this may work. It is untested, but only slightly different from what I am using for backups on my systems.

#!/bin/bash

# define common vars (--lock-tables omitted: mysqldump rejects it together with --single-transaction)
OPTIONS="--verbose --flush-logs --force --quick --single-transaction"
AUTHFILE="/etc/mysql/rootauth.cnf"
BACKUPDIR="/srv/backup/mysql/"
BACKUPDATE=`date +"%y%m%d%H"`

# create temp folder (this isn't entirely safe, but be sure only root or backup user has 
#                     write access here, you might want to use mktemp)
mkdir -p ${BACKUPDIR}/tmp/

# get a list of all the databases on the system
DBSQL="SELECT SCHEMA_NAME FROM information_schema.SCHEMATA where SCHEMA_NAME!='information_schema' \
       AND SCHEMA_NAME!='performance_schema' order by SCHEMA_NAME"
DBS=`/usr/bin/mysql --defaults-extra-file=${AUTHFILE} --batch \
                                  --skip-column-names --execute "$DBSQL"`
DBS=`echo $DBS | tr -d '\n' | sed -e "s/ \+/ /g"`

for DB in $DBS; do
  # perform a per-database dump
  BACKUPDIRDB="${BACKUPDIR}/tmp/${DB}"
  mkdir -p ${BACKUPDIRDB}
  /usr/bin/mysqldump --defaults-extra-file=${AUTHFILE} \
       ${OPTIONS} $DB > ${BACKUPDIRDB}/backup_${BACKUPDATE}.sql
done

# create archive of everything
tar -czvf ${BACKUPDIR}/backup_${BACKUPDATE}.tar.gz -C ${BACKUPDIR} tmp/
#remove temp files
rm -rf ${BACKUPDIR}/tmp/
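
For reference, the credentials file pointed to by AUTHFILE is a standard MySQL options file read via --defaults-extra-file. A minimal example (PASSWORD is a placeholder; restrict the file to root with chmod 600):

[client]
user     = root
password = PASSWORD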
Zoredache

Create a script like this to mysqldump all databases in parallel:

DBLIST=`mysql -uroot -pPASSWORD -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uroot -pPASSWORD --single-transaction --routines --triggers"
BACKUP_DEST=/home/backup/db/`date +%Y-%m-%d`
mkdir -p ${BACKUP_DEST}
for DB in ${DBLIST}
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_DEST}/${DB}.sql.gz &
done
wait

Then place this script in the crontab.
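
For example, in /etc/crontab (assuming the script is saved as /usr/local/bin/mysqldump-all.sh and made executable; the path is illustrative):

0 0     * * *   root    /usr/local/bin/mysqldump-all.sh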

If there are too many databases to dump all at once, you could dump five at a time like this:

DBLIST=`mysql -uroot -pPASSWORD -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uroot -pPASSWORD --single-transaction --routines --triggers"
BACKUP_DEST=/home/backup/db/`date +%Y-%m-%d`
mkdir -p ${BACKUP_DEST}
COMMIT_COUNT=0
COMMIT_LIMIT=5
for DB in ${DBLIST}
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_DEST}/${DB}.sql.gz &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi

You can then add the tar commands to the script.
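
For example, appended after the final wait (a sketch reusing the script's BACKUP_DEST variable):

# bundle the per-database dumps into one archive, then remove the originals
tar -czf ${BACKUP_DEST}.tar.gz -C `dirname ${BACKUP_DEST}` `basename ${BACKUP_DEST}`
rm -rf ${BACKUP_DEST}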


You could accomplish this by making a script, run from cron, that dumps each database individually and then, as its final step, archives all of the files together into one .tar.gz.

So in your case, you would remove the --all-databases option and put the name of a database there, then repeat that line for each database you have. After all the dumps have been made, create a tar with all those files and compress it. Last but not least, perform any necessary cleanup. Put all of that into a script and run the script from cron.
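A minimal sketch of that approach (the database names, credentials, and paths are placeholders):

#!/bin/bash
BACKUPDIR=/home/backup/db
DATE=`date +%Y-%m-%d`
mkdir -p ${BACKUPDIR}/${DATE}

# dump each database individually instead of using --all-databases
for DB in mydb1 mydb2 mydb3; do
    mysqldump -u root -pPASSWORD ${DB} > ${BACKUPDIR}/${DATE}/${DB}.sql
done

# archive all of the dumps together into a single .tar.gz
tar -czf ${BACKUPDIR}/${DATE}_db.tar.gz -C ${BACKUPDIR} ${DATE}

# remove the uncompressed dumps once archived
rm -rf ${BACKUPDIR}/${DATE}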

Safado