I'm running a MySQL server with a couple of customer databases, some of which are quite large. We do a complete mysqldump once a day, which takes about half an hour to complete.

During this time, every request to the MySQL server gets a ridiculously slow response (sometimes around 1 s instead of something like 10 ms). Even though there are no timeout errors, I would like the queries to complete in a reasonable amount of time, so customer websites are not slowed down by hanging MySQL queries.

I researched a lot and learned about data replication, LVM snapshots, various mysqldump flag options and other work-arounds, but nothing really improved the situation.

Is there any option to set a query priority in MySQL?

If there were a way to define query priorities, I could give the backup process a lower priority to make sure all the other queries are executed faster. The mysqldump process could be temporarily paused (SIGTSTP) when it competes with another request and continued (SIGCONT) when that query is finished. However, I couldn't find a feature like this in MySQL natively.
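
Roughly, the idea would look like this in shell (the paths are placeholders, the pause trigger would have to come from outside, e.g. from watching the server load, and this is not something MySQL provides natively):

mysqldump --all-databases > /backup/full.sql &   # start the dump in the background
DUMP_PID=$!
kill -TSTP "$DUMP_PID"   # pause the dump (SIGTSTP) while other queries need the server
# ... wait until the competing queries have finished ...
kill -CONT "$DUMP_PID"   # resume the dump (SIGCONT); note it keeps its connection and locks while paused
wait "$DUMP_PID"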

Giving the mysqldump process a higher nice value doesn't work, because it just increases the time that both the backup process and the table locks are present. (I'm locking per table.) Also, limiting IO write rates just leads to longer pain.

Are there any solutions I missed, other than LVM and replication?

Gerald

3 Answers

Indeed, the bottleneck was IO. Since I was using tar -czf to compress the files and the CPU is quite fast, the drive simply couldn't keep up with the data being written (even though we use mirrored RAID volumes).

The mysqldump itself is no problem at all, but tar was slowing down the whole system massively.

I'm now using pv to shape the pipe stream:

tar -czf - ./ | pv -q -L 10m > output.tgz
  • -q suppresses pv's own output.
  • -L 10m limits the throughput to 10 MB per second. Test which value suits your server by watching the gzip process via top; I wanted it at around 50% CPU usage, so the IO impact is lowered to about 50% as well.
  • The CPU impact is lowered implicitly, because pv slows down the whole pipeline (except the dump itself, which is done before compression). A variant that throttles the mysqldump stream directly is sketched below.
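
If the dump is piped straight into compression instead of being tarred afterwards, pv can throttle that stream the same way. A rough variant (database selection, rate and output path are placeholders):

mysqldump --all-databases | pv -q -L 10m | gzip > /backup/dump.sql.gz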

Many thanks to @Michael - sqlbot and others for pointing me in the right direction.

Gerald

You most likely need to use the --single-transaction option so that the dump of every table is in one giant read transaction. This will allow for a consistent snapshot of your data.
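
As a sketch, such an invocation could look like this (the database name and output path are placeholders; note that --single-transaction only gives a consistent view for transactional engines such as InnoDB):

mysqldump --single-transaction customer_db | gzip > /backup/customer_db.sql.gz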

Even though it is possible to run mysqldump on a live system, it is best not to, because the InnoDB Buffer Pool basically gets its most frequently used data wiped out.

Even if you use XtraBackup, keep in mind the difference between XtraBackup and mysqldump when it comes to the backup's point in time. If you start a backup at 12:00 AM and it takes 30 minutes:

  • Using mysqldump --single-transaction creates a backup whose point in time is the start of the backup (12:00 AM, even after 30 minutes)
  • Using XtraBackup creates a snapshot of the data and logs whose point in time is the end of the backup (12:30 AM)

SUGGESTION

It sounds like you really need to set up MySQL Replication. This will make the live DB the Master, and your Slave can be the source of the backup.

Once MySQL Replication is established, you can do all kinds of backups (mysqldump and others) on a Slave without disturbing your Master. You no longer have to worry about setting query priorities, about slow queries competing with the mysqldump, or about having the Buffer Pool obliterated by a mysqldump and then reading data from disk more often.

On that Slave, you can do the following at midnight:

STOP SLAVE;
FLUSH TABLES;
<< Run your backup with whatever method you choose >>
START SLAVE;
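
Scripted for cron, that sequence could look roughly like this (the backup path is a placeholder and credentials are assumed to come from ~/.my.cnf):

mysql -e "STOP SLAVE; FLUSH TABLES;"
mysqldump --all-databases | gzip > /backup/nightly-$(date +%F).sql.gz
mysql -e "START SLAVE;"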

or you can be more aggressive on the Slave

mysql -e "STOP SLAVE; SET GLOBAL innodb_fast_shutdown = 0;"
service mysql stop
cp -a /var/lib/mysql /backup/mysql-datadir   # copy the data directory; the destination folder is just an example
service mysql start
mysql -e "START SLAVE;"

Please see my answers to the following posts for more suggestions.

RolandoMySQLDBA

You should definitely have a look at XtraBackup by Percona, which copies the data files directly instead of querying the DB.
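
A minimal sketch of such a backup (the target directory is a placeholder, connection credentials are assumed to be configured elsewhere, and exact option names can differ between XtraBackup versions):

xtrabackup --backup --target-dir=/backup/xtra/
xtrabackup --prepare --target-dir=/backup/xtra/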

If you want to continue using mysqldump you should check out the quick option:

mysqldump can retrieve and dump table contents row by row, or it can retrieve the entire content from a table and buffer it in memory before dumping it. Buffering in memory can be a problem if you are dumping large tables. To dump tables row by row, use the --quick option (or --opt, which enables --quick). The --opt option (and hence --quick) is enabled by default, so to enable memory buffering, use --skip-quick.
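
As a concrete illustration (database and table names are placeholders):

mysqldump --quick big_db huge_table > /backup/huge_table.sql   # stream rows one at a time (the default)
mysqldump --skip-quick small_db > /backup/small_db.sql         # buffer each table in memory before writing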

s.meissner