
I am converting a large schema to file-per-table and will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and set "innodb_flush_log_at_trx_commit=2" to speed up the load. I am also planning to run "SET GLOBAL innodb_max_dirty_pages_pct=0;" at some point before the dump. Which combination of settings will get me the fastest dump and reload times?

SCHEMA stats:

26 MyISAM tables, 413 InnoDB tables, ~240 GB of data

--opt (which implies --disable-keys, --extended-insert, --quick, etc.) plus --no-autocommit?

vs. prepending session variables like: "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"

Are the mysqldump options equivalent or not really?
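To make the second approach concrete, here is a rough sketch of what I mean by prepending session variables (the dump filename and the mysql invocation are placeholders, not something I have settled on):

```shell
# Sketch: emit the session settings, then the dump, then a final COMMIT,
# so the whole stream can be piped into the mysql client.
build_load_stream() {
    echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
    cat "$1"                 # the dump file produced by mysqldump
    echo "COMMIT;"           # autocommit=0 means the load needs a final commit
}

# usage (dump.sql is a placeholder filename):
# build_load_stream dump.sql | mysql
```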

Thanks for your advice!

RolandoMySQLDBA
JShean

1 Answer


ASPECT #1

While setting innodb_max_dirty_pages_pct to 0 is good to do prior to a dump, you will have to wait until the dirty page count falls below 1% of the InnoDB Buffer Pool size. Here is how you can measure it:

SELECT ibp_dirty * 100 / ibp_blocks PercentageDirty
FROM
    (SELECT variable_value ibp_blocks
     FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_total') A,
    (SELECT variable_value ibp_dirty
     FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_dirty') B;

Keep running this report until PercentageDirty reaches close to 1.00. Perhaps you could just set innodb_max_dirty_pages_pct to 0 one hour before the dump.

If you do not change innodb_max_dirty_pages_pct, a mysqldump will force a flush of dirty blocks involving the table you are dumping.
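The polling above can be scripted. Here is a minimal sketch using the mysql client; connection options are omitted, and the 10-second interval and the 1% threshold are assumptions you can adjust:

```shell
# Compute dirty pages as a percentage of total buffer pool pages.
pct_dirty() {
    awk -v dirty="$1" -v total="$2" 'BEGIN { printf "%.2f", dirty * 100 / total }'
}

# Lower the dirty-page target, then poll until the buffer pool is
# nearly clean before starting the dump.
wait_until_clean() {
    mysql -N -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"
    while :; do
        total=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total'" | awk '{print $2}')
        dirty=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty'" | awk '{print $2}')
        p=$(pct_dirty "$dirty" "$total")
        echo "dirty: ${p}%"
        # Stop once dirty pages drop below 1% of the buffer pool.
        awk -v p="$p" 'BEGIN { exit !(p < 1.0) }' && break
        sleep 10
    done
}
```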

ASPECT #2

You should not have to prepend "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;" because mysqldump already writes the unique_checks and foreign_key_checks settings at the beginning of the dump, and --no-autocommit wraps each table's INSERTs in SET autocommit=0 ... COMMIT. Here is a sample mysqldump header (note the two lines after TIME_ZONE):

-- MySQL dump 10.11
--
-- Host: localhost    Database: dbAccessData
-- ------------------------------------------------------
-- Server version       5.0.51a-community-log

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--

-- Current Database: dbAccessData

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `dbAccessData` /*!40100 DEFAULT CHARACTER SET latin1 */;

USE `dbAccessData`;
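You can verify this on your own dump before loading it. A small sketch (the filename is a placeholder):

```shell
# has_fast_load_settings FILE -- succeed if the dump header already
# disables unique and foreign key checks, i.e. nothing needs prepending.
has_fast_load_settings() {
    head -n 20 "$1" | grep -q 'UNIQUE_CHECKS=0' &&
    head -n 20 "$1" | grep -q 'FOREIGN_KEY_CHECKS=0'
}

# usage: has_fast_load_settings backup.sql && echo "header already set"
```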

ASPECT #3

Please run this query

SELECT engine, COUNT(1) table_count FROM information_schema.tables
WHERE table_schema='mysql' GROUP BY engine;

I ran this and got 25 MyISAM tables for MySQL 5.5.23. Since you have 26, you have only 1 table outside the mysql schema. To find it, run this:

SELECT table_schema,count(1) table_count FROM information_schema.tables
WHERE engine='MyISAM' GROUP BY table_schema;

If you stop writing to the one lone table, you should be able to mysqldump all databases just fine.

ASPECT #4

The options bundled into --opt are adequate. There is no need to alter them.

ASPECT #5

You may want to dump the databases into separate files. Please see my Apr 17, 2011 post How can I optimize a mysqldump of a large database? on how to script parallel mysqldumps.
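As a rough sketch of that idea (the output directory and database names are placeholders; DUMP_CMD is made overridable purely so the loop can be exercised without a live server):

```shell
# dump_all OUTPUT_DIR DB [DB ...] -- launch one dump per database in
# parallel, then wait for all of them to finish.
: "${DUMP_CMD:=mysqldump --opt}"   # override DUMP_CMD to dry-run the loop

dump_all() {
    dir=$1; shift
    for db in "$@"; do
        $DUMP_CMD "$db" > "$dir/$db.sql" &   # background: one process per DB
    done
    wait    # block until every background dump completes
}

# usage: dump_all /backups db1 db2 db3
```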

RolandoMySQLDBA