
Please bear with me as I ask this question, which looks similar to questions already on DBA@SE.

In the current configuration we have a single server (a quad-core Linux box with 4 GB RAM) hosting both the database and the application server. There are two MySQL databases, Prod and Staging, on the same server. Some of the tables use the MyISAM engine and some use InnoDB. The MySQL processes on the box:

root     10119     1  0  1233  1012   3 Mar03 ?        00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe ...
mysql    10233 10119  9 238047 108896 0 Mar03 ?        12:11:11 /usr/local/mysql/libexec/mysqld ...

Every day a dump of the Prod db is taken and restored into the Staging db. The total size is close to 1 GB, and a few tables have more than a million rows.

  • Taking the dump via mysqldump is fast and takes less than a minute (a complete dump, not just deltas from the binary logs). Optimizations such as turning off autocommit and foreign_key_checks have been added to the dump; the command is sketched after this list.

  • For the past couple of days, the restore (into the Staging DB) has become extremely slow: it takes ~1 hour and paralyzes the application. (The same restore on my dev box takes less than 4 minutes, but the configurations differ and no one is accessing it.)

    • key_buffer_size is 512 MB [~12.5% of the total RAM]. table_cache = 256 is suboptimal (but there aren't many requests during that window). read_buffer_size = 1M. sort_buffer_size = 64M.
    • In the restore script, each table is locked, its data is inserted, and the table is then unlocked.
    • The slow query log is enabled (and most of the queries from the dump end up in it). Could high I/O be affecting the performance?
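
For reference, the dump command looks roughly like this (a reconstruction from the behaviour described above, not the exact command; the user and database names are placeholders, and the SET statements shown in the sample dump further down may come from a wrapper script rather than from mysqldump itself):

mysqldump -u root -p --extended-insert --add-locks prod_db > prod_dump.sql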

I am trying to figure out the root cause so that it can be mitigated. Since I have limited access to the server, I am gathering questions to take to the hosting providers. Could you please give me pointers on what I should be looking at?

CPU utilization and I/O both peak during this time.
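
These are the kinds of measurements I intend to ask the providers to capture during the restore window (standard Linux/MySQL tooling, nothing specific to our setup):

iostat -x 5                                    # per-device utilization and await times
vmstat 5                                       # CPU wait and swapping
mysql -e "SHOW ENGINE INNODB STATUS\G"         # pending I/O, log flush activity
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%'"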

Thank you.

[Edit] After increasing innodb_buffer_pool_size, the restore time has come down to some extent.
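
For reference, this is set under [mysqld] in my.cnf; the value below is only illustrative, since the real figure has to fit alongside key_buffer_size within the 4 GB of RAM:

[mysqld]
innodb_buffer_pool_size = 1G    # illustrative value, not the exact one used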

This is the sample dump that gets generated:
SET FOREIGN_KEY_CHECKS=0;
SET unique_checks=0;
SET AUTOCOMMIT=0;

-- The per-table scripts go here:
DROP TABLE IF EXISTS `access_tokens`;
CREATE TABLE `access_tokens` ( ... );
LOCK TABLES `access_tokens` WRITE;
INSERT INTO `access_tokens` VALUES (...); -- all of the data is in a single INSERT
UNLOCK TABLES;

...
-- Scripts for the other tables follow the same pattern

SET FOREIGN_KEY_CHECKS=1;
SET unique_checks=1;
COMMIT;
SET AUTOCOMMIT=1;
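
The restore is simply this file piped into the staging database, along these lines (user and database names are placeholders):

mysql -u root -p staging_db < prod_dump.sql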
TJ-

2 Answers


You have not included your server config, so I am listing everything I can think of.

You have already tuned innodb_buffer_pool_size, so I am skipping that.

On the staging server, use the following:

innodb_flush_log_at_trx_commit = 2
innodb_log_file_size = 256M

For mysqldump, use the following options:

--disable-keys
--extended-insert
--add-locks

Note that on MySQL versions before 5.6, innodb_log_file_size cannot simply be changed in my.cnf; you should change the log size the way sketched below.
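
A sketch of the procedure (the datadir path is illustrative; the mysqld_safe path is taken from your process list):

# 1. Shut the server down cleanly so the redo log is fully checkpointed
mysqladmin -u root -p shutdown
# 2. Move the old redo logs aside (keep them until the server restarts cleanly)
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
# 3. Set innodb_log_file_size = 256M in my.cnf and restart;
#    InnoDB creates new log files at the new size
/usr/local/mysql/bin/mysqld_safe &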

Jai

This may be totally off base, but have you confirmed that the disk subsystem is working properly? It sounds like this just started happening, and I assume your database hasn't grown that much in a short period of time.

A failed RAID disk or a failing standalone drive could be causing all sorts of write errors and latency.

Not really a DBA answer, but something worth checking out; a couple of quick checks are sketched below.
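
For example (device names are illustrative; a hardware RAID controller needs its vendor's own tool):

smartctl -a /dev/sda              # SMART health of a plain drive (smartmontools)
cat /proc/mdstat                  # Linux software RAID status
dmesg | grep -i error             # kernel-level disk/I/O errors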

Mike