
Assuming a production OLTP system with predominantly InnoDB tables:

  • What are the common symptoms of a mistuned/misconfigured system?
  • What configuration parameters do you most commonly change from their defaults?
  • How do you spot potential bottlenecks before there is a problem?
  • How do you recognize and troubleshoot active problems?

Any anecdotes detailing specific status variables and diagnostics would be appreciated.

Riedsio

6 Answers


Interestingly, in MySQL 5.5, you can now have multiple InnoDB buffer pools.

The parameters you care about are innodb_buffer_pool_size and innodb_buffer_pool_instances.
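To see what a given server is currently running with, a quick check from the mysql client (this lists innodb_buffer_pool_size, innodb_buffer_pool_instances, and related variables):

SHOW VARIABLES LIKE 'innodb_buffer_pool%';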

In about a month, I am slated to implement 112 InnoDB buffer pools for a client. I'll let you know how it goes.

UPDATE 2011-02-27 21:57 EDT

I found out the max value for innodb_buffer_pool_instances is 64. I decided to configure 144 GB, so I set innodb_buffer_pool_instances to 18 and innodb_buffer_pool_size to 8G (18 × 8 GB = 144 GB). I am currently loading the server with 450 GB of data.
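As a my.cnf sketch of that layout — with the caveat that in stock MySQL 5.5, innodb_buffer_pool_size is the total pool, which innodb_buffer_pool_instances then divides, so 144 GB over 18 instances works out to 8 GB apiece:

[mysqld]
innodb_buffer_pool_size = 144G      # total pool
innodb_buffer_pool_instances = 18   # 144 GB / 18 = 8 GB per instance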

UPDATE 2011-04-28 13:44 EDT

I tried out multiple InnoDB buffer pools. There was too much thread locking and contention. I changed over to a single 162 GB buffer pool and set innodb_read_io_threads and innodb_write_io_threads to 64 (the maximum value). This worked far better.
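In my.cnf terms, the revised configuration sketched above looks like this (64 is the documented maximum for both I/O thread settings; innodb_buffer_pool_instances already defaults to 1 in MySQL 5.5 and is spelled out only for emphasis):

[mysqld]
innodb_buffer_pool_size = 162G
innodb_buffer_pool_instances = 1   # back to a single monolithic pool
innodb_read_io_threads = 64
innodb_write_io_threads = 64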

UPDATE 2012-07-03 17:27 EDT

I learned something amazing about MySQL. If you allocate a single monolithic InnoDB buffer pool that is bigger than the total installed memory divided by the number of physical CPUs, you will incite the OS into swapping memory at regular intervals as the InnoDB buffer pool fills up. MySQL 5.5's innodb_buffer_pool_instances option can be used to split up the buffer pool. Yesterday, I properly implemented this for the client I mentioned in my answer last year. I still have a 162 GB buffer pool for the client. I have set the server's innodb_buffer_pool_instances option to 2 because each DB server is dual hexacore. I was thinking of setting it to 12, but then a colleague showed me a blog post from Jeremy Cole on MySQL and swappiness. After reading it, I put it into practice immediately for my client. I ran this command:

numactl --hardware

I saw a mapping of the server's 192 GB of RAM as 96 GB to each physical CPU. Therefore, I set innodb_buffer_pool_instances to 2. Things are looking good right now. I will update my answer over the next 2 months to report how this affects memory swapping.
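For reference, the fix from Jeremy Cole's post amounts to interleaving mysqld's memory across the NUMA nodes and discouraging the kernel from swapping. A minimal sketch, with the mysqld path and swappiness value as assumptions rather than the exact setup used here:

# spread allocations evenly across NUMA nodes instead of filling one node first
numactl --interleave=all /usr/sbin/mysqld &
# make the kernel much less eager to swap (0 is common on dedicated DB hosts)
sysctl -w vm.swappiness=0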

RolandoMySQLDBA

Here is a good article on InnoDB tuning from Sun's Jenny Chen. She blogs a lot about MySQL; some of it is Solaris-specific (e.g., using DTrace), but the entire blog is full of interesting tidbits.

Gaius

You may want to explore the following resources:

Brian Ballsun-Stanton

Firstly, increase the default InnoDB buffer pool size in my.cnf (I believe it defaults to 8 MB).

In general, you should probably set this to around 75% of your RAM on a dedicated database server.
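For example, on a hypothetical dedicated database server with 16 GB of RAM, that rule of thumb gives:

[mysqld]
innodb_buffer_pool_size = 12G   # ~75% of 16 GB (illustrative value)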

Matt Healy

Here is a good configuration example for MySQL.

Besides that, it's better to raise some OS resource limits, like:

vim /etc/security/limits.conf
* soft fsize unlimited
* hard fsize unlimited
* soft nproc 1048576
* hard nproc 1048576
* soft nofile 1048576
* hard nofile 1048576
* soft core unlimited
* hard core unlimited
* soft data unlimited
* hard data unlimited
* soft memlock unlimited
* hard memlock unlimited
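These take effect at the next login (pam_limits applies them per session); a quick spot check afterwards:

ulimit -n    # open files; should report 1048576
ulimit -u    # max user processes; should report 1048576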

Also, turn off transparent huge pages:

echo never > /sys/kernel/mm/transparent_hugepage/enabled      
echo never > /sys/kernel/mm/transparent_hugepage/defrag 
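Note that these echo commands do not survive a reboot; to check the active value (and persist the setting), something like:

cat /sys/kernel/mm/transparent_hugepage/enabled    # active choice appears in [brackets]
# to persist across reboots, add transparent_hugepage=never to the kernel command line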

And if the stress test spans more than one machine, some network settings matter as well.
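The answer leaves those settings unnamed; purely as an assumption, these are sysctl knobs commonly raised for many-connection MySQL benchmarks (values illustrative, not from the original):

sysctl -w net.core.somaxconn=4096                      # larger accept backlog
sysctl -w net.ipv4.tcp_max_syn_backlog=4096            # more half-open connections allowed
sysctl -w net.ipv4.ip_local_port_range="1024 65535"    # more ephemeral ports on client hosts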

Jason Lee

What configuration parameters do you most commonly change from their defaults?

Memory configuration.

TomaszSobczak