3

I was trying to optimize MySQL. Yesterday the CPU usage was at 100%. I am trying to tune it with mysqltuner, but this time mysqltuner gave me a warning.

I have a high-traffic DB: 500-600 connections a day, maybe more.

How can I fix it?

[!!] Joins performed without indexes: 5568

Key buffer used: 18.2% (97M used / 536M cache)

Ratio InnoDB log file size / InnoDB Buffer pool size (12.5 %): 256.0M * 2/4.0G should be equal 25%

InnoDB Write Log efficiency: 70.64% (3619 hits/ 5123 total)

CentOS 6, 8 GB RAM and 4 processors

hdd: https://pastebin.com/AnFdUHp6

UPDATED:

my.cnf: https://pastebin.com/g7DbmZ2T

UPDATED:

mysqltuner: https://pastebin.com/HBdSjxaj

global variables: https://pastebin.com/xTzu2PGM

But CPU usage was still at 100%.

ibennetch
Tota1907

4 Answers

3

Joins performed without indexes: 91. This is most likely the biggest culprit behind your performance problems.

Your database doesn't have proper indexes on the columns used in SQL queries that join data from multiple tables.

This means that instead of looking only at the index data, MySQL needs to scan the full table to find the rows matching the query keys.

So you need to review your SQL queries and add proper indexes to the tables / columns used in joins.
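As a sketch of how to find and fix such a join (the table and column names here are hypothetical), you can run the query under EXPLAIN and then add the missing index:

```sql
-- A join column without an index typically shows type: ALL
-- and "Using join buffer" in the EXPLAIN output.
EXPLAIN
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- If orders.customer_id has no index, add one:
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);
```

Re-running EXPLAIN afterwards should show type: ref and the new index in the key column.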

Tero Kilkanen
2

There are at least two issues with your configuration:

  • Overallocated max_connections. Your highest connection usage is 4, yet you allocated 440 connections, which caused memory overconsumption (each connection allocates 18.5M, and multiplied by 440 connections that amounts to 8G of memory). Reduce max_connections down to 20 and monitor connection usage regularly.
  • Overallocated innodb_buffer_pool_size. Your dataset is 254.3M and you allocated 4G. Reduce it down to 1G and review it later, once your database has been running under load without a restart for at least a couple of days. Also reset innodb_buffer_pool_instances to 1.
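A quick way to check whether the lower connection cap is ever approached (a sketch; both are standard status counters, and Max_used_connections resets on server restart):

```sql
-- Current number of open connections
SHOW GLOBAL STATUS LIKE 'Threads_connected';
-- Highest number of connections in use since the last restart
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
```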

Due to memory overallocation (12.6G allocated for MySQL + 1.3G of other process memory > 8G of system memory), your system probably went into swap thrashing, which caused the high CPU usage.
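A minimal my.cnf sketch of those two reductions (values taken from this answer; a mysqld restart is needed, since the buffer pool is not resizable at runtime before MySQL 5.7.5):

```ini
[mysqld]
# Highest observed usage was 4 connections; 440 was far too many
max_connections              = 20
# Dataset is ~254.3M, so 1G leaves ample headroom on an 8G host
innodb_buffer_pool_size      = 1G
innodb_buffer_pool_instances = 1
```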

AlexD
1

Sorry about the unfitting first answer; I obviously was too tired yesterday and misunderstood your question.

While using connection pools would help with the overall performance of the application, there are most probably other measures to take first:

MysqlTuner suggests valid options to set:

query_cache_size (=0)
query_cache_type (=0)
query_cache_limit (> 1M, or use smaller result sets)
join_buffer_size (> 1.0M, or always use indexes with JOINs)
innodb_log_file_size should be (=528M) if possible, so that the total InnoDB log file size equals 25% of the buffer pool size.
innodb_buffer_pool_instances (=4)
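Applied to my.cnf, those suggestions would look roughly like this (the 528M value assumes you keep the 4G buffer pool, since 2 x 528M is about 25% of 4G; on older MySQL versions, changing innodb_log_file_size requires a clean shutdown before the restart):

```ini
[mysqld]
# Disable the query cache entirely to save CPU on cache maintenance
query_cache_type             = 0
query_cache_size             = 0
# Larger scratch buffer for joins that cannot use an index
join_buffer_size             = 2M
# 2 log files x 528M = 1056M, roughly 25% of a 4G buffer pool
innodb_log_file_size         = 528M
innodb_buffer_pool_instances = 4
```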

Also make sure mysqld.log gets rotated regularly, and check it for warnings/errors. You might start by deleting it and restarting mysqld, then see what errors/warnings get printed there from here on.
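On CentOS 6 that rotation is usually handled by logrotate; a sketch of a rule (the log path and the passwordless mysqladmin call are assumptions, so check log_error in your my.cnf and your credential setup first):

```
/var/log/mysqld.log {
    weekly
    rotate 8
    missingok
    compress
    postrotate
        # Tell mysqld to reopen its log file after rotation
        /usr/bin/mysqladmin flush-logs
    endscript
}
```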

You could test whether lowering innodb_buffer_pool_size to 1G (1073741824) and innodb_buffer_pool_instances to 1 affects performance negatively. According to mysqltuner, there are only 254.3M of data in the buffer, so you would have plenty of room to grow.
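To see how much of the buffer pool actually holds data before and after such a change (a sketch using standard InnoDB status counters):

```sql
-- Pages currently holding data vs. total pages in the buffer pool
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
-- With the default 16K page size, pages_data * 16K is the data held in the pool
```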

The suggestion by @tero-kilkanen might be worth a shot, too. Though, as only 58 out of 19K queries were missing a matching index, I don't think this will yield much of a performance boost. But check the indexes for sanity and their included keys anyway.

To get rid of the warning that your mysqld might use more than the installed amount of RAM, you should lower max_connections to a sane number: 20-30. Your application used 4 at most during the uptime of the mysqld.
The maximum amount of memory mysqld may allocate is calculated roughly by multiplying max_connections (440) by the cache assigned per connection (mostly sort_buffer_size plus some smaller ones) and then adding the global buffers (i.e. innodb_buffer_pool_size). While 18.5MB per connection doesn't sound like much (it isn't), multiplying it by 440 amounts to 8GB on its own.

As your application mainly issues read statements (99%), adding caching on the application side might reduce the overall load on the database and noticeably increase performance on the frontend side.
Regularly queried values which rarely change (frontend user tables, for instance) might be cached. You could then invalidate the cache when editing student/parent users and their permissions.

If you still encounter problems, run mysqld and the application for a longer time before running mysqltuner again. 1-3 days should yield better suggestions due to a more accurate usage profile.

0

Suggestions to consider for your my.cnf [mysqld] section (Rate Per Second = RPS):

REMOVE the first line with open_files_limit (there are 2 in your my.cnf) AND REMOVE the only line with innodb_open_files=800000 to allow the default to work for your instance.

The following lines could be changed in, or added to, my.cnf:

thread_cache_size=100  # for CAP of 100 per V5.7 refman to reduce threads_created
query_cache_type=0  # for NO query cache to conserve CPU cycles
query_cache_size=0  # from 1M to conserve RAM for more useful purpose
innodb_io_capacity=15000  # from 200 to use more of SSD IOPS capacity
sort_buffer_size=2M  # from 6M to conserve RAM per CONNECTION
read_buffer_size=128K  # from 1M to reduce handler_read_next RPS
join_buffer_size=256K  # from 2M per table join OPS per CONNECTION
table_open_cache=10000  # from 524288 for a practical LIMIT
table_definition_cache=1000  # from 2000 since you have less than 100 tables today
open_files_limit=65536  # from 1049026 for a practical LIMIT
yagmoth555