
AWS RDS by default logs slow queries to a database table.

Is there a way to run mysqldumpslow against that table, without exporting it to a file first? Or is there a better way to get a report of slow queries?

chris

1 Answer


mysqldumpslow only works on text files, so you can forget about running it directly against the table.

The table mysql.slow_log is a CSV table by default.

mysql> show create table mysql.slow_log\G
*************************** 1. row ***************************
       Table: slow_log
Create Table: CREATE TABLE `slow_log` (
  `start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `query_time` time NOT NULL,
  `lock_time` time NOT NULL,
  `rows_sent` int(11) NOT NULL,
  `rows_examined` int(11) NOT NULL,
  `db` varchar(512) NOT NULL,
  `last_insert_id` int(11) NOT NULL,
  `insert_id` int(11) NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `sql_text` mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'
1 row in set (0.00 sec)

mysql>

You do not want to query against a CSV table directly: the CSV engine supports no indexes, so every query does a full table scan.
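
A quick EXPLAIN makes the point; for a CSV table it reports a full scan with no usable key (output abbreviated, and yours will vary):

mysql> EXPLAIN SELECT * FROM mysql.slow_log
    -> WHERE start_time >= NOW() - INTERVAL 1 DAY\G
*************************** 1. row ***************************
        table: slow_log
         type: ALL
possible_keys: NULL
          key: NULL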

What you can do is create a copy of the table in another database, convert it to an engine that supports indexes, and index start_time:

DROP DATABASE IF EXISTS my_slow_log;
CREATE DATABASE my_slow_log;
CREATE TABLE my_slow_log.slow_log LIKE mysql.slow_log;
-- the copy inherits ENGINE=CSV, which supports no indexes,
-- so switch it to MyISAM before adding one
ALTER TABLE my_slow_log.slow_log ENGINE=MyISAM;
ALTER TABLE my_slow_log.slow_log ADD INDEX (start_time);
ALTER TABLE my_slow_log.slow_log DISABLE KEYS;
INSERT INTO my_slow_log.slow_log
    SELECT * FROM mysql.slow_log
;
ALTER TABLE my_slow_log.slow_log ENABLE KEYS;
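
A quick check that the index is actually being picked up (abbreviated EXPLAIN output; yours will vary, and with very few rows the optimizer may still choose a full scan):

mysql> EXPLAIN SELECT * FROM my_slow_log.slow_log
    -> WHERE start_time >= NOW() - INTERVAL 1 DAY\G
*************************** 1. row ***************************
        table: slow_log
         type: range
possible_keys: start_time
          key: start_time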

Then, you can query against my_slow_log.slow_log by datetime values.
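
For example, everything logged in the last hour (a minimal sketch):

SELECT start_time, query_time, rows_examined, sql_text
FROM my_slow_log.slow_log
WHERE start_time >= NOW() - INTERVAL 1 HOUR
ORDER BY start_time;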

I have discussed this before

Give it a Try !!!

CAVEAT: You would have to reload the data every time you want the latest slow logs.

In that event, if you just want slow logs since midnight 3 days ago, do this:

TRUNCATE TABLE my_slow_log.slow_log;
ALTER TABLE my_slow_log.slow_log DISABLE KEYS;
INSERT INTO my_slow_log.slow_log
    SELECT * FROM mysql.slow_log
    WHERE start_time >= DATE(NOW() - INTERVAL 3 DAY) + INTERVAL 0 SECOND;
ALTER TABLE my_slow_log.slow_log ENABLE KEYS;

or just the slow logs from the last 3 hours:

TRUNCATE TABLE my_slow_log.slow_log;
ALTER TABLE my_slow_log.slow_log DISABLE KEYS;
INSERT INTO my_slow_log.slow_log
    SELECT * FROM mysql.slow_log
    WHERE start_time >= (NOW() - INTERVAL 3 HOUR);
ALTER TABLE my_slow_log.slow_log ENABLE KEYS;
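
If you reload often, you could fold the truncate-and-reload into a stored procedure. This is a hypothetical helper (the name reload_slow_log and its hours parameter are mine, not part of MySQL or RDS):

DELIMITER $$
DROP PROCEDURE IF EXISTS my_slow_log.reload_slow_log $$
CREATE PROCEDURE my_slow_log.reload_slow_log(IN hrs INT)
BEGIN
    -- wipe the indexed copy and repopulate it with the last hrs hours
    TRUNCATE TABLE my_slow_log.slow_log;
    ALTER TABLE my_slow_log.slow_log DISABLE KEYS;
    INSERT INTO my_slow_log.slow_log
        SELECT * FROM mysql.slow_log
        WHERE start_time >= NOW() - INTERVAL hrs HOUR;
    ALTER TABLE my_slow_log.slow_log ENABLE KEYS;
END $$
DELIMITER ;

Then CALL my_slow_log.reload_slow_log(3); refreshes the copy with the last 3 hours.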

UPDATE 2014-12-11 14:50 EST

Your comment

This makes querying the data a bit quicker, but I'm interested in how to provide a summary of slow queries that I can then pass along to our developers

You could present this to your developers by requesting slow log entries as follows:

Today's 20 worst performing queries

SELECT query_time,sql_text
FROM my_slow_log.slow_log
WHERE start_time >= DATE(NOW()) + INTERVAL 0 SECOND
ORDER BY query_time DESC
LIMIT 20;
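
If you want something closer to mysqldumpslow's rolled-up view, you can aggregate by statement text (a sketch). Bear in mind this only groups byte-identical statements (it does not abstract literal values the way mysqldumpslow does), and grouping a mediumtext column compares only the first max_sort_length bytes:

SELECT COUNT(*) AS cnt,
       MAX(query_time) AS worst_time,
       SUM(rows_examined) AS rows_examined,
       sql_text
FROM my_slow_log.slow_log
WHERE start_time >= DATE(NOW()) + INTERVAL 0 SECOND
GROUP BY sql_text
ORDER BY cnt DESC
LIMIT 20;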

Count of Slow Queries By the Hour over the Last 2 Weeks

SELECT hr,COUNT(1) SlowQueries FROM
(
    SELECT (unix_timestamp(start_time) - MOD(unix_timestamp(start_time),3600)) hr
    FROM my_slow_log.slow_log
    WHERE start_time >= DATE(NOW() - INTERVAL 2 WEEK) + INTERVAL 0 SECOND
) A GROUP BY hr;

Count of Slow Queries By the Minute over the Last 6 Hours

SELECT mn,COUNT(1) SlowQueries FROM
(
    SELECT (unix_timestamp(start_time) - MOD(unix_timestamp(start_time),60)) mn
    FROM my_slow_log.slow_log
    WHERE start_time >= ( NOW() - INTERVAL 6 HOUR )
) A GROUP BY mn;
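
In both bucketing queries, hr and mn come back as epoch seconds; wrap them in FROM_UNIXTIME() if you want readable timestamps. For the hourly version:

SELECT FROM_UNIXTIME(hr) AS hour_start, COUNT(1) AS SlowQueries FROM
(
    SELECT (unix_timestamp(start_time) - MOD(unix_timestamp(start_time),3600)) hr
    FROM my_slow_log.slow_log
    WHERE start_time >= DATE(NOW() - INTERVAL 2 WEEK) + INTERVAL 0 SECOND
) A GROUP BY hr;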

Queries like these can give you stats on when slow queries ran and what the actual statements were.

You can dump the output of these queries to a text file and hand it to the developers:

MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQL="SELECT query_time,sql_text FROM my_slow_log.slow_log"
SQL="${SQL} WHERE start_time >= DATE(NOW()) + INTERVAL 0 SECOND"
SQL="${SQL} ORDER BY query_time DESC LIMIT 20"
mysql ${MYSQL_CONN} -ANe"${SQL}" > Slow_Queries_Report.txt
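
If you save the snippet above as a script, a cron entry can mail the report out daily. The path and address here are placeholders:

# run at 06:00 every day; assumes the script writes Slow_Queries_Report.txt
0 6 * * * /usr/local/bin/slow_query_report.sh && mail -s "Slow query report" dev-team@example.com < Slow_Queries_Report.txt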
RolandoMySQLDBA