302

I am importing a 7 GB foobar.sql to restore a table in a local database.

$ mysql -h localhost -u root 'my_data' < foobar.sql

$ mysql --version
/usr/local/mysql/bin/mysql  Ver 14.12 Distrib 5.0.96, for apple-darwin9.8.0 (i386) using readline 5.1

How can I monitor its progress?

RolandoMySQLDBA
qazwsx

18 Answers

408

If you're just importing from a dump file from the CLI on *nix, e.g.

mysql -uxxx -pxxx dbname < /sqlfile.sql

first install Pipe Viewer (pv) on your OS, then try something like this:

pv sqlfile.sql | mysql -uxxx -pxxxx dbname

which will show a progress bar as the program runs.

It's very useful and you can also use it to get an estimate for mysqldump progress.
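pv cannot know the total size when it reads from a pipe, so for a mysqldump estimate you have to feed it an expected size yourself with -s. A minimal sketch, assuming the size estimate is taken from information_schema (it will not match the dump file size exactly, so treat the percentage as rough):

# rough byte estimate of the data being dumped (data + indexes)
SIZE=$(mysql -uxxx -pxxx -N -e "SELECT SUM(data_length + index_length) FROM information_schema.TABLES WHERE table_schema = 'dbname'")

# -s gives pv an expected total so it can draw a percentage and ETA
mysqldump -uxxx -pxxx dbname | pv -s "$SIZE" > dumpfile.sql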

pv reads sqlfile.sql and passes it to mysql (because of the pipe operator). While it does so, it shows the progress. The nice thing is that mysql takes the data only as fast as it can process it, so pv can show the progress of the import. I do not have any proof, but it seems so; presumably there is some buffering involved, but at some point mysql stops reading more data while it is still busy processing.
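If your dump is gzip-compressed, pv can still sit at the front of the pipeline (a variant added here as a sketch, not part of the original answer); note that the percentage then tracks the compressed file rather than the uncompressed SQL:

# progress is measured against foobar.sql.gz, so it is only a rough indicator
pv foobar.sql.gz | gunzip | mysql -uxxx -pxxx dbname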

[Pipe Viewer screenshot]

mahemoff
Rob
49

If you've already started the import, you can execute this command in another window to see the current size of your databases. This can be helpful if you know the total size of the .sql file you're importing.

SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MiB" 
FROM information_schema.TABLES GROUP BY table_schema;  

Credit to: http://forums.mysql.com/read.php?108,201578,201578
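To watch that number change without re-running the query by hand, one option (a sketch, using the root login from the question) is to wrap it in watch:

watch -n 10 'mysql -u root -e "SELECT table_schema, ROUND(SUM(data_length + index_length)/1024/1024) AS size_mib FROM information_schema.TABLES GROUP BY table_schema"'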


The MySQL 8.0 Reference states the following about the accuracy:

DATA_LENGTH

For MyISAM, DATA_LENGTH is the length of the data file, in bytes.

For InnoDB, DATA_LENGTH is the approximate amount of memory allocated for the clustered index, in bytes. Specifically, it is the clustered index size, in pages, multiplied by the InnoDB page size.

 

INDEX_LENGTH

For MyISAM, INDEX_LENGTH is the length of the index file, in bytes.

For InnoDB, INDEX_LENGTH is the approximate amount of memory allocated for non-clustered indexes, in bytes. Specifically, it is the sum of non-clustered index sizes, in pages, multiplied by the InnoDB page size.

lucidbrot
Josh Grinberg
19

When you execute a mysqldump of a single database, all tables are dumped in alphabetical order.

Naturally, the reload of the mysqldump into a database would also be in alphabetical order.

You could just do a SHOW PROCESSLIST; and find out the DB Connection running the mysqldump. When the dump is reloaded, the DB Connection will vanish.

If you want to know what tables are in the dumpfile, run this against foobar.sql

cat foobar.sql | grep "^CREATE TABLE" | awk '{print $3}'

UPDATE 2012-05-02 13:53 EDT

Sorry for not noticing that there is only one table.

If the table is MyISAM, the only way to monitor is from the OS point of view. The reason? The table is write-locked throughout the reload. What do you look for? The size of the .MYD and .MYI files. Of course, you need to compare that with what the table size was before on the other DB server you imported from.

If the table is InnoDB and you have innodb_file_per_table enabled, the only way to monitor is from the OS point of view. The reason? The table is write-locked throughout the reload. What do you look for? The size of the .ibd file. Of course, you need to compare that with what the table size was before on the other DB server you imported from.

If the table is InnoDB and you have innodb_file_per_table disabled, not even the OS point of view can help.
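In the first two cases, a minimal way to watch those files grow from the shell (a sketch; the datadir path is an assumption and varies by installation):

# poll the database directory every 5 seconds and compare the .MYD/.MYI or .ibd
# sizes against the table size on the source server
watch -n 5 'ls -lh /var/lib/mysql/my_data/'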

UPDATE 2012-05-02 13:56 EDT

I addressed something like this last year: How do I get % progress for "type db.sql | mysql"

UPDATE 2012-05-02 14:09 EDT

Since a standard mysqldump write-locks the table like this:

LOCK TABLES `a` WRITE;
/*!40000 ALTER TABLE `a` DISABLE KEYS */;
INSERT INTO `a` VALUES (123),(451),(199),(0),(23);
/*!40000 ALTER TABLE `a` ENABLE KEYS */;
UNLOCK TABLES;

then there is no way to get progress from within mysql until the table lock is released.

If you can get LOCK TABLES and UNLOCK TABLES commented out of the dumpfile (a sed sketch follows the list below), then...

  • if the table is MyISAM, SELECT COUNT(*) would work
  • if the table is InnoDB, SELECT COUNT(*) would probably slow down/halt the load until count is done
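A sketch of that approach, assuming GNU sed; <table> is a placeholder for the actual table name, which the dumpfile's CREATE TABLE line will tell you:

# comment out the table locks in a copy of the dump
sed 's/^LOCK TABLES/-- LOCK TABLES/; s/^UNLOCK TABLES/-- UNLOCK TABLES/' foobar.sql > foobar_nolock.sql

# then, while foobar_nolock.sql is loading, poll the row count from another session
mysql -u root my_data -e "SELECT COUNT(*) FROM <table>"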
RolandoMySQLDBA
18

Every 2 seconds you will see the processes running.

watch 'echo "show processlist;" | mysql -uuser -ppassword';

If you want it less frequent then add -n x where x is the number of seconds. 5 seconds would be:

watch -n 5 'echo "show processlist;" | mysql -uuser -ppassword';
Marco
17

If you just want to check if it is stalled you can query

show processlist; 

and see what is being executed.

SCL
5

Rob's answer is great for most situations, but Pipe Viewer doesn't work well in use cases where a tty isn't available, like when monitoring a mysql docker container's initialization output or when you want to log the progress to a file.

Pipe Monitor (github) is an alternative designed to output updates to a log stream via STDERR. Disclaimer: I am the author.

Their basic functionality is very similar: read from STDIN or a file, pipe the contents to STDOUT, and show progress. However, whereas Pipe Viewer uses terminal control sequences to update a visual progress bar on a single line, Pipe Monitor outputs text updates appropriate for non-terminal applications.

Pipe Monitor supports the following basic options. Output is customizable via the --format option:

Usage: pm [--size SIZE] [--name NAME] [--format FORMAT] INPUT_FILE

Positional arguments:
  INPUT_FILE                  Optional input file. If not provided, input will be read from STDIN

Options:
  --size SIZE, -s SIZE        Size of input from STDIN. Ignored if using INPUT_FILE
  --name NAME, -n NAME        A NAME tag for this output. Will be prepended to the default FORMAT string
  --format FORMAT, -f FORMAT  Output format string. Allowed keys: %name, %size, %time, %eta, %percent, %written, %buffered
  --help, -h                  Display this help and exit

Here is a comparison of the output of each in a non terminal environment.

Pipe Viewer (non terminal):

$ pv -nf testin > testout
40
70
77
84
90
96
100

Pipe Monitor:

$ pm testin > testout
Processed 0 bytes of 2456678400 (0% complete). 0 bytes buffered. Running 0s, eta: <unknown>
Processed 1750794240 bytes of 2456678400 (71% complete). 327680 bytes buffered. Running 2s, eta: 1s
Processed 2106937344 bytes of 2456678400 (85% complete). 700416 bytes buffered. Running 4s, eta: 1s
Processed 2419339264 bytes of 2456678400 (98% complete). 2871296 bytes buffered. Running 6s, eta: 0s
Processed 2456678400 bytes of 2456678400 (100% complete). 0 bytes buffered. Running 6s, eta: 0s
cmorris
5

As a solution for someone who can't get pv to work, or for whom pv tells lies: you can monitor the size of the ibdata1 file in /var/lib/mysql, which contains the data. This will end up at roughly the same size as the file on your source server.

If there are many tables you can also watch them appear one by one in /var/lib/mysql/<database name>.

I happened to use this fact recently when a long term database had built up a log file of around 20G over a period of three or four years. I noticed the transfer was taking ages and used this technique to monitor progress.

I think it is highly unlikely that the day will dawn when a database does not involve a file somewhere or other. Meanwhile, you can monitor the file to see how a transfer is progressing. The method I suggested has been possible in one form or another since the first SQL database was written; I never intended to suggest it was any kind of "official" technique that a manual jockey could fall back on. It assumes a general level of proficiency with computers in general and Unix in particular.

Paul White
nerak99
4

I had a 500 MB SQL file to import, and it took around 2 hours. The mysqld CPU usage was close to 100% at the start of the import process, but after a few minutes it dropped to 15%.

I tried many tweaks but only this one helped me: innodb_flush_log_at_trx_commit = 0

After applying this setting and restarting mysql the import took just 3 minutes! CPU utilization was 100 % all the time.

If you want to use this setting, you will need to edit the "/etc/mysql/my.cnf" file and restart the MySQL server using "sudo service mysql restart".

Here are the settings of my "my.cnf" file:

    [mysqld]
    innodb_log_buffer_size = 256M
    innodb_fast_shutdown = 0
    innodb-doublewrite = OFF
    innodb_io_capacity = 1000
    innodb_flush_log_at_trx_commit = 0

Please note: with "innodb_flush_log_at_trx_commit = 0" the log is flushed to disk only about once per second, so it is not ACID-compliant, but that is acceptable for a bulk import. After the import you can set "innodb_flush_log_at_trx_commit" back to 1 and restart your database. Link to MySQL documentation
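A sketch of that workflow (paths and service name assume a Debian/Ubuntu-style install, as in the answer):

# add innodb_flush_log_at_trx_commit = 0 under [mysqld] in /etc/mysql/my.cnf, then:
sudo service mysql restart
mysql -u root -p my_data < foobar.sql   # run the bulk import
# afterwards set innodb_flush_log_at_trx_commit back to 1 in my.cnf and restart again
sudo service mysql restart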

MaFli
2

If your DB is otherwise quiet (i.e. there are no other users active) and you just want to see read/write activity, why not do something like:

mysqladmin -h <host> -u root -p<yourpass> extended -r -i 10 | grep 'row'

You will see the number of reads/writes/inserts/waits/updates.

If you are inserting, for example, you will see something like:

Innodb_rows_inserted                          | 28958 

Where 28958 is the number of rows inserted for your interval (10 seconds in my case).

Philᵀᴹ
user113373
2

For someone looking for a Pipe Viewer example using mysqldump, you would just do something like this:

mysqldump -hxxx -uxxx -p dbname | pv -W > dump.sql

The -W flag just tells pv to wait for the first byte to arrive before showing the progress (i.e. after the password prompt).

1

A more detailed explanation, for newbies like me, of @Rob's answer: https://dba.stackexchange.com/a/28646/224874

It may not be of very much help, but using pv and pipes in Linux, you can create a progress bar that will at least help you observe the quantity of data that has been transferred from the .sql file into the mysql process. It will also help you to assess the overall progress of the task.

The command which worked for me is:

pv file-name.sql | mysql -u root -pPass <DataBaseName>

example:

pv steam_activity.sql | mysql -u root -pPass GAMEDATA

This will give a progress bar for the SQL import, which will look like this:

77.9GiB 9:20:33 [4.40MiB/s] [===========================================> ] 48% ETA 9:52:30

Please note that all the metrics above are calculated with respect to the data transferred from the SQL file to the MySQL server (i.e. from steam_activity.sql to the mysql process). Therefore this won't give you the precise amount of time left for completion, but it can provide a good approximation of the time required.

It has nothing to do with the internal workings of the MySQL server engines, so it won't be able to give a precise ETA for completion.

I am importing a 173 GB steam_activity.sql file into a newly created empty database named GAMEDATA.

Let us discuss the output progress bar:

77.9GiB 9:20:33 [4.40MiB/s] [===========================================> ] 48% ETA 9:52:30

  • 77.9GiB: amount of data transferred so far from the 173 GB file

  • 9:20:33 (hours:minutes:seconds): time elapsed since the process started

  • [4.40MiB/s]: rate at which data is read from steam_activity.sql

  • A progress bar: [===========================================> ]

  • 48%: amount of the transfer completed

  • ETA 9:52:30: estimated time required to complete the import. This is not a precise figure, since some INSERT statements take longer to execute than others, and table checks can also delay the import.

Original post: How can I monitor the progress of an import of a large .sql file?

If you want tips on speeding it up, please comment or post another question (I loaded that 173 GB .sql file in 17 hours), or follow this post: MySQL any way to import a huge (32 GB) SQL dump faster? Estimation is 24 Days

0

You can monitor an import in the folder \Msql\Data\[DB name].

0

OK, another workaround, but it may be the worst and least accurate option.

That said, here is my solution for Windows:

Open the Task Manager by pressing

CTRL + SHIFT + ESC

Copy the "mysqld.exe" disk value speed

e.g. 11mb/s

Put that in a calculator like this one: https://techinternets.com/copy_calc?do

Estimate the ETA. My case was:

Speed: 8 MB/s
Size: 4.9 GB
0 Hours, 11 Minutes and 29 Seconds

Results:

Beg -> 11:19
ETA -> 11:31
End -> 11:39
Daniel
0

If you have GNU coreutils dd version >= 8.24 (released 2015-07-03), you can use dd's status=progress argument, e.g.

cat dbdump.gz | gzip -d | mysql --password=root -v | time dd of=/dev/null status=progress

PS: this does not work with BusyBox; BusyBox's dd does not understand "status=progress".
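A variant (my own assumption, not from the answer above) is to put dd on the input side instead, so the byte count reflects how much of the dump file has been read:

# dd prints bytes copied so far on stderr while mysql consumes the dump
dd if=foobar.sql bs=1M status=progress | mysql -u root my_data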

hanshenrik
0

I use https://github.com/Xfennec/progress for it and monitor via watch

watch progress

After starting the import with zcat example.sql.gz | mysql -u root -proot -h localhost example

michalzuber
0

If you are running the import inside a screen session (screen -S import), which you should be, with your standard mysql -uuser -p < file.sql, you can press Ctrl-a d to detach and then run tail -F /var/log/mysql/mysql.log; this will show you the current location in your file as it is processed.
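Spelled out as a sketch (the log path is the one given above; the import command is the question's):

screen -S import                                   # start a named screen session
mysql -h localhost -u root my_data < foobar.sql    # run the import inside it
# press Ctrl-a d to detach, then from the outer shell:
tail -F /var/log/mysql/mysql.log                   # follow the log as the file is processed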

tinlyx
0

If you have already started importing, you can use these commands in another terminal to get the current size of the database or table.

for getting database size:

SELECT pg_size_pretty(pg_database_size('Database Name'));

for getting table size:

SELECT pg_size_pretty(pg_relation_size('table_name'));
tinlyx
Vignesh M
-2

I'm so surprised no one just posted 'mysql -v' as an option. If it gets stuck, the output will stop.
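For the question's command, that would be (a minimal sketch):

# -v echoes each statement as it is executed; if the output stops scrolling,
# mysql is stuck on (or busy with) the current statement
mysql -v -h localhost -u root my_data < foobar.sql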

dtc