Originally posted to https://www.facebook.com/groups/GNUAndLinux/permalink/10155020539805019/ but no replies received.
I run Percona Server 5.6.21-70.0-688 x64 on Ubuntu Server 12.04 LTS x64, with 64GB RAM, 2x2TB SATA* drives in hardware RAID0, and an Intel Xeon E5-1620 v2 @ 3.70GHz. InnoDB/XtraDB is the primary engine (with the Barracuda file format), with a 20GB (21GB after a recent change) buffer pool divided into 20 (21 after the recent change) pool instances. The InnoDB zlib compression level is set to the maximum of 9.
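For reference, the relevant InnoDB settings look roughly like this in my.cnf (a sketch reconstructed from the description above; the exact values on my box may differ slightly):

```ini
[mysqld]
# 21GB buffer pool after the recent change, split into 21 instances
innodb_buffer_pool_size      = 21G
innodb_buffer_pool_instances = 21
# Barracuda file format is required for compressed tables
innodb_file_format           = Barracuda
# zlib compression level for compressed InnoDB tables (1-9)
innodb_compression_level     = 9
```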
The database server and its threads are set to nice -20 and ionice -c 1 -n 1.
Almost every server on rautamiekka.org depends on MySQL databases, especially the Minecraft servers.
I found out how to generate a complete list of IPv4 addresses into a text file. I created a table with an auto-increment column and one column for each address section, for a maximum of 5 columns in total. I wrote a Python script that started a transaction, explicitly disabled auto-commit, and uploaded those addresses. The script had uploaded so many addresses from the 115GB file that the table held about 51GB of data, likely already compressed. Then Percona Server 5.6.21-70.1 came out. I thought I'd run the routine update process, expecting no problems, so I shut down the Minecraft servers and every other server depending on MySQL, but forgot about the Python script. After fighting with aptitude and apt-get for a few minutes I realized the database server was still running but not accepting any connections, since it was in the middle of its shutdown process. I also noticed the Python script had quit one way or another.
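The upload script was roughly along these lines (a minimal sketch: the table name `ipv4_list`, the column names, and the batching helper are my assumptions for illustration, not the original script):

```python
# Sketch of the bulk-upload approach described above. The table and
# column names (ipv4_list, a/b/c/d) are assumptions; the original
# script may have looked quite different.
import itertools

def all_ipv4_octets():
    """Yield every IPv4 address as a tuple of its 4 sections (octets)."""
    return itertools.product(range(256), repeat=4)

def batches(iterable, size):
    """Split an iterable into lists of at most `size` items."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

def upload(conn, batch_size=10000):
    """Insert all addresses in one explicit transaction, as in the post."""
    conn.autocommit = False          # explicitly disable auto-commit
    cur = conn.cursor()
    for batch in batches(all_ipv4_octets(), batch_size):
        cur.executemany(
            "INSERT INTO ipv4_list (a, b, c, d) VALUES (%s, %s, %s, %s)",
            batch)
    conn.commit()  # everything lands in one enormous transaction
```

In hindsight, committing per batch instead of once at the very end would have kept each transaction small, so an interrupted run would not leave the server with a huge transaction to resolve at shutdown.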
That was Monday 2014-11-24, and several processes/threads have been running ever since, reading at 20-50KB/s (sometimes even 200) and writing at 120-500KB/s. Mainly one thread reads and two write, with one of the writers running at least twice as fast as the other; mainly two processes use the CPU, but they're not the only ones. I presume it's committing the data into the table and reading back the compressed data to verify it.
My estimate is that the shutdown will take 9 days, assuming it has to write 80GB at a minimum of 120KB/s, so I warned people it might not be finished until Monday 2014-12-08 even though it has already been running for days.
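The estimate above is simple back-of-the-envelope arithmetic (taking 80GB as 80*1024*1024 KB):

```python
# Shutdown-time estimate from the figures in the post.
remaining_kb = 80 * 1024 * 1024   # 80GB left to write, in KB
rate_kb_s = 120                   # minimum observed write rate, KB/s

seconds = remaining_kb / rate_kb_s
days = seconds / 86400
print(round(days, 1))  # roughly 8 days; call it 9 with some overhead
```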
I made changes to the config just recently:
[mysqld]
innodb_buffer_pool_dump_at_shutdown=1
innodb_buffer_pool_load_at_startup=1
I've had
[mysqld]
innodb_fast_shutdown = 0
innodb_flush_method=O_DIRECT
for ages, to ensure InnoDB writes everything out before going down so we have a smaller chance of data loss. But because of that we've had to wait for ages, and will probably have to wait at least another week, which I hate.
For reference, I posted a change request to MySQL bugs asking to change the order in which MySQL shuts things down, to prevent this kind of downtime: http://bugs.mysql.com/bug.php?id=75110
What can I do here? I'm willing to consider any method of bringing the process down, even data loss if the loss isn't big, particularly if I only lose a portion of the IPv4 data, because I can use a MySQL script to continue filling up the table. However, I'm afraid I could lose something else, which is almost not an option, but I'll see.
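If only part of the IPv4 data is lost, resuming is straightforward: find the highest address already in the table (e.g. something like `SELECT a, b, c, d FROM ipv4_list ORDER BY a DESC, b DESC, c DESC, d DESC LIMIT 1` — table and column names again being my assumptions) and restart generation from its successor. A small helper for that, whether the refill is then done in SQL or Python:

```python
def next_ipv4(octets):
    """Return the address right after `octets`, or None at 255.255.255.255.

    Treats the 4 sections as one 32-bit integer and adds 1, so the
    carry across sections is handled automatically.
    """
    n = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    if n == 0xFFFFFFFF:
        return None  # end of the IPv4 space
    n += 1
    return ((n >> 24) & 255, (n >> 16) & 255, (n >> 8) & 255, n & 255)
```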