
My MySQL instance is being killed by the Linux oom_killer because mysqld consumes a huge amount of memory when inserting into LONGBLOB columns. This happens when restoring a mysqldump that contains a very large LONGBLOB column.

I've run through things like this blog, which suggests setting various read/write buffers to different sizes in order to limit memory consumption. But despite the script it mentions reporting a "TOTAL (MAX)" memory of 350MB after those tweaks, mysql still happily gobbles up gigabytes of memory before eventually being killed.
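For reference, the kind of estimate such scripts produce boils down to the global buffers plus the per-connection buffers multiplied by max_connections, roughly along these lines (illustrative only, not the exact formula from the blog):

SELECT ( @@key_buffer_size + @@innodb_buffer_pool_size + @@innodb_log_buffer_size
       + @@max_connections * ( @@read_buffer_size + @@read_rnd_buffer_size
                             + @@sort_buffer_size + @@join_buffer_size + @@thread_stack )
       ) / 1024 / 1024 AS estimated_max_mb;

As far as I can tell, nothing in a formula like that accounts for whatever the server allocates while handling a single enormous blob value.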

Here's a reproduction via Docker:

docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=foobar -d --name mysql-longblob mysql:5.7

mysql -h 127.0.0.1 -P 3306 -u root --password -e "CREATE DATABASE blobs; USE blobs; CREATE TABLE longblob_test (bigcol LONGBLOB NOT NULL) ENGINE = InnoDB;"

mysql -h 127.0.0.1 -P 3306 -u root --password -e "SET GLOBAL max_allowed_packet=536870912;" # 512MB

mysql -h 127.0.0.1 -P 3306 -u root --password -D blobs -e "source ./500MB.sql"

In this case, docker stats reported ~1.8GB of memory consumption before the container was OOM-killed for exceeding its limit. At idle, mysql sat at ~200MB.

Where 500MB.sql is a file inserting a 500MB blob of text in the shape:

INSERT INTO longblob_test VALUES ('500MB_WORTH_OF_TEXT_HERE')
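(In case anyone wants to reproduce this without a real dump: here's a hypothetical way to generate a 500MB.sql of that shape from dummy base64 text; my actual file came from mysqldump.)

{ printf "INSERT INTO longblob_test VALUES ('"; head -c 375000000 /dev/urandom | base64 -w 0; printf "');"; } > 500MB.sql   # 375MB of random bytes base64-encodes to ~500MB of text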

So a couple of questions:

1) Why does mysql need to eat 1.6GB of memory to ingest a 500MB column?

2) How do I set a hard upper limit on mysql to prevent it ever exceeding "x" amount of memory?

P.S.: You might be tempted to tell me that this is very silly, and you shouldn't be storing 500MB+ blobs in a database. I absolutely, 100% agree! But it's an unfortunate reality, and rearchitecting the data storage is not possible in this situation.

cjheppell

3 Answers


Are both the client and the server running in the same Docker container?

I would expect the client to load the 500MB, expand it into hex (1.0GB), and send that to the server. The server then might need a similar amount of space for parsing.

So, I guess 6x is needed. Note that the redo log needs to be 10x the largest blob, so maybe 10x is the number you need. And that's just on the server.

Re the 10x:

----- 2014-12-01 5.6.22 General Availability -- Bugs Fixed -- InnoDB -----

The MySQL 5.6.20 patch for Bug #16963396 limited the size of redo log BLOB writes to 10% of the redo log file size. This limitation has been relaxed. Redo log BLOB writes are now limited to 10% of the total redo log size (innodb_log_file_size * innodb_log_files_in_group). As a result, innodb_log_file_size * innodb_log_files_in_group should be 10 times larger than the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields). No action is required if innodb_log_file_size * innodb_log_files_in_group is already sufficiently large or if your tables contain no BLOB data. (Bug #73707, Bug #19498877)

References: See also: Bug #16963396.

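To check whether an instance meets that 10x guideline, something like this works (standard system variables; the my.cnf numbers below are only an example sized for a 500MB blob):

SELECT @@innodb_log_file_size * @@innodb_log_files_in_group AS total_redo_log_bytes;

-- if that comes out below ~5GB for a 500MB blob, raise it in my.cnf (requires a restart), e.g.:
--   innodb_log_file_size = 3G
--   innodb_log_files_in_group = 2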

Rick James
  1) Why does mysql need to eat 1.6GB of memory to ingest a 500MB column?

Keep in mind that MySQL has a pluggable storage engine architecture. One implication of this is that both layers keep their own copy of the row: the "SQL layer" holds a representation of the row in memory in a storage-independent format, then passes it through the storage engine API to the respective storage engine, which may need to make another copy to transform it into the format it will ultimately store. And since InnoDB stores a large blob across many pages, it may also need to split the 500MB into pieces as it does this.

  2) How do I set a hard upper limit on mysql to prevent it ever exceeding "x" amount of memory?

You can't. There is no feature in MySQL to set a hard upper limit on memory consumption. Sorry. I know some other database products do have this feature. MySQL does not, in part because the SQL layer and the storage engine can only communicate using a simple API, and as far as I know, they don't have a way to coordinate memory use between these layers.

... rearchitecting the data storage is not possible in this situation

Then you'll have to host the database on a server with plenty of spare RAM to account for the occasional spikes in memory usage.
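If you're testing under Docker as in your repro, that mostly means giving the container generous headroom rather than a tight cap, for example (the 4g figure is only illustrative):

docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=foobar --memory=4g -d --name mysql-longblob mysql:5.7

The process still gets killed if it exceeds whatever limit you set, so the limit has to sit comfortably above the worst-case spike.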

Bill Karwin

You might have to have the client doing the insert break the huge blob up into fragments. Depending on how you get that SQL file, this may or may not be practical. It would also mean using a programmatic MySQL client (for example from a Java, Python, or C program) instead of the mysql CLI.

A starting point could be this other answer: https://stackoverflow.com/a/4846674/1680777, in which the user was able to get around max_allowed_packet by having the MySQL client driver automatically segment the blob into smaller units during the insert.
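As a rough sketch of the chunked-append idea (this assumes a single target row and that appending with CONCAT is acceptable; the linked answer may use a different mechanism), the client would issue something like:

INSERT INTO longblob_test (bigcol) VALUES ('');
UPDATE longblob_test SET bigcol = CONCAT(bigcol, '...first few MB of the data...');
UPDATE longblob_test SET bigcol = CONCAT(bigcol, '...next few MB...');
-- repeated until the full 500MB is written, each statement staying well under max_allowed_packet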

I'm not positive that this will totally solve the memory issues, as there are things like various InnoDB and transaction buffers in play.

Joshua Huber