50

Background

I would like to provide the subset of my database required to reproduce a select query. My goal is to make my computational workflow reproducible (as in reproducible research).

Question

Is there a way to incorporate this select statement into a script that dumps the queried data into a new database, such that the database could be installed on a new MySQL server and the statement would work against the new database? The new database should contain no records beyond those used in the query.

Update: For clarification, I am not interested in a csv dump of query results. What I need to be able to do is to dump the database subset so that it can be installed on another machine, and then the query itself can be reproducible (and modifiable with respect to the same dataset).

Example

For example, my analysis might query a subset of data that requires records from multiple (in this example 3) tables:

select table1.id, table1.level, table2.name, table2.level 
       from table1 join table2 on table1.id = table2.table1_id 
       join table3 on table3.id = table2.table3_id
       where table3.name in ('fee', 'fi', 'fo', 'fum'); 
RolandoMySQLDBA
David LeBauer

8 Answers

68

mysqldump has the --where option to apply a WHERE clause when dumping a given table.

Although it is not possible to mysqldump a join query directly, you can export the specific rows from each table so that every row needed for the join later on is included.

For your given query, you would need to mysqldump three times:

First, mysqldump all table3 rows with name in ('fee','fi','fo','fum'):

mysqldump -u... -p... --where="name in ('fee','fi','fo','fum')" mydb table3 > table3.sql

Next, mysqldump all table2 rows that have matching table3_id values from the first mysqldump:

mysqldump -u... -p... --lock-all-tables --where="table3_id in (select id from table3 where name in ('fee','fi','fo','fum'))" mydb table2 > table2.sql

Then, mysqldump all table1 rows that have matching table1_id values from the second mysqldump:

mysqldump -u... -p... --lock-all-tables --where="id in (select table1_id from table2 where table3_id in (select id from table3 where name in ('fee','fi','fo','fum')))" mydb table1 > table1.sql

Note: Since the --where clauses of the second and third mysqldumps reference additional tables via subqueries, --lock-all-tables must be used.

Create your new database:

mysqladmin -u... -p... create newdb

Finally, load the three mysqldumps into the new database and run the join there.

mysql -u... -p... -D newdb < table1.sql
mysql -u... -p... -D newdb < table2.sql
mysql -u... -p... -D newdb < table3.sql
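The whole procedure above can be collected into one script. The sketch below just prints the commands (credentials and database names are placeholders; pipe its output to sh to actually execute them):

```shell
#!/bin/sh
# Sketch: generate the chained mysqldump/load commands for the subset dump.
# USER, PASS, and DB are placeholders -- adjust for your setup.
USER=myuser PASS=mypass DB=mydb

# Each WHERE clause nests the previous one, mirroring the join chain.
W3="name in ('fee','fi','fo','fum')"
W2="table3_id in (select id from table3 where $W3)"
W1="id in (select table1_id from table2 where $W2)"

echo "mysqldump -u$USER -p$PASS --where=\"$W3\" $DB table3 > table3.sql"
echo "mysqldump -u$USER -p$PASS --lock-all-tables --where=\"$W2\" $DB table2 > table2.sql"
echo "mysqldump -u$USER -p$PASS --lock-all-tables --where=\"$W1\" $DB table1 > table1.sql"

echo "mysqladmin -u$USER -p$PASS create newdb"
for t in table1 table2 table3; do
  echo "mysql -u$USER -p$PASS -D newdb < $t.sql"
done
```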

In the mysql client, run your join query:

mysql> use newdb
mysql> select table1.id, table1.level, table2.name, table2.level 
       from table1 join table2 on table1.id = table2.table1_id 
       join table3 on table3.id = table2.table3_id
       where table3.name in ('fee', 'fi', 'fo', 'fum'); 

Give it a Try !!!

WARNING: If not indexed correctly, the second and third mysqldumps may take forever !!!

Just in case, index the following columns:

ALTER TABLE table2 ADD INDEX (table1_id);
ALTER TABLE table2 ADD INDEX (table3_id);
ALTER TABLE table3 ADD INDEX (name,id);

I'll assume id is the primary key of table3.

David LeBauer
RolandoMySQLDBA
8

I would consider using INTO OUTFILE as part of your SELECT instead of mysqldump to solve this problem. You can write whatever SELECT statement you want, then append "INTO OUTFILE '/path/to/outfile.csv' ..." at the end with the appropriate configuration for CSV-style output. Then you can simply use 'LOAD DATA INFILE ...' syntax to load the data into your new schema location. Note that the outfile is written on the database server host, and the server's secure_file_priv setting may restrict where it can be placed.

For example, using your SQL:

select table1.id, table1.level, table2.name, table2.level 
       from table1 join table2 on table1.id = table2.table1_id 
       join table3 on table3.id = table2.table3_id
       where table3.name in ('fee', 'fi', 'fo', 'fum')
INTO OUTFILE '/tmp/fee-fi-fo-fum.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
; 

Keep in mind you'll need enough available storage space on the target disk partition.
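To complete the round trip, the CSV can be loaded back with LOAD DATA INFILE. A sketch, assuming a table matching the SELECT's columns has already been created in the new schema (query_subset is a made-up name, not from the original question):

```sql
-- query_subset is a hypothetical table whose four columns
-- match the SELECT list (id, level, name, level)
LOAD DATA INFILE '/tmp/fee-fi-fo-fum.csv'
INTO TABLE query_subset
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```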

randomx
6

The mysqldump utility has a --tables option that lets you specify exactly which tables to dump.

I don't know of any easier (automated) way.
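For example (user and database names are placeholders), dumping only the three tables touched by the query:

```shell
# dumps the full contents of just these tables; combine with --where
# (as in the accepted answer) to also restrict the rows of each table
mysqldump -u myuser -p mydb --tables table1 table2 table3 > subset.sql
```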

Richard
5

What was useful for me was something like:

mysqldump -u db_user -p db_name table_name --no-create-info \
  --lock-all-tables \
  --where "id in (SELECT tn.id FROM table_name AS tn
                  JOIN related_table AS rt ON tn.related_table_id = rt.id
                  WHERE rt.some_field = 1)" > data.sql

From http://krosinski.blogspot.com/2012/12/using-table-join-with-mysqldump.html

Ryan
2

This question is quite old and already has some good answers provided. One possible solution is to use mysqldump with the --where option.

However, in recent years, several new products have been released to help you subset your database. Without delving too much into the details, these products allow you to make the configuration process more declarative, so that newly connected tables and columns are handled automatically. Additionally, you can embed them into your CI/CD pipeline.

I might be biased (I am the CTO at Synthesized), but we have an awesome tool in this category called Synthesized TDK: https://docs.synthesized.io/tdk/latest/. There is a free community version that supports only Open Source databases, so it should work for you!

An example configuration for your case would be (config.yaml):

default_config:
  mode: "KEEP"

tables:
  - table_name_with_schema: "table3"
    filter: "name in ('fee', 'fi', 'fo', 'fum')"

table_truncation_mode: TRUNCATE
schema_creation_mode: CREATE_IF_NOT_EXISTS

To run the tool, execute:

java -jar tdk.jar <connection options> -c config.yaml

To learn more about subsetting using a filter, please refer to the documentation: Data Filtering

If you have any questions, please join our community: Slack Community

2

Have you tried the QUOTE() function in MySQL?

SELECT CONCAT("insert into table4(id,level,name,levelt2) VALUES(",
              quote(table1.id), ",",
              quote(table1.level), ",",
              quote(table2.name), ",",
              quote(table2.level), ");") AS q
       from table1 join table2 on table1.id = table2.table1_id 
       join table3 on table3.id = table2.table3_id
       where table3.name in ('fee', 'fi', 'fo', 'fum'); 

Save the above as query.sql, then generate the INSERT statements with:

cat query.sql | mysql -u... -p... --skip-column-names --raw mydb > table4.sql
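Note that table4 must exist before the generated INSERTs can be loaded. A hypothetical definition (the column types are assumptions, since the original schema is not shown):

```sql
-- columns mirror the SELECT list: table1.id, table1.level,
-- table2.name, table2.level
CREATE TABLE table4 (
  id      INT,
  level   INT,
  name    VARCHAR(255),
  levelt2 INT
);
```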
velcrow
2

I wrote a small script for a similar problem; here it is: https://github.com/digitalist/mysql_slice

<?php
include('queryDumper.php');


$exampleQuery="select * from information_schema.columns c1 
left join information_schema.columns c2 on 1=1 limit 1";

//define credentials
$exampleMysqli = new mysqli($host, $user, $password, $database);
$exampleResult=$exampleMysqli->query($exampleQuery);

//if  mysqlnd (native driver installed), otherwise use wrapper
$exampleData=fetchAll($exampleResult);
$exampleMeta=$exampleResult->fetch_fields();

/*
 * field content removal options
 * column name => function name in queryDumper.php, namespace QueryDumperHelpers
 * 
 * */

$forbiddenFields=array(
'password'=>'replacePassword', //change password -> md5("password")
'login'=>'replaceLogin', //change login vasya@mail.ru -> vasya@example.com
'comment'=>'sanitizeComment' //lorem ipsum or 
);


//get tables dump
$dump=(\queryDumper\dump($exampleData, $exampleMeta, $forbiddenFields));



$dropDatabase=true; //default false
$dropTable=true; //default false

$dbAndTablesCreationDump=\QueryDumperDatabaseAndTables\dump($exampleMysqli,$exampleMeta, $dropDatabase, $dropTable);

$databases=$dbAndTablesCreationDump['databases'];
$tables=$dbAndTablesCreationDump['tables'];
$eol=";\n\n";
echo implode($eol, $databases)."\n";
echo implode($eol, $tables).";\n";
echo "\n";

//consider using array_unique($dump) before imploding
echo implode("\n\n", $dump);
echo "\n";
?>

E.g., given this query:

SELECT * FROM employees.employees e1 
LEFT JOIN employees.employees e2 ON 1=1 
LIMIT 1; 

you get this dump:

DROP DATABASE `employees`;

CREATE DATABASE `employees`;
CREATE TABLE `employees` ( /* creation code */ ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

INSERT IGNORE INTO `employees`.`employees` VALUES ("10001","1953-09-02","Georgi","Facello","M","1986-06-26");

INSERT IGNORE INTO `employees`.`employees` VALUES ("10001","1953-09-02","Georgi","Facello","M","1986-06-26");
digitalist
1

In MySQL:

SHOW CREATE TABLE table1; -- use these two create statements
SHOW CREATE TABLE table2; -- to design table4's create statement
CREATE TABLE table4( .... );
INSERT INTO table4(id,level,name,levelt2)
SELECT table1.id, table1.level, table2.name, table2.level 
   from table1 join table2 on table1.id = table2.table1_id 
   join table3 on table3.id = table2.table3_id
   where table3.name in ('fee', 'fi', 'fo', 'fum'); 

On Command Line:

mysqldump mydb table4 |gzip > table4.sql.gz

On your destination server, set up ~/.my.cnf:

[client]
default-character-set=utf8

Import on destination server

zcat table4.sql.gz | mysql
velcrow