Simple MySQL Data Migration
I find myself needing to shuffle MySQL data around more often than I would expect. Usually this means dumping one table into another database. For example, I sometimes want to get some production data into a test or development database to reproduce an issue, and on my local box I often need to recreate an existing database structure for development purposes.
The task came up often enough that I took the time to write a little helper script for it. I wanted to keep it as general as possible, so I came up with the following command line interface.
The `--from` and `--to` arguments are mandatory and define the source and destination database. Since the databases could be anywhere (read: on different hosts), the host and credentials are specified for each database as well. The table(s) to copy are set via `--dump-table`, which can be provided multiple times. Likewise, `--rename-table` will rename a table in the target database. The script does not dump and rename in a single step, though.
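To make that concrete, an invocation could look roughly like the following. The flag names are the script's own, but the script name and the exact syntax of the `--from`/`--to` values are my assumptions:

```bash
# Hypothetical invocation (script name and option value format are assumed):
# copy two tables from a production database into a local development database.
# Renaming via --rename-table would happen in a separate run.
./mysql-migrate \
    --from "produser:secret@prod-db.example.com/production_db" \
    --to   "devuser:secret@localhost/development_db" \
    --dump-table users \
    --dump-table orders
```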
Using a script with a nice command line interface has several advantages.
- It’s usually faster than any GUI MySQL client.
- Rerunning the same command, or a slightly different one (e.g. to dump another table), is simple and efficient.
- It's less error-prone than manually typing MySQL commands and/or clicking around in a GUI.
The script's full code is available for download.
The code is pretty straightforward, I believe. I'd like to highlight a few points, though.
Passing the password on the command line is insecure: anyone with read access to the process list (or to your shell history) can see it. This script is meant for development purposes, though, so I think it's okay to leave it that way.
There is a `--no-data` switch which allows copying just a table's structure.
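This presumably maps straight onto mysqldump's own `--no-data` flag, which dumps only the table definitions; something along these lines (host and credentials are placeholders):

```bash
# Structure only: --no-data writes the CREATE TABLE statement but no INSERTs
mysqldump --no-data -h prod-db.example.com -u produser -p'secret' production_db users
```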
I am using mysqldump's `--single-transaction`, `--lock-tables=false`, and `--skip-add-locks` switches to make the dumping process faster. They make sure the database is not blocked for other applications while the dump is running (pretty important for a shared database). The downside is that the dumped data may not be consistent when multiple tables are dumped, but I have never run into problems with that.
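Conceptually, the dump-and-load step then boils down to a mysqldump piped into mysql. A sketch with the switches mentioned above, not the script's actual code:

```bash
# Dump one table from the source and load it straight into the target database.
# --single-transaction dumps within a transaction instead of locking tables,
# --lock-tables=false skips the explicit table locks on the source, and
# --skip-add-locks leaves the LOCK TABLES statements out of the dump so the
# import does not lock the target either. Hosts and credentials are placeholders.
mysqldump --single-transaction --lock-tables=false --skip-add-locks \
    -h prod-db.example.com -u produser -p'secret' production_db users \
  | mysql -h localhost -u devuser -p'secret' development_db
```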
Before actually doing anything, the script prints the source and target databases and asks the user to confirm. This is the last safeguard before overwriting any existing data. It's kind of funny: the fewer steps you have to do manually, the faster you can do stupid things…
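In a shell script, that final check can be just a few lines; a sketch, with made-up variable names:

```bash
# Last safeguard before touching any data: show source and target, ask for a "y".
echo "Copying from ${FROM_HOST}/${FROM_DB} to ${TO_HOST}/${TO_DB}"
read -r -p "Continue? [y/N] " answer
if [ "$answer" != "y" ] && [ "$answer" != "Y" ]; then
    echo "Aborted."
    exit 1
fi
```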