views:

10637

answers:

11

Anyone know a quick easy way to migrate a SQLite3 database to MySQL?

A: 

From the SQLite site, there appears to be a dump command. You can tell it how you want the data formatted, and from there you can import it into MySQL.

mk
A: 

Pretty sure that just dumps the data. I was looking for a tool that would migrate the schema too. Probably just end up writing something myself. Thanks though.

Stephen Cox
+1  A: 

I recently had to migrate from MySQL to JavaDB for a project that our team is working on. I found a Java library written by Apache called DdlUtils that made this pretty easy. It provides an API that lets you do the following:

  1. Discover a database's schema and export it as an XML file.
  2. Modify a DB based upon this schema.
  3. Import records from one DB to another, assuming they have the same schema.

The tools that we ended up with weren't completely automated, but they worked pretty well. Even if your application is not in Java, it shouldn't be too difficult to whip up a few small tools to do a one-time migration. I think I was able to pull off our migration with less than 150 lines of code.

Outlaw Programmer
+2  A: 

Probably the quickest and easiest way is using the sqlite .dump command, in this case creating a dump of the sample database.

sqlite3 sample.db .dump > dump.sql

You can then (in theory) import this into the mysql database, in this case the test database on the database server 127.0.0.1, using user root.

mysql -p -u root -h 127.0.0.1 test < dump.sql

I say in theory because there are a few differences between the two grammars.

In SQLite, transactions are written as

BEGIN TRANSACTION;
...
COMMIT;

MySQL uses just

BEGIN;
...
COMMIT;

There are other, similar problems (varchars and double quotes spring to mind), but nothing a find and replace couldn't fix.
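For example, a minimal filter for just the transaction-syntax difference above might look something like the following sketch (the script name is made up, it reads the dump on stdin, and it ignores all the other differences):

#!/usr/bin/env python
# fix_transactions.py - hypothetical minimal filter: only rewrites the
# BEGIN TRANSACTION statements that MySQL's grammar doesn't accept
import sys

for line in sys.stdin:
    sys.stdout.write(line.replace('BEGIN TRANSACTION;', 'BEGIN;'))

It would slot into the pipeline between the dump and the import:

sqlite3 sample.db .dump | python fix_transactions.py | mysql -p -u root -h 127.0.0.1 test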

Perhaps you should also ask why you are migrating. If performance or database size is the issue, perhaps look at reorganising the schema; if the system is moving to a more powerful product, this might be the ideal time to plan for the future of your data.

rsg
+21  A: 

Everyone seems to start off with a few greps and perl expressions, and you sorta kinda get something that works for your particular dataset, but you have no idea whether it's imported the data correctly or not. I'm seriously surprised nobody's built a solid library that can convert between the two.

Here is a list of ALL the differences in SQL syntax that I know about between the two file formats. The lines starting with:

  • BEGIN TRANSACTION
  • COMMIT
  • sqlite_sequence
  • CREATE UNIQUE INDEX

are not used in MySQL

  • SQLite uses CREATE TABLE/INSERT INTO "table_name" and MySQL uses CREATE TABLE/INSERT INTO table_name
  • MySQL doesn't use quotes inside the schema definition
  • MySQL uses single quotes for strings inside the INSERT INTO clauses
  • SQLite and MySQL have different ways of escaping strings inside INSERT INTO clauses
  • SQLite uses 't' and 'f' for booleans, MySQL uses 1 and 0 (a simple regex for this can fail when you have a string like: 'I do, you don\'t' inside your INSERT INTO)
  • SQLite uses AUTOINCREMENT, MySQL uses AUTO_INCREMENT

Here is a very basic hacked-up perl script which works for my dataset and checks for many more of these conditions than other perl scripts I found on the web. No guarantees that it will work for your data, but feel free to modify and post back here.

#!/usr/bin/perl

# Filter a sqlite3 dump on stdin into MySQL-compatible SQL on stdout.
while ($line = <>){
    # Drop the statements MySQL doesn't understand.
    if (($line !~ /BEGIN TRANSACTION/) && ($line !~ /COMMIT/) && ($line !~ /sqlite_sequence/) && ($line !~ /CREATE UNIQUE INDEX/)){

        if ($line =~ /CREATE TABLE \"([a-z_]*)\"(.*)/){
            # Unquote the table name, strip quotes from the column definitions,
            # and make the statement re-runnable.
            $name = $1;
            $sub = $2;
            $sub =~ s/\"//g;
            $line = "DROP TABLE IF EXISTS $name;\nCREATE TABLE IF NOT EXISTS $name$sub\n";
        }
        elsif ($line =~ /INSERT INTO \"([a-z_]*)\"(.*)/){
            # Unquote the table name and escape double quotes in the values.
            $line = "INSERT INTO $1$2\n";
            $line =~ s/\"/\\\"/g;
            $line =~ s/\"/\'/g;
        }else{
            # Convert sqlite's doubled single quotes to backslash escapes.
            $line =~ s/\'\'/\\\'/g;
        }
        # Convert 't'/'f' booleans to 1/0 and fix the AUTOINCREMENT keyword.
        $line =~ s/([^\\'])\'t\'(.)/$1THIS_IS_TRUE$2/g;
        $line =~ s/THIS_IS_TRUE/1/g;
        $line =~ s/([^\\'])\'f\'(.)/$1THIS_IS_FALSE$2/g;
        $line =~ s/THIS_IS_FALSE/0/g;
        $line =~ s/AUTOINCREMENT/AUTO_INCREMENT/g;
        print $line;
    }
}
Shalmanese
Alex Martelli did a great job rewriting this as Python over at http://stackoverflow.com/questions/1067060/perl-to-python
Jiaaro
I added the complete python script (the perl script alone didn't quite work for me... needed some extra processing to handle Foreign Keys, and Indexes)
Jiaaro
I rewrote this answer on the other question http://stackoverflow.com/questions/1067060/_/1070463#1070463
Brad Gilbert
Nice and thorough answer :)
Jesper Rønn-Jensen
MySQL in ANSI mode accepts identifiers in double quotes.
porneL
+5  A: 

It's messy because dump files are database vendor specific.

If you're using Rails, a great plugin exists for this. Read: http://blog.heroku.com/archives/2007/11/23/yamldb_for_databaseindependent_data_dumps/

ryw
Worked great for me - thanks!
Rich Apodaca
+5  A: 

Here is a Python script, built off of Shalmanese's answer and some help from Alex Martelli over at http://stackoverflow.com/questions/1067060/perl-to-python

I'm making it community wiki, so please feel free to edit and refactor as long as it doesn't break the functionality (thankfully we can just roll back). It's pretty ugly, but it works.

Use it like so (assuming the script is called dump_for_mysql.py):

sqlite3 sample.db .dump | python dump_for_mysql.py > dump.sql

You can then import the resulting dump.sql into MySQL.

Note: you need to add foreign key constraints manually, since sqlite doesn't actually support them.

Here is the script:

#!/usr/bin/env python

import re
import fileinput

def this_line_is_useless(line):
    useless_es = [
        'BEGIN TRANSACTION',
        'COMMIT',
        'sqlite_sequence',
        'CREATE UNIQUE INDEX',
        ]
    for useless in useless_es:
        if re.search(useless, line):
            return True

def has_primary_key(line):
    return bool(re.search(r'PRIMARY KEY', line))

searching_for_end = False
for line in fileinput.input():
    if this_line_is_useless(line): continue

    # this line was necessary because ''); was getting
    # converted (inappropriately) to \');
    if re.match(r".*, ''\);", line):
        line = re.sub(r"''\);", r'``);', line)

    if re.match(r'^CREATE TABLE.*', line):
        searching_for_end = True

    m = re.search('CREATE TABLE "?([a-z_]*)"?(.*)', line)
    if m:
        name, sub = m.groups()
        line = "DROP TABLE IF EXISTS %(name)s;\nCREATE TABLE IF NOT EXISTS `%(name)s`%(sub)s\n"
        line = line % dict(name=name, sub=sub)
    else:
        m = re.search('INSERT INTO "([a-z_]*)"(.*)', line)
        if m:
            line = 'INSERT INTO %s%s\n' % m.groups()
            line = line.replace('"', r'\"')
            line = line.replace('"', "'")
    line = re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", line)
    line = line.replace('THIS_IS_TRUE', '1')
    line = re.sub(r"([^'])'f'(.)", "\1THIS_IS_FALSE\2", line)
    line = line.replace('THIS_IS_FALSE', '0')

    # Add auto_increment if it's not there since sqlite auto_increments ALL
    # primary keys
    if searching_for_end:
        if re.search(r"integer(?:\s+\w+)*\s*PRIMARY KEY(?:\s+\w+)*\s*,", line):
            line = line.replace("PRIMARY KEY", "PRIMARY KEY AUTO_INCREMENT")
        # replace " and ' with ` because mysql doesn't like quotes in CREATE commands
        line = line.replace('"', '`').replace("'", '`')

    # And now we convert it back (see above)
    if re.match(r".*, ``\);", line):
        line = re.sub(r'``\);', r"'');", line)

    if searching_for_end and re.match(r'.*\);', line):
        searching_for_end = False

    if re.match(r"CREATE INDEX", line):
        line = re.sub('"', '`', line)

    print line,
Jiaaro
Hi Jim, on my dataset every first INSERT statement is wrapped by a backquote instead of a single quote: DROP TABLE IF EXISTS schema_migrations; CREATE TABLE IF NOT EXISTS `schema_migrations` (`version` varchar(255) NOT NULL); INSERT INTO schema_migrations VALUES(`20100714032840`); INSERT INTO schema_migrations VALUES('20100714033251');
David
well... it may not show up above, but the backquotes appear inside the VALUES ([HERE]20100714032840[/HERE])
David
A: 

I use Data Loader for migrating almost any data; it helps me convert MSSQL to MySQL, MS Access to MSSQL, MySQL, CSV, and FoxPro, and MSSQL to MS Access, MySQL, CSV, FoxPro, etc. In my view it is the best data migration tool.

Download Free : http://www.dbload.com

A: 

Ha... I wish I had found this first! My response was to this post... http://stackoverflow.com/questions/489277/script-to-convert-mysql-dump-sql-file-into-format-that-can-be-imported-into-sqlit

Combining the two would be exactly what I needed:


When the sqlite3 database is going to be used with ruby you may want to change:

tinyint([0-9]*) 

to:

sed 's/ tinyint(1*) / boolean/g ' |
sed 's/ tinyint([0|2-9]*) / integer /g' |

Alas, this only half works: even though you are inserting 1's and 0's into a field marked boolean, sqlite3 stores them as 1's and 0's, so you have to go through and do something like:

Table.find(:all, :conditions => {:column => 1 }).each { |t| t.column = true }.each(&:save)
Table.find(:all, :conditions => {:column => 0 }).each { |t| t.column = false}.each(&:save)

but it was helpful to have the sql file to look at to find all the booleans.

daicoden
A: 

fallino correctly identified the location of the error in the script. I have the solution. The problem is these lines:

line = re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", line)
line = line.replace('THIS_IS_TRUE', '1')
line = re.sub(r"([^'])'f'(.)", "\1THIS_IS_FALSE\2", line)
line = line.replace('THIS_IS_FALSE', '0')

The replacement pattern (2nd parameter) in the re.sub calls is a "regular" string, so instead of \1 expanding to the first regexp match, it expands to a literal 0x01. Likewise, \2 expands to 0x02. For example, a line containing ,'t','f', would be replaced with <0x01>1<0x01>0<0x02>
(The first substitution changes ,'t', to <0x01>1<0x02>; the second substitution changes <0x02>'f', to <0x01>0<0x02>.)
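A quick check in the interpreter, using that same sample line, shows the difference (this snippet is only an illustration of the escaping behaviour, not part of the script itself):

>>> import re
>>> re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", ",'t','f',")   # non-raw replacement
"\x01THIS_IS_TRUE\x02'f',"
>>> re.sub(r"([^'])'t'(.)", r"\1THIS_IS_TRUE\2", ",'t','f',")  # raw-string replacement
",THIS_IS_TRUE,'f',"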

The fix is either to change the replacement strings to raw strings by adding an 'r' prefix, or to escape the \1 and \2 in the existing strings. Since easy manipulation of regexp strings is what raw strings are for, here's the fix using those:

line = re.sub(r"([^'])'t'(.)", r"\1THIS_IS_TRUE\2", line)
line = line.replace('THIS_IS_TRUE', '1')
line = re.sub(r"([^'])'f'(.)", r"\1THIS_IS_FALSE\2", line)
line = line.replace('THIS_IS_FALSE', '0')
mxbraun