I have a MySQL database that I am trying to migrate into another database. They have different schemas, and I have written a PHP script for each table of the old database to populate its data into the new one. The script works just fine, but the problem is that it does not move all the data. For example, if all the rows of a table are selected and then inserted into the new table, only half of them make it. The way I am doing it: I open a database connection, SELECT *, and put the result in an associative array. Then I close the connection, connect to the other database, go through each element of the array, and insert it into the new table. Is there a limit to how big an array can be? What is wrong here?
Your server (as all servers do) imposes a memory limit on PHP; if your script uses more than the assigned limit, it will fail.
Is it possible to just dump the current MySQL database into text files, perform find-and-replaces or regexp-based replacements to change the schemas within the text files, and then reload the amended text files into MySQL to complete the change? If this is a one-off migration, that may be a better way to do it.
You may be running into PHP's execution time or memory limits. Make sure the appropriate settings in php.ini are high enough to allow the script to finish executing.
You may have constraints in the target database that are rejecting some of your attempted inserts.
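If editing php.ini is not an option, the limits can often be raised from within the script itself (a sketch; whether ini_set() and set_time_limit() are honoured depends on your hosting setup):

```php
<?php
// Sketch: raise PHP's limits at the top of the migration script.
// Some hosts disallow ini_set()/set_time_limit(), so check the return values.
ini_set('memory_limit', '512M'); // allow up to 512 MB for this script
set_time_limit(0);               // 0 = no execution time limit
```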
Why not do this via SQL scripts?
If you prefer to do it in PHP, you could open connections to both databases and insert into the target as you read from the source. That way you avoid holding everything in memory at once.
You should read the rows from the first database in chunks (of 1000 rows, for example), write those rows to the second database, free the array (with unset() or by reassigning an empty array), and repeat the process until you have read all the rows. This overcomes the memory limitation.
Another problem might be that the script runs for too long (if the table is large), so try using set_time_limit(). This function resets the timeout after which a script is terminated. I suggest calling it after processing each chunk.
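The chunked approach above can be sketched with PDO. This is only an illustration, not your exact script: the table and column names (old_users, new_users, id, name) and the credentials are made up, so adapt them to your schemas.

```php
<?php
// Sketch: stream rows from the source to the target in fixed-size chunks,
// so the whole table never has to fit in one PHP array at once.
function migrateInChunks(PDO $src, PDO $dst, string $srcSql, string $dstSql, int $chunkSize = 1000): int
{
    $offset = 0;
    $total  = 0;
    $insert = $dst->prepare($dstSql);

    while (true) {
        $rows = $src->query("$srcSql LIMIT $chunkSize OFFSET $offset")
                    ->fetchAll(PDO::FETCH_ASSOC);
        if (!$rows) {
            break;                    // no more rows to copy
        }
        $dst->beginTransaction();     // one transaction per chunk is much faster
        foreach ($rows as $row) {
            $insert->execute($row);   // keys must match the named placeholders
        }
        $dst->commit();
        $total  += count($rows);
        $offset += $chunkSize;
        unset($rows);                 // free the chunk before fetching the next one
        set_time_limit(30);           // reset the execution-time clock per chunk
    }
    return $total;
}

// Usage (placeholder credentials and names):
// $src = new PDO('mysql:host=localhost;dbname=old_db', 'user', 'pass');
// $dst = new PDO('mysql:host=localhost;dbname=new_db', 'user', 'pass');
// $n = migrateInChunks($src, $dst,
//          'SELECT id, name FROM old_users ORDER BY id',
//          'INSERT INTO new_users (id, name) VALUES (:id, :name)');
```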
First of all, I don't see the point in writing a script to do this. Why don't you just get a SQL dump from phpMyAdmin and edit it so that it fits the other database? Or are they that different?
But to reply to your question: my first thought, like others have already said, is that the problem is the time limit. Before you try to do anything about it, check the value of max_execution_time in php.ini (it is 30 seconds by default) and how long the script runs before it stops. If it terminates after roughly 30 seconds (or after whatever max_execution_time is set to), then that is likely the problem, although PHP should throw an error (or at least a warning).
I don't think there's a hard limit on the size of an array in PHP. However, there is a directive in php.ini, namely memory_limit, that defines the amount of memory a script can use.
If you have access to your php.ini file, I suggest setting both max_execution_time and memory_limit to a higher value. If you don't have access to php.ini, you may not be able to change the memory_limit directive (ini_set() is sometimes disallowed by the host). In that case you will have to work around it, for example by using LIMIT in your SQL to fetch the rows in batches. Be sure to unset your used variables, or you could run into the same problem.
Using PHP for the transform/convert logic is a possibility. I would do it if you are doing complex transformations and your PHP skills are much better than your MySQL skills.
If you need more memory and time in your PHP script, raise these php.ini settings:
memory_limit = 2048M
max_execution_time = 3600
This will give you 2 GB of possible space for the array and about an hour for processing. But if your database is really that big, it would be much (really a lot) faster to use:
1. mysqldump, to make a dump of your source server. Check it here: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
2. Upload the dump file and import it. There are a bunch of examples on the MySQL documentation page (look also in the comments).
After this you can transform your database through CREATE/SELECT-statements.
CREATE TABLE one SELECT * FROM two;
As an alternative you can use UPDATE-statements. What is best depends heavily on the kind of job that you are doing.
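For example, when the two schemas name their columns differently, an INSERT ... SELECT can do the renaming entirely inside MySQL (the table and column names below are invented for illustration):

```sql
-- Copy rows from the old table into the new one, mapping renamed columns
INSERT INTO new_db.customers (name, email)
SELECT full_name, email_address
FROM old_db.clients;
```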
Good luck!
It would be preferable to do a mysql dump at the command line:
mysqldump -a -u USER_NAME -p SOURCE_DATABASE_NAME > DATA.mysql
You can also gzip the file to make it smaller for transfer to another server:
gzip DATA.mysql
After transfer, unzip the file:
gunzip -f DATA.mysql.gz
And import it:
mysql -u USER_NAME -p TARGET_DATABASE_NAME < DATA.mysql