views: 110

answers: 2

Hi,

I have installed a MySQL 5.1.x instance on a Linux machine (768 MB RAM). I restored a backup of about 1000 rows and, through my .NET application (deployed on a different Windows web server), performed some read operations, which were fast, considering that the table had no indexes.

I then cleared these rows from the server and used a feature in my web application on the other server to insert rows (basically, I read text from a text file, load the data into fields of a custom class, and then call a stored procedure with those fields as parameters for each record). When the website and database were both on my local machine, I could insert 2500 rows almost instantly (under a second). Now, however, in the "production" environment, those same 2500 rows take 10 minutes to insert. I am using the default my.cnf that ships with the MySQL installation.
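
Roughly, the insert loop looks like the sketch below (the stored procedure, parameter names, and record fields are just placeholders for illustration; I'm using MySQL Connector/NET):

    using System.Collections.Generic;
    using System.Data;
    using MySql.Data.MySqlClient;

    // Hypothetical record class holding the parsed fields.
    class Record { public string Name; public string Value; }

    static void InsertRecords(string connectionString, List<Record> records)
    {
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            foreach (var rec in records)
            {
                // One stored-procedure call (and one network roundtrip)
                // per record, each autocommitted with the default settings.
                using (var cmd = new MySqlCommand("sp_insert_record", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("p_name", rec.Name);
                    cmd.Parameters.AddWithValue("p_value", rec.Value);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }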

Is there any way I can optimise writes without compromising the speed of read operations? Or am I missing something obvious?

Thanks in advance!

+2  A: 

Have you looked at the MySQL manual page on insert speed? My first guess is that you aren't wrapping the 2500 procedure calls in a BEGIN TRANSACTION/COMMIT pair, so MySQL is autocommitting each of the 2500 statements individually.
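
In Connector/NET that would look something like this (a sketch; the procedure and parameter names are placeholders, and note that a transaction only takes effect on a transactional engine such as InnoDB):

    using System.Collections.Generic;
    using System.Data;
    using MySql.Data.MySqlClient;

    // Hypothetical record class holding the parsed fields.
    class Record { public string Name; public string Value; }

    static void InsertRecordsInOneTransaction(string connectionString, List<Record> records)
    {
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            using (MySqlTransaction tx = conn.BeginTransaction())
            {
                foreach (var rec in records)
                {
                    using (var cmd = new MySqlCommand("sp_insert_record", conn, tx))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.AddWithValue("p_name", rec.Name);
                        cmd.Parameters.AddWithValue("p_value", rec.Value);
                        cmd.ExecuteNonQuery();
                    }
                }
                // One commit for all 2500 calls instead of 2500 autocommits.
                tx.Commit();
            }
        }
    }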

Mark Rushakoff
Hi Mark, thanks for your reply. I had read about autocommit, but only briefly, and thought it applied only to InnoDB tables (I am using MyISAM). But you gave me an idea: rather than executing the stored procedure repeatedly, I could write the calls to a text file, say CALL sp(A,B); CALL sp(C,D); etc., and then execute them all at once through the Query Browser. Presumably (and if I wrap them in a Begin Transaction), this should be much faster?! Thanks!
TMM
I'd expect it to be significantly faster: it won't be autocommitting every statement, and it won't be making a separate roundtrip for each call, as @Joachim pointed out. Let us know how it goes.
Mark Rushakoff
+1  A: 

In addition to the problem mentioned by Mark (autocommit instead of a transaction), I'd definitely investigate using batch updates, because one major difference between a local connection and a network connection (even a very fast one) is roundtrip time.

While bandwidth is most likely not the limiting factor here, making 2500 roundtrips to the database will definitely take a lot longer over a network than on the local machine.

Using a batch update can reduce those inserts to a single roundtrip in the ideal case, and to just a few in practice.

Unfortunately I can't tell you how to do those batch updates in the .NET world; in Java you'd use PreparedStatement.addBatch(). I'm sure there's something similar in .NET.
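
One way to get a similar effect without a driver-level batch API is to build a single multi-row INSERT and send it as one command. A sketch, assuming a placeholder table items(name, value):

    using System.Collections.Generic;
    using System.Text;
    using MySql.Data.MySqlClient;

    // Hypothetical record class holding the parsed fields.
    class Record { public string Name; public string Value; }

    // Collapse all rows into one multi-row INSERT so they travel in a single
    // roundtrip. Table and column names are placeholders; very large batches
    // may need chunking to stay under MySQL's max_allowed_packet.
    static void InsertRecordsBatched(string connectionString, List<Record> records)
    {
        if (records.Count == 0) return;

        var sql = new StringBuilder("INSERT INTO items (name, value) VALUES ");
        using (var cmd = new MySqlCommand())
        {
            for (int i = 0; i < records.Count; i++)
            {
                if (i > 0) sql.Append(", ");
                sql.AppendFormat("(@n{0}, @v{0})", i);
                cmd.Parameters.AddWithValue("@n" + i, records[i].Name);
                cmd.Parameters.AddWithValue("@v" + i, records[i].Value);
            }
            cmd.CommandText = sql.ToString();
            using (var conn = new MySqlConnection(connectionString))
            {
                conn.Open();
                cmd.Connection = conn;
                cmd.ExecuteNonQuery(); // one roundtrip for the whole batch
            }
        }
    }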

Joachim Sauer
Thanks for the answer, Joachim. However, I think I could only use batch updates in .NET if I were using a DataSet, rather than a MySqlCommand object.
TMM