
Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server.

So far, I have tried:

  • ADO using MySQL ODBC Driver
  • Zeoslib v7 Alpha
  • MyDAC

I have used batching and direct insert with ADO (using table access); with Zeos I have used SQL insertion with a query, then table direct mode, and also cached-updates table mode using ApplyUpdates and Commit. With MyDAC I used table access mode, then direct SQL insert, then batched SQL insert.

With every technology I have tried, I set compression on and off with no discernible difference.

So far I have seen pretty much the same speed across the board: about 7.5 records per second!!!

Now, from this point I would assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly - which kind of means that it was quick).

Edit 1

It is quicker for me to write the SQL to a file, upload the file to the server via FTP and then import it directly on the remote server. I wonder if they are perhaps throttling incoming MySQL traffic, but that doesn't explain why MySQL Workbench was so quick!

Edit 2

At the most basic level, the code has been:

while not qMSSQL.EOF do
begin
  qMySQL.SQL.Clear;
  qMySQL.SQL.Add('INSERT INTO tablename (fieldname1) VALUES (:fieldname1)');
  qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
  qMySQL.ExecSQL;
  qMSSQL.Next;
end;

I then tried

qMySQL.CachedUpdates := True;
i := 0;
while not qMSSQL.EOF do
begin
  qMySQL.SQL.Clear;
  qMySQL.SQL.Add('INSERT INTO tablename (fieldname1) VALUES (:fieldname1)');
  qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
  qMySQL.ExecSQL;
  Inc(i);
  if i > 100 then
  begin
    qMySQL.ApplyUpdates;
    i := 0;
  end;
  qMSSQL.Next;
end;
qMySQL.ApplyUpdates;

Now, in this code with CachedUpdates := False (which obviously meant it never actually wrote back to the database), the speed was blisteringly fast!!

To be perfectly honest, I think it's the connection - I feel it's the connection... Just waiting for them to get back to me!

Thanks for all your help!

A: 

I'm not sure about ZeosLib, but using ADO with the ODBC driver you will not get the fastest way to insert records. Here are a few steps that may make your insertion faster:

  1. Use MyDAC for direct access; it works without the slow ODBC > ADO > OLEDB > MySqlLib chain to connect to MySQL.

  2. Open the connection once, before the insertion.

  3. If you have a large insertion, such as 1000 records or more, try using a transaction and committing after every 100 records or so, depending on the number of records.

Point 3 may make your insertion faster even with ZeosLib or ADO.
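For example, point 3 might look like the sketch below. This is a hedged outline, not a definitive implementation: the transaction methods (Connection.StartTransaction / Connection.Commit) go by different names in different libraries (e.g. BeginTrans/CommitTrans on TADOConnection), and Connection, qMySQL and qMSSQL are assumed to be already configured and open.

// Sketch: wrap the inserts in explicit transactions, committing every 100 rows.
const
  BatchSize = 100;
var
  i: Integer;
begin
  qMySQL.SQL.Text := 'INSERT INTO tablename (fieldname1) VALUES (:fieldname1)';
  Connection.StartTransaction;
  i := 0;
  while not qMSSQL.EOF do
  begin
    qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
    qMySQL.ExecSQL;
    Inc(i);
    if i >= BatchSize then
    begin
      Connection.Commit;              // flush this batch
      Connection.StartTransaction;    // start the next one
      i := 0;
    end;
    qMSSQL.Next;
  end;
  Connection.Commit;                  // commit any remaining rows
end;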

Mohammed Nasman
MarkRobinson
A: 

Are you using query parameters? The fastest way to insert should be to use plain queries and parameters (i.e. INSERT INTO table (field) VALUES (:field)), preparing the query once, then assigning parameters and executing as many times as required within a single transaction, committing at the end (don't use any flavour of autocommit).

In most databases that avoids a hard parse each time the query is executed, which takes time. Parameters allow the query to be parsed only once and then re-executed as many times as needed.
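Concretely, the pattern described above - set the SQL once, prepare, and only rebind and execute inside the loop - could look like this. A sketch only: Prepare exists on most Delphi query components, but check your library, and qMySQL/qMSSQL are assumed to be open.

// Sketch: parse/prepare once, execute many times. The SQL text never
// changes inside the loop, so the statement is parsed only once.
qMySQL.SQL.Text := 'INSERT INTO tablename (fieldname1) VALUES (:fieldname1)';
qMySQL.Prepare;                       // the hard parse happens once, here
while not qMSSQL.EOF do
begin
  qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
  qMySQL.ExecSQL;                     // re-executes the prepared statement
  qMSSQL.Next;
end;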

Use the server facilities to check what's going on - many servers offer a way to inspect what running statements are doing.
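On MySQL, one such facility is SHOW FULL PROCESSLIST, which you can run from any query component. A sketch, assuming a configured query qCheck and a TMemo for output; the column names (Id, State, Time, Info) are the ones MySQL returns:

// Sketch: ask the server what each connection is currently doing.
qCheck.SQL.Text := 'SHOW FULL PROCESSLIST';
qCheck.Open;
while not qCheck.EOF do
begin
  Memo1.Lines.Add(Format('%s: %s (%s s) %s',
    [qCheck.FieldByName('Id').AsString,
     qCheck.FieldByName('State').AsString,
     qCheck.FieldByName('Time').AsString,
     qCheck.FieldByName('Info').AsString]));
  qCheck.Next;
end;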

ldsandon
I used query parameters and batched inserts at first (I always try to use SQL first if I can), but the average speed was the same :(
MarkRobinson
I would not use ADO batch insert; it may reduce round trips, but it takes control of the inserts, as cached updates does. IMHO the fastest way is to let the statements go directly to the DB with as little overhead as possible. Anyway, first you have to find where the bottleneck really is.
ldsandon
Correct answer to ldsandon, who mentioned checking the server facilities, which in this case would probably have answered my issue (the DB Workbench results must have been spurious).
MarkRobinson
+1  A: 

You can try AnyDAC and its Array DML feature. It may speed up a standard SQL INSERT by several times.
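A hedged sketch of what Array DML usage looks like with an AnyDAC query component: the member names used here (Params.ArraySize, Params[0].AsStrings[i], Execute with a row count) are assumptions from memory of the TADQuery API, so verify them against the AnyDAC documentation. RowCount and SourceValues are hypothetical placeholders for your data.

// Sketch: bind many rows into the parameter array, then execute once,
// so all rows travel to the server in a single round trip.
qIns.SQL.Text := 'INSERT INTO tablename (fieldname1) VALUES (:fieldname1)';
qIns.Params.ArraySize := RowCount;    // e.g. 1000 rows per execute
for i := 0 to RowCount - 1 do
  qIns.Params[0].AsStrings[i] := SourceValues[i];
qIns.Execute(qIns.Params.ArraySize);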

da-soft
Thanks Dmitry, I tried a demo and the Array DML feature seems really interesting - but alas similar speed.
MarkRobinson
We tried Array DML with MySQL - it was really fast. It seems something on your system is causing trouble.
oodesigner
A: 

You've got two separate things going on here. First, your Delphi program is creating INSERT statements and sending them to the DB server, and then the server is handling them. You need to examine both ends to find the bottleneck. I'm not too familiar with MySQL tools, but I bet you could find an SQL profiler for it easily enough. Use it to profile your inserts from the Delphi app, compare that to running inserts from the Workbench tool, and see if there's a significant difference.

If not, then the slowdown is in your app. Try hooking it up to Sampling Profiler or some other profiling tool that understands Delphi, and it'll show you where you're spending the most time. Once you know that, you can work on attacking the problem, or maybe come back here to ask a more specific question. But until you know where the problem is coming from, any answers you get here are just going to be educated guesses at best.

Mason Wheeler
Thanks Mason, I'm pretty sure it's the server speed and not my code, I just wasn't 100% sure I was using the mysql libs properly, and/or what the best practices were with regards to Delphi, but they seem to be similar to what I tried anyway.
MarkRobinson
+1  A: 

Sorry that this reply comes long after you asked the question.

I had a similar problem. BDS 2006 to MySQL via ODBC across the network took 25 minutes to run - around 25 inserts per second. I was using a TDatabase connection with TTable and TQuery components attached to it, and prepared the SQL statements.

The major improvement came when I started transactions within the loop. A simple example: Memberships have Members. Start a transaction before the insert of the Membership and Members, and commit after. There were 1585 memberships, and before transactions it took 279.90 seconds to process all the Membership records; after, it took 6.71 seconds.

It was almost too good to believe, and I am still working through fixing the code for the other slow bits.

Maybe you have solved your problem by now, Mark, but it may help someone else.

DavidG
Made some further changes. A commit is expensive, so the key to good performance is to choose a commit frequency that balances the cost of each commit against the overhead of holding a transaction open for too long.
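One way to find that balance empirically is to time a full run at several commit intervals and compare. A sketch using GetTickCount from the Windows unit; RunImport is a hypothetical wrapper around an insert loop that commits every CommitEvery rows:

// Sketch: measure elapsed milliseconds for a given commit frequency,
// then try several values and keep the fastest.
function TimeRun(CommitEvery: Integer): Cardinal;
var
  StartTicks: Cardinal;
begin
  StartTicks := GetTickCount;
  RunImport(CommitEvery);   // hypothetical: insert loop committing
                            // every CommitEvery rows
  Result := GetTickCount - StartTicks;
end;

// e.g. compare TimeRun(10), TimeRun(100), TimeRun(1000).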
DavidG
After further changes, I managed to reduce the run time from 19 minutes to 51 seconds using transactions.
DavidG
+1 Excellent, thanks David! I actually can't remember now whether I tried transactions or not. When you say across the network, did you mean across a local network, or the internet?
MarkRobinson
Mark, I initially made the code changes accessing MySQL on the same server (localhost), and tonight ran the code over my local network (from one machine to the MySQL server); it took 73 seconds this time. Sorry, I have not run it over the real internet - I suspect the network latency might be substantially higher that way. HTH
DavidG