views: 78
answers: 3

Hey guys, what is the best/most time-efficient way to insert 1000 rows into one table (JDBC/Connector-MySQL database)? (It is a buffer and needs to be dumped into the database every time it is full.)

1- One auto-generated/concatenated SQL statement?

2- for (int i = 0; i < 1000; i++) { con.prepareStatement(s.get(i)).executeUpdate(); } con.commit();

3- stored procedure ?

4- bulk data insertion via a file?

5- (your solution)

A: 

You can use JDBC batch inserts.

PreparedStatement ps = cn.prepareStatement("INSERT INTO x (a, b) VALUES (?, ?)");

for (int i = 0; i < 1000; i++) {
    ps.setString(1, "value-a-" + i); // Set the parameters for this row
    ps.setString(2, "value-b-" + i);
    ps.addBatch();                   // Add this row to the batch
}

ps.executeBatch();                   // Send the whole batch in one go

However, MySQL's Connector/J driver does not send batched statements as a single multi-row insert by default, so to get reasonable performance you will need to add rewriteBatchedStatements=true to your MySQL JDBC configuration.
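With that property set, the connection URL might look something like this (host, port, and database name here are placeholders, not from the original question):

jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true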

There is also a MySQL-specific multi-row insert syntax that you could use if you want to create a big SQL string. I would try both and test the performance.

INSERT INTO x (a,b)
     VALUES ('1', 'one'),
            ('2', 'two'),
            ('3', 'three');

Be careful with both approaches, as you may exceed MySQL's maximum packet size (max_allowed_packet) for a single request. You may need to use a batch size that is different from the number of entries in your buffer, e.g. split 1000 entries across 10 batches of 100, as sketched below.
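A minimal sketch of that chunking, reusing the hypothetical x (a, b) table and the cn connection from the example above; the batch size of 100 is just an illustration:

PreparedStatement ps = cn.prepareStatement("INSERT INTO x (a, b) VALUES (?, ?)");
int batchSize = 100;

for (int i = 0; i < 1000; i++) {
    ps.setString(1, "value-a-" + i);
    ps.setString(2, "value-b-" + i);
    ps.addBatch();
    if ((i + 1) % batchSize == 0) {
        ps.executeBatch(); // flush this chunk before the request grows too large
    }
}

ps.executeBatch(); // flush any rows left over from the last partial chunk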

Michael Barker
+2  A: 

The LOAD DATA INFILE statement is probably your best bet for performance (#4 from your list of options above), though it will probably take more code to accomplish your task, since you need to create the intermediate file, get it to the server, and then call LOAD DATA to get it into your db.

MySQL help pages quote:

The LOAD DATA INFILE statement reads rows from a text file into a table at a very high speed.
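For illustration, such a statement might look like the following; the file path, field terminator, and column list are assumptions based on the x (a, b) table used in the other answer, not something from the original post:

LOAD DATA INFILE '/tmp/buffer.csv'
INTO TABLE x
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(a, b);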

Paul Sasik
Use this approach with caution. I've seen systems that have done this in the past, and the cost of creating the intermediate file and shipping it to the server has outweighed the performance benefit of the fast file load operation.
Michael Barker
It looks very dangerous since we have two app servers concurrently accessing the same database. I was really afraid of that answer coming up. Thanks all for your replies.
Hayati Guvence
A: 

Note that JDBC auto-commits after each statement unless you explicitly tell it not to. Doing so and running X inserts before committing should improve your performance quite a bit.
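A minimal sketch of that, assuming cn is the same Connection used for the batched inserts above:

cn.setAutoCommit(false);   // stop committing after every statement
// ... run the batched inserts here ...
cn.commit();               // commit all the rows in one transaction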

Thorbjørn Ravn Andersen
I forgot to write that I was using c3p0, so I have to commit at the end.
Hayati Guvence