I've noticed the following behavior.
I have a file of about 3 MB containing several thousand rows. I split each row and create prepared-statement batch entries from it (about 250,000 statements in total).
What I do is:
    prepareStatement
    addBatch
    every 200 rows {
        executeBatch
        clearBatch()
    }
    at the end:
    commit()
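
Here is a minimal, self-contained sketch of that pattern (the H2 in-memory URL, the items table, and the semicolon delimiter are placeholders, not my real setup):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsert {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL; the items table is assumed to exist already.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
                 BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
                conn.setAutoCommit(false); // one transaction for the whole file
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO items (col1, col2) VALUES (?, ?)")) {
                    int count = 0;
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] parts = line.split(";"); // placeholder delimiter
                        ps.setString(1, parts[0]);
                        ps.setString(2, parts[1]);
                        ps.addBatch();
                        if (++count % 200 == 0) { // flush every 200 rows
                            ps.executeBatch();
                            ps.clearBatch();
                        }
                    }
                    ps.executeBatch(); // flush the remaining rows
                    conn.commit();     // single commit at the end: all-or-nothing
                } catch (Exception e) {
                    conn.rollback();   // if one batch fails, nothing is persisted
                    throw e;
                }
            }
        }
    }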
Memory usage climbs to around 70 MB, though without an out-of-memory error. Is it possible to get the memory usage down while keeping the transactional behavior (if one insert fails, all fail)?
I was able to lower the memory usage by calling commit() together with each executeBatch() and clearBatch(), but that causes a partial insert of the total set if a later batch fails.
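
Concretely, that variant just moves the commit inside the loop, something like:

    if (++count % 200 == 0) {
        ps.executeBatch();
        ps.clearBatch();
        conn.commit(); // keeps memory low, but already-committed batches remain if a later one fails
    }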