Good day. I receive data from a communication channel and display it. In parallel, I serialize it into a SQLite database (using normal SQL INSERT statements). When my application exits, I do a .commit on the sqlite object.

What happens if my application is terminated abruptly in the middle? Will the latest data (within reason - not from 100 microseconds ago, but at least from a second ago) be safely in the database even without a .commit being made? Or should I commit periodically? What are the best patterns for doing this?


I tried turning autocommit on (sqlite's option) and it slows the code down a lot, by a factor of ~55 (autocommit vs. just one commit at the end). Committing every 100 inserts brings performance to within 20% of the optimal mode. So autocommit is very slow for me.

My application pumps lots of data into the DB - what can I do to make it work well?
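For concreteness, a minimal sketch of the commit-every-100-inserts pattern mentioned above, assuming Python's sqlite3 module; the database file, the readings table and its columns are placeholders:

    import sqlite3

    conn = sqlite3.connect("data.db")   # placeholder file name
    conn.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value TEXT)")

    BATCH_SIZE = 100                    # commit once per 100 inserts
    pending = 0

    def store(ts, value):
        """Insert one row; commit only once every BATCH_SIZE inserts."""
        global pending
        conn.execute("INSERT INTO readings (ts, value) VALUES (?, ?)", (ts, value))
        pending += 1
        if pending >= BATCH_SIZE:
            conn.commit()
            pending = 0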

+4  A: 

You should be performing this within a transaction, committing at appropriate points in the process. A transaction guarantees that the operation is atomic - that is, it either happens completely or not at all.

Atomicity states that database modifications must follow an “all or nothing” rule. Each transaction is said to be “atomic”: if one part of the transaction fails, the entire transaction fails. It is critical that the database management system maintain the atomic nature of transactions in spite of any DBMS, operating system or hardware failure.

If you haven't committed, the inserts won't be visible (and will be rolled back) when your process is terminated.

When do you perform these commits? When your inserts represent something consistent and complete. For example, if you have to insert two pieces of information for each message, commit after you've inserted both pieces. Don't commit after each one, since your data won't be consistent or complete.
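A sketch of that idea, assuming Python's sqlite3 module and two hypothetical tables (headers and bodies) holding the two pieces of information per message:

    import sqlite3

    conn = sqlite3.connect("data.db")   # placeholder file name

    def store_message(msg_id, header, payload):
        # Using the connection as a context manager commits if the block
        # succeeds and rolls back if it raises an exception.
        with conn:
            conn.execute("INSERT INTO headers (msg_id, header) VALUES (?, ?)",
                         (msg_id, header))
            conn.execute("INSERT INTO bodies (msg_id, payload) VALUES (?, ?)",
                         (msg_id, payload))

Either both rows for a message are committed together, or neither is.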

Brian Agnew
+2  A: 

The data is not permanent in the database without a commit. Use occasional commits to balance the speed of performing many inserts in one transaction (the more frequently you commit, the slower the inserts) against the safety of having more frequent commits.
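One way to make that trade-off concrete is to commit on a timer, so a crash loses at most a bounded window of data. A sketch, again assuming Python's sqlite3 and an illustrative readings table:

    import sqlite3
    import time

    conn = sqlite3.connect("data.db")   # placeholder file name
    COMMIT_INTERVAL = 1.0               # at most ~1 second of data at risk
    last_commit = time.monotonic()

    def store(ts, value):
        """Insert one row and commit once the interval has elapsed."""
        global last_commit
        conn.execute("INSERT INTO readings (ts, value) VALUES (?, ?)", (ts, value))
        if time.monotonic() - last_commit >= COMMIT_INTERVAL:
            conn.commit()
            last_commit = time.monotonic()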

Ned Batchelder
What is the best way to do it? Count inserts and commit once every N?
zaharpopov
It depends mostly on your application. You could do it once every N inserts, once per unit of time, or on any other tick source you have. You have to make the tradeoff for your application.
Ned Batchelder
+2  A: 

You should do a COMMIT every time you complete a logical change.

One reason for transactions is to keep uncommitted data from being visible outside the transaction. That is important because sometimes a single logical change translates into multiple INSERT or UPDATE statements. If one of the later statements in the transaction fails, the transaction can be cancelled with ROLLBACK and no change at all is recorded.

Generally speaking, no change performed in a transaction is recorded in the database until COMMIT succeeds.
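For example (a sketch using Python's sqlite3 with an illustrative table), a ROLLBACK after a failed statement also discards the earlier statements that succeeded:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE messages (msg_id INTEGER PRIMARY KEY, body TEXT)")

    try:
        conn.execute("INSERT INTO messages VALUES (1, 'first half of the change')")
        conn.execute("INSERT INTO messages VALUES (1, 'duplicate key - fails')")
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()   # the first, successful INSERT is discarded as well

    print(conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0])   # prints 0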

ddaa
Doesn't this slow my code down considerably?
zaharpopov
Frequent commits might slow down your code. As an optimization, you could try grouping several logical changes into a single transaction, but that is a departure from the correct use of transactions, and you should only do it after measuring that it significantly improves performance.
ddaa