Short background: We are just starting to migrate/reimplement an ERP system to Java with Hibernate, targeting 50-100 concurrent users. We use MS SQL Server as the database server, which is good enough for this load.

Now, the old system doesn't use any transactions at all; for critical parts (e.g. stock changes) it relies on setting and releasing manual locks (using flags). That's something like manual transaction management, but there are sometimes problems with data inconsistency. In the new system we would like to use transactions to wipe out these problems.

Now the question: What would be a good/reasonable default transaction isolation level to use for an ERP system, given a usage of about 85% OLTP and 15% OLAP? Or should I always decide on a per-task basis which isolation level to use?

And as a reminder, the four ANSI transaction isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SERIALIZABLE.
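For reference, this is roughly how a session-wide default could be configured in Hibernate (a sketch, untested; the hibernate.connection.isolation property takes the numeric java.sql.Connection constant):

    import java.sql.Connection;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class SessionFactoryBuilder {
        // Sketch: hibernate.connection.isolation takes the numeric
        // java.sql.Connection constant (TRANSACTION_READ_COMMITTED == 2).
        public static SessionFactory build() {
            return new Configuration()
                    .configure() // loads hibernate.cfg.xml from the classpath
                    .setProperty("hibernate.connection.isolation",
                            String.valueOf(Connection.TRANSACTION_READ_COMMITTED))
                    .buildSessionFactory();
        }
    }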

+1  A: 

For SQL Server (and probably most major RDBMS), I'd stick with the default. For SQL Server, this is READ COMMITTED. Anything more and you start overtaxing the DB, anything less and you've got consistency issues.

chadmyers
The default for SQL Server may be READ COMMITTED, but I believe the default for ADO.NET is SERIALIZABLE, in which case that matters more if he's using .NET.
Robert C. Barth
@Robert This is not correct. The default is whatever the default is on the SQL Server. When you change the isolation level on the SqlTransaction, for example, it executes SET TRANSACTION ISOLATION LEVEL XXXX on the server.
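The JDBC side behaves the same way; calling setTransactionIsolation on the Connection sends the corresponding SET statement to the server. A minimal sketch (URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class IsolationDemo {
        public static void main(String[] args) throws SQLException {
            // Placeholder URL/credentials -- adjust for your environment.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=erp", "user", "secret")) {
                // The driver issues SET TRANSACTION ISOLATION LEVEL SERIALIZABLE here.
                con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                con.setAutoCommit(false);
                // ... transactional work ...
                con.commit();
            }
        }
    }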
chadmyers
+1  A: 

Don't forget about SNAPSHOT, which is right below SERIALIZABLE.

It depends on how important it is for the data to be accurate in the reports. It really is a task-by-task thing.
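Note that there is no java.sql.Connection constant for SNAPSHOT; with Microsoft's JDBC driver it is exposed as a vendor-specific constant, and the database has to allow it first (ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON). A sketch assuming that driver:

    import java.sql.Connection;
    import java.sql.SQLException;
    import com.microsoft.sqlserver.jdbc.SQLServerConnection;

    public class SnapshotIsolation {
        // Assumes ALLOW_SNAPSHOT_ISOLATION is already enabled on the database.
        public static void useSnapshot(Connection con) throws SQLException {
            // SNAPSHOT has no java.sql.Connection constant, so the value
            // comes from the vendor driver class instead.
            con.setTransactionIsolation(SQLServerConnection.TRANSACTION_SNAPSHOT);
        }
    }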

Robert C. Barth
+6  A: 

99 times out of 100, read committed is the right answer. That ensures that you only see changes that have been committed by the other session (and, thus, results that are consistent, assuming you've designed your transactions correctly). But it doesn't impose the locking overhead (particularly in non-Oracle databases) that repeatable read or serializable impose.

Very occasionally, you may want to run a report where you are willing to sacrifice accuracy for speed and set a read uncommitted isolation level. That's rarely a good idea, but it is occasionally a reasonably acceptable workaround to lock contention issues.

Serializable and repeatable read are occasionally used when you have a process that needs to see a consistent set of data over the entire run, regardless of what other transactions are doing at the time. It may be appropriate to set a month-end reconciliation process to serializable, for example, if there is a lot of procedural code, a possibility that users are going to be making changes while the process is running, and a requirement that the process always sees the data as it existed at the time the reconciliation started.
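A sketch of what that might look like with Hibernate (assuming Session.doWork is available; the class and method names here are hypothetical):

    import java.sql.Connection;
    import org.hibernate.Session;
    import org.hibernate.Transaction;

    public class MonthEndReconciliation {
        // Hypothetical example: pin the whole reconciliation run to SERIALIZABLE
        // so it sees consistent data even while users keep posting changes.
        public void run(Session session) {
            session.doWork(con ->
                    con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE));
            Transaction tx = session.beginTransaction();
            try {
                // ... reconciliation queries and updates ...
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            }
        }
    }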

Justin Cave
Depending on the user count, I wouldn't advise running reports on your primary OLTP system. Have a log-shipped (or otherwise replicated) secondary server for reporting, with a denormalized version of the data.
chadmyers
+2  A: 

It really depends a lot on how you design your application; the easy answer is to just run at READ_COMMITTED.

You can make an argument that, if you design your system with it in mind, you could use READ_UNCOMMITTED as the default and only increase the isolation level when you need it. The vast majority of your transactions are going to succeed anyway, so reading uncommitted data won't be a big deal.
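If you go that route, the escalation can be wrapped in a small helper; a sketch (the helper is hypothetical, and note that changing the level mid-transaction is implementation-defined in JDBC, so call it between transactions):

    import java.sql.Connection;
    import java.sql.SQLException;

    public final class Isolation {
        // Hypothetical helper: raise the isolation level for one critical unit
        // of work (e.g. a stock change), then restore the previous default.
        public static void withLevel(Connection con, int level, Runnable work)
                throws SQLException {
            int previous = con.getTransactionIsolation();
            con.setTransactionIsolation(level);
            try {
                work.run();
            } finally {
                con.setTransactionIsolation(previous);
            }
        }
    }

Called, for example, as Isolation.withLevel(con, Connection.TRANSACTION_SERIALIZABLE, () -> adjustStock(...)) around the critical section.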

The way isolation levels affect your queries depends on your target database. For instance, databases like Sybase and MSSQL must lock more resources when you run at READ_COMMITTED than databases like Oracle do.

BeWarned