views: 9057
answers: 4

I understand the differences between optimistic and pessimistic locking*. Now could someone explain to me when I would use either one in general?

And does the answer to this question change depending on whether or not I'm using a stored procedure to perform the query?

*But just to check, optimistic means "don't lock the table while reading" and pessimistic means "lock the table while reading."

+2  A: 

Optimistic assumes that nothing's going to change while you're reading it.

Pessimistic assumes that something will and so locks it.

If it's not essential that the data is read perfectly, use optimistic locking. You might get the odd 'dirty' read, but it's far less likely to result in deadlocks and the like.

Most web applications are fine with dirty reads - on the rare occasion the data doesn't exactly tally, the next reload puts it right.

For exact data operations (like in banking), use pessimistic locking. It's essential that the data is read accurately, with no unseen changes - the extra locking overhead is worth it.

Oh, and Microsoft SQL Server defaults to page locking - basically the row you're reading and a few either side. Row locking is more accurate but much slower. It's often worth setting your transactions to read committed, or using a NOLOCK hint, to avoid deadlocks while reading.
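For example, a minimal JDBC sketch of a low-locking read, assuming the Microsoft SQL Server JDBC driver, a placeholder connection string and a hypothetical Products table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReadIsolationExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string - adjust for your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=Shop;user=app;password=secret")) {

                // READ COMMITTED is SQL Server's default isolation level; under it
                // shared read locks are released as soon as each row has been read.
                conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

                try (Statement stmt = conn.createStatement();
                     // WITH (NOLOCK) is a SQL Server table hint equivalent to
                     // READ UNCOMMITTED - you may see the dirty reads described above.
                     ResultSet rs = stmt.executeQuery(
                             "SELECT Id, Name FROM Products WITH (NOLOCK)")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("Id") + ": " + rs.getString("Name"));
                    }
                }
            }
        }
    }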

Keith
JPA Optimistic Locking allows you to guarantee read-consistency.
Gili
+6  A: 

Optimistic locking is used when you don't expect many collisions. It costs less to do a normal operation, but if a collision DOES occur you pay a higher price to resolve it, as the transaction is aborted.

Pessimistic locking is used when a collision is anticipated. The transactions which would violate synchronization are simply blocked.

To select the proper locking mechanism you have to estimate the number of reads and writes and plan accordingly.
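To make that cost trade-off concrete, here's a hypothetical retry wrapper in Java (the names are placeholders for illustration, not a real library API):

    import java.util.function.Supplier;

    // Sketch of the trade-off: the happy path is cheap (no blocking), but a
    // collision aborts the transaction and the whole unit of work is redone.
    public class OptimisticRetry {

        /** Placeholder for whatever your data layer throws on a version conflict. */
        static class CollisionException extends RuntimeException {}

        static <T> T runWithRetry(Supplier<T> unitOfWork, int maxAttempts) {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return unitOfWork.get();   // normal case: no locks held up front
                } catch (CollisionException e) {
                    // collision: pay the higher price - the aborted work is re-run
                }
            }
            throw new IllegalStateException("Gave up after " + maxAttempts + " collisions");
        }
    }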

Ilya Kochetov
+24  A: 

Optimistic Locking is a strategy where you read a record, take note of a version number and check that the version hasn't changed before you write the record back. When you write the record back you filter the update on the version to make sure it's atomic (i.e. the record hasn't been updated between when you checked the version and when you write it back to disk) and you update the version in one hit.

If the record is dirty (i.e. a different version to yours) you abort the transaction and the user can restart it.
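A minimal JDBC sketch of that version-filtered update, assuming a hypothetical accounts table with an integer version column:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OptimisticUpdate {

        /** Returns true if the write succeeded, false if someone changed the row first. */
        static boolean updateBalance(Connection conn, long id, long newBalance,
                                     int versionReadEarlier) throws SQLException {
            String sql = "UPDATE accounts "
                       + "SET balance = ?, version = version + 1 "
                       + "WHERE id = ? AND version = ?";   // filter on the version we read
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, newBalance);
                ps.setLong(2, id);
                ps.setInt(3, versionReadEarlier);
                // 0 rows updated means the record changed under us: roll back
                // and let the user restart the operation.
                return ps.executeUpdate() == 1;
            }
        }
    }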

This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks as the connections are taken from a pool and you may not be using the same connection from one access to the next.

Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid deadlocks. To use pessimistic locking you need either a direct connection to the database (as would typically be the case in a two-tier client-server application) or an externally available transaction ID that can be used independently of the connection.
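Over a direct JDBC connection to SQL Server, for example, that might look like the sketch below (the accounts table is again hypothetical; UPDLOCK/ROWLOCK are SQL Server hints - on most other databases you'd use SELECT ... FOR UPDATE instead):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PessimisticUpdate {

        static void debit(Connection conn, long id, long amount) throws SQLException {
            conn.setAutoCommit(false);  // the lock lives for the length of the transaction
            try {
                long balance;
                try (PreparedStatement select = conn.prepareStatement(
                        "SELECT balance FROM accounts WITH (UPDLOCK, ROWLOCK) WHERE id = ?")) {
                    select.setLong(1, id);
                    try (ResultSet rs = select.executeQuery()) {
                        if (!rs.next()) {
                            throw new SQLException("no account with id " + id);
                        }
                        balance = rs.getLong("balance");   // the row is now locked for us
                    }
                }
                try (PreparedStatement update = conn.prepareStatement(
                        "UPDATE accounts SET balance = ? WHERE id = ?")) {
                    update.setLong(1, balance - amount);
                    update.setLong(2, id);
                    update.executeUpdate();
                }
                conn.commit();      // releases the row lock
            } catch (SQLException e) {
                conn.rollback();    // also releases the lock
                throw e;
            }
        }
    }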

In the latter case you open the transaction with the TxID and then reconnect using that ID. The DBMS maintains the locks and allows you to pick the session back up through the TxID. This is how distributed transactions using two-phase commit protocols (such as XA or COM+ Transactions) work.

ConcernedOfTunbridgeWells
Optimistic locking doesn't necessarily use a version number. Other strategies include using (a) a timestamp or (b) the entire state of the row itself. The latter strategy is ugly but avoids the need for a dedicated version column, in cases where you aren't able to modify the schema.
Andrew Swan
+3  A: 

In addition to what's been said already, it's worth noting that optimistic locking tends to improve concurrency at the expense of predictability. Pessimistic locking tends to reduce concurrency, but is more predictable.

You pays your money, etc

skaffman