To handle concurrency issues, is locking (any form of locking, whether row, table, or database locking) a good solution?
If not, how should concurrency issues be handled?
I think not, because it pushes the problem all the way "down" and makes your application more complex. Most languages have excellent concurrency-handling techniques, and even then the best approach is to write code that handles concurrency "natively".
For more info on concurrency in Java: http://java.sun.com/docs/books/tutorial/essential/concurrency/index.html
Do you mean handling concurrency in your application, or solving a concurrency problem you're having with the database? If the former, locking doesn't strike me as a good approach. If the latter, it may be your only answer short of re-engineering your schemas.
Database locking - as opposed to table or row locking - is a bad way of dealing with concurrency; it rules out concurrency.
You first have to define your objective. When requests conflict, should the last user win or the first? Database-wide locking is certainly a bad approach. Try to lock the table/rows as late as possible and release the lock as soon as possible.
There are many reasonable alternatives to coding DBMS locking of some sort yourself. Keep in mind that some kind of locking really is always happening (atomic actions, etc.), but the key is that you don't want to go there if you don't have to. You don't want to dine with philosophers if you don't have to.

Transactions are one way of getting around locking, but that is mostly for commits. Using a field that marks the record as dirty (the dirty-bit pattern) is another way; just make sure you set it in an atomic manner.

The language often has a suitable solution, as referenced by previous posts, but the language typically only supports application-level (process-to-process) concurrency, and sometimes the concurrency has to be in the database. I don't want to assume you have a rich application layer, but if you do, there are many abstraction layers that handle this for you. TopLink by Oracle is free and is the more robust brother of Hibernate; both help you manage your database concurrency challenges through abstraction, dirty bits, caching, and lazy locking. You really don't want to implement these yourself unless you are coding for a school or personal project. Understand the problem, but stand on the shoulders of giants.
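To make the dirty-bit pattern above concrete, here is a minimal sketch using Python's sqlite3. The table, column, and function names are hypothetical, invented for illustration; the key point is that the claim is a single atomic UPDATE whose WHERE clause checks the flag, so exactly one concurrent caller can win.

```python
import sqlite3

# Hypothetical schema for illustration; any table with a flag column works.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, dirty INTEGER DEFAULT 0)")
conn.execute("INSERT INTO orders (id, total) VALUES (1, 100.0)")
conn.commit()

def try_claim(conn, order_id):
    """Atomically set the dirty flag; only the one caller that flips 0 -> 1 wins."""
    cur = conn.execute(
        "UPDATE orders SET dirty = 1 WHERE id = ? AND dirty = 0", (order_id,))
    conn.commit()
    return cur.rowcount == 1  # one row changed means we claimed the record

def release(conn, order_id):
    """Clear the flag once the caller is done with the record."""
    conn.execute("UPDATE orders SET dirty = 0 WHERE id = ?", (order_id,))
    conn.commit()

print(try_claim(conn, 1))  # first claimant wins: True
print(try_claim(conn, 1))  # record is already dirty: False
release(conn, 1)
```

The same single-statement check-and-set idea carries over to any DBMS, because a single UPDATE is atomic with respect to other statements.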
If you believe Oracle, no, not at all. That's because Oracle went to great lengths to avoid it.
The problem is that readers can block writers and writers can block readers: a writer has to wait until all readers have finished with a row before it can write, which delays the writing process and its caller. Exclusive locks (for writing) are held until the end of the transaction, in case the transaction has to be rolled back; this stops other transactions from seeing the new value until the transaction commits.
In practice locking is generally fine if there isn't too much contention, the same as with any concurrent programming. If there's too much contention for a row/page/table (not many database servers do whole-DB locking), it will cause the transactions to execute in turn rather than concurrently.
Oracle uses row-versioning, where instead of locking a row to write it, a new version of the row is created instead. Readers that need to repeat their reads remember which version of the row they read. However, an error will occur if a reader that's remembering its reads tries to update a row that has been updated by another writer since this transaction read it; this is to stop lost updates. To ensure you can update a row, you have to say that the SELECT is FOR UPDATE; if you do that, it takes a lock - only one transaction can hold a row FOR UPDATE at a time, and a conflicting transaction has to wait.
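Oracle manages row versions internally, but the lost-update check described above can be sketched with an explicit version column, a common way to emulate the same guarantee on any database. This is an illustration with made-up table and function names, not Oracle's actual mechanism: the UPDATE only succeeds if the version is still the one the session read, so a stale writer is rejected instead of silently overwriting.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL, version INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 50.0, 1)")
conn.commit()

def read(conn, acct_id):
    """Return (balance, version) so the caller can remember what it read."""
    return conn.execute(
        "SELECT balance, version FROM accounts WHERE id = ?", (acct_id,)).fetchone()

def update(conn, acct_id, new_balance, version_read):
    """Succeeds only if no other writer has bumped the version since our read."""
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_balance, acct_id, version_read))
    conn.commit()
    return cur.rowcount == 1

# Two sessions read the same version, then both try to write.
balance, version = read(conn, 1)
ok_first = update(conn, 1, balance + 10, version)   # wins; version becomes 2
ok_second = update(conn, 1, balance - 10, version)  # stale version; rejected
print(ok_first, ok_second)  # True False
```

The rejected writer then re-reads and retries, which is the same "error on conflicting update" behavior the versioning databases give you, just made explicit.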
SQL Server 2005 and later support Snapshot Isolation, which is their name for row-versioning. Again, you should ask for update locks if you need to update some data you just read - in SQL Server, use WITH (UPDLOCK).
A further problem with locking is the likelihood of deadlocks. This is simply where two transactions each hold a lock on a resource the other needs, or in general a cycle of transactions hold locks that each other need to progress. The database server will generally detect this deadlock and kill one of the transactions, rolling it back; you then need to retry the operation. Any situation where multiple concurrent transactions modify the same rows has potential for deadlock. The deadlock occurs when the transactions touch the rows in different orders, and it's very hard to enforce the order the database server will use (generally you want the optimizer to pick the fastest order, which won't necessarily be consistent across different queries).
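When you do control the acquisition order yourself (in application code rather than inside the optimizer), the classic fix is to acquire locks in one global order, which makes the deadlock cycle impossible. A minimal sketch with Python threads, assuming two hypothetical resources:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(src_lock, dst_lock, results, name):
    # Sort the locks by a stable key (id) so every thread acquires them in
    # the same global order; a cycle of waiters can then never form.
    first, second = sorted((src_lock, dst_lock), key=id)
    with first:
        with second:
            results.append(name)

results = []
# Each thread names the locks in the opposite order; without the sort above,
# this interleaving could deadlock.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2'] -- both complete, no deadlock
```

Inside the database you usually can't impose this ordering on the engine, which is why the retry-on-deadlock loop mentioned above remains necessary.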
Generally I would suggest the same as with threading - go with locks until you can prove that they're causing a scalability problem, then work out how to make the most critical sections lock-free.