The detail that could make your solution administration-intensive is getting rid of locks after crashes or connectivity failures. That's where the real tradeoff between optimistic and pessimistic locking lies: manually merging or redoing edits when optimistic locking fails is a pain, but mopping up stale locks after crashes under a pessimistic, persistent-lock model creates its own headaches (as anyone who supported users of Pervasive-backed accounting systems in the 90s will tell you at length, given the opportunity).
One answer is to use your RDBMS's own transaction and concurrency mechanisms: grab the record with SELECT ... FOR UPDATE, or whatever syntax suits your environment and isolation level. If one of your clients crashes or gets disconnected, the transaction rolls back and the lock is released.
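A rough sketch of that pattern (PostgreSQL/MySQL-style syntax; the `invoices` table and column names are invented for illustration):

```sql
-- Pessimistic lock held for the life of the transaction.
-- If the client crashes or disconnects, the transaction rolls back
-- and the row lock is released automatically.
BEGIN;

SELECT *
FROM   invoices
WHERE  invoice_id = 42
FOR UPDATE;          -- other sessions block here (or error immediately
                     -- if they use FOR UPDATE NOWAIT, where supported)

-- ... the user edits the record ...

UPDATE invoices
SET    total = 199.00
WHERE  invoice_id = 42;

COMMIT;              -- lock released
```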
In a connectionless environment like the web, or one where connections get lost and recovered frequently, a session-based model with a session timeout could also work (see the sketch after the list):
- Attempt to clear any existing lock on the record if it belongs to an expired session
- Attempt to lock the record to the session id (fails if another live session still holds the lock)
- Select the locked record (no record is returned if the lock attempt failed)
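Those three steps might look something like this in SQL (the `widgets` table, the `locked_by`/`locked_at` columns, the session id, and the 20-minute timeout are all invented for illustration; the interval syntax is PostgreSQL's):

```sql
-- Step 1: clear a stale lock left behind by an expired session.
UPDATE widgets
SET    locked_by = NULL,
       locked_at = NULL
WHERE  widget_id = 42
  AND  locked_at < NOW() - INTERVAL '20 minutes';

-- Step 2: claim the lock for this session.
-- Affects 0 rows if another session still holds a live lock.
UPDATE widgets
SET    locked_by = 'session-abc123',
       locked_at = NOW()
WHERE  widget_id = 42
  AND  locked_by IS NULL;

-- Step 3: fetch the record only if we actually hold the lock.
-- Returns no row if step 2 failed.
SELECT *
FROM   widgets
WHERE  widget_id = 42
  AND  locked_by = 'session-abc123';
```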
So the lock gets released automatically when the session expires: no manually removing locks after crashes, and some tolerance of client/connectivity problems. It does take a bit more work to code, though.