I need to do transactions (begin, commit, rollback) and locks (select for update). How can I do that in a document-model DB?

Edit:

The case is this:

  • I want to run an auction site.
  • And I'm thinking about how to handle direct purchases as well.
  • In a direct purchase I have to decrement the quantity field in the item record, but only if the quantity is greater than zero. That is why I need locks and transactions.
  • I don't know how to address that without locks and/or transactions.

Can I solve this with CouchDB?

+49  A: 

No. CouchDB uses an "optimistic concurrency" model. In the simplest terms, this just means that you send a document version along with your update, and CouchDB rejects the change if the current document version doesn't match what you've sent.

It's deceptively simple, really. You can reframe many normal transaction-based scenarios for CouchDB. You do need to sort of throw out your RDBMS domain knowledge when learning CouchDB, though. It's helpful to approach problems from a higher level, rather than attempting to mold Couch to a SQL-based world.

Keeping track of inventory

The problem you outlined is primarily an inventory issue. If you have a document describing an item, and it includes a field for "quantity available", you can handle concurrency issues like this:

  1. Retrieve the document, take note of the _rev property that CouchDB sends along
  2. Decrement the quantity field, if it's greater than zero
  3. Send the updated document back, using the _rev property
  4. If the _rev matches the currently stored number, be done!
  5. If there's a conflict (when _rev doesn't match), retrieve the newest document version

In this instance, there are two possible failure scenarios to think about. If the most recent document version has a quantity of 0, you handle it just like you would in an RDBMS and alert the user that they can't actually buy what they wanted to purchase. If the most recent document version has a quantity greater than 0, you simply repeat the operation with the updated data and start back at the beginning. This forces you to do a bit more work than an RDBMS would, and could get a little annoying if there are frequent, conflicting updates.
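
A minimal sketch of that retry loop, assuming a CouchDB instance reachable over HTTP; the database URL, document id, and the fetch-based client code are illustrative placeholders, not part of CouchDB itself:

async function purchase(dbUrl, itemId) {
    while (true) {
        // 1. Fetch the current document; CouchDB includes its _rev
        var res = await fetch(dbUrl + '/' + itemId);
        var doc = await res.json();

        // 2. Stop if there is nothing left to sell
        if (doc.quantity <= 0) {
            return false;
        }

        // 3. Decrement and send the update back with the _rev we read
        doc.quantity -= 1;
        var put = await fetch(dbUrl + '/' + itemId, {
            method: 'PUT',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(doc)
        });

        // 4. Success: our _rev matched the stored revision
        if (put.ok) {
            return true;
        }

        // 5. 409 Conflict: someone else updated the doc first, so start over
        if (put.status !== 409) {
            throw new Error('unexpected response: ' + put.status);
        }
    }
}

Calling purchase('http://localhost:5984/shop', 'hammer') would then loop until it either secures a unit or finds the quantity exhausted.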

Now, the answer I just gave presupposes that you're going to do things in CouchDB in much the same way that you would in an RDBMS. I might approach this problem a bit differently:

I'd start with a "master product" document that includes all the descriptor data (name, picture, description, price, etc). Then I'd add an "inventory ticket" document for each specific instance, with fields for product_key and claimed_by. If you're selling a model of hammer, and have 20 of them to sell, you might have documents with keys like hammer-1, hammer-2, etc, to represent each available hammer.
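
For illustration, the two document types might look something like this (only product_key and claimed_by come from the description above; everything else is a placeholder):

// Master product document
{ "_id": "hammer", "type": "product", "name": "Claw hammer", "price": 12.50 }

// One inventory ticket per physical unit for sale
{ "_id": "hammer-1", "type": "inventory_ticket", "product_key": "hammer", "claimed_by": null }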

Then, I'd create a view that gives me a list of available hammers, with a reduce function that lets me see a "total". These are completely off the cuff, but should give you an idea of what a working view would look like.

Map

function (doc) {
    // Emit one row per unclaimed ticket, keyed by product
    if (doc.type == 'inventory_ticket' && doc.claimed_by == null) {
        emit(doc.product_key, { 'inventory_ticket': doc._id, '_rev': doc._rev });
    }
}

This gives me a list of available "tickets", by product key. I could grab a group of these when someone wants to buy a hammer, then iterate through them, sending updates (using the _id and _rev) until I successfully claim one (previously claimed tickets will result in an update error).

Reduce

function (keys, values, rereduce) {
    // On rereduce the values are counts from previous reduce calls, so sum them
    if (rereduce) {
        return sum(values);
    }
    return values.length;
}

This reduce function simply returns the total number of unclaimed inventory_ticket items, so you can tell how many "hammers" are available for purchase.
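
A rough sketch of the claiming step described above, assuming the map view is saved as available_tickets in a design document named shop (both names are placeholders) and that tickets carry only the fields shown earlier:

async function claimTicket(dbUrl, productKey, userId) {
    // Fetch a batch of unclaimed tickets for this product from the view
    var res = await fetch(dbUrl +
        '/_design/shop/_view/available_tickets?reduce=false&key=' +
        encodeURIComponent(JSON.stringify(productKey)));
    var rows = (await res.json()).rows;

    // Try the tickets one at a time; a 409 means someone claimed it first
    for (var i = 0; i < rows.length; i++) {
        var ticket = {
            _id: rows[i].value.inventory_ticket,
            _rev: rows[i].value._rev,
            type: 'inventory_ticket',
            product_key: productKey,
            claimed_by: userId
        };
        var put = await fetch(dbUrl + '/' + ticket._id, {
            method: 'PUT',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(ticket)
        });
        if (put.ok) {
            return ticket._id;   // successfully claimed
        }
    }
    return null;   // every ticket in the batch was taken; refetch and retry
}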

Caveats

This solution represents roughly 3.5 minutes of total thinking for the particular problem you've presented. There may be better ways of doing this! That said, it does substantially reduce conflicting updates, and cuts down on the need to respond to a conflict with a new update. Under this model, you won't have multiple users attempting to change data in the primary product entry. At the very worst, you'll have multiple users attempting to claim a single ticket, and if you've grabbed several of those from your view, you simply move on to the next ticket and try again.

MrKurt
very useful! thanks
damian
No problem, fun mental workout!
MrKurt
It's not clear to me how having 'tickets' that you attempt to claim in sequence is a significant improvement over simply retrying the read/modify/write to update the master entity. Certainly it doesn't seem worth the extra overhead, especially if you have large amounts of stock.
Nick Johnson
From my perspective, the ticket convention is "simpler" to build. Failed updates on the master entry require you to reload the document, perform your operation again, and then save. The ticket thing allows you to try and "claim" something without having to request more data.
MrKurt
Also, it depends what sort of overhead you're worried about. You're either going to fight with increased contention, or have additional storage requirements. Given that a ticket can also double as a purchase record, I don't know that there'd be as much of a storage problem as you think.
MrKurt
I am editing a quantity field of a product document. Then I must create thousands of "tickets" if quantity=2K, for example. Then, when reducing the quantity, I must delete some tickets. That sounds completely unrelaxed to me: a lot of headache for basic use cases. Maybe I am missing something, but why not bring back the previously removed transaction behaviour and just make it optional, with something like _bulk_docs?reject_on_conflict=true? Quite useful in single-master configurations.
Sam
Bulk inserts for tickets don't seem like a huge deal to me. Depending on your setup, you could just add a few tickets at a time and put more in as quantities change. You'll likely need some sort of document per quantity reduction in any case. If you reduce the quantity because someone bought one, the ticket can also serve as a purchase record for that particular item. The same goes for returns or most anything else that reduces quantity.
MrKurt
+11  A: 

Expanding on MrKurt's answer: for lots of scenarios you don't need to have stock tickets redeemed in order. Instead of selecting the first ticket, you can select randomly from the remaining tickets. Given a large number of tickets and a large number of concurrent requests, you will get much reduced contention on those tickets compared with everyone trying to grab the first one.
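
As a small illustration, assuming a list of view rows like the one in MrKurt's answer, switching to a random pick is a one-line change:

// Instead of always trying rows[0], pick a random ticket from the batch
var row = rows[Math.floor(Math.random() * rows.length)];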

kerrr
Yes, this would make sense!
MrKurt
+1  A: 

Actually, you can in a way. Have a look at the HTTP Document API and scroll down to the heading "Modify Multiple Documents With a Single Request".

Basically you can create/update/delete a bunch of documents in a single POST request to the URI /{dbname}/_bulk_docs, and they will either all succeed or all fail. The documentation does caution that this behaviour may change in the future, though.
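
For illustration, the request body is just a JSON object with a docs array (the database name and documents here are placeholders):

POST /shop/_bulk_docs
Content-Type: application/json

{
  "docs": [
    { "_id": "hammer-1", "type": "inventory_ticket", "product_key": "hammer", "claimed_by": null },
    { "_id": "hammer-2", "type": "inventory_ticket", "product_key": "hammer", "claimed_by": null }
  ]
}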

EDIT: As predicted, as of version 0.9 bulk docs no longer work this way.

Evan
That wouldn't really help in the situation being discussed, i.e. contention on single docs from multiple users.
kerrr
Starting with CouchDB 0.9, the semantics of bulk updates have changed.
Barry Wark
A: 

How do you do the classic "bank account" example of a database transaction? I.e. you want to atomically withdraw $100 from Alice's account and deposit it into Bob's. There are millions of accounts so you can't really expect Alice's and Bob's accounts to be the same document.

This point has now been discussed in SO podcast #59.
geocoin
This is not an answer, but a question. If you want to know the answer, ask the question!
Daniel
A simple answer is double-entry bookkeeping <http://en.wikipedia.org/wiki/Double-entry_bookkeeping>. A transfer from Alice's account to Bob's is represented by a debit document for $100 with Alice's account id, and a credit document with Bob's account id. You sum the debit and credit documents referencing an account to compute the account's balance. If you use CouchDB's bulk update API <http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API> you can create both the debit and credit documents in a single atomic operation. Or you can put both the debit and the credit in one document.
Jesse Hallett
+2  A: 

A design pattern for RESTful transactions is to create a "tension" in the system. For the popular example use case of a bank account transfer, you must ensure that the totals of both involved accounts are updated:

  • Create a transaction document "transfer USD 10 from account 11223 to account 88733". This creates the tension in the system.
  • To resolve the tension, scan all transaction documents and:
    • If the source account has not been updated yet, update it (-10 USD)
    • If the source account was updated but the transaction document does not show this, update the transaction document (e.g. set a "sourcedone" flag in the document)
    • If the target account has not been updated yet, update it (+10 USD)
    • If the target account was updated but the transaction document does not show this, update the transaction document
    • If both accounts have been updated, you can delete the transaction document or keep it for auditing.

The scanning for tension should be done in a backend process over all "tension documents" to keep the periods of tension in the system short. In the above example there will be a short window of anticipated inconsistency when the first account has been updated but the second has not. This must be taken into account the same way you would deal with eventual consistency if your CouchDB is distributed.
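
As a concrete illustration, a tension document for the transfer above might look like this (the field names are just one possible choice, not a fixed schema):

{
  "_id": "transfer-0001",
  "type": "transfer",
  "amount": 10,
  "currency": "USD",
  "source_account": "11223",
  "target_account": "88733",
  "source_done": false,
  "target_done": false
}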

Another possible implementation avoids the need for transactions completely: just store the tension documents and evaluate the state of your system by evaluating every involved tension document. In the example above this would mean that the total for an account is determined only as the sum of the values in the transaction documents in which that account is involved. In CouchDB you can model this very nicely as a map/reduce view.
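
A rough sketch of such a view, assuming transfer documents shaped like the example above (the field names are placeholders):

Map

function (doc) {
    if (doc.type == 'transfer') {
        // Each transfer is a debit for the source and a credit for the target
        emit(doc.source_account, -doc.amount);
        emit(doc.target_account, doc.amount);
    }
}

Reduce

function (keys, values, rereduce) {
    // Balance = sum of all credits and debits; summing also works on rereduce
    return sum(values);
}

Querying this view with group=true then yields the current balance for every account.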

ordnungswidrig
But what about cases where the account is debited but the tension doc isn't changed? Any failure scenario between those two points, if they are not atomic, will cause permanent inconsistency, right? Something about the process has to be atomic, that's the point of a transaction.
Ian Varley
Yes, you're correct: in this case, while the tension is not resolved, there will be inconsistency. However, the inconsistency is only temporary, until the next scan for tension documents detects it. That's the trade-off in this case: a kind of eventual consistency over time. As long as you decrement the source account first and increment the target account later, this can be acceptable. But beware: tension documents won't give you ACID transactions on top of REST. They can be a good trade-off between pure REST and ACID, though.
ordnungswidrig
Imagine every tension document has a timestamp, and account documents have a 'last-tension-applied' field - or a list of applied tensions. When you debit the source account you also update the 'last-tension-applied' field. Those two operations are atomic because they are on the same document. The target account also has a similar field. That way the system can always tell which tension docs have been applied to which accounts.
Jesse Hallett