Let's take an example UI for editing Customer information. The user edits 5 fields and presses "submit". Because we're good abstractionists, we've taken the edits to the 5 fields and made them separate commands, each describing the edit to a particular field.
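To make the setup concrete, here's a minimal sketch of what those per-field commands might look like. The class and field names are purely illustrative (my real code is .NET/NHibernate, not Python), but the shape is the same: one small command object per edited field.

```python
from dataclasses import dataclass

# Hypothetical per-field edit commands; the names are illustrative only.
@dataclass
class ChangeCustomerName:
    customer_id: int
    name: str

@dataclass
class ChangeCustomerEmail:
    customer_id: int
    email: str

# Submitting the form produces one command per edited field.
commands = [
    ChangeCustomerName(customer_id=42, name="Ada Lovelace"),
    ChangeCustomerEmail(customer_id=42, email="ada@example.com"),
]
```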
The command handlers end up setting properties on objects that are persisted with a tool such as NHibernate, so we end up issuing UPDATEs against the database.
My main question is: from a database performance point of view, does it make more sense to issue a single UPDATE statement, or is it okay to issue 5 separate UPDATE statements?
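For clarity, both options end up with the same row state; the difference is in round-trips and how long locks are held. A quick sketch of the two shapes of SQL, using Python's stdlib `sqlite3` as a stand-in for the real database (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'old', 'old@example.com')")

# Option A: one UPDATE per field, i.e. one per command handler.
conn.execute("UPDATE customer SET name = ? WHERE id = ?", ("new", 1))
conn.execute("UPDATE customer SET email = ? WHERE id = ?", ("new@example.com", 1))

# Option B: a single UPDATE covering all edited fields, which is roughly
# what a flush-time batch from the ORM would look like.
conn.execute("UPDATE customer SET name = ?, email = ? WHERE id = ?",
             ("new", "new@example.com", 1))
conn.commit()
```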
I like the idea of a command handler being a transactional boundary. Either it works and the transaction is committed, or it doesn't and the transaction is rolled back (and we possibly re-queue to try again). The success or failure of each command is independent of the others.
The other approach might be to wrap the handling of these commands in a single database transaction, so that when NHibernate decides to flush, it ends up sending a single UPDATE. But this makes the handling of the commands all-or-nothing, and I can't necessarily execute them asynchronously.
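The wrapping-transaction alternative looks like this in the same `sqlite3` sketch (the `handle_batch` helper is again hypothetical): all the commands share one transaction, so a single failure rolls every change back.

```python
import sqlite3

def handle_batch(conn, commands):
    """All commands share one transaction: either every UPDATE
    commits together, or one failure rolls them all back."""
    try:
        with conn:
            for sql, params in commands:
                conn.execute(sql, params)
        return True
    except sqlite3.Error:
        return False
```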
But what if I wanted to make sure all of the commands executed properly, and roll back entirely in the case of failure? Maybe there is a single distributed transaction containing many smaller transactions? This could lead to database contention, increasing the risk of deadlocks, which slows down processing (further increasing the risk of deadlocks). But at the same time, consistency is key. I suppose it's a trade-off between availability and consistency (see the CAP theorem).