I have a high-traffic site that records web hits in a table, Hits. There's logic in place to discard duplicate hits (the criteria that make a hit a duplicate are arbitrary for the purposes of this example).

Problem: with multiple web servers running (against the same DB), two or more hits can arrive at the same time on different web servers, and, from each server's perspective, none of them are duplicates (whereas, had they been serialized through a single server, all but the first would have been discarded). Hence, all of them get written to the Hits table.

Without altering the DB schema to force uniqueness on a field in the Hits table, how do I force Rails to synchronize the transaction to the database to guarantee that no duplicates are written? According to the documentation, ActiveRecord transactions are only enforced on a per-connection basis, which isn't good enough (as far as I can tell).
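
For illustration, the check-then-insert logic in play looks roughly like this (a sketch; Hit is the ActiveRecord model, and the visitor_id/path columns are placeholders for whatever actually defines a duplicate):

    # Sketch of the racy pattern: two app servers can both pass the
    # exists? check before either one inserts, so both rows land.
    hit_attrs = { visitor_id: "abc123", path: "/some/page" } # placeholder criteria

    Hit.transaction do
      # This transaction only spans this connection; a second server's
      # connection can run the same exists? check at the same moment.
      Hit.create!(hit_attrs) unless Hit.exists?(hit_attrs)
    end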

A: 

Transactions in MySQL are always per-connection; there's no other way to tell what constitutes an atomic operation.

I think the only way to do this is either to place a lock on the whole table or to enforce the integrity at the schema level.
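
The table-lock option might look roughly like this in Rails against MySQL (a sketch; hit_attrs stands in for whatever defines a duplicate):

    # Serialize every writer with a MySQL table lock. Correct, but
    # every hit write now waits on a global lock.
    hit_attrs = { visitor_id: "abc123", path: "/some/page" }

    Hit.connection.execute("LOCK TABLES hits WRITE")
    begin
      Hit.create!(hit_attrs) unless Hit.exists?(hit_attrs)
    ensure
      # Always release the lock, even if the insert raises.
      Hit.connection.execute("UNLOCK TABLES")
    end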

Toby Hede
I ended up going the schema route, revamping things to allow for a unique field/index. Thanks.
D Carney
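
For reference, the schema-level route could look something like this in a recent Rails version (a sketch; the dedup_key column and migration name are assumptions, and ActiveRecord::RecordNotUnique is what modern Rails raises on a unique-index violation):

    # Migration: enforce uniqueness at the database level. A single
    # dedup_key column is used here; a composite unique index over the
    # columns that define a duplicate works equally well.
    class AddDedupKeyToHits < ActiveRecord::Migration[7.0] # version per your app
      def change
        add_column :hits, :dedup_key, :string
        add_index  :hits, :dedup_key, unique: true
      end
    end

    # At insert time, let the database arbitrate the race:
    begin
      Hit.create!(dedup_key: "abc123:/some/page")
    rescue ActiveRecord::RecordNotUnique
      # Another server won the race; discard this hit as a duplicate.
    end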