views:

254

answers:

3

I have a table with a name and a name_count. When I insert a new record, I first look up the maximum name_count for that name, then insert the record with that maximum + 1. Works great... except that with MySQL 5.1 and Hibernate 3.5, by default the reads don't respect transaction boundaries. Two of these inserts for the same name can happen at the same time and end up with the same name_count, which completely screws my application!

Unfortunately, there are some specific situations where the above is actually fairly common. So what do I do? I assume I can take a pessimistic lock, where any row I read is locked against other readers until I commit or roll back my transaction. Or I can use optimistic locking with a version column and automatically retry until there are no conflicts?

What's the best approach for my situation, and how do I specify it in Hibernate 3.5 and MySQL 5.1? The table in question is massive and frequently accessed.
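To make the race concrete, here is a minimal sketch in plain Java (no database; the class, field, and thread counts are invented for illustration): reading the max and then inserting max + 1 is only safe when both steps happen under one lock, which is the effect a pessimistic database lock would give you.

```java
import java.util.ArrayList;
import java.util.List;

public class NameCountRace {
    // Hypothetical in-memory stand-in for the rows of one name.
    private final List<Integer> rows = new ArrayList<>();
    // Plays the role of a pessimistic lock: read-max and insert are one atomic step.
    private final Object lock = new Object();

    public int insertLocked() {
        synchronized (lock) {
            int max = rows.stream().max(Integer::compare).orElse(0);
            rows.add(max + 1);
            return max + 1;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        NameCountRace table = new NameCountRace();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100; j++) table.insertLocked();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // With the lock held across read + insert, all 800 counts are distinct.
        long distinct = table.rows.stream().distinct().count();
        System.out.println(distinct);
    }
}
```

If `insertLocked` read the max and inserted without the `synchronized` block, two threads could observe the same max and write duplicate counts, which is exactly the failure described above.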

+2  A: 

This is why most people use a SEQUENCE to create unique numbers. That said, with your scheme you must lock the whole table (LOCK TABLES). The problem: you must lock every table the statements need (i.e. anything Hibernate may touch), or you will get errors. Then you must unlock the tables, and both operations have to stay in sync with the rest of the transaction.
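In raw MySQL the table-lock approach looks roughly like this (a sketch; the table, column, and value names are made up here). Every table the session touches while the lock is held must be named in the LOCK TABLES statement:

```sql
LOCK TABLES names WRITE;

SELECT MAX(name_count) FROM names WHERE name = 'smith';
-- suppose it returned 3; insert with max + 1
INSERT INTO names (name, name_count) VALUES ('smith', 4);

UNLOCK TABLES;
```

Nothing else can read or write `names` between the two statements, which is what makes max + 1 safe here, and also what makes this approach painful on a large, busy table.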

I hope this gives you an idea why people use sequences: they have some small drawbacks, like gaps, but everything else is much worse.

[EDIT] You can define a sequence and then use a native SQL query to get the next value. You can try to define a sequence generator (see the docs) but maybe the mapping is only allowed on Id fields.

Re "200 Million names": As long as the database can store the number, you can also define a sequence over it.

Re "row based locking": Which row do you plan to lock? The one with the max value? I'm not sure that max() will block on a locked row. What you could try is a trigger. Since triggers are atomic, no one can insert a row while it runs. But triggers are a bit hard to maintain.

Aaron Digulla
I guess a few questions... How do I use a sequence in Hibernate? Can I really use sequences for my situation? There might be 200 million unique names that I'd need a sequence for. Regarding locks, I can't lock the whole table; it's massive and frequently accessed. Can't I do row-based locking with Hibernate/MySQL in this situation?
The documentation you linked to only covers autoincrement ids. That would work, but I'd have to create 200 million tables in my case, and I'm not sure that's feasible or desirable. I explained my situation as calling max(), but I don't actually do that, since max() is unusably slow on large tables. Instead I read the row with the same name and the largest count (order by name_count desc limit 1). So I would want that row locked.
A: 

MySQL doesn't support sequences, so they have to be simulated. This is an interesting recipe for it: http://forums.mysql.com/read.php?61,143867,194058#msg-194058

Note that the table type is MyISAM, so it ignores transaction isolation and returns the next counter value on each request.
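The linked recipe boils down to a MyISAM counter table combined with MySQL's LAST_INSERT_ID(expr) form; a rough sketch (the table and column names here are made up, not taken from the post):

```sql
CREATE TABLE name_seq (
  name     VARCHAR(255) NOT NULL PRIMARY KEY,
  next_val INT UNSIGNED NOT NULL DEFAULT 0
) ENGINE = MyISAM;

-- A single UPDATE is atomic even on MyISAM (per-statement table lock),
-- and LAST_INSERT_ID(expr) stashes the new value for this connection:
UPDATE name_seq SET next_val = LAST_INSERT_ID(next_val + 1) WHERE name = 'smith';

-- Read it back; LAST_INSERT_ID() is per-connection, so concurrent
-- sessions never see each other's values:
SELECT LAST_INSERT_ID();
```

The same two statements work from JDBC on a single Connection, since LAST_INSERT_ID() is scoped to that connection.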

rsvato
Is that really safe? And how do you get the value back? Is there a mechanism in JDBC to do so?
A: 

If you want to make it database agnostic (avoiding sequences) and not have to run a max() before you insert, try using a GUID instead:

/**
 * The unique id. This id is generated when this object is persisted. The id is a 32 character
 * UUID, which gives each entity a completely unique identifier across all databases, JVMs and
 * entities.
 */
@Id
@GeneratedValue(generator = "system-uuid")
@GenericGenerator(name = "system-uuid", strategy = "uuid")
@Column(length = 32, name = EntityObject.Columns.ID)
@DocumentId
private String id;

This also has the benefit that your primary keys are unique across every table in your database, which lends itself well to distributed databases.

We've used this with many databases and we can move between them easily.

Kango_V
I don't need Ids generated... I need counters incremented.
I think I need to read more carefully :)
Kango_V