I read in the App Engine wiki that datastore contention occurs if you write to the same entity more than about 5 times in 1 second, and that the wiki suggests the "shard" approach as a workaround. May I know: if we use Spring's @Transactional on this, will that prevent the datastore contention timeout, since the writing is done concurrently?

+1  A: 

No, you can't do that. Whether or not you use @Transactional, it will not make the problem go away: the root issue is that you have one object that you need to keep writing to. The contention limit will remain whatever approach you use.

The answer to this problem is actually deciding what it is you want to do, and how important accuracy is to you. Take the case of a simple counter, which is a common example of this problem. If accuracy is very important, you will have to keep a list of counters that you choose from, either sequentially or at random, and write into. If you have ten counters in this list, that gives you ten times more writes per second, even transactional writes. You do need to write code to choose which counter to write to, though.
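The shard-selection idea above can be sketched like this. This is a minimal in-memory stand-in, not App Engine code: `ShardedCounter` is a hypothetical name, and each array slot stands in for one separate counter entity that would actually be read and written inside its own transaction.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of a sharded counter: instead of one hot counter entity,
// keep N counters and increment a randomly chosen one, so concurrent
// writers rarely collide on the same entity.
public class ShardedCounter {
    private final long[] shards; // each slot stands in for one datastore entity

    public ShardedCounter(int numShards) {
        this.shards = new long[numShards];
    }

    // Write path: pick a shard at random; on App Engine this would be
    // a transactional get-increment-put on that one shard entity.
    public void increment() {
        int shard = ThreadLocalRandom.current().nextInt(shards.length);
        shards[shard]++;
    }

    // Read path: sum all shards. Reads are cheap compared to
    // contended writes, so this is the acceptable trade-off.
    public long count() {
        long total = 0;
        for (long s : shards) {
            total += s;
        }
        return total;
    }
}
```

With ten shards, the write limit is spread across ten entities, which is where the "ten times more writes per second" figure comes from.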

On the other hand, if you don't require too much precision, you could try writing to memcache very often. The write limits are much higher when incrementing a counter in memcache. You can then write out and reset the counter at a set interval.
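The memcache-buffered approach can be sketched as follows. Again this is an in-memory stand-in under stated assumptions: the `AtomicLong` plays the role of App Engine's memcache increment, and `flush()` represents a periodic task (e.g. cron) that drains the buffer into one datastore write; `BufferedCounter` is a hypothetical name.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of an approximate counter: high-frequency increments go to a
// fast in-memory buffer (standing in for memcache), and a periodic
// flush applies the accumulated delta as a single datastore write.
public class BufferedCounter {
    private final AtomicLong pending = new AtomicLong(); // "memcache" slot
    private long persisted = 0;                          // "datastore" value

    // Hot path: no datastore write at all, so no write contention.
    public void increment() {
        pending.incrementAndGet();
    }

    // Cold path, run at a set interval: drain the buffer and apply it
    // as ONE write (on App Engine, a single transactional put).
    public void flush() {
        long delta = pending.getAndSet(0);
        persisted += delta;
    }

    // Approximate between flushes (memcache can be evicted), exact
    // immediately after a flush.
    public long count() {
        return persisted + pending.get();
    }
}
```

The trade-off is precision: if the memcache entry is evicted before a flush, those increments are lost, which is why this suits counters where a rough number is acceptable.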

Sudhir Jonathan
there is a detailed article here on how to achieve it: http://code.google.com/appengine/articles/sharding_counters.html
rochb
yes, I understand your explanation. but let's say I 'do not need a fast write'. will @Transactional plus sequential writing prevent the contention timeout from happening, by slowly doing the sequential writes 'one at a time'?
cometta
let's say I need to do 6 writes to the db, and I understand the db can do a max of 5 writes per second. will the remaining 1 (6-5) write complete properly after 1 second has elapsed?
cometta
I don't believe @Transactional will help you there - the write will be retried a certain number of times, after which an exception will be thrown. That approach simply won't scale, though... the leftover writes will just keep adding up.
Sudhir Jonathan
+1  A: 

When I was on a project that needed to store a lot of individual records in the DB, I found the system could not handle all the concurrent transactions. Instead, I built the object up in memory and then saved it to the DB all at once.
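That batching pattern can be sketched like this. It is a minimal illustration with hypothetical names (`BatchWriter`, `saveAll`), and a plain list stands in for the datastore; on App Engine the flush would be a single batch put of all the accumulated entities.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "build in memory, save once": records accumulate in a
// plain list and are written to the store in one batch call instead
// of one contended write per record.
public class BatchWriter<T> {
    private final List<T> buffer = new ArrayList<>(); // in-memory staging
    private final List<T> store = new ArrayList<>();  // stand-in for the datastore

    // Cheap: no datastore round trip per record.
    public void add(T record) {
        buffer.add(record);
    }

    // One call, one write, however many records were queued
    // (on App Engine: datastore.put(allEntities) in a single batch).
    public int saveAll() {
        int saved = buffer.size();
        store.addAll(buffer);
        buffer.clear();
        return saved;
    }

    public int storedCount() {
        return store.size();
    }
}
```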

Ben