I'm new to JPA, so forgive me if I'm not being clear.

Basically I want to prevent concurrent modifications by using Optimistic Locking. I've added the @Version attribute to my entity class.
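The entity looks something like this (simplified; everything except the id and age is illustrative):

import javax.persistence.*;

@Entity
public class User
{
    @Id
    private String userid;

    private int age;

    @Version
    private long version; // the provider increments this on each update and uses it to detect conflicts

    public int getAge() { return age; }
    public void setAge( int age ) { this.age = age; }
}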

I need to know if this algorithm for handling OptimisticLockException is sound. I'm going to use the Execute Around Idiom like so:

interface UpdateUnitOfWork 
{
    void doUpdate( User user ); /* may throw javax.persistence.PersistenceException */
}

public boolean exec( EntityManager em, String userid, UpdateUnitOfWork work )
{
    User u = em.find( User.class, userid );
    if( u == null )
        return false;

    try
    {
        em.getTransaction().begin();
        work.doUpdate( u );
        em.flush(); // the version check happens here; throws OptimisticLockException on conflict
        em.getTransaction().commit();
        return true;
    }
    catch( OptimisticLockException ole )
    {
        em.getTransaction().rollback(); // detaches the stale instances so a retry reloads fresh state
        return false;
    }
}

public static void main( String[] args ) throws Exception
{
    EntityManagerFactory emf = ...;
    EntityManager em = null;

    try
    {
        em = emf.createEntityManager();

        UpdateUnitOfWork uow = new UpdateUnitOfWork() {
            public void doUpdate( User user )
            {
                user.setAge( 34 );
            }
        };

        boolean success = exec( em, "petit", uow );
        if( success )
            return;

        // retry 2nd time
        success = exec( em, "petit", uow );
        if( success )
            return;

        // retry 3rd time
        success = exec( em, "petit", uow );
        if( success )
            return;
    }
    finally
    {
        if( em != null )
            em.close();
    }
}

The question I have is: how do you decide when to stop retrying?

+1  A: 

The question I have is: how do you decide when to stop retrying?

In my opinion, Optimistic Locking should be used when modifying the same object at the same time is an exceptional situation.

Now, if this situation occurs and the process is manual, I would warn the user that the modifications couldn't be saved and ask him to re-apply his changes and save again.

If the process is automated, it can make sense to implement an automatic retry mechanism, but I wouldn't retry more than something like 3 or 5 times, depending on the processing time (and I'd use recursive calls to implement this; see the sketch below). If an automated process fails 5 times in a row on a concurrent access problem, it is very likely competing with another automated process: either they are not working on independent chunks of data (which is bad for parallelization), or the strategy is just not the right one. In either case, retrying more is not the right solution.
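A minimal sketch of such a recursive retry, reusing the exec helper from the question (the name execWithRetry is just illustrative):

public boolean execWithRetry( EntityManager em, String userid,
                              UpdateUnitOfWork work, int attemptsLeft )
{
    if( exec( em, userid, work ) )
        return true;  // the update went through
    if( attemptsLeft <= 1 )
        return false; // last attempt failed, give up
    // recurse with one fewer attempt remaining
    return execWithRetry( em, userid, work, attemptsLeft - 1 );
}

// caller: give up after 3 attempts in total
boolean success = execWithRetry( em, "petit", uow, 3 );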

Pascal Thivent