In a set of SOAP web services the user is authenticated with a custom SOAP header (username/password). Each time the user calls a WS, the following Auth method is called to authenticate and retrieve the User object from the NHibernate session:

[...]
public class Services : Base {
    private User user;

    [...]
    public string myWS(string username, string password) 
    {
        if( Auth(username, password) ) { [...] }
    }
}

public class Base : WebService {

    protected static ISessionFactory sesFactory;
    protected static ISession session;

    static Base() {
        Configuration conf = new Configuration();
        [...]
        sesFactory = conf.BuildSessionFactory();
    }

    protected bool Auth(...) {

        session = sesFactory.OpenSession();

        MembershipUser luser = null;
        if (UserCredentials != null && Membership.ValidateUser(username, password))
        {
            luser = Membership.GetUser(username);
        }
        ...
        try {
            user = (User)session.Get(typeof(User), luser.ProviderUserKey.ToString());
        } catch {
            user = null;
            throw new [...]
        }

        return user != null;
    }

}

When the WS work is done, the session is cleaned up nicely and everything works: the WSs create and modify objects and NHibernate saves them in the DB.

The problems come when a user (same username/password) calls the same WS at the same time from different clients (machines): the state of the saved objects becomes inconsistent.

How do I manage the session correctly to avoid this? I searched, and the documentation about session management in NHibernate is really vast. Should I lock on the user object? Should I set up some kind of session sharing between WS calls from the same user? Should I use transactions in some savvy way?

Thanks

Update1

Yes, mSession is 'session'.

Update2

Even with a non-static session object, the data saved in the DB is inconsistent. The pattern I use to insert/save objects is the following:

try {
    Auth([...]);
} catch {
    // ....
}
var return_value = [...];
try {
    using(ITransaction tx = session.Transaction)
    {
        tx.Begin();

        MyType obj = new MyType();
        user.field = user.field - obj.field; // The field names are just examples, but this is actually what happens.

        session.Save(user);
        session.Save(obj);

        tx.Commit();

        return_value = obj.another_field;
    }
} catch ([...]) {
    // Handling exceptions...
} finally {
    // Clean up
    session.Flush();
    session.Close();
}

return return_value;

All new objects (MyType) are saved correctly, but user.field does not end up in the state I would expect. Even obj.another_field is correct (the field is an ID with a generated-on-save policy).

It is as if 'user.field = user.field - obj.field;' were executed more times than necessary.

A: 

Fellow,

Even without seeing the code you're using to save your objects, I can tell you that sharing the same session between different calls is not good practice.

The session factory is a very expensive object to create and holds no call-specific state, so it's always a good idea to share it between calls (using a static variable, as you did, for example). The session, though, is a call-specific object: you must use one session per call, no doubt about it.
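
A minimal sketch of what that could look like, reusing the names from the question (the mapping/connection configuration, error handling and membership details are elided):

    using System.Web.Security;
    using System.Web.Services;
    using NHibernate;
    using NHibernate.Cfg;

    public class Base : WebService
    {
        // Expensive to build: create once and share across every call.
        protected static readonly ISessionFactory sesFactory;

        // Cheap to open and not thread-safe: one per call, so NOT static.
        protected ISession session;
        protected User user;

        static Base()
        {
            Configuration conf = new Configuration();
            // ... mapping / connection configuration elided, as in the question ...
            sesFactory = conf.BuildSessionFactory();
        }

        protected bool Auth(string username, string password)
        {
            session = sesFactory.OpenSession();

            if (!Membership.ValidateUser(username, password))
                return false;

            MembershipUser luser = Membership.GetUser(username);
            user = (User)session.Get(typeof(User), luser.ProviderUserKey.ToString());
            return user != null;
        }
    }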

Simply stop making your session object static and give it a try. If that doesn't solve your problem, please update your question with the part of your code that updates your object so we can give you a better answer.

Regards,

Filipe

jfneis
+1  A: 

It's really weird that the operation looks like it's being executed MORE times than it actually was; I would expect the opposite. But no matter, let's add some concurrency control to your application.

You have some options to control the concurrency between threads that are accessing your objects at the same time: optimistic and pessimistic.

Optimistic control lets each session work freely until it tries to save the object, and only then throws an exception if there was a conflict. Catching the exception and working around it can be annoying for your user, since you will probably have to ask him to repeat the operation, but if concurrency conflicts are rare in your context I'd opt for one of the optimistic approaches: their impact on your application's overall performance is smaller.

On the other side you have pessimistic control. In this case you won't have surprises when saving objects, but the second session will have to wait until the first one commits its transaction before it can proceed with its job. If the first session takes too long you will obviously get some kind of timeout exception. If it takes a reasonable time everything will work, but your system's average speed takes a hit from the lock time.


Optimistic - Option 1: you can enable dirty checking, so NHibernate will verify that the object being updated still has the same values in the database that it had when it was loaded. Any difference will result in an exception. To configure it (via Fluent NHibernate) you'd add something similar to:

        OptimisticLock.Dirty().DynamicUpdate();

Enabling DynamicUpdate tells NHibernate to check only the properties that are being saved, not the whole object. You can use OptimisticLock.All() to force NHibernate to check all properties, but I think it’s not needed in your case.
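
For reference, a minimal Fluent NHibernate mapping sketch (the Id and Field property names are placeholders, not taken from your mapping):

    using FluentNHibernate.Mapping;

    public class UserMap : ClassMap<User>
    {
        public UserMap()
        {
            Id(x => x.Id);
            Map(x => x.Field);   // placeholder for the field the WS updates

            // Only the columns that actually changed are compared in the
            // UPDATE's WHERE clause, so a concurrent change to the same
            // column makes the second commit fail.
            OptimisticLock.Dirty();
            DynamicUpdate();
        }
    }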

Optimistic - Option 2: the other option is to explicitly have a Version column that NHibernate handles to know whether the object you are trying to save is the most up-to-date one. If not, as usual, you will get an exception. To configure it via Fluent NHibernate:

        OptimisticLock.Version();
        Version(x => x.Version); // supposing that you have a Version property
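
With that in place, a conflicting save surfaces at commit time as a StaleObjectStateException. A rough sketch, assuming the per-call session and the hypothetical field names from your Update2:

    try
    {
        using (ITransaction tx = session.BeginTransaction())
        {
            user.field = user.field - obj.field;
            session.Save(obj);
            tx.Commit();   // throws StaleObjectStateException if the row's version changed
        }
    }
    catch (StaleObjectStateException)
    {
        // Another call saved this user first: reload it and retry the
        // operation, or report the conflict back to the client.
    }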

Pessimistic: a pessimistic lock is acquired via the LockMode enum in your Get calls. As said before, the locked row won't be handed to another session until the owning session commits its transaction. Use something like this:

        MappedEntity ent1 = session1.Get<MappedEntity>(entity.Id, LockMode.Force);
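
For instance, a rough sketch of a pessimistic flow over your entities (userId and the field names are placeholders; LockMode.Upgrade translates to SELECT ... FOR UPDATE on most databases):

    using (ITransaction tx = session.BeginTransaction())
    {
        // The row stays locked until the transaction ends, so a second call
        // for the same user blocks here instead of overwriting the update.
        User lockedUser = session.Get<User>(userId, LockMode.Upgrade);

        lockedUser.field = lockedUser.field - obj.field;
        session.Save(obj);

        tx.Commit();   // releases the lock
    }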

Using any of these approaches will probably solve your problem, but consider the pros and cons of each in your scenario.

All the information shown here is also available in Ayende’s blog, with the proper HBM mappings. See this: http://ayende.com/Blog/archive/2009/04/15/nhibernate-mapping-concurrency.aspx

Hope this helps, and let me know if I can help you with anything else.

Regards,

Filipe

jfneis
Over the last couple of days I let the tests run over and over again. The pessimistic lock solved the problem, but one more strange thing is that only LockMode.Upgrade works. I'm using NHibernate 1.2 (which did not have LockMode.Force) over PostgreSQL 8.3 and Mono 2.4.4; maybe this combination is problematic at some level. I will try to remember to update this post if I upgrade to NHibernate 2.0 or any other components. Filipe, thanks for your quick and really informative answers.
Anonymous Coward
Lock mode behavior can really change from DB to DB. My tests (and actually my experience) are based on SQL Server. Updating the NH version, anyway, would be nice.
jfneis