I'm not sure exactly how to describe this question, but here goes. I've got a class hierarchy of objects that are mapped in a SQLite database. I've already got all the non-trivial code written that communicates between the .NET objects and the database.

I've got a base interface as follows:

public interface IBackendObject
{
    void Read(int id);
    void Refresh();
    void Save();
    void Delete();
}

These are the basic CRUD operations on any object. I've then implemented a base class that encapsulates much of the functionality.

public abstract class ABackendObject : IBackendObject
{
    protected ABackendObject() { } // constructor used to instantiate new objects
    protected ABackendObject(int id) { Read(id); } // constructor used to load object

    public void Read(int id) { ... } // implemented here is the DB code
}

Now, finally, I have my concrete child objects, each of which has its own table in the database:

public class ChildObject : ABackendObject
{
    public ChildObject() : base() { }
    public ChildObject(int id) : base(id) { }
}

This works fine for all my purposes so far. The child has several callback methods that are used by the base class to instantiate the data properly.

I now want to make this slightly more efficient. For example, in the following code:

public void SomeFunction1()
{
    ChildObject obj = new ChildObject(1);
    obj.Property1 = "blah!";
    obj.Save();
}

public void SomeFunction2()
{
    ChildObject obj = new ChildObject(1);
    obj.Property2 = "blah!";
    obj.Save();
}

In this case, I'll be constructing two completely separate in-memory instances, and depending on the order in which SomeFunction1 and SomeFunction2 are called, either Property1 or Property2 may not be saved. What I want is a way for both these instantiations to somehow point to the same memory location--I don't think that will be possible if I'm using the "new" keyword, so I was looking for hints as to how to proceed.

Ideally, I'd want to store a cache of all loaded objects in my ABackendObject class and return memory references to the already loaded objects when requested, or load the object from memory if it doesn't already exist and add it to the cache. I've got a lot of code that is already using this framework, so I'm of course going to have to change a lot of stuff to get this working, but I just wanted some tips as to how to proceed.

Thanks!

+2  A: 

What you want is an object factory. Make the ChildObject constructor private, then write a static method ChildObject.Create(int index) which returns a ChildObject, but which internally ensures that different calls with the same index return the same object. For simple cases, a simple static hash of index => object will be sufficient.
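
For illustration, a minimal sketch of that factory, assuming a simple Dictionary as the "static hash" (the field names and the lock are illustrative, not part of the original answer):

using System.Collections.Generic;

public class ChildObject : ABackendObject
{
    // Cache of already-loaded instances, keyed by database ID.
    private static readonly Dictionary<int, ChildObject> instances = new Dictionary<int, ChildObject>();
    private static readonly object sync = new object();

    // Constructors are private so callers have to go through Create.
    private ChildObject() : base() { }
    private ChildObject(int id) : base(id) { }

    // Returns the cached instance for this index, loading it on first request.
    public static ChildObject Create(int index)
    {
        lock (sync)
        {
            ChildObject obj;
            if (!instances.TryGetValue(index, out obj))
            {
                obj = new ChildObject(index); // the base constructor calls Read(index)
                instances[index] = obj;
            }
            return obj;
        }
    }
}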

JSBangs
I've gone ahead and implemented this and it works like a dream. Only problem is I had to modify all my `ChildObjects`, of which there are about 20. I suppose that's okay as long as I don't need to change it again!
sohum
Using Reed Copsey's suggestion may help you cut down on the amount of code duplication.
JSBangs
+6  A: 

If you want to store a "cache" of loaded objects, you could easily just have each type maintain a Dictionary<int, IBackendObject> which holds loaded objects, keyed by their ID.

Instead of using a constructor, build a factory method that checks the cache:

public abstract class ABackendObject<T> where T : class
{
    // Supplied by the base or the subclass: the cache lookup/store and the actual DB read.
    protected abstract T CheckCache(int id);
    protected abstract void SaveToCache(int id, T obj);
    protected abstract T Read(int id);

    public T LoadFromDB(int id)
    {
        T obj = this.CheckCache(id);
        if (obj == null)
        {
            obj = this.Read(id); // Load the object from the database
            this.SaveToCache(id, obj);
        }
        return obj;
    }
}

If you make your base class generic, and Read virtual, you should be able to provide most of this functionality without much code duplication.

Reed Copsey
+1 Damn...you type entirely too fast. I was so close. Haha
Justin Niessner
This looks nice and refined. I just had one question, which is whether I could make some of these intermediate functions static. From your code sample, I'm assuming each `ChildObject` would implement the `CheckCache` and `SaveToCache` methods (`Read` is already implemented by `ABackendObject` with callbacks into each `ChildObject`). It would be nice if I could call `ChildObject.LoadFromDB(id)` instead of `new ChildObject().LoadFromDB(id)`. Any suggestions on that front? Otherwise, this looks great!
sohum
@sohum: You should be able to make most of these static. I actually would put CheckCache and SaveToCache into the base class - it would need to be generic, and you'd put the actual type in the subclass: [public class ChildObject : ABackendObject<ChildObject> { ...]
Reed Copsey
Awesome, I just realized how I could get this done thanks to your tips. I'm going to be storing a generic-type based cache in the ABackendObject (static) as well as make the LoadFromDB static. Just so that I don't have to change my Read code, I'll have my LoadFromDB dynamically construct the ChildObject using reflection. The ChildObject(int) constructor will be protected so that it can't be used by any external callers. I can't see how I can make my ChildObject(int) constructor private (thus preventing other ABackendObjects from making it) without rearchitecting too much. Thanks!
sohum
Whoops... just realized that I had a brainfart with regards to the protected modifier. I guess I'll just have to remember not to call new ChildObject(int) from anywhere!
sohum
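
To illustrate the static approach described in the comments above, here is a hedged sketch (the self-referencing generic constraint, the field names, and the reflection lookup are assumptions, not the poster's final code):

using System.Collections.Generic;
using System.Reflection;

public abstract class ABackendObject<T> where T : ABackendObject<T>
{
    // Each closed generic type (e.g. ABackendObject<ChildObject>) gets its own
    // copy of these static fields, so every child type has a separate cache.
    private static readonly Dictionary<int, T> cache = new Dictionary<int, T>();
    private static readonly object sync = new object();

    // Static, so callers can write ChildObject.LoadFromDB(id).
    public static T LoadFromDB(int id)
    {
        lock (sync)
        {
            T obj;
            if (!cache.TryGetValue(id, out obj))
            {
                // Dynamically invoke the protected T(int) constructor via reflection.
                ConstructorInfo ctor = typeof(T).GetConstructor(
                    BindingFlags.Instance | BindingFlags.NonPublic,
                    null, new[] { typeof(int) }, null);
                obj = (T)ctor.Invoke(new object[] { id });
                cache[id] = obj;
            }
            return obj;
        }
    }
}

public class ChildObject : ABackendObject<ChildObject>
{
    // Protected so external callers can't bypass the cache with "new".
    protected ChildObject(int id) { /* Read(id), as in the original base class */ }
}

// Usage: ChildObject obj = ChildObject.LoadFromDB(1);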
+1  A: 

If you're using .NET Framework 4, you may want to have a look at the System.Runtime.Caching namespace, which gives you a pretty powerful cache architecture.

http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx
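
A hedged sketch of how that might look with MemoryCache (the key scheme and the ten-minute sliding expiration are arbitrary choices for illustration):

using System;
using System.Runtime.Caching; // requires a reference to the System.Runtime.Caching assembly (.NET 4)

public static class ChildObjectCache
{
    private static readonly MemoryCache cache = MemoryCache.Default;

    // Returns the cached ChildObject for this id, loading it from the database on a miss.
    public static ChildObject Get(int id)
    {
        string key = "ChildObject:" + id;
        ChildObject obj = cache.Get(key) as ChildObject;
        if (obj == null)
        {
            obj = new ChildObject(id);
            cache.Set(key, obj, new CacheItemPolicy
            {
                SlidingExpiration = TimeSpan.FromMinutes(10)
            });
        }
        return obj;
    }
}

Note that the check-then-set above isn't atomic; MemoryCache.AddOrGetExisting can close that gap if two threads might load the same id at once.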

Warren
A: 

Sounds perfect for a reference count like this...

    #region Begin/End Update
    // Requires: using System.Threading; (for Interlocked)
    int refcount = 0;
    ChildObject record;
    protected ChildObject ActiveRecord
    {
        get 
        {
            return record;
        }

        set 
        {
            record = value;
        }
    }

    public void BeginUpdate()
    {
        // First caller in loads the shared record; nested calls just bump the count.
        if (refcount == 0)
        {
            ActiveRecord = new ChildObject(1);
        }

        Interlocked.Increment(ref refcount);
    }

    public void EndUpdate()
    {
        int count = Interlocked.Decrement(ref refcount);

        // Last caller out persists the accumulated changes.
        if (count == 0)
        {
            ActiveRecord.Save();
        }
    }
    #endregion


    #region operations

    public void SomeFunction1()
    {
        BeginUpdate();

        try
        {
            ActiveRecord.Property1 = "blah!";
        }
        finally
        {
            EndUpdate();
        }
    }

    public void SomeFunction2()
    {
        BeginUpdate();

        try
        {
            ActiveRecord.Property2 = "blah!";
        }
        finally
        {
            EndUpdate();
        }
    }


    // Composes the two updates in one Begin/EndUpdate scope so both
    // property changes end up in a single Save.
    public void SomeFunction3()
    {
        BeginUpdate();

        try
        {
            SomeFunction1();
            SomeFunction2();
        }
        finally
        {
            EndUpdate();
        }
    }
    #endregion
Steve Sheldon
A: 

I think you're on the right track, more or less. You can either create a factory that creates your child objects (and can track "live" instances), or you can keep track of instances that have been saved, so that when you call your Save method it recognizes that your first instance of ChildObject is the same as your second instance and does a deep copy of the data from the second instance over to the first. Both of these are fairly non-trivial from a coding standpoint, and both probably involve overriding the equality methods on your entities. I tend to think the first approach would be less likely to cause errors.

One additional option would be to use an existing Object-Relational Mapping package like NHibernate or Entity Framework to do your mapping between objects and your database. I know NHibernate supports SQLite, and in my experience it tends to be the one that requires the least amount of change to your entity structures. Going that route, you get the benefit of the ORM layer tracking instances for you (and generating SQL for you), plus you would probably get some more advanced features your current data access code may not have. The downside is that these frameworks tend to have a learning curve, and depending on which you go with, there could be a not-insignificant impact on the rest of your code. So it would be worth weighing the benefits against the cost of learning the framework and converting your code to use the API.
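
As a rough sketch of the instance tracking an ORM gives you, here is what it might look like with NHibernate (assuming hibernate.cfg.xml and a ChildObject mapping are already in place; none of this is tied to the original code):

using NHibernate;
using NHibernate.Cfg;

public class NHibernateExample
{
    public void UpdateBothProperties()
    {
        // Typically built once at application startup and reused.
        ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();

        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // Within one session, NHibernate's identity map returns the same
            // object reference for the same id, so these are the same instance.
            ChildObject first = session.Get<ChildObject>(1);
            ChildObject second = session.Get<ChildObject>(1);

            first.Property1 = "blah!";
            second.Property2 = "blah!";

            tx.Commit(); // both property changes are persisted together
        }
    }
}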

ckramer
I looked at NHibernate before starting this project and decided against using it since this was at that point a relatively small personal project. It may be time to re-evaluate that!
sohum
Yeah, it's far from small at this point :). I've used Linq2Sql and Entity Framework as well as NHibernate on projects before, and NHibernate is by far the winner for me; it just fits in better with how I like to write code.
ckramer