I was reading Steven Sanderson's book Pro ASP.NET MVC Framework and he suggests using a repository pattern:

public interface IProductsRepository
{
    IQueryable<Product> Products { get; }
    void SaveProduct(Product product);
}

He accesses the products repository directly from his controllers, but since I will have both a web page and a web service, I wanted to add a "Service Layer" that would be called by both the controllers and the web services:

public class ProductService
{
    private IProductsRepository productsRepository;

    public ProductService(IProductsRepository productsRepository)
    {
        this.productsRepository = productsRepository;
    }

    public Product GetProductById(int id)
    {
        return (from p in productsRepository.Products
                where p.ProductID == id
                select p).First();
    }

    // more methods
}

This all seems fine, but my problem is that I can't use his SaveProduct(Product product) method because:

1) I want to only allow certain fields to be changed in the Product table

2) I want to keep an audit log of each change made to each field of the Product table, so I would have to have methods for each field that I allow to be updated.

My initial plan was to have a method in ProductService like this:

public void ChangeProductName(Product product, string newProductName);

Which then calls IProductsRepository.SaveProduct(Product)
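To make this concrete, here is roughly what I had in mind (the AuditLog helper is hypothetical; I haven't written it yet):

```csharp
public void ChangeProductName(Product product, string newProductName)
{
    // Hypothetical audit call; AuditLog is a placeholder for whatever
    // logging mechanism I end up with.
    AuditLog.RecordChange(product.ProductID, "ProductName",
                          product.ProductName, newProductName);

    product.ProductName = newProductName;
    productsRepository.SaveProduct(product);
}
```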

But there are a few problems I see with this:

1) Isn't it rather un-"OO" to pass the Product object around like this? But I can't see how this code could go in the Product class either, since it should just be a dumb data object. I could see adding validation to a partial class, but not this.

2) How do I ensure that no one has changed any fields other than the product name before I persist the change?

I'm basically torn: I can't put the auditing/update code in Product, and the ProductService class's update methods just seem unnatural (though GetProductById seems perfectly natural to me).

I think I'd still have these problems even if I didn't have the auditing requirement. Either way I want to limit what fields can be changed in one class rather than duplicating the logic in both the web site and the web services.

Is my design pattern just bad in the first place or can I somehow make this work in a clean way?

Any insight would be greatly appreciated.

A: 

To address your need for auditing/logging of changes: just today I put the finishing touches on a system I'll suggest for your consideration. The idea is to serialize the "before" and "after" state of your object (easily done if you are using LINQ to SQL entity objects, through the magic of the DataContractSerializer), then save these to a logging table.

My logging table has columns for the date, username, a foreign key to the affected entity, and title/quick summary of the action, such as "Product was updated". There is also a single column for storing the change itself, which is a general-purpose field for storing a mini-XML representation of the "before and after" state. For example, here's what I'm logging:

<ProductUpdated>
    <Deleted><Product ... /></Deleted>
    <Inserted><Product ... /></Inserted>
</ProductUpdated>

Here is the general purpose "serializer" I used:

// Requires System.Text, System.Xml, and System.Runtime.Serialization.
public string SerializeObject(object obj)
{
    // See http://msdn.microsoft.com/en-us/library/bb546184.aspx :
    Type t = obj.GetType();
    DataContractSerializer dcs = new DataContractSerializer(t);
    StringBuilder sb = new StringBuilder();
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.OmitXmlDeclaration = true;
    using (XmlWriter writer = XmlWriter.Create(sb, settings))
    {
        dcs.WriteObject(writer, obj);
    }
    return sb.ToString();
}

Then, when updating (this can also be used for logging inserts/deletes), grab the state before you do your model-binding, then again afterwards. Shove both into an XML wrapper and log it! (Or I suppose you could use two columns in your logging table for these, although the XML approach lets me attach any other information that might be helpful.)
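In a controller action it might look something like the following sketch (SaveLogEntry and logRepository are hypothetical names, not from my actual code):

```csharp
string before = SerializeObject(product);   // state before model-binding
TryUpdateModel(product);                    // apply the posted changes
string after = SerializeObject(product);    // state afterwards

// Wrap both states in the mini-XML format described above.
string changeXml = "<ProductUpdated>"
                 + "<Deleted>" + before + "</Deleted>"
                 + "<Inserted>" + after + "</Inserted>"
                 + "</ProductUpdated>";

// Hypothetical logging call: date, user, affected entity, summary, change XML.
logRepository.SaveLogEntry(DateTime.Now, User.Identity.Name,
                           product.ProductID, "Product was updated", changeXml);
```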

Furthermore, if you want to allow only certain fields to be updated, you can do this either with a "whitelist/blacklist" in your controller's action method, or by creating a "ViewModel" to hand to your controller, which could have whatever restrictions you desire placed upon it. You could also look into the many partial methods and hooks that your LINQ to SQL entity classes should have on them, which would let you detect changes to fields that you don't want touched.
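For instance, a view model might expose only the editable fields (the names here are just for illustration):

```csharp
// Hypothetical view model: model-binding can only ever touch these
// properties, so everything omitted (cost, timestamps, etc.) is safe
// by construction.
public class ProductEditViewModel
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    public decimal Price { get; set; }
}
```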

Good luck!
-Mike


Update:

For kicks, here is how I deserialize an entity (as I mentioned in my comment), for viewing its state at some later point in history (after I've extracted it from the log entry's wrapper):

public Product DeserializeProduct(string xmlString)
{
    using (MemoryStream s = new MemoryStream(Encoding.Unicode.GetBytes(xmlString)))
    {
        DataContractSerializer dcs = new DataContractSerializer(typeof(Product));
        return (Product)dcs.ReadObject(s);
    }
}
Funka
I also forgot to mention one cool thing I can do with this approach, which is to instantiate an object from the logging table's XML at any point in its history, allowing me to go back through time if needed, restore an earlier state, etc...
Funka
Funka, thanks for the insight. Originally I was going to have a separate table for each entity and column. Therefore, using your example, I'd have a Product_Update_ColumnName table. I was a bit worried about having too many tables, but I have a pretty fixed amount of tables that I need to audit. However, I like your XML solution since there could be a single audit table, but I'm not sure how I feel about storing XML in the DB. Do you have any comments about the Repository pattern I'm using? Where did you put the code that actually does the auditing? That's the main thing I'm struggling with.
Brandon
Ignoring the XML-based approach for the moment, one thing you could do for auditing---instead of creating separate tables for each column---is create a single "history" table that mirrors the schema of your source table (with perhaps a few extra fields such as for a datestamp). For example, you could have tables `Products` and `ProductsHistory`. Anytime you make a change to `Products`, insert a new row into the `ProductsHistory` table. I've even added this after-the-fact to production systems simply through the use of a SQL trigger, without adding any code to the app.
Funka
In my case, I have a separate Logger service layer. I do have calls to this mixed into my controller's action methods, which I realize probably isn't the nicest-and-neatest way to do this, but it got the job done. I recognize this won't be so ideal in your situation. As far as repository usage goes, I, too, based my own code on the ideas in Sanderson's book! I would probably recommend against having separate methods for every field. Perhaps you could create a single `SaveProduct` method in your service layer which does the logging for you, just before calling the repository's same-named method?
Funka
A: 

I split the repository into two interfaces, one for reading and one for writing.

The reading one implements IDisposable, and reuses the same data context for its lifetime. It returns the entity objects produced by LINQ to SQL. For example, it might look like:

interface Reader : IDisposable
{
    IQueryable<Product> Products { get; }
    IQueryable<Order> Orders { get; }
    IQueryable<Customer> Customers { get; }
}

The IQueryable is important so I get the delayed-evaluation goodness of LINQ to SQL. This is easy to implement with a DataContext, and easy enough to fake. Note that when I use this interface I never use the auto-generated properties for related rows (i.e., no fair using order.Products directly; calls must join on the appropriate ID columns). This is a limitation I don't mind living with, considering how much easier it makes faking the read repository for unit tests.
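As an illustration of that rule, a query against this interface spells the join out explicitly (the column and property names here are hypothetical, depending on your schema):

```csharp
// Instead of navigating a generated association property,
// join on the ID column by hand.
var q = from o in reader.Orders
        join p in reader.Products on o.ProductID equals p.ProductID
        where o.OrderID == someOrderId
        select new { o.OrderID, p.ProductName };
```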

The writing one uses a separate DataContext per write operation, so it does not implement IDisposable. It does NOT take entity objects as input or output; it takes the specific fields needed for each write operation.
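A sketch of what that write-side interface might look like (the method names are illustrative, not from my actual code):

```csharp
// Each method takes only the data that particular operation needs,
// rather than a whole entity object.
interface Writer
{
    void UpdateProductName(int productId, string newName);
    void UpdateProductPrice(int productId, decimal newPrice);
    void DeleteProduct(int productId);
}
```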

When I write test code, I can substitute the readable interface with a fake implementation that uses a bunch of List<>s, which I populate manually. I use mocks for the write interface. This has worked like a charm so far.
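For example, a fake reader for unit tests might be little more than this (a sketch, assuming the Reader interface above):

```csharp
class FakeReader : Reader
{
    // In-memory lists stand in for the database tables;
    // the test populates them directly.
    public List<Product> ProductList = new List<Product>();
    public List<Order> OrderList = new List<Order>();
    public List<Customer> CustomerList = new List<Customer>();

    public IQueryable<Product> Products { get { return ProductList.AsQueryable(); } }
    public IQueryable<Order> Orders { get { return OrderList.AsQueryable(); } }
    public IQueryable<Customer> Customers { get { return CustomerList.AsQueryable(); } }

    public void Dispose() { }  // nothing to clean up in the fake
}
```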

Don't get in a habit of passing the entity objects around, they're bound to the datacontext's lifetime and it leads to unfortunate coupling between your repository and its clients.

Frank Schwieterman
This is an interesting solution, thanks. For your "writing implementation" would your method signature look like the one I mentioned above? i.e. UpdateProductName(Product product, string newName)? Since you aren't re-using the DataContext I would assume you have to do some tricky stuff to attach to the row too.
Brandon
No, it would not take a Product class as input (assuming that is a LINQ to SQL generated class). It might take the needed fields as parameters, or a parameter object containing separate fields. The advantage is it takes only the data it really needs, so you're not passing around big balls of stuff where you're not sure which parts of the object are initialized and which will actually be used.
Frank Schwieterman
For instance, if an Update operation needs to operate on a specific instance, it would take the ID of that specific instance.
Frank Schwieterman
A: 

I would also recommend reading Chapter 13, "LINQ in every layer" in the book "LINQ in Action". It pretty much addresses exactly what I've been struggling with -- how to work LINQ into a 3-tier design. I'm leaning towards not using LINQ at all now after reading that chapter.

Brandon