My situation is that, essentially, I screwed up. I inherited my code base about 1.5 years ago when I took this position, and rather than reinvent the wheel, even though I now know I should have, I kept the DAL in pretty much the same structure the previous developer left it in.

Essentially, there is one file (now at 15k lines of code) that serves as a go-between for a bunch of DAOs that use DataSets and TableAdapters to retrieve data. My .xsd files have grown so large that they cause R# to crash Visual Studio every time it opens, and the 15k-line intermediary class takes forever for R# to analyze. Not to mention it is ugly; it works, but not well, and it is an absolute nightmare to debug.

What I have tried thus far is switching to NHibernate. NHibernate is a great library, but unfortunately it was not adaptable enough to work with my application. From what the lead developer (Fabio Maulo) says, it is pretty much a combination of my application's requirements and the restrictions NHibernate imposes when using identity as the database PK strategy.

So now I am back to essentially designing my own DAL. I am looking at a few different patterns for this, but would like to hear your DAL design strategies. There are so many ways and reasons to implement a DAL in a particular manner, so if you could explain your strategy and why it was the best fit for you, I would greatly appreciate it.

Thanks in advance!

Edit: Let me explain why NHibernate did not work, since that seems to be the immediate response. My users create a "job" that is actually just a transient representation of my Job class. Within this job they give it one or a list of weight factors that are also transient at creation time. Finally, they provide a list of job details, each with a particular weight factor associated to it.

Because weight factors are unique in the DB, when I go to persist the job and the save cascades down to the weight factors, it dies on finding a duplicate weight factor. I tried running a check before assigning the weight factor to the detail (which I didn't want to do, because I don't want the extra calls to the DB), but calling CreateCriteria in NH also causes a flush in the session, according to Fabio, which destroys my cache and thus kills the entire in-memory representation of the job. Folks on the NH mailing list said I should switch over to GUIDs, but that is not a feasible option, as the conversion process would be a nightmare.

A: 

If your DAL is written to an interface, it would be much easier to switch to NHibernate or something comparable (I would prefer Fluent NHibernate, but I digress). So why not spend the time refactoring the DAL to use an interface, and then write a new implementation using NH or your ORM of choice?
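As a minimal sketch of what "written to an interface" might look like (the Job entity and member names here are hypothetical, not from the question):

    using System.Collections.Generic;

    // Hypothetical entity stub, just so the sketch compiles.
    public class Job
    {
        public int JobId { get; set; }
    }

    // The business layer depends only on this contract; the implementation
    // behind it (the existing hand-rolled DAL today, NHibernate or another
    // ORM tomorrow) can be swapped without touching callers.
    public interface IJobRepository
    {
        Job GetById(int jobId);
        IList<Job> GetAll();
        void Save(Job job);
        void Delete(Job job);
    }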

mgroves
+1  A: 

For me the best fit was a pretty simple concept: use DAO class definitions and, with reflection, generate all the SQL necessary to populate and save them. This way there is no mapping file, only simple classes. My DAOs require an Entity base class, so they are not POCOs, but that doesn't bother me. It supports any type of primary key, be it a single identity column or multiple columns.
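My reading of the mechanics, as a hedged sketch (Entity, Product, and SqlBuilder are my own hypothetical names, not the actual library):

    using System;
    using System.Linq;
    using System.Reflection;

    // Hypothetical marker base class; the real one presumably also tracks keys.
    public abstract class Entity { }

    public class Product : Entity
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public static class SqlBuilder
    {
        // Builds e.g. "INSERT INTO Product (ProductId, Name, Price)
        // VALUES (@ProductId, @Name, @Price)" purely from the DAO's public
        // properties, so no mapping file is needed.
        public static string BuildInsert(Entity dao)
        {
            Type type = dao.GetType();
            PropertyInfo[] props = type.GetProperties(BindingFlags.Public | BindingFlags.Instance);
            string columns = string.Join(", ", props.Select(p => p.Name).ToArray());
            string values = string.Join(", ", props.Select(p => "@" + p.Name).ToArray());
            return string.Format("INSERT INTO {0} ({1}) VALUES ({2})", type.Name, columns, values);
        }
    }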

Otávio Décio
Do you use this in combination with a Metadata Mapper and/or Data Mapper? Also, how do you deal with joins or referenced classes?
joshlrogers
@joshlrogers No data mapper - each table is represented by a DAO class. Joins are defined by implementing a GetJoin method in each DAO that returns the appropriate join statement based on the other table(s). Each joined DAO has a list or an object for the joined table. Joins are composed by chaining constructors at query time - for example, List<Entity> list = SQL.Read(searchCriteria, new Products(new Orders())) returns a join between Products and Orders.
Otávio Décio
A: 

In recent projects we have stopped programming a separate DAL.

Instead we use an Object Relational Mapper (in our case Entity Framework). We then let the business layer program directly against the ORM.

This has saved us over 90% of development effort in some cases.
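As a rough illustration of the business layer working straight against the ORM (a hypothetical model, written in EF's later Code First style, which may differ from what this answer used):

    using System.Data.Entity;

    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    public class CustomerService
    {
        // The business layer talks to the context directly; there is no
        // separate hand-written DAL in between.
        public void Rename(int customerId, string newName)
        {
            using (var db = new ShopContext())
            {
                Customer customer = db.Customers.Find(customerId);
                customer.Name = newName;
                db.SaveChanges(); // all pending changes committed in one call
            }
        }
    }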

Shiraz Bhaiji
This is exactly why I wanted to switch to NHibernate. Unfortunately it didn't work out.
joshlrogers
A: 

My first step would be to break the code out of a 15 KLOC monster, then come up with a strategy for creating a new DAL.

RichardOD
+1  A: 

My experience with NHibernate is that, while it is packed with features and very high-performance, you will eventually need to become an NHibernate expert in order to fix some unexpected behavior. Reading through the pro-NHibernate answers and seeing

Hmm, perhaps he uses long-running Sessions (the Session-per-Business-Transaction model), and in such an approach, using identity is discouraged, since it breaks your unit of work (it needs to flush directly after inserting a new entity). A solution could be to drop the identity and use the HiLo identity generator.

illustrates exactly what I mean.
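For reference, the HiLo suggestion quoted above would look roughly like this in Fluent NHibernate (my own sketch; the Job/WeightFactor shapes are guesses based on the question):

    using System.Collections.Generic;
    using FluentNHibernate.Mapping;

    public class WeightFactor
    {
        public virtual int WeightFactorId { get; set; }
        public virtual decimal Value { get; set; }
    }

    public class Job
    {
        public virtual int JobId { get; set; }
        public virtual IList<WeightFactor> WeightFactors { get; set; }
    }

    public class JobMap : ClassMap<Job>
    {
        public JobMap()
        {
            // HiLo hands out ids from a pre-allocated block ("100" is the
            // block size), so NHibernate can assign an id to a new Job
            // without the immediate INSERT (and flush) that identity forces.
            Id(x => x.JobId).GeneratedBy.HiLo("100");
            HasMany(x => x.WeightFactors).Cascade.All();
        }
    }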

What I've done is create a base class modeled somewhat on the ActiveRecord pattern. I inherit from it and mark up the inherited class with attributes that attach it to a stored procedure for each of Select, Insert, Update, and Delete. The base class uses reflection to read the attributes and assign the class's property values to SP parameters, and in the case of Select(), to assign the resulting SqlDataReader's column values to the properties of a list of generics.

This is the interface that DataObjectBase implements:

    public interface IDataObjectBase<T>
    {
        void Delete();
        void Insert();
        System.Collections.Generic.List<T> Select();
        void Update();
    }

This is an example of a data class deriving from it:

[StoredProcedure("usp_refund_CustRefundDetailInsert", OperationType.Insert)]
    [StoredProcedure("usp_refund_CustRefundDetailSelect", OperationType.Select)]
    [StoredProcedure("usp_refund_CustRefundDetailUpdate", OperationType.Update)]
    public class RefundDetail : DataObjectBase<RefundDetail>
    {

        [StoredProcedureParameter(null, OperationType.Update, ParameterDirection.Input)]
        [StoredProcedureParameter(null, OperationType.Insert, ParameterDirection.Output)]
        [StoredProcedureParameter(null, OperationType.Select, ParameterDirection.Input)]
        [ResultColumn(null)]
        public int? RefundDetailId
        { get; set; }

        [StoredProcedureParameter(null, OperationType.Update, ParameterDirection.Input)]
        [StoredProcedureParameter(null, OperationType.Insert, ParameterDirection.Input)]
        [StoredProcedureParameter(null, OperationType.Select, ParameterDirection.Input)]
        [ResultColumn(null)]
        public int? RefundId
        { get; set; }
        [StoredProcedureParameter(null, OperationType.Update, ParameterDirection.Input)]
        [StoredProcedureParameter(null, OperationType.Insert, ParameterDirection.Input)]
        [ResultColumn(null)]
        public int RefundTypeId
        { get; set; }

        [StoredProcedureParameter(null, OperationType.Update, ParameterDirection.Input)]
        [StoredProcedureParameter(null, OperationType.Insert, ParameterDirection.Input)]
        [ResultColumn(null)]
        public decimal? RefundAmount
        { get; set; }        
        [StoredProcedureParameter(null, OperationType.Update, ParameterDirection.Input)]
        [StoredProcedureParameter(null, OperationType.Insert, ParameterDirection.Input)]
        [ResultColumn(null)]
        public string ARTranId
        { get; set; }

    }
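The base class itself isn't shown in the answer; here is a hedged sketch of how its reflection plumbing might work. The attribute definitions are guessed from the usage above, and the real code surely handles Select, output parameters, and errors more fully:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Linq;
    using System.Reflection;

    public enum OperationType { Select, Insert, Update, Delete }

    [AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
    public class StoredProcedureAttribute : Attribute
    {
        public readonly string Name;
        public readonly OperationType Operation;
        public StoredProcedureAttribute(string name, OperationType operation)
        {
            Name = name;
            Operation = operation;
        }
    }

    [AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
    public class StoredProcedureParameterAttribute : Attribute
    {
        public readonly string Name;            // null = use the property name
        public readonly OperationType Operation;
        public readonly ParameterDirection Direction;
        public StoredProcedureParameterAttribute(string name, OperationType operation,
                                                 ParameterDirection direction)
        {
            Name = name;
            Operation = operation;
            Direction = direction;
        }
    }

    // T is the concrete DAO type; the full version would use it for Select().
    public abstract class DataObjectBase<T>
    {
        // Sketch of the Insert path: find the SP registered for Insert, then
        // bind every property carrying an Insert parameter attribute.
        public void Insert(string connectionString)
        {
            Type type = GetType();
            StoredProcedureAttribute sp = type
                .GetCustomAttributes(typeof(StoredProcedureAttribute), false)
                .Cast<StoredProcedureAttribute>()
                .First(a => a.Operation == OperationType.Insert);

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sp.Name, connection))
            {
                command.CommandType = CommandType.StoredProcedure;

                foreach (PropertyInfo property in type.GetProperties())
                {
                    var attr = property
                        .GetCustomAttributes(typeof(StoredProcedureParameterAttribute), false)
                        .Cast<StoredProcedureParameterAttribute>()
                        .FirstOrDefault(a => a.Operation == OperationType.Insert);
                    if (attr == null) continue;

                    object value = property.GetValue(this, null) ?? DBNull.Value;
                    SqlParameter parameter = command.Parameters.AddWithValue(
                        "@" + (attr.Name ?? property.Name), value);
                    parameter.Direction = attr.Direction;
                }

                connection.Open();
                command.ExecuteNonQuery();
                // Output parameters (e.g. the new identity value) would be
                // copied back onto the matching properties here.
            }
        }
    }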

I know it seems like I'm reinventing the wheel, but all of the libraries I found either depended too heavily on other libraries (ActiveRecord + NHibernate, for instance, which was a close second) or were too complicated to use and administer.

The library I made is very lightweight (maybe a couple of hundred lines of C#) and doesn't do anything more than assign values to parameters and execute the SP. It also lends itself very well to code generation, so eventually I expect to write no data access code. I also like that it uses a class instance instead of a static class, so that I can pass data to queries without some awkward criteria collection or HQL. Select() means "get more like me".
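A quick usage sketch of that query-by-example style (my reading of it, not the author's verbatim API):

    // "Get more like me": set the criteria you care about on an instance,
    // then Select() runs the mapped Select SP and returns the matches.
    var criteria = new RefundDetail { RefundId = 42 };
    List<RefundDetail> details = criteria.Select();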

Chris McCall
This is interesting - a bit verbose, but very interesting. Does your strategy allow for any caching?
joshlrogers
One more thing: this also works great with a library of existing stored procs. There is no requirement that they be named a certain way or have non-arbitrary parameter names or orders.
Chris McCall
I'm an old-school VB guy, verbose is my shit :) There is no caching built in (yet). I'm toying with the idea of using PostSharp to implement caching AOP-style by intercepting method calls and returning canned data. For now, this is light-years ahead of the raw ADO.NET they were doing.
Chris McCall
Your implementation is VERY close to what I was trying here. Why I stopped, and I am curious how you handle this, is that I found myself generating a multitude of very trivial stored procedures. I didn't really want to have to sift through a list of hundreds of stored procedures to find the one I'm looking for (even if the naming scheme is convenient). So I began working on a dynamic SQL generator, but I ran into a conceptual issue with deep loading and lazy loading. Do you just have a ton of sprocs? What is your plan for deep loading? Lazy loading?
joshlrogers
The organization I'm working for is one of those "ton of sprocs" types. In my experience, most organizations stopped innovating at "use SPs for data access" and have a huge library of SPs plus a policy that developers create CRUD SPs for each table in the database. What I've done is use a generator (CodeSmith) to generate all the SPs I need for each project, so I don't even have to think about it.
Chris McCall
As for deep/lazy loading, I take care of that myself, by hand, where it crops up. I load parts of the object graph on property access. My email address is [email protected]. If you want, I can send you all this stuff zipped up.
Chris McCall
A: 

LINQ to SQL is nice if you are using SQL Server. There is source out there for LINQ to SQL providers for Access and MySQL, though I haven't tested them. LINQ to SQL follows the unit-of-work model, which is similar to the way ADO.NET functions: you make a series of changes to a local copy of the data, then commit all the changes with one update call. It's pretty clean, I think.
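For illustration, that unit-of-work flow looks something like this (a hypothetical Customer mapping; real projects usually generate these classes with SqlMetal or the designer):

    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    [Table(Name = "Customers")]
    public class Customer
    {
        [Column(IsPrimaryKey = true)]
        public int CustomerId;

        [Column]
        public string Name;
    }

    public class Demo
    {
        public static void Rename(string connectionString)
        {
            using (var db = new DataContext(connectionString))
            {
                // Change a local copy of the data...
                Customer customer = db.GetTable<Customer>()
                                      .Single(c => c.CustomerId == 1);
                customer.Name = "Renamed";

                // ...then commit every pending change in one call.
                db.SubmitChanges();
            }
        }
    }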

You can also extend the DataRow class yourself to provide strongly typed access to your fields. I used XSLT to generate the DataRow descendants based on the metadata of each table. I have a generic DataTable descendant, MyDataTable<T>, where T is my derived row. I know that MS's strongly typed DataSets do a similar thing, but I wanted a lightweight generic version that I had complete control of. Once you have this, you can write static access methods that query the DB and fill the DataTable.
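Something like this, I'd guess (names hypothetical; the real generated code would also add the columns and fill methods):

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Linq;

    // Generic table whose rows are a strongly typed DataRow descendant.
    public class TypedDataTable<TRow> : DataTable where TRow : DataRow
    {
        protected override Type GetRowType()
        {
            return typeof(TRow);
        }

        protected override DataRow NewRowFromBuilder(DataRowBuilder builder)
        {
            // Typed rows must expose a (DataRowBuilder) constructor.
            return (DataRow)Activator.CreateInstance(typeof(TRow), builder);
        }

        public new TRow NewRow()
        {
            return (TRow)base.NewRow();
        }

        public IEnumerable<TRow> TypedRows
        {
            get { return Rows.Cast<TRow>(); }
        }
    }

    // One of these per table; in this answer they are generated via XSLT from
    // the table metadata. The columns must still be added to the owning table.
    public class CustomerRow : DataRow
    {
        public CustomerRow(DataRowBuilder builder) : base(builder) { }

        public int CustomerId
        {
            get { return (int)this["CustomerId"]; }
            set { this["CustomerId"] = value; }
        }

        public string Name
        {
            get { return (string)this["Name"]; }
            set { this["Name"] = value; }
        }
    }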

You would be in charge of writing the changes from the DataTable back to the data source. I would write a generic class or method that creates the updates, inserts, and deletes.

Good Luck!

Steve
A: 

I use my own wrapper around SPs for the fastest data retrieval, and L2S when performance is not a goal. My DAL uses the repository pattern and encapsulates its logic to support TDD.

pocheptsov