How would you go about calling several methods in the data access layer from one method in the business logic layer so that all of the SQL commands lived in one SQL transaction?

Each one of the DAL methods may be called individually from other places in the BLL, so there is no guarantee that the data layer methods are always part of a transaction. We need this functionality so if the database goes offline in the middle of a long running process, there's no commit. The business layer is orchestrating different data layer method calls based on the results of each of the previous calls. We only want to commit (from the business layer) at the very end of the entire process.

+1  A: 

What you describe is the very definition of a long-running transaction.

Each DAL method can simply perform its operations without issuing any commits. Your BLL (which is, in effect, where you coordinate the calls to the DAL anyway) is where you choose to either commit or set a 'savepoint'. A savepoint is an optional marker you can set to allow partial rollbacks within a long-running transaction.

So, for example, suppose my DAL has methods DAL1, DAL2, and DAL3, all mutative: they simply execute data-change operations (some kind of create, update, or delete). From my BLL, let's assume I have methods BL1 and BL2 (BL1 is long-running). BL1 invokes all of the aforementioned DAL methods (DAL1 through DAL3), while BL2 invokes only DAL3.

Therefore, on execution of each business logic method you might have the following:

BL1 (long-transaction) -> {savepoint} DAL1 -> {savepoint} DAL2 -> DAL3 {commit/end}

BL2 -> DAL3 {commit/end}

The idea behind the savepoint is that it allows BL1 to roll back at any point if there are issues in the data operations. The long transaction is committed ONLY if all three operations complete successfully. BL2 can still call any method in the DAL, and it is responsible for controlling its own commits. NOTE: you can use savepoints in short/regular transactions as well.
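A minimal sketch of the BL1 flow above, using Python's sqlite3 purely for illustration (the pattern is the same in any RDBMS that supports SAVEPOINT; the table name and the failure are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode: we manage transactions ourselves
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

conn.execute("BEGIN")                                              # BL1 starts the long transaction
conn.execute("INSERT INTO orders (status) VALUES ('created')")     # DAL1

conn.execute("SAVEPOINT after_dal1")                               # savepoint before the risky step
try:
    conn.execute("INSERT INTO orders (status) VALUES ('risky')")   # DAL2
    raise RuntimeError("simulated failure in DAL2")
except RuntimeError:
    conn.execute("ROLLBACK TO SAVEPOINT after_dal1")               # undo DAL2 only; transaction stays open
    conn.execute("RELEASE SAVEPOINT after_dal1")

conn.execute("INSERT INTO orders (status) VALUES ('final')")       # DAL3
conn.execute("COMMIT")                                             # BL1 commits the whole unit at the end
```

Note that the rollback to the savepoint discards only DAL2's work; DAL1's insert and DAL3's insert survive the final commit.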

+1  A: 

Good question. This gets to the heart of the impedance mismatch.

This is one of the strongest arguments for using stored procedures. Reason: they are designed to encapsulate multiple SQL statements in a transaction.

The same can be done procedurally in the DAL, but the result is less clear code, and it usually moves the coupling/cohesion balance in the wrong direction.

For this reason, I implement the DAL at a higher level of abstraction than simply encapsulating tables.
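As a rough sketch of that higher-level DAL (Python with sqlite3 purely for illustration; `place_order` and the table names are invented), one DAL method can own an entire multi-statement unit, much as a stored procedure would:

```python
import sqlite3

def place_order(conn, customer, items):
    """A DAL method above the table level: it owns the whole
    multi-statement unit, like a stored procedure would."""
    with conn:  # sqlite3 connection as context manager: commit on success, rollback on error
        cur = conn.execute("INSERT INTO orders (customer) VALUES (?)", (customer,))
        order_id = cur.lastrowid
        conn.executemany(
            "INSERT INTO order_items (order_id, sku) VALUES (?, ?)",
            [(order_id, sku) for sku in items],
        )
    return order_id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("CREATE TABLE order_items (id INTEGER PRIMARY KEY, order_id INTEGER, sku TEXT)")
conn.commit()

oid = place_order(conn, "alice", ["sku-1", "sku-2"])
```

Callers never see the individual statements, so they cannot half-commit the unit; that is the encapsulation the answer is arguing for.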

le dorfier
+4  A: 

Well, firstly, you'll need to define an atomic unit of work as a single method in your BLL. This would (for example) create the customer, the order, and the order items. You'd then wrap it all up neatly inside a TransactionScope using statement. TransactionScope is the secret weapon here. Below is some code that, luckily enough, I'm working on right now:

public static int InsertArtist(Artist artist)
{
    if (artist == null)
        throw new ArgumentNullException("artist");

    int artistid = 0;
    using (TransactionScope scope = new TransactionScope())
    {
        // insert the master Artist
        /* 
           we plug the artistid variable into 
           any child instance where ArtistID is required
        */
        artistid = SiteProvider.Artist.InsertArtist(new ArtistDetails(
            0,
            artist.BandName,
            artist.DateAdded));

        // insert the child ArtistArtistGenre
        artist.ArtistArtistGenres.ForEach(item =>
        {
            var artistartistgenre = new ArtistArtistGenreDetails(
                0,
                artistid,
                item.ArtistGenreID);
            SiteProvider.Artist.InsertArtistArtistGenre(artistartistgenre);
        });

        // insert the child ArtistLink
        artist.ArtistLinks.ForEach(item =>
        {
            var artistlink = new ArtistLinkDetails(
                0,
                artistid,
                item.LinkURL);
            SiteProvider.Artist.InsertArtistLink(artistlink);
        });

        // insert the child ArtistProfile
        artist.ArtistProfiles.ForEach(item =>
        {
            var artistprofile = new ArtistProfileDetails(
                0,
                artistid,
                item.Profile);
            SiteProvider.Artist.InsertArtistProfile(artistprofile);
        });

        // insert the child FestivalArtist
        artist.FestivalArtists.ForEach(item =>
        {
            var festivalartist = new FestivalArtistDetails(
                0,
                item.FestivalID,
                artistid,
                item.AvailableFromDate,
                item.AvailableToDate,
                item.DateAdded);
            SiteProvider.Festival.InsertFestivalArtist(festivalartist);
        });
        BizObject.PurgeCacheItems(String.Format(ARTISTARTISTGENRE_ALL_KEY, String.Empty, String.Empty));
        BizObject.PurgeCacheItems(String.Format(ARTISTLINK_ALL_KEY, String.Empty, String.Empty));
        BizObject.PurgeCacheItems(String.Format(ARTISTPROFILE_ALL_KEY, String.Empty, String.Empty));
        BizObject.PurgeCacheItems(String.Format(FESTIVALARTIST_ALL_KEY, String.Empty, String.Empty));
        BizObject.PurgeCacheItems(String.Format(ARTIST_ALL_KEY, String.Empty, String.Empty));

        // commit the entire transaction - all or nothing
        scope.Complete();
    }
    return artistid;
}

Hopefully you'll get the gist. Basically, it's an all-succeed-or-all-fail job, irrespective of any disparate databases (i.e. in the above example, Artist and ArtistArtistGenre could be hosted in two separate DB stores, but TransactionScope couldn't care less about that: it works at the COM+/distributed-transaction level and manages the atomicity of the scope it can 'see').
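Outside .NET, the same all-or-nothing shape can be sketched with an explicit commit/rollback wrapper (Python with sqlite3 purely for illustration; `transaction_scope` and the table names are invented, and this covers a single connection only, not the distributed case TransactionScope also handles):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def transaction_scope(conn):
    """Poor man's TransactionScope for one connection:
    commit only if the body finishes; roll back on any exception."""
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE artist_link (id INTEGER PRIMARY KEY, artist_id INTEGER, url TEXT)")
conn.commit()

try:
    with transaction_scope(conn):
        cur = conn.execute("INSERT INTO artist (name) VALUES (?)", ("The Band",))
        artist_id = cur.lastrowid
        conn.execute("INSERT INTO artist_link (artist_id, url) VALUES (?, ?)",
                     (artist_id, "http://example.com"))
        raise RuntimeError("simulated mid-scope failure")  # nothing below commits
except RuntimeError:
    pass  # both inserts were rolled back together
```

Because the failure happens before the scope completes, neither the parent row nor the child row is persisted: the unit succeeds or fails as a whole.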

hope this helps

EDIT: you'll possibly find that the initial invocation of TransactionScope (on app start-up) is slightly noticeable (in the example above, the first call can take 2-3 seconds to complete); subsequent calls are almost instantaneous (typically 250-750 ms). For me and my clients, the trade-off between a simple, single point of transaction control and the (unwieldy) alternatives justifies that initial 'loading' latency.

Just wanted to demonstrate that ease doesn't come without compromise (albeit only in the initial stages).

A: 

Just in case my comment on the original answer didn't 'stick', here's what I'd added as additional info:

Coincidentally, I just noticed another similar question posted a few hours after yours. It uses a similar strategy and might be worth looking at as well: http://stackoverflow.com/questions/494550/how-does-transactionscope-roll-back-transactions