I designed the data access portion of our framework so that every time a business object needs to interact with the database, it opens a connection, invokes the data access layer to execute the query, and then closes the connection. If it needs to run in a transaction, it opens the connection, begins the transaction, invokes the data access layer to execute the query, commits the transaction, and finally closes the connection.

I did it this way with the mindset of "open late, close early"... but what if I needed to call other BOs to submit data in a single transaction? Is there a better way to handle opening and closing connections, as well as working with transactions?

I'm a rookie in designing application architecture so I hope I'm not doing this wrongly... any help is appreciated.

+3  A: 

Sounds like you've got the right idea. If multiple BOs need to be involved, then one of them needs to be a 'controller' - it should open and close the connection, and pass it to the others. Or some 'wrapper' object could handle the connection and pass it to each of the BOs. Your BOs may need to be designed to operate both on their own (handle their own connection), and to accept an existing connection from outside.
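A minimal sketch of that dual-mode design (the class and member names here are hypothetical, not from the OP's framework):

```csharp
// Hypothetical BO that can either manage its own connection or join one
// supplied by a 'controller' or 'wrapper' object.
public class CustomerBO
{
    private readonly string connectionString;

    public CustomerBO(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Standalone use: the BO opens and closes its own connection.
    public void Save()
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Save(connection);
        }
    }

    // Cooperative use: the caller owns the connection (and any transaction);
    // this overload must not open or close it.
    public void Save(SqlConnection connection)
    {
        // Execute commands against 'connection' here.
    }
}
```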

Ray
+4  A: 

If a given BO needs to execute various methods in a transaction, use a TransactionScope like so:

using ( var ts = new TransactionScope() )
{
    this.Save();
    childobj.Save();
    childobj.Save();
    childobj.Save();
    childobj.Save();

    ts.Complete();
}

If any of the objects throws an exception, the transaction will be rolled back, because ts.Complete() is never called.

See Transaction Scope for more.

Thomas
Although `TransactionScope` *is* the class to use here, with the OP's current design it will result in a distributed transaction or possibly an exception if MSDTC is disabled or locked down. It's *probably* not the desired result.
Aaronaught
@Aaronaught, with SQL Server 2008 and .NET 3.5 the transaction will not be promoted to a distributed transaction (for the current design).
Tuzo
@Tuzo: True, with SQL Server 2008 you're able to cheat the multiple-connection issue **IF** all connections are to the same database **and** there is only ever one open at a time. It's still a pretty questionable practice, especially when the alternative design is easier to build/maintain anyway.
Aaronaught
+3  A: 

When higher-level abstractions depend on lower-level abstractions (such as business logic classes depending on data connections), it's common to supply the lower-level abstractions through the constructor. The technique is called constructor injection:

public class OrderService
{
    private SqlConnection connection;

    public OrderService(SqlConnection connection)
    {
        if (connection == null)
            throw new ArgumentNullException("connection");
        this.connection = connection;
    }

    // Other methods
}

This then allows you to write code against the services similar to the following:

using (TransactionScope tsc = new TransactionScope())
using (SqlConnection connection = new SqlConnection(...))
{
    connection.Open();
    OrderService os = new OrderService(connection);
    os.ProcessOrder(myOrder);
    ShippingService ss = new ShippingService(connection);
    ss.ShipOrder(myOrder);
    tsc.Complete();
}

This is most likely what you want in the end: the ability to share one connection among many services.

This also helps to decouple your services from the implementation details of the data connection. That way, if you want to do something like change the connection settings under certain circumstances, you don't have to dig into the details of 50 different services, you only have to change the one line of code that creates the connection.

One more thing: If you're going to use the TransactionScope, make sure to add Transaction Binding=Explicit Unbind to the connection string, otherwise it's actually possible to end up with inconsistent data if a transaction times out.
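For example (the server and database names below are placeholders):

```csharp
// Hypothetical connection string including the Explicit Unbind setting:
string connectionString =
    "Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;" +
    "Transaction Binding=Explicit Unbind";
```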

Aaronaught
+1  A: 

As mentioned by others, TransactionScope is the way to go.

If you are using SQL Server 2008 and .NET 3.5, I would modify the design to have the business object control the transaction and leave the opening and closing of the connection to the data layer.

With connection pooling on, you will not actually incur the overhead of opening a physical database connection, and your connections will only be open when performing actual work. Since (I assume) you have SQL Server 2008 with .NET 3.5, your transaction will not escalate to a distributed transaction (unless you open multiple connections at the same time), so you get the best of both worlds.

Then you could write your business object like this:

using (TransactionScope transactionScope = new TransactionScope())
{
    DataObject dataObject = new DataObject();
    dataObject.UpdateQuantity(...);

    ShippingManager shippingManager = new ShippingManager();
    shippingManager.ShipOrder(...);

    transactionScope.Complete();
}

This avoids having to pass connection strings around to all business objects and makes coordinating transactions easy.

Update

The beauty of System.Transactions is that all of the transactions are managed for you, irrespective of the connection that you are using. You just declare a TransactionScope, and all database access within that TransactionScope will occur within a single transaction (unless you request otherwise with different TransactionScope settings).

In the past (SQL Server 2005, .NET 2.0), if you opened and closed a connection and then opened and closed another connection (even with the same connection string), the transaction was promoted from a lightweight transaction to a distributed transaction. This was undesirable because performance suffers (communication with MSDTC is out of process, and it uses the two-phase commit protocol) and MSDTC can be a pain to configure in many production environments (firewalls and security).

With SQL Server 2008 and .NET 3.5 they added the ability to avoid this promotion when opening and closing multiple connections with the same connection string within a single transaction. For a really good explanation of what they did, see Extending Lightweight Transactions in SqlClient.
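In other words, a pattern like the following (the connection string is a placeholder) stays a lightweight transaction on SQL Server 2008, as long as the connection strings match and only one connection is open at a time:

```csharp
using (var scope = new TransactionScope())
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        // first piece of work
    }   // closed before the next connection opens

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        // second piece of work, same connection string: no promotion to MSDTC
    }

    scope.Complete();
}
```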

Update 2

Transactions with Oracle 10g will function properly with TransactionScope. And it looks like ODP.NET supports Lightweight Transactions (which is nice). Unfortunately, I think that the promotion to a distributed transaction will occur with the closing and opening of connections.

If you wish to avoid a distributed transaction you could pass the connection to every method call/Business Object. If you don't want to pass a connection around, you could use a ConnectionScope class which keeps the connection open on the thread. An alternative to that would be to use Enterprise Library 3.0 (and above) Data Access Application Block. The Data Access Block can detect that a transaction is in progress and use the same connection to avoid a distributed transaction.
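A rough sketch of such a ConnectionScope class (this is a hypothetical helper, not a framework type), using a thread-static field so nested scopes and BOs on the same thread reuse one connection:

```csharp
// Hypothetical ConnectionScope: the outermost scope on a thread owns the
// connection; nested scopes and BOs reuse it via ConnectionScope.Current.
public sealed class ConnectionScope : IDisposable
{
    [ThreadStatic]
    private static SqlConnection current;

    private readonly bool isOwner;

    public ConnectionScope(string connectionString)
    {
        if (current == null)
        {
            current = new SqlConnection(connectionString);
            current.Open();
            isOwner = true;   // only the outermost scope opens/closes
        }
    }

    public static SqlConnection Current
    {
        get { return current; }
    }

    public void Dispose()
    {
        if (isOwner)
        {
            current.Dispose();
            current = null;
        }
    }
}
```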

Tuzo
If you have the data layer handle opening and closing connections, how do you use transactions? If you have three BO's that need to do something and they are all on the same transaction, wouldn't they all have to use the same connection?
Dan H
Will this work with Oracle 10g?
Dan H
I was thinking about using the Data Access block in Enterprise Library. I'm just not sure I have enough time to rewrite that portion of my application. I'll have to use EL or pass the connection around to each of my BOs. My BOs and Entities are one and the same, so I may need to break them apart to make this work better.
Dan H
+1  A: 

You are probably looking for the Unit of Work pattern and Registry pattern. These two patterns can work in concert to separate the concerns of finding business objects and tracking them for later commit to your data store as a transaction.

I would also look into Object Relational Mapping, or ORM. ORM is a higher-level composition of the unit of work, registry, persistence ignorance, and other patterns that provides a very clean separation of your business logic from your persistence logic. Using an ORM, you can generally eliminate the need to write stored procedures, build a custom DAL, etc. The ORM takes care of your persistence concerns for you, allowing you to focus on the business logic that needs to be done.

Since you are using C# and .NET, I would look into Entity Framework (v4, don't use v1) or LINQ to SQL. Both are OR mappers that ship with the .NET Framework from v3.5 onwards. LINQ to SQL is a very simple and well-tooled ORM that should get you going very quickly. Entity Framework is a much richer ORM that is also very well tooled (better than LINQ to SQL) and offers considerably more functionality. There are also third-party ORMs that can do the job, including a free one called NHibernate. While it is not as well tooled as the Microsoft ORMs, NHibernate is a very mature open source ORM with a large community following.
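As a rough illustration of the unit-of-work style these ORMs give you (the context, table, and property names below are made up, standing in for whatever your designer generates), a LINQ to SQL DataContext tracks your changes and submits them in a single transaction:

```csharp
// Hypothetical LINQ to SQL usage: OrdersDataContext and its tables are
// placeholders for generated types.
using (var db = new OrdersDataContext())
{
    var order = db.Orders.Single(o => o.OrderId == orderId);
    order.Quantity += 1;

    db.Shipments.InsertOnSubmit(new Shipment { OrderId = order.OrderId });

    // All pending inserts/updates are committed in one transaction.
    db.SubmitChanges();
}
```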

If an ORM is not a possibility, then I would look into Unit of Work, Registry (or Repository), Persistence Ignorance, Separation of Concerns, Single Responsibility, and other related patterns.

jrista