Some "high risk" data operations need to be logged. In this case, the "high risk" operations are defined as writes to our ERP system. It happens that we are logging those events to our SQL Server database.

Pseudo-code:

public class ERP              // lives in MyCompany.DAL
{
    public void WriteToERP(string msg)
    {
        // ... do the write
        MyCompany.Logging.Write("Wrote msg: " + msg);
    }
}

public class Logging          // lives in MyCompany
{
    public static void Write(string msg)
    {
        MyCompany.DAL.ExecuteSQL("INSERT INTO EventLog VALUES ('" + msg + "')");
    }
}

What is the best practice to eliminate this tight coupling?

A: 

Maybe you could have your logging component, moved to a separate assembly (I'm assuming this is C# code), raise an event that the caller registers for before calling Logging.Write(). After Logging.Write() returns, unregister from the event. In the event handler you could then execute your MyCompany.DAL.ExecuteSQL("Insert INTO EventLog VALUES " + msg) call yourself.
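A minimal sketch of this event-based approach (the class and event names here are illustrative, not from the original code — the handler just prints where real code would call the DAL):

```csharp
using System;

public class LogEventArgs : EventArgs
{
    public string Message { get; private set; }
    public LogEventArgs(string message) { Message = message; }
}

public static class Logging
{
    public static event EventHandler<LogEventArgs> MessageLogged;

    public static void Write(string msg)
    {
        var handler = MessageLogged;        // copy to avoid a race
        if (handler != null)
            handler(null, new LogEventArgs(msg));
    }
}

public static class Erp
{
    public static void WriteToErp(string msg)
    {
        // Register before the write; this handler is where the caller
        // would run its MyCompany.DAL.ExecuteSQL insert.
        EventHandler<LogEventArgs> toDb =
            (s, e) => Console.WriteLine("log -> " + e.Message);
        Logging.MessageLogged += toDb;
        try
        {
            // ... do the write ...
            Logging.Write("Wrote msg: " + msg);
        }
        finally
        {
            Logging.MessageLogged -= toDb;  // unregister afterwards
        }
    }
}
```

This way the logging assembly never references the DAL; it only raises the event, and whoever subscribes decides where the log entry goes.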

unforgiven3
+2  A: 

Hmm, IMHO logging is an infrastructure concern. You can use it in your DAL, but your logger should not use your DAL.

If you remove the dependency your logger has on your DAL, then you should be able to use your logger in other projects as well.
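One way to remove that dependency is to have the logger write to an abstraction, with the application (not the logger) supplying the concrete sink. A minimal sketch, with illustrative names (`ILogSink`, `Logger`, `InMemorySink` are assumptions, not from the question):

```csharp
using System.Collections.Generic;

// The logger depends only on this abstraction, not on the DAL.
public interface ILogSink
{
    void Write(string msg);
}

public class Logger
{
    private readonly ILogSink sink;
    public Logger(ILogSink sink) { this.sink = sink; }
    public void Write(string msg) { sink.Write(msg); }
}

// Example sink with no DAL dependency; another project could instead
// supply a sink that inserts into SQL Server via the DAL.
public class InMemorySink : ILogSink
{
    public readonly List<string> Entries = new List<string>();
    public void Write(string msg) { Entries.Add(msg); }
}
```

The DAL can still use a `Logger`, but the circular reference is gone: only the composition root knows that one particular sink happens to write through the DAL.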

Frederik Gheysels
+1  A: 

You can create a custom TraceListener (System.Diagnostics) that inserts into your company's SQL Server database, then use Trace / TraceSource (System.Diagnostics) for the logging in your application's code. Standard .NET configuration lets you attach your custom TraceListener without recompiling. That way, if you ever need to change how events are logged, you only have to change the TraceListener. Plus you could reuse the TraceListener in other applications.
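A hedged sketch of such a listener — the connection string, table, and column names here are assumptions, not anything from the question:

```csharp
using System.Diagnostics;

public class SqlEventLogListener : TraceListener
{
    private readonly string connectionString;

    public SqlEventLogListener(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // TraceListener's two abstract members both funnel into one insert.
    public override void Write(string message)     { Insert(message); }
    public override void WriteLine(string message) { Insert(message); }

    protected virtual void Insert(string message)
    {
        using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
        using (var cmd = new System.Data.SqlClient.SqlCommand(
            "INSERT INTO EventLog (Message) VALUES (@msg)", conn))
        {
            cmd.Parameters.AddWithValue("@msg", message);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

Application code then just calls Trace.WriteLine(...) or writes through a TraceSource, and the listener is attached either in code (Trace.Listeners.Add(...)) or in the <system.diagnostics> section of the config file.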

You could also use the Logging Application Block from the Enterprise Library, or one of the many other 3rd-party logging solutions.

Aaron Daniels
+1 - As the EntLibs are heavily driven by config, you can have a consistent logging sub-system in place which is cleanly separated from your application (in areas such as the DAL); you can log stuff to an application db or somewhere else - it's all loosely coupled.
Adrian K
A: 

I've decoupled this situation before in two ways: Status changes and Log events.

First way is to create an IHaveStatus interface like such:

/// <summary>
/// Interface for objects that have a status message 
/// that describes their current state.
/// </summary>
public interface IHaveStatus
{
    /// <summary>
    /// Occurs when the <seealso cref="Status"/> property has changed.
    /// </summary>
    event EventHandler<StatusChangedEventArgs> StatusChanged;
    /// <summary>
    /// The current status of the object.  When this changes, 
    /// <seealso cref="StatusChanged"/> fires.
    /// </summary>
    string Status { get; }
}

As your object does stuff, you set your Status property. You can configure your property setter to fire the StatusChanged event when you set it. Whoever uses your objects can listen to your status changed event and log everything that happens.
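A hypothetical sketch of such a setter (the original answer doesn't define StatusChangedEventArgs or a concrete class, so the shapes below are assumptions; the interface is repeated so the sketch is self-contained):

```csharp
using System;

public class StatusChangedEventArgs : EventArgs
{
    public string NewStatus { get; private set; }
    public StatusChangedEventArgs(string newStatus) { NewStatus = newStatus; }
}

public interface IHaveStatus
{
    event EventHandler<StatusChangedEventArgs> StatusChanged;
    string Status { get; }
}

public class ErpWriter : IHaveStatus
{
    public event EventHandler<StatusChangedEventArgs> StatusChanged;

    private string status;
    public string Status
    {
        get { return status; }
        private set
        {
            status = value;
            var handler = StatusChanged;    // copy to avoid a race
            if (handler != null)
                handler(this, new StatusChangedEventArgs(value));
        }
    }

    public void WriteToErp(string msg)
    {
        // ... do the write ...
        Status = "Wrote msg: " + msg;       // subscribers can log this
    }
}
```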

Another version of this is to add a Log event to your objects.

public event EventHandler<LogEventArgs> Log;

The principle is pretty much the same, only your objects would be less chatty than with a Status-driven log (you only fire the event when you specifically want to log something).

The idea is that it's the responsibility of callers outside your DAL to hook these events up to the proper log (hopefully set up using DI). Your DAL is oblivious to who or what is consuming these events, which separates these concerns well.

Will
A: 

The common thread among the responses seems to be the suggestion to implement something like the observer pattern.

(Let me know if there's a better summary statement. I'll update accordingly.)

Larsenal
A: 

Actually, if by high-risk data you mean data that is critical, where you must be able to verify it is in the state it is supposed to be in, and if you also need the logs to live in the database (as a kind of metadata), then the solution should be completely different from what others have suggested.

In the situation I described, the result of a database transaction should leave both the logging data and the data itself in the database at any given time. One should not be written independently of the other.

As a result, this kind of "logging" should be done as part of a single database transaction, and the DAL should make sure that both items are inserted correctly at the same time, in the same transaction.

Failure to do so could have the following side effect:

  • Having only one of the data or the log inserted in the db.
  • Having one of the data or the log inserted before the other, meaning that a system relying on both being present at any given time might fail randomly in specific circumstances.
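A sketch of what that looks like in the DAL — writing the data and its log entry in one transaction, so neither row can exist without the other (table, column, and method names are illustrative):

```csharp
using System.Data.SqlClient;

public static class TransactionalDal
{
    public static void WriteWithLog(string connectionString, string msg)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                try
                {
                    using (var data = new SqlCommand(
                        "INSERT INTO ErpData (Payload) VALUES (@p)", conn, tx))
                    {
                        data.Parameters.AddWithValue("@p", msg);
                        data.ExecuteNonQuery();
                    }
                    using (var log = new SqlCommand(
                        "INSERT INTO EventLog (Message) VALUES (@m)", conn, tx))
                    {
                        log.Parameters.AddWithValue("@m", "Wrote msg: " + msg);
                        log.ExecuteNonQuery();
                    }
                    tx.Commit();        // both rows, or neither
                }
                catch
                {
                    tx.Rollback();      // leaves the DB consistent
                    throw;
                }
            }
        }
    }
}
```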
Loki
It's not so much metadata. We're dealing with writes to a slow, error-prone system. The logging is done to the fast, reliable database as a sort of fallback to identify and recover from problems with the "bad" database. This whole issue stems from the fact that one database is relatively unreliable.
Larsenal
OK, then I'll keep my answer up for reference given the info you added, but it doesn't apply to your situation.
Loki
A: 

To avoid the circular dependency DAL -> Logger -> DAL, I'd propose you have two layers of DAL: "simple DAL", and "logging DAL".

The "simple DAL" is just a DAL. The "logging DAL" builds on the "simple DAL"; it manipulates the DB using the simple DAL, and also logs stuff, again using the simple DAL. So you have:

[application logic] --uses--> [logging DAL] --uses--> [simple DAL] --uses--> DB

If you want to do stuff to the DB that does not need to be logged ("low risk" operations ;-)) you could use "simple DAL" directly and bypass the logging DAL.
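A minimal sketch of the two layers (class and table names are illustrative; the simple DAL records SQL in memory here just so the flow is visible — in reality ExecuteSQL would talk to the database):

```csharp
using System.Collections.Generic;

public class SimpleDal
{
    public readonly List<string> Executed = new List<string>();
    public virtual void ExecuteSQL(string sql) { Executed.Add(sql); }
}

public class LoggingDal
{
    private readonly SimpleDal dal;
    public LoggingDal(SimpleDal dal) { this.dal = dal; }

    // "High risk" write: the data plus a log entry, both through the
    // simple DAL, so the logger never depends on the logging DAL.
    public void WriteToErp(string msg)
    {
        dal.ExecuteSQL("INSERT INTO ErpData VALUES ('" + msg + "')");
        dal.ExecuteSQL("INSERT INTO EventLog VALUES ('Wrote msg: " + msg + "')");
    }
}
```

"Low risk" callers simply hold a SimpleDal reference and skip the logging layer entirely.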

sleske