I'm facing a dilemma (!).

In a first scenario, I implemented a solution that replicates data from one database to another using SqlBulkCopy synchronously, and I had no problems at all.

Now, using the ThreadPool, I implemented the same thing asynchronously, one thread per table, and everything works fine. But after some time (usually 1 hour, because the copy operations take about that long), the operations sent to the ThreadPool stop being executed. There is a different SqlBulkCopy using a different SqlConnection per thread.

I already checked the number of free threads, and they are all free at the beginning of the invocation. I have one AutoResetEvent to wait for the threads to finish their job before launching again, and a FIFO semaphore that holds the counter of active threads.

Is there some issue that I have forgotten or that I should evaluate when using SqlBulkCopy? I would appreciate some help, because I'm out of ideas. ;)


->Usage

SemaphoreFIFO waitingThreads = new SemaphoreFIFO();
AutoResetEvent autoResetEvent = new AutoResetEvent(false);
(...)
List<TableMappingHolder> list = BulkCopy.Mapping(tables);
waitingThreads.Put(list.Count, 300000);

for (int i = 0; i < list.Count; i++){
    ThreadPool.QueueUserWorkItem(call =>
    {
        //Replication
        (...)
        waitingThreads.Get();

        if (waitingThreads.Counter == 0)
            autoResetEvent.Set();
    });
}

bool finalized = autoResetEvent.WaitOne(300000);
(...)
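
For reference, the same fan-out/fan-in pattern can also be written with only Interlocked and a ManualResetEvent instead of the custom semaphore. This is just a minimal sketch, not the original implementation; ReplicateTable is a hypothetical method standing in for the per-table copy:

int pending = list.Count;
ManualResetEvent allDone = new ManualResetEvent(false);

foreach (TableMappingHolder holder in list)
{
    TableMappingHolder current = holder;    // copy: avoid capturing the loop variable
    ThreadPool.QueueUserWorkItem(state =>
    {
        try
        {
            ReplicateTable(current);        // hypothetical per-table copy
        }
        finally
        {
            // Count down even if the copy throws; the last worker opens the gate.
            if (Interlocked.Decrement(ref pending) == 0)
                allDone.Set();
        }
    });
}

// Same timeout as in the original usage.
bool finished = allDone.WaitOne(300000, false);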
#########################################

//Bulk Copy

public bool SetData(SqlDataReader reader, string _destinationTableName, List<SqlBulkCopyColumnMapping> _sqlBulkCopyColumnMappings)
{
    using (SqlConnection destinationConnection =
                new SqlConnection(ConfigurationManager.ConnectionStrings["dconn"].ToString()))
    {
        destinationConnection.Open();

        // Set up the bulk copy object and map the columns by name,
        // since the source and destination column order may differ.
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
        {
            bulkCopy.BulkCopyTimeout = 300000;
            bulkCopy.DestinationTableName = _destinationTableName;

            foreach (SqlBulkCopyColumnMapping columnMapping in _sqlBulkCopyColumnMappings)
                bulkCopy.ColumnMappings.Add(columnMapping);

            try
            {
                // Write from the source to the destination.
                bulkCopy.WriteToServer(reader);
            }
            catch (Exception ex)
            {
                // log ex
                return false;
            }
            finally
            {
                // Close the reader and the destination objects explicitly,
                // even though the using blocks would dispose them as well.
                try { reader.Close(); }
                catch (Exception e) { /* log */ }
                try { bulkCopy.Close(); }
                catch (Exception e) { /* log */ }
                try { destinationConnection.Close(); }
                catch (Exception e) { /* log */ }
            }
        }
    }
    return true;
}
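
For completeness, this is roughly what each queued work item might look like when it calls SetData: each thread opens its own source connection and its own reader, so no SQL object is shared between threads. It is only a sketch of the hypothetical ReplicateTable worker referenced above; the "sconn" connection string name, the SELECT statement and the TableMappingHolder members are assumptions, not the original code:

// Hypothetical per-table worker: one source connection and reader per thread,
// plus the per-thread SqlBulkCopy created inside SetData.
private void ReplicateTable(TableMappingHolder holder)
{
    using (SqlConnection sourceConnection =
               new SqlConnection(ConfigurationManager.ConnectionStrings["sconn"].ToString()))
    {
        sourceConnection.Open();

        using (SqlCommand command = new SqlCommand(
                   "SELECT * FROM " + holder.SourceTableName, sourceConnection))
        {
            command.CommandTimeout = 0;     // no timeout for the long-running read

            using (SqlDataReader reader = command.ExecuteReader())
            {
                // SetData closes the reader and the destination objects itself.
                SetData(reader, holder.DestinationTableName, holder.ColumnMappings);
            }
        }
    }
}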
#

Semaphore

using System;
using System.Collections.Generic;
using System.Threading;

public sealed class SemaphoreFIFO
{
    private int _counter;
    private readonly LinkedList<int> waitQueue = new LinkedList<int>();

    public int Counter
    {
        get { return _counter; }
    }

    private void internalNotify()
    {
        if (waitQueue.Count > 0 && _counter == 0)
        {
            Monitor.PulseAll(waitQueue);
        }
    }

    public void Get()
    {
        lock (waitQueue)
        {
            _counter--;
            internalNotify();
        }
    }

    public bool Put(int n, int timeout)
    {
        if (timeout < 0 && timeout != Timeout.Infinite)
            throw new ArgumentOutOfRangeException("timeout");
        if (n < 0)
            throw new ArgumentOutOfRangeException("n");

        lock (waitQueue)
        {
            if (waitQueue.Count == 0 && _counter == 0)
            {
                _counter += n;
                internalNotify();
                return true;
            }

            int endTime = Environment.TickCount + timeout;
            LinkedListNode<int> me = waitQueue.AddLast(n);
            try
            {
                while (true)
                {
                    Monitor.Wait(waitQueue, timeout);

                    if (waitQueue.First == me && _counter == 0)
                    {
                        _counter += n;
                        waitQueue.RemoveFirst();
                        internalNotify();
                        return true;
                    }

                    if (timeout != Timeout.Infinite)
                    {
                        int remainingTime = endTime - Environment.TickCount;
                        if (remainingTime <= 0)
                        {
                            // TIMEOUT
                            if (waitQueue.First == me)
                            {
                                waitQueue.RemoveFirst();
                                internalNotify();
                            }
                            else
                                waitQueue.Remove(me);
                            return false;
                        }
                        timeout = remainingTime;
                    }
                }
            }
            catch (ThreadInterruptedException)
            {
                // INTERRUPT: remove ourselves from the wait queue and rethrow,
                // preserving the original stack trace.
                if (waitQueue.First == me)
                {
                    waitQueue.RemoveFirst();
                    internalNotify();
                }
                else
                    waitQueue.Remove(me);
                throw;
            }
            }
        }
    }
}
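
As a sanity check of the class above, here is a small, self-contained example of its intended behaviour: Put(n, ...) loads the counter, each Get() takes one unit, and a subsequent Put only returns once the counter is back to zero. The sleep duration is arbitrary and only simulates work:

SemaphoreFIFO gate = new SemaphoreFIFO();
gate.Put(3, Timeout.Infinite);          // counter = 3, returns immediately

for (int i = 0; i < 3; i++)
{
    ThreadPool.QueueUserWorkItem(state =>
    {
        Thread.Sleep(100);              // simulate some per-table work
        gate.Get();                     // counter: 3 -> 2 -> 1 -> 0
    });
}

// Blocks until all three Get() calls have run, i.e. the counter is zero again.
bool reloaded = gate.Put(3, 5000);
Console.WriteLine(reloaded);            // expected output: True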
A: 

I would just go back to using SQLBulkCopy synchronously. I'm not sure what you gain by doing a bunch of bulk copies all at the same time (instead of one after the other). It may complete everything a bit faster, but I'm not even sure of that.

MusiGenesis
In my production environment it's synchronous, but what I want to achieve is for the data in all tables to be written at almost the same time, because it's going to be used by another application.
Ruben F.
My guess is that you wouldn't gain much (if anything) from this approach. I think if you run two bulk copies at the same time, each will run about half as fast as it would if run separately, so the net completion time would be the same. I've never heard of this being done (multiple simultaneous bulk inserts), and it may be that your threads stop executing because you're running into a fundamental limit on how many bulk inserts can be done at the same time.
MusiGenesis
Thanks for the comment. Indeed, if I comment out the bulk insert and leave the other data access in place, the application works fine!
Ruben F.
A: 

Henk, here is some code

--- moved to question ----

Ruben F.