We have a program that reads in a stream of input and writes it to the database. There is no user input.

This program is currently running in parallel on both the development and the production server, with the same data as input, writing to different output servers.

On the development server everything is fine; around 30-odd pooled connections are open at a time and it runs happily (this may sound high, but we run several consecutive, brief queries for each input, and there is a high volume of data). On the production server, it's always maxed out at 100 connections and sometimes throws an exception indicating it's run out of available connections in the pool.

Is there some sort of SQL Server setting that could be causing this disparity? The only other difference is that the production server is under additional load from various sources.

I could just increase the number of connections in the pool (although I don't know how many would satisfy it), but I would like to understand what is causing this.

A: 

Connection pooling is handled on the ADO.NET side; SQL Server itself doesn't do connection pooling. From memory, ADO.NET's default maximum pool size is 100, which matches the ceiling you're hitting. One thing you can try in your application is to set the pool size explicitly using Min Pool Size and Max Pool Size in your connection string.
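
For example, here is a minimal sketch of such a connection string (the server, database, and credentials are placeholders; the relevant parts are the Min Pool Size and Max Pool Size keywords):

// Hypothetical connection string for System.Data.SqlClient; everything except
// the pool-size keywords is a placeholder.
string connectionString =
    "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=SSPI;" +
    "Min Pool Size=10;Max Pool Size=200;";

Keep in mind that raising Max Pool Size only hides the problem if the real issue is connections not being returned to the pool.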


GregD
+1  A: 

Usually, when you receive an exception saying you've reached your connection pool limit, it means your code isn't closing/disposing of a connection properly.

My theory is that you are hitting a database exception in production that you aren't hitting in dev, and your connections are staying open because of it.

You should always do your database work in a try/catch, declaring the connection and command outside the block and initializing them inside it. Never rely on closing a connection within the try/catch itself; always close/dispose in a finally block:

try
{
    // m_Connection and m_Command are fields declared outside this block and
    // initialized here, so the finally block can always reach them.
    m_Connection = this.getConnection();
    m_Command = this.getCommand();
    m_Command.CommandTimeout = m_ConnectionTimeout;
    m_Command.CommandText = sql;
    m_Command.Connection = m_Connection;
    m_Command.CommandType = CommandType.Text;

    m_Connection.Open();

    return m_Command.ExecuteNonQuery();
}
finally
{
    // Runs whether or not ExecuteNonQuery throws, so the connection is always
    // closed and returned to the pool.
    if (m_Connection != null && m_Connection.State != ConnectionState.Closed)
    {
        m_Connection.Close();
        m_Connection.Dispose();
    }

    if (m_Command != null)
        m_Command.Dispose();
}

One application really shouldn't be consuming 100 connections to the database. I would check that you are closing your connections properly. At the very least, put some logging around the places where you talk to the DB to see whether an exception is occurring there.
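
A minimal sketch of that kind of logging, assuming a hypothetical ExecuteNonQuery(sql) wrapper and a placeholder Log.Error call standing in for whatever logging framework you already use:

// Hypothetical wrapper around a single DB call; Log.Error is a placeholder
// for your actual logger (SqlException lives in System.Data.SqlClient).
try
{
    return ExecuteNonQuery(sql);
}
catch (SqlException ex)
{
    Log.Error("Database call failed: " + sql, ex);
    throw;   // rethrow so callers still see the failure
}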

Jared
Thanks for this, but to the best of my knowledge there aren't any Open() calls in the code that aren't closed in a 'finally'. Plus when you look at the connections in Activity Monitor, the last activity for each connection is all within the past minute, so I imagine that means they are not leaked?
We have also implemented the logging, but you are quite right that we are getting more errors on the production server - typically transport-layer exceptions, which we trap so we can retry the query. But again, this is all wrapped in a try/finally, so it should really be OK...
+1  A: 

The answer turned out to be that the way I was setting CommandBehavior.CloseConnection on my SqlDataReaders was wrong (I wasn't combining the flags bitwise correctly). So I WAS leaking connections after all.
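
For reference, a minimal sketch of a correct combination (the command and query here are placeholders): CommandBehavior is a [Flags] enum, so the values have to be combined with a bitwise OR, otherwise CloseConnection can be silently dropped and the connection never returns to the pool.

// Hypothetical command; the key point is the bitwise OR of the
// CommandBehavior flags (System.Data) so CloseConnection is actually set.
using (SqlDataReader reader = command.ExecuteReader(
           CommandBehavior.CloseConnection | CommandBehavior.SingleResult))
{
    while (reader.Read())
    {
        // process the row...
    }
}
// Disposing the reader closes it, and CloseConnection then closes the
// underlying connection, returning it to the pool.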

Putting them in a using() block would have helped a bit, since Dispose() would have been called rather than just Close() (a sketch of that pattern follows below). Glad you found your issue though.
Jason Short
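
A minimal sketch of that pattern, with a placeholder connection string and query; each using block guarantees Dispose() is called even if an exception is thrown, so the connection always goes back to the pool:

// Placeholder connection string and SQL (System.Data.SqlClient).
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(sql, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the row...
        }
    }
}   // Dispose() closes the connection and returns it to the pool here.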