We have a program that reads in a stream of input and writes it to the database. There is no user input.
This program is currently running in parallel on both the development and production servers, consuming the same input data but writing to different output servers.
On the development server everything is fine: 30-odd pooled connections are open at a time and it runs happily (this may sound high, but we run several consecutive brief queries for each input, and the data volume is high). On the production server, the pool is always maxed out at 100 connections, and the program sometimes throws an exception indicating it has run out of available connections in the pool.
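One theory I'm considering is that a connection is occasionally not returned to the pool on some error path that only triggers under production conditions. This is not my actual code, just a toy Python sketch showing how even a small leak rate (~2% of items here) eventually exhausts a fixed-size pool, while a leak-free run would sit happily at a low connection count:

```python
import queue

class Pool:
    """Toy fixed-size connection pool; stands in for the driver's real pool."""
    def __init__(self, size):
        self._free = queue.Queue()
        for i in range(size):
            self._free.put(f"conn-{i}")

    def get(self):
        try:
            return self._free.get_nowait()
        except queue.Empty:
            raise RuntimeError("pool exhausted: no available connections")

    def release(self, conn):
        self._free.put(conn)

def handle(pool, item):
    """Buggy handler: a failing item returns early and never releases its connection."""
    conn = pool.get()
    if item % 50 == 7:      # ~2% of items hit an error path (hypothetical)
        return False        # early return skips release -> the connection leaks
    # ... run several brief queries on conn ...
    pool.release(conn)
    return True

pool = Pool(100)            # same cap as the production pool
leaked = 0
try:
    for item in range(10_000):
        if not handle(pool, item):
            leaked += 1
except RuntimeError as e:
    print(f"failed at item {item} after leaking {leaked} connections: {e}")
```

With a leak-free `handle` the same loop never uses more than one connection at a time, which would match the development server's behavior; the difference in production could simply be an error path that only fires there.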
Is there some sort of SQL Server setting that could be causing this disparity? The only other difference is that the production server is under additional load from various sources.
I could just increase the maximum number of connections in the pool (although I don't know how many would be enough), but I would rather understand what is causing this.
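For reference, if I do raise the cap: I'm assuming this is the ADO.NET SqlClient pool, since its default Max Pool Size is exactly the 100 I'm hitting, in which case the limit is set in the connection string (server and database names below are placeholders):

```
Server=prod-db;Database=Ingest;Integrated Security=true;Max Pool Size=200
```

But bumping the cap would only mask the problem if connections are being leaked rather than genuinely all in use.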