Every now and then in a high-volume .NET application, you might see this exception when you try to execute a query:

System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server.

According to my research, this is something that "just happens" and not much can be done to prevent it. It does not happen as a result of a bad query, and generally cannot be duplicated. It just crops up maybe once every few days in a busy OLTP system when the TCP connection to the database goes bad for some reason.

I am forced to detect this error by parsing the exception message, and then retrying the entire operation from scratch, including opening a new connection. None of that is pretty.

Anybody have any alternate solutions?


Do you have statistics for the load on your database server when these errors are thrown? You might have some database issues that are causing connections to fail.

John Christensen

You should also check hardware connectivity to the database.

Perhaps this thread will be helpful:

Kevin Goff

This should not happen, even under high transactional volume. We run an average of 25,000 transactions per second on SQL Server 2005 Standard, and we don't get this error. (Unless the cluster fails over, which is every 12+ months, not every few days.)

Without any more info, it sounds like there is a networking problem between your database server and your application servers. Can you post more info?

This really should be a comment.
Ian Boyd
There was no such thing as comments when this question was asked. People used to do @replies to one another as answers.

@Portman, I suspect it is due to the crappy onboard Dell NIC I'm forced to use since both of my PCIe slots are taken up with HBA cards connected to my DAS. I'm upgrading to a bigger machine so I can fit the (much) better Intel NIC.

How are you clustering with Standard Edition? That's an Enterprise Edition feature.

Eric Z Beard

@Eric: clustering, log shipping, and mirroring are all available in Standard.

Uhh... why the downmod?

I'm using a reliability layer around my DB commands (abstracted away in the repository interface). Basically it's just code that intercepts any expected exception (DbException, and also InvalidOperationException, which happens to get thrown on connectivity issues), logs it, captures statistics, and retries everything again.

With that reliability layer present, the service has been able to survive stress-testing gracefully (constant deadlocks, network failures, etc.). Production is far less hostile than that.

PS: There is more on that here (along with a simple way to define reliability with the interception DSL)
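A minimal sketch of such a reliability layer, assuming a reasonably modern C# compiler (the `when` exception filter needs C# 6+); the attempt count and back-off are purely illustrative, not from the original post:

```csharp
using System;
using System.Data.Common;
using System.Threading;

static class Reliable
{
    // Runs the operation, retrying when an expected transient exception
    // (DbException or InvalidOperationException) is thrown.
    public static T Execute<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception ex) when (
                (ex is DbException || ex is InvalidOperationException)
                && attempt < maxAttempts)
            {
                // Log it, capture statistics, back off briefly, then retry
                // the whole operation from scratch.
                Console.Error.WriteLine($"Attempt {attempt} failed: {ex.Message}");
                Thread.Sleep(100 * attempt);
            }
        }
    }
}
```

The operation passed in should build its own connection from scratch each time, so a broken connection is never reused across retries.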

Rinat Abdullin
+2  A: 

To answer your original question:

A more elegant way to detect this particular error, without parsing the error message, is to inspect the Number property of the SqlException.

(This actually returns the error number from the first SqlError in the Errors collection, but in your case the transport error should be the only one in the collection.)
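As a sketch of what that looks like in a catch filter: the specific numbers below (233, 10053, 10054, 10060) are commonly cited codes for broken named pipes, aborted/reset connections, and timeouts, but they are my assumption here — verify them against the numbers your own logs actually report:

```csharp
static class TransportErrors
{
    // True for error numbers that typically accompany transport-level
    // failures (named pipe closed, connection aborted/reset, timeout).
    // Adjust this list to match the SqlException.Number values you observe.
    public static bool IsTransportErrorNumber(int number) =>
        number == 233 || number == 10053 ||
        number == 10054 || number == 10060;
}

// Usage with System.Data.SqlClient:
//   catch (SqlException ex) when (TransportErrors.IsTransportErrorNumber(ex.Number))
//   {
//       // Connection is broken: discard it and retry on a fresh one.
//   }
```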

Daniel Fortunov
+1 Not an answer, but a useful idea.
Ian Boyd

Hi Eric,

I had the same problem. I asked my network geek friends, and they all said what people have replied here: it's the connection between the computer and the database server. In my case it was my Internet Service Provider, or their router, that was the problem. After a router update, the problem went away. But do you have any other drop-outs of internet connection from your computer or server? I had...

Jesper Blad Jensen aka. Deldy
+3  A: 

I posted an answer on another question on another topic that might have some use here. That answer involved SMB connections, not SQL. However it was identical in that it involved a low-level transport error.

What we found was that in a heavy-load situation, it was fairly easy for the remote server to time out connections at the TCP layer simply because the server was busy. Part of the reason was that the Windows defaults for how many times TCP will retransmit data weren't appropriate for our situation.

Take a look at the registry settings for tuning TCP/IP on Windows. In particular you want to look at TcpMaxDataRetransmissions and maybe TcpMaxConnectRetransmissions. These default to 5 and 2 respectively; try upping them a little on the client system and duplicating the load situation.

Don't go crazy! TCP doubles the timeout with each successive retransmission, so the timeout behavior for bad connections can go exponential on you if you increase these too much. As I recall upping TcpMaxDataRetransmissions to 6 or 7 solved our problem in the vast majority of cases.
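As a sketch, the values can be changed with `reg add` (the values shown are illustrative; the keys live under `HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters`, changing them requires administrator rights, and a reboot is needed for them to take effect — back up the key first):

```bat
:: Illustrative values only -- see the caveats about exponential backoff above.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
    /v TcpMaxDataRetransmissions /t REG_DWORD /d 7 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
    /v TcpMaxConnectRetransmissions /t REG_DWORD /d 3 /f
```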

Tim Farley
+1  A: 


I have seen this happen in my own environment a number of times. The client application in this case is installed on many machines. Some of those machines happen to be laptops: people were leaving the application open, disconnecting from the network, then plugging back in and attempting to use it. This will then cause the error you have mentioned.

My first point would be to look at the network and ensure that servers aren't on DHCP, renewing IP addresses and causing this error. If that isn't the case, then you have to start trawling through your event logs looking for other network-related errors.

Unfortunately it is, as stated above, a network error. The main thing you can do is monitor the connections using a tool like Netmon and work back from there.

Good Luck.

Dale Wright
+1  A: 

This blog post by Michael Aspengren explains the error message "A transport-level error has occurred when sending the request to the server."

Manga Lee

I had the same problem albeit it was with service requests to a SQL DB.

This is what I had in my service error log:

System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

I have a C# test suite that tests a service. The service and DB were both on external servers, so I thought that might be the issue. So I deployed the service and DB locally, to no avail. The issue continued. The test suite isn't even a demanding performance test, so I had no idea what was happening. The same test was failing each time, but when I disabled that test, another one would fail continuously.

I tried other methods suggested on the Internet that didn't work either:

  • Increase the registry values of TcpMaxDataRetransmissions and TcpMaxConnectRetransmissions.
  • Disable the "Shared Memory" option within SQL Server Configuration Manager under "Client Protocols" and move TCP/IP to first in the list.
  • This might occur when you are testing scalability with a large number of client connection attempts. To resolve this issue, use the regedit.exe utility to add a new DWORD value named SynAttackProtect to the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ with value data of 00000000.

My last resort was the age-old saying "Try and try again". So I have nested try-catch statements to ensure that if the TCP/IP connection is lost in the lower communications protocol, it doesn't just give up there but tries again. This is now working for me; however, it's not a very elegant solution.


Use Enterprise Services with transactional components.


As far as I can tell, class 20 is transport level.
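A sketch of keying off the severity class instead of individual error numbers: `SqlException.Class` surfaces the server's severity level, and severity 20 and above is documented as a fatal error that terminates the connection (the threshold constant below is my reading of that convention, so verify it for your scenario):

```csharp
static class SqlSeverity
{
    // Severity classes 20-25 are fatal: SQL Server closes the connection,
    // so the operation must be retried on a fresh connection.
    public static bool IsFatal(byte severityClass) => severityClass >= 20;
}

// Usage with System.Data.SqlClient:
//   catch (SqlException ex) when (SqlSeverity.IsFatal(ex.Class))
//   {
//       // Connection is gone: reconnect and retry the whole operation.
//   }
```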