We have a client that sends requests in a loop to a per-call WCF service (WAS-hosted, named-pipe binding, same machine). The WCF service calls an external EXE to process each request.
The operations on the service can take anywhere from a few seconds to a few hours. To keep long-running calls bounded, we set sendTimeout=00:01:00 and receiveTimeout=00:05:00, and a built-in circuit breaker on the service kills the external process after 00:05:00.
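For reference, the binding is configured roughly like this (a minimal sketch written programmatically for readability; the real values live in config, and the endpoint address shown is a placeholder):

```csharp
using System;
using System.ServiceModel;

// Sketch of the client-side binding setup (illustrative only).
var binding = new NetNamedPipeBinding
{
    SendTimeout = TimeSpan.FromMinutes(1),    // sendTimeout=00:01:00
    ReceiveTimeout = TimeSpan.FromMinutes(5)  // receiveTimeout=00:05:00
};
var endpoint = new EndpointAddress("net.pipe://localhost/ProcessingService"); // hypothetical address
```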
Any service-side errors are returned to the client as FaultExceptions.
The client has configurable multithreading. In this example, it is running 3 threads, each of which creates a proxy, makes a call, and closes the proxy (or aborts it if an exception was received).
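The per-thread pattern looks roughly like this (a sketch; `ProcessingServiceClient`, `ProcessRequest`, and `RequestData` are hypothetical names standing in for our generated proxy, operation, and contract type):

```csharp
// Per-thread work item: new proxy per call, Close() on success, Abort() on any exception.
void CallService(RequestData request)
{
    var proxy = new ProcessingServiceClient(); // hypothetical generated ClientBase<T> proxy
    try
    {
        proxy.ProcessRequest(request);         // the actual service operation
        proxy.Close();
    }
    catch (Exception)
    {
        proxy.Abort();                         // never Close() a faulted proxy
        throw;
    }
}
```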
Since the service is instanced per-call, and it never receives more than 3 simultaneous calls (each of which is followed by a .Close() or .Abort() on the proxy), I would not expect the client to be getting timeouts during the send, but I am. In fact, the send timeouts I am seeing seem to imply that the WCF service is hitting the default session limit, even though the class is explicitly marked with [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)].
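For completeness, the service class is declared like this (sketch; the contract and class names are placeholders, the attribute is the relevant part):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IProcessingService
{
    [OperationContract]
    string ProcessRequest(string request);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ProcessingService : IProcessingService
{
    public string ProcessRequest(string request)
    {
        // Launches the external EXE and waits for it; the built-in circuit breaker
        // kills the process after 00:05:00. Service-side errors are returned to the
        // client as FaultExceptions.
        throw new NotImplementedException();
    }
}
```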
I put some trace logging on the client and I can see that the timeout is occurring when the proxy is being created.
Any ideas?
Thanks!