We're using WCF to build a simple web service which our product uses to upload large files over a WAN link. It's supposed to be a simple HTTP PUT, and it's working fine for the most part.

Here's a simplified version of the service contract:

    [ServiceContract, XmlSerializerFormat]
    public interface IReplicationWebService
    {
        [OperationContract]
        [WebInvoke(Method = "PUT", UriTemplate = "agents/{sourceName}/epoch/{guid}/{number}/{type}")]
        ReplayResult PutEpochFile(string sourceName, string guid, string number, string type, Stream stream);
    }

In the implementation of this contract, we read data from the stream and write it out to a file. This works great, so we added some error handling for cases where there isn't enough disk space to store the file. Here's roughly what it looks like:

    public ReplayResult PutEpochFile(string sourceName, string guid, string number, string type, Stream inStream)
    {
        //Stuff snipped
        try
        {
            //Read from the stream and write to the file
        }
        catch (IOException ioe)
        {
            //IOException may mean no disk space
            try
            {
                inStream.Close();
            }
            // if instream caused the IOException, close may throw
            catch
            {
            }
            _logger.Debug(ioe.ToString());
            throw new FaultException<IOException>(ioe, new FaultReason(ioe.Message), new FaultCode("IO"));
        }
    }
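
For context, the snipped read/write section is just a conventional buffered copy from the request stream to the target file, roughly along these lines (an illustrative sketch, not our exact code; targetPath is a placeholder):

    // Illustrative buffered copy; an IOException from the FileStream
    // (e.g. disk full) is what lands in the catch block above.
    using (var outStream = new FileStream(targetPath, FileMode.Create, FileAccess.Write))
    {
        byte[] buffer = new byte[64 * 1024];
        int bytesRead;
        while ((bytesRead = inStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            outStream.Write(buffer, 0, bytesRead);
        }
    }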

To test this, I'm sending a 100GB file to a server that doesn't have enough space for it. As expected, this throws an exception, but the call to inStream.Close() appeared to hang. I dug into it, and what's actually happening is that the call to Close() makes its way through the WCF plumbing until it reaches System.ServiceModel.Channels.DrainOnCloseStream.Close(), which, according to Reflector, allocates a Byte[] buffer and keeps reading from the stream until it hits EOF.

In other words, the Close call is reading the entire 100GB of test data from the stream before returning!
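
For reference, the drain behavior I saw in Reflector is roughly equivalent to this (a paraphrased sketch, not the actual framework source; innerStream stands for whatever stream the wrapper delegates to):

    // Paraphrased sketch of System.ServiceModel.Channels.DrainOnCloseStream.Close():
    // it reads the wrapped stream to EOF before closing it, which is why Close()
    // appears to hang on a 100GB upload.
    public override void Close()
    {
        byte[] buffer = new byte[4096];   // the real buffer size may differ
        while (innerStream.Read(buffer, 0, buffer.Length) > 0)
        {
            // data is discarded; the loop exists only to reach end-of-stream
        }
        innerStream.Close();
    }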

Now it may be that I don't need to call Close() on this stream. If that's the case, I'd like an explanation as to why. But more importantly, I'd appreciate it if anyone could explain why Close() is behaving this way, why it's not considered a bug, and how to reconfigure WCF so that this doesn't happen.

+3  A: 

.Close() is intended to be a "safe" and "friendly" way of stopping your operation: by design, it completes the currently running requests before shutting down.

If you want to throw down the sledgehammer, use .Abort() on your client proxy (or service host) instead. That just shuts down everything without checking and without being nice about waiting for operations to complete.
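
For illustration, the usual Close-then-Abort pattern on the client side looks something like this (a sketch; ReplicationClient is a hypothetical generated proxy name):

    // Sketch of the conventional WCF client pattern: try a graceful Close(),
    // fall back to Abort() if the channel is faulted or times out.
    var client = new ReplicationClient();
    try
    {
        client.PutEpochFile(sourceName, guid, number, type, fileStream);
        client.Close();   // graceful: waits for in-flight work to complete
    }
    catch (CommunicationException)
    {
        client.Abort();   // immediate teardown, no draining or waiting
    }
    catch (TimeoutException)
    {
        client.Abort();
    }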

marc_s
Thanks for the response. The problem with `ServiceHost.Abort()` is that it aborts the entire service, not just the particular call being processed. If there were some way I could instruct WCF to forcibly close the socket in response to the exception, at least then I wouldn't have to wait for the transfer to finish before the client knew something was wrong.
anelson
I couldn't figure out any way to report an exception back to the HTTP client without waiting for the client to finish sending the file, so I've had to accept an interface that PUTs one chunk at a time. I'd hoped to avoid that, but I don't see a way around it given the HTTP implementation I have to work with.
anelson
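
A chunked variant of the contract along those lines might look roughly like this (a hypothetical sketch; the chunk URI segment and operation name are illustrative, not the interface actually shipped):

    [ServiceContract, XmlSerializerFormat]
    public interface IReplicationWebService
    {
        // Hypothetical chunked upload: the client PUTs the file in fixed-size
        // pieces, so a disk-full fault is reported after at most one chunk.
        [OperationContract]
        [WebInvoke(Method = "PUT",
            UriTemplate = "agents/{sourceName}/epoch/{guid}/{number}/{type}/chunk/{chunkIndex}")]
        ReplayResult PutEpochFileChunk(string sourceName, string guid, string number,
            string type, string chunkIndex, Stream chunkData);
    }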