Hello

I'm developing an ASP.NET (3.5) web service application, hosted on IIS6/Win2003. The web service logs a timestamp at the start of the call and just before the return, which allows me to query the time taken. I've also added the time-taken field to the standard IIS log (W3C extended).

The web service is then stress-tested (5 threads, 200 calls per thread). When comparing the time-taken from IIS with the time taken recorded in the database, I see HUGE differences! The time-taken from IIS (which is also the time logged by the calling clients) is much higher than the time logged by ASP.NET. For example, the time spent according to ASP.NET is 1.7 secs, while IIS logs 12000 milliseconds (12 seconds)!
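For reference, the load pattern looks roughly like the sketch below. The proxy class (ServiceProxy) and the input XML are placeholders standing in for the generated web service proxy and the real payload, not the actual test code; it only shows where the client-side time gets measured.

// Rough shape of the stress test: 5 threads, 200 calls per thread.
using System;
using System.Diagnostics;
using System.Threading;

class StressTest
{
    static void Main()
    {
        const int threadCount = 5;
        const int callsPerThread = 200;
        Thread[] workers = new Thread[threadCount];

        for (int t = 0; t < threadCount; t++)
        {
            workers[t] = new Thread(delegate()
            {
                for (int i = 0; i < callsPerThread; i++)
                {
                    Stopwatch sw = Stopwatch.StartNew();
                    // new ServiceProxy().MethodA(someXml);  // the actual web service call
                    sw.Stop();
                    Console.WriteLine("Client-side time: " + sw.ElapsedMilliseconds + " ms");
                }
            });
            workers[t].Start();
        }

        foreach (Thread w in workers)
            w.Join();
    }
}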

What could be a cause for this?

Dummy code for the service:

[WebMethod(Description = "Main entry point to the service.")]
public string MethodA(string theXmlInput)
{
    // Log the input first
    StoreInput(theXmlInput);

    // Run the job; should take about 1 sec
    string result = DoIt(theXmlInput);

    // Log the output
    StoreResult(result);

    return result;
}
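To make explicit what the ASP.NET-side figure covers, here is the same method with an explicit Stopwatch. This is an illustration only; in the real service the timestamps are written by StoreInput and StoreResult rather than a Stopwatch, and it requires using System.Diagnostics;.

// Illustration of the window the ASP.NET-side time measurement covers.
[WebMethod(Description = "Main entry point to the service (timing illustration).")]
public string MethodATimed(string theXmlInput)
{
    Stopwatch sw = Stopwatch.StartNew();   // timer starts inside the method body

    StoreInput(theXmlInput);               // start timestamp is logged around here
    string result = DoIt(theXmlInput);     // the ~1 sec job
    StoreResult(result);                   // end timestamp is logged around here

    sw.Stop();
    // sw.Elapsed only covers the code between StartNew() and Stop(),
    // i.e. the work done inside the method itself.
    return result;
}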
+5  A: 

Your page request comes into IIS (start the IIS timer). Only the .NET sections of your code are handed to the .NET framework (start the .NET timer). The .NET framework processes the logic and returns the result to IIS (stop the .NET timer). IIS then finishes processing the request and sends the HTTP response back to the browser (stop the IIS timer).

IIS might be doing a whole load of additional work: authentication, handling MIME types, and handling content that is not parsed by the .NET framework (HTML, images, Flash, etc.).
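As a rough sketch of how to see the managed portion on its own, something like the following in Global.asax measures from the moment the request enters the ASP.NET pipeline until it is about to leave it (the Trace.WriteLine is just an example sink; substitute your own logging):

// Global.asax.cs sketch: only the managed (ASP.NET) part of the request is
// measured here; IIS-level work and the network transfer fall outside it.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_EndRequest(object sender, EventArgs e)
    {
        TimeSpan managedTime = DateTime.Now - HttpContext.Current.Timestamp;
        System.Diagnostics.Trace.WriteLine(
            "ASP.NET pipeline time: " + managedTime.TotalMilliseconds + " ms");
    }
}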

I personally wouldn't expect to see these two values aligned.

HTH, Mark

Mark Cooper
+1  A: 

Just found out that the IIS time-taken includes NETWORK time (see Microsoft's documentation). This has been the case since IIS6, for those who care. Back to the network boys!

edosoft