I have a client-server application that exchanges XML documents for data requested by the client. Essentially, the user enters some search constraints (attributes to match) and the client communicates with two systems to get the data back (some from a database and some from file servers).
The data returned from the file servers (files of archived data) is quite a bit bigger than the metadata returned from the database, and those downloads correspondingly take more time.
The users have asked me to provide some metrics on how long the archive downloads take and the rate at which the data was downloaded (reported after the download completes).
The client and server communicate with asynchronous I/O across numerous threads, so I can't just wrap the request in a simple start/stop timer.
My current implementation works as follows (rough code sketch after the list):
- Record the current ticks (this is a long-running process, so tick resolution is fine).
- Hand the request off to the web service asynchronously.
- --- Wait ---
- Get the current ticks again.
- Get the size of the returned document (there is some unaccounted-for overhead from the SOAP envelope, but I think that's OK).
- Rate (KB/s) = (Document Size / 1024) / ((End Ticks - Start Ticks) / TicksPerSecond) (I let a TimeSpan object do the tick-to-seconds conversion).
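For reference, here is a stripped-down sketch of the measurement. The service and method names (`BeginGetArchive`/`EndGetArchive`, `ReportRate`) are placeholders following the standard Begin/End async pattern, not my real API:

```csharp
// Record the start time before handing off the async request.
long startTicks = DateTime.UtcNow.Ticks;

// BeginGetArchive/EndGetArchive are placeholder names; my real
// web service proxy differs, but the shape is the same.
archiveService.BeginGetArchive(request, asyncResult =>
{
    byte[] document = archiveService.EndGetArchive(asyncResult);
    long endTicks = DateTime.UtcNow.Ticks;

    // Let TimeSpan convert the tick delta into seconds.
    double seconds = TimeSpan.FromTicks(endTicks - startTicks).TotalSeconds;

    // Rate in KB/s, based on the size of the returned document.
    double rateKBps = (document.Length / 1024.0) / seconds;
    ReportRate(rateKBps); // placeholder for however the rate is surfaced
}, null);
```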
At first I thought this method was OK, but I have users reporting that the rate is much lower for small samples than for large samples, and that the rates vary a great deal over a single execution.
Is there a better way to calculate this rate that would be more immune to this? It makes sense that the rate would be somewhat higher for larger archives, but in testing I see it being 10-40x higher than for a file half the size, which doesn't make sense.
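Some back-of-the-envelope reasoning with made-up numbers, in case it helps frame the question: if each request carries a fixed overhead t (connection setup, server-side lookup, SOAP processing), then the measured rate is size / (t + size / true_rate) rather than the true rate. With, say, a 2 s fixed overhead and a true throughput of 1000 KB/s, a 100 KB file would measure at roughly 48 KB/s while a 10 MB file would measure at roughly 840 KB/s, about a 17x spread. That could account for some of the variation, but a fixed overhead alone can never produce more than a 2x difference between a file and one half its size, so I don't think it's the whole story.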