I have a project that dynamically loads unknown assemblies implementing a specified interface. I don't know the contents or purpose of an assembly beyond the fact that it implements my interface.
I need to somehow restrict the amount of processing power available to these assemblies. Processor priority is not what I'm looking for. Nor can I use a stopwatch and assign each assembly a fixed amount of wall-clock time to run, as the server might be arbitrarily busy.
Ideally I'd like to specify some completely load-independent measure of CPU usage. I can run the assemblies in their own process if necessary.
Is there any way to measure the total accumulated CPU usage of a given thread (or process, though per-thread would be optimal)?
Might I use the process performance counters, or are they, as I suspect, too unreliable? While I don't need to-the-cycle accuracy, I would need rather high accuracy to limit the computing power allocated to each assembly execution.
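As an alternative to performance counters, I'm wondering whether I could read per-thread CPU time directly from the OS. A minimal sketch of what I have in mind, using `GetThreadTimes` via P/Invoke (Windows-only, and assuming the plug-in code runs on a dedicated OS thread, since managed threads aren't strictly guaranteed to map 1:1 to native ones):

```csharp
using System;
using System.Runtime.InteropServices;

static class ThreadCpuTime
{
    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetThreadTimes(IntPtr hThread,
        out long creationTime, out long exitTime,
        out long kernelTime, out long userTime);

    // CPU time (user + kernel) consumed so far by the calling OS thread.
    public static TimeSpan Current()
    {
        if (!GetThreadTimes(GetCurrentThread(),
                out _, out _, out long kernel, out long user))
            throw new System.ComponentModel.Win32Exception();

        // GetThreadTimes reports 100-nanosecond intervals,
        // which is the same unit as TimeSpan ticks.
        return new TimeSpan(kernel + user);
    }
}
```

The resolution of these counters is tied to the scheduler quantum (typically around 15 ms), which is part of why I'm worried about accuracy for short-running assemblies.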
To elaborate a bit on my situation: the reason I'm not looking for prioritization of the processes is that I'm not afraid of exhausting my resources. I just need to ensure I can measure "how many" resources a given assembly uses - thus my point about the server being arbitrarily busy.
Imagine the example scenario where you have two assemblies, X and Y. Each implements a given algorithm, and I want to do a primitive test of which assembly gets the job done quickest. I run each assembly and let it run until it's used "Z" resources, at which point I evaluate which assembly did the best job. In this case, I don't mind if one assembly runs at 100% CPU for three seconds while the other runs at 2% CPU for five minutes - it's the total resource usage that matters.
I'm thinking I might be able to use the CPU time perfcounter to do a crude limitation. Run each assembly in its own process and let it run until it's used a given amount of CPU time, at which point I'll kill the process and evaluate the results. I'm just afraid it won't be accurate enough.
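In process terms, the crude limitation I'm describing would look roughly like this - `RunnerHost.exe` is a hypothetical host executable that loads one plug-in assembly, and I poll `Process.TotalProcessorTime` rather than a perfcounter:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class CpuBudgetRunner
{
    // Launch a hypothetical host process for one assembly and kill it
    // once it has consumed the given CPU-time budget (user + kernel).
    // Returns the last CPU-time reading observed before exit/kill.
    public static TimeSpan RunWithCpuBudget(string assemblyPath, TimeSpan budget)
    {
        using var proc = Process.Start(new ProcessStartInfo
        {
            FileName = "RunnerHost.exe",   // hypothetical plug-in host
            Arguments = assemblyPath,
            UseShellExecute = false,
        }) ?? throw new InvalidOperationException("failed to start host");

        var used = TimeSpan.Zero;
        while (!proc.HasExited)
        {
            proc.Refresh();                 // re-read the CPU counters
            used = proc.TotalProcessorTime; // accumulated CPU time so far
            if (used >= budget)
            {
                proc.Kill();
                break;
            }
            Thread.Sleep(50); // polling granularity; also bounds overshoot
        }
        return used;
    }
}
```

The obvious weakness is that the process can overshoot the budget by up to one polling interval of CPU time, which ties back to my accuracy concern.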