views: 226
answers: 4

I have a port-scanning application that uses work queues and threads. It makes simple TCP connections and spends a lot of time waiting for packets to come back (up to half a second), so the threads don't need to run to completion before yielding (i.e. the first half sends a packet, a context switch happens, other work runs, and execution comes back to a thread that now has network data waiting for it).

I suspect I can improve performance by changing sys.setcheckinterval from its default of 100 (which lets up to 100 bytecodes execute before switching to another thread). But without knowing how many bytecodes actually execute in a thread or function, I'm flying blind: I'm simply guessing values and relying on testing to show a measurable difference, which is difficult because the amount of code being executed is minimal (just a simple socket connection), so network jitter will likely affect any measurement more than changing sys.setcheckinterval does.

So I would like to find out how many bytecodes are executed in certain pieces of code (e.g. the total for a function, or for everything a thread runs) so I can make more intelligent guesses about what to set sys.setcheckinterval to.
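For reference, this is the knob I'm talking about -- a minimal sketch, assuming an interpreter old enough to still have sys.setcheckinterval (it was removed in Python 3.9); the value 10 below is just a placeholder guess:

    import sys

    print(sys.getcheckinterval())   # default: up to 100 bytecodes before a thread switch
    sys.setcheckinterval(10)        # placeholder guess; this is the value I want to pick intelligently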

+1  A: 

At a higher level (per method or per class), the dis module should help.

But if you need finer granularity, tracing is unavoidable. Tracing normally operates on a line-by-line basis, but the hack explained here is a great way to dig deeper, down to the bytecode level. Hats off to Ned Batchelder.
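For example, a static count for a single function can be pulled out like this -- a rough sketch, assuming Python 3.4+ for dis.get_instructions (on older interpreters dis.dis only prints the listing, so you would count lines yourself), with probe() as a made-up stand-in for the scanner's per-port check:

    import dis
    import socket

    def probe(host, port, timeout=0.5):
        # hypothetical stand-in for the scanner's per-port check
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return True
        except (socket.error, socket.timeout):
            return False
        finally:
            s.close()

    # Static count of the bytecode instructions in probe (Python 3.4+).
    print(sum(1 for _ in dis.get_instructions(probe)))

    # Full human-readable disassembly, if you'd rather eyeball it.
    dis.dis(probe)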

utku_karatas
A: 

Posting as an answer since I can't comment on the answer.

The problem with the "dis" module is that it doesn't really help me when there are multiple code paths: at runtime I'm not hitting all of the code, but dis disassembles the entire thing:

Disassemble the bytesource object. bytesource can denote either a module, a class, a method, a function, or a code object. For a module, it disassembles all functions. For a class, it disassembles all methods.

I'd like to actually run the code and get reports of how many bytecodes are executed at various points.
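For what it's worth, on Python 3.7+ frames grew an f_trace_opcodes flag that makes this kind of runtime count fairly painless (the hack linked above predates it and pokes at the tracing machinery directly). A rough sketch, with count_opcodes() as a made-up helper:

    import sys

    class OpcodeCounter:
        """Count the opcodes actually executed (Python 3.7+ only)."""

        def __init__(self):
            self.count = 0

        def __call__(self, frame, event, arg):
            if event == 'call':
                frame.f_trace_opcodes = True   # ask for per-opcode trace events in this frame
                return self
            if event == 'opcode':
                self.count += 1
            return self

    def count_opcodes(func, *args, **kwargs):
        counter = OpcodeCounter()
        sys.settrace(counter)
        try:
            func(*args, **kwargs)
        finally:
            sys.settrace(None)
        return counter.count

    # e.g. count_opcodes(probe, 'localhost', 80)  -> opcodes executed during that call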

Kurt
Kurt, I've just edited the answer to address the issues you mentioned.
utku_karatas
+2  A: 

" I suspect I can improve performance by modifying the sys.setcheckinterval"

This rarely works. Correct behavior can't depend on timing -- you can't control timing. Slight changes in the OS, the hardware, the Python patch level, or the phase of the moon will change how your application behaves.

The select module is what you use to wait for I/O. Your application can be structured as a main loop that does the select and queues up work for the other threads; the other threads wait for entries to appear in their request queues and process them.
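Something along these lines, roughly -- a sketch only, Unix-oriented, that collapses the worker threads away and just shows a select loop multiplexing non-blocking connect attempts (scan() and its internals are made up for illustration; the queue hand-off to worker threads would slot in where results are recorded):

    import errno
    import select
    import socket

    def scan(host, ports, timeout=0.5):
        """Probe several ports at once with non-blocking sockets and select()."""
        pending = {}
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setblocking(False)
            err = s.connect_ex((host, port))
            if err in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
                pending[s] = port
            else:
                s.close()

        results = {}
        while pending:
            # A socket becomes writable once its connect attempt finishes (Unix).
            _, writable, _ = select.select([], list(pending), [], timeout)
            if not writable:
                break  # whatever is left has timed out
            for s in writable:
                port = pending.pop(s)
                results[port] = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0
                s.close()

        for s in pending:
            s.close()
        return results

    # e.g. scan('localhost', range(20, 1025))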

S.Lott
+1  A: 

Reasoning about a system of this complexity will rarely produce the right answer. Measure the results, and use the setting that runs fastest. If, as you say, testing can't measure the difference between the various settings of setcheckinterval, then why bother changing it? Only measurable differences are interesting. If your test run is too short to provide meaningful data, make the run longer until it does.
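For example, a crude harness along these lines would settle it empirically -- a sketch, assuming a pre-3.9 interpreter (where sys.setcheckinterval still exists), with workload standing in for whatever the scanner actually does:

    import sys
    import time

    def time_with_interval(interval, workload, repeats=10):
        sys.setcheckinterval(interval)
        start = time.time()
        for _ in range(repeats):
            workload()
        return time.time() - start

    # for interval in (10, 100, 1000):
    #     print(interval, time_with_interval(interval, my_scan_run))  # my_scan_run is hypothetical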

Ned Batchelder