views: 64 | answers: 3

I am a newbie at profiling, so please tell me how you go about profiling your applications. Which is better: profiling the whole application, or profiling a part of it in isolation? And if the answer is isolation, how do you do that?

A: 

Take a look at http://www.ej-technologies.com/products/jprofiler/overview.html

Sergey
Thanks, but what I am looking for is the technique.
bonjorno
+5  A: 

As far as possible, profile the entire application, running a real (typical) workload. Anything else and you risk getting results that lead you to focus your optimization efforts in the wrong place.

EDIT

Isn't it too hard to get a correct result when profiling the whole application, since the test result then depends on user interaction (button clicking etc.) rather than on an automated task? Tell me if I'm wrong.

Getting the "correct result" depends on how you interpret the profiling data. For instance, if you are profiling an interactive application, you should figure out which parts of the profile correspond to waiting for user interaction, and ignore them.

There are a number of problems with profiling your application in parts. For example:

  • By deciding beforehand which parts of the application to profile, you don't get a good picture of the relative contribution of the different parts, and you risk wasting effort on the wrong parts.

  • You pretty much have to use artificial workloads. Whenever you do that there is a risk that the workloads are not representative of "normal" workloads, and your profiling results are biased.

  • In many applications, the bottlenecks are due to the way that the parts of the application interact with each other, or with I/O or garbage collection. Profiling different parts of the application separately is likely to miss these interactions.

... what I am looking for is the technique

Roughly speaking, you start with the biggest "hotspots" identified by the profile data and drill down until you've figured out why so much time is being spent in a certain area. It really helps if your profiling tool can aggregate and present the data both top down and bottom up.
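To make the "top down / bottom up" point concrete: from a pile of stack samples you can count, for each function, how often it appears anywhere on a stack (inclusive time) versus how often it is the innermost frame (self time). A rough sketch in Java, not any particular profiler's output:

    import java.util.*;

    class StackAggregator {
        // Each sample is a list of frames, innermost (index 0) to outermost.
        static void aggregate(List<List<String>> samples) {
            Map<String, Integer> inclusive = new HashMap<>(); // appears anywhere on the stack
            Map<String, Integer> self = new HashMap<>();      // is the innermost frame

            for (List<String> stack : samples) {
                for (String frame : new HashSet<>(stack)) {   // count each frame once per sample
                    inclusive.merge(frame, 1, Integer::sum);
                }
                self.merge(stack.get(0), 1, Integer::sum);
            }

            int n = samples.size();
            inclusive.entrySet().stream()
                     .sorted((a, b) -> b.getValue() - a.getValue())
                     .forEach(e -> System.out.printf(
                             "%-40s inclusive %3d%%  self %3d%%%n",
                             e.getKey(),
                             100 * e.getValue() / n,
                             100 * self.getOrDefault(e.getKey(), 0) / n));
        }
    }

A frame with high inclusive time but low self time is the kind of place you drill down from.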

But, at the end of the day, going from the profiling evidence (hotspots, stack snapshots, etc.) to the root cause and the remedy often comes down to the practical knowledge and intuition that come from experience.

(Yea ... I'm waffling a bit. But my point is that there is no magic formula for doing this. Ultimately, you've got to use your brain ... like you have to when debugging a complex application.)

Stephen C
Thanks for the quick answer, but isn't it too hard to get a correct result when profiling the whole application, since the test result then depends on user interaction (button clicking etc.) rather than on an automated task? Tell me if I'm wrong.
bonjorno
@bonjorno: Stephen is exactly right. If you focus on a particular module, then you are effectively bringing a pre-judgement to bear, and when you do this work, the first thing you learn is that problems are not where you might have guessed they were. If you profile as a whole, then the problems tell you where they are. No need to guess.
Mike Dunlavey
I'm madly agreeing with you down to the last paragraph :-) I don't care for the concept of "hotspot", nor for the "drill down" approach, but I suspect when we get down to brass tacks, we're saying the same thing.
Mike Dunlavey
It must be said that I don't do much profiling ...
Stephen C
You probably don't need to, as you probably have great experience and intuition. I've found that in spite of my experience and intuition I can write stuff that seems to perform OK, yet has *lots* of room for speedup. However, finding the code to optimize doesn't require intuition. It only requires stackshots. There are problems requiring intuition, but performance isn't one of them: http://stackoverflow.com/questions/926266/performance-optimization-strategies-of-last-resort/927773#927773
Mike Dunlavey
+2  A: 

First I just time it with a watch to get an overall measurement.
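If you don't even want the watch, a trivial wall-clock measurement gives the same overall number (a sketch; doUsefulStuff() is just a hypothetical stand-in for whatever is being tuned):

    public class Stopwatch {
        public static void main(String[] args) {
            long start = System.nanoTime();
            doUsefulStuff();   // hypothetical stand-in for the code being measured
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("elapsed: " + elapsedMs + " ms");
        }

        // Replace with the real work; this just burns some CPU for the example.
        static void doUsefulStuff() {
            double x = 0;
            for (int i = 0; i < 10_000_000; i++) { x += Math.sqrt(i); }
            System.out.println(x);
        }
    }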

Then I run it under a debugger and take stackshots. What these do is tell me which lines of code are responsible for large fractions of time. In particular, this means lines where functions are called without really needing to be, and I/O that I may not have been aware of.
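Taking a stackshot just means pausing the program under the debugger and reading the call stacks. If you want something you can do without an IDE, one rough substitute for a Java program is running jstack <pid> from another terminal, or a little helper thread inside the process that dumps every thread's stack when you hit Enter. The sketch below is only that substitute, not the debugger workflow itself:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.Map;

    // Illustrative helper: start it inside the application being tuned, then press
    // Enter in the console each time you want a "stackshot".
    public class StackShot extends Thread {
        @Override
        public void run() {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            try {
                while (in.readLine() != null) {
                    // Dump every live thread's stack at this instant.
                    for (Map.Entry<Thread, StackTraceElement[]> e
                            : Thread.getAllStackTraces().entrySet()) {
                        System.out.println("--- " + e.getKey().getName());
                        for (StackTraceElement frame : e.getValue()) {
                            System.out.println("    at " + frame);
                        }
                    }
                }
            } catch (IOException ignored) {
            }
        }

        public static void install() {
            StackShot shot = new StackShot();
            shot.setDaemon(true);   // don't keep the JVM alive just for this
            shot.start();
        }
    }

Each dump is one sample; the lines that keep showing up across several dumps are the ones to look at.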

Since it shows me lines of code that take time and can be done a better way, I fix those.

Then I start over at the top and see how much time I actually saved. I repeat these steps until I can no longer find things that a) take significant % of time, and b) I can fix.

This has been called "poor man's profiling". The little secret is that it is not only cheap but also very effective, because it avoids the common myths about profiling.

P.S. If it is an interactive application, do all this just for the part of it that is slow, e.g. when you press a "Do Useful Stuff" button and it finishes a few seconds later. There's no point in taking stackshots while it's waiting for YOU.

P.P.S. Suppose there is some activity that should be faster, but finishes too quickly to take stackshots; say it takes a second but should take a fraction of a second. Then what you can do is (temporarily) wrap a for loop of 10 or 100 iterations around it. That will make it take long enough to get samples. After you've sped it up, remove the loop.
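For example (renderReport() is just a hypothetical stand-in for the too-quick operation):

    public class LoopWrapper {
        public static void main(String[] args) {
            // TEMPORARY: repeat the operation so stackshots have something to land on.
            // Remove the loop once the speedup work is done.
            for (int i = 0; i < 100; i++) {
                renderReport();
            }
        }

        // Hypothetical stand-in for the activity that normally finishes in about a second.
        static void renderReport() {
            double x = 0;
            for (int i = 0; i < 20_000_000; i++) { x += Math.log(i + 1); }
            System.out.println(x);
        }
    }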

Mike Dunlavey
+1 for the links and +10 for the great answers at those links, thanks. I thought you wrote this in response to my comment on Stephen C's answer, so it is a bit outside the original question. It is great, but I should pick Stephen's as the accepted answer.
bonjorno
@bonjorno: Thx. Yes. Stephen said it very well. You sort of hit my "general button" that performance tuning need not be a process of detective work, but can be more like just pruning a tree, the call tree. This is not generally known, so I tend to blab about it.
Mike Dunlavey