My Question: Performance tests are generally done after an application has been integrated with its various modules and is ready for deployment.

Is there any way to identify performance bottlenecks during the development phase? Does code analysis give any hints about performance?

+1  A: 

It all depends on the rules that you run during code analysis, but I don't think you can prevent performance bottlenecks through code analysis alone.

In my experience, performance problems are usually quite complicated, and to find the real ones you have to run performance tests.

brzozow
You say it all depends on the rules run during code analysis. Are there any basic rules that should be run against the code that would help?
balalakshmi
It depends on the framework that you are using. For example, I'm using FxCop (for .NET solutions). There is a set of predefined performance rules: http://msdn.microsoft.com/en-us/library/ms182260.aspx but, as I mentioned earlier, I don't think they target the real problems.
brzozow
A: 

No, except in very minor cases (e.g. for Java, use StringBuilder in a loop rather than repeated string concatenation).
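
For illustration, a minimal Java sketch of that one minor case; the list named parts is just a stand-in for whatever strings are being joined:

    import java.util.List;

    public class ConcatDemo {
        public static void main(String[] args) {
            List<String> parts = List.of("alpha", "beta", "gamma");  // stand-in input

            // Concatenation in a loop builds a brand-new String each pass,
            // copying everything accumulated so far: O(n^2) overall.
            String slow = "";
            for (String part : parts) {
                slow += part;
            }

            // StringBuilder appends into one growable buffer: amortized O(n).
            StringBuilder sb = new StringBuilder();
            for (String part : parts) {
                sb.append(part);
            }
            String fast = sb.toString();

            System.out.println(slow.equals(fast));  // true; only the cost differs
        }
    }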

The reason is that you won't know how a particular piece of code will affect the application as a whole until you're running the whole application against a relevant dataset.

For example: changing bubblesort to quicksort wouldn't significantly affect your application if you're consistently sorting lists of a half-dozen elements. Or if you're running the sort once, in the middle of the night, and it doesn't delay other processing.
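
To make that concrete, here is a crude Java timing sketch (a hypothetical illustration, not a rigorous benchmark) showing that on a half-dozen elements the choice of algorithm is invisible:

    import java.util.Arrays;

    public class TinySortDemo {
        // A deliberately naive O(n^2) bubble sort.
        static void bubbleSort(int[] a) {
            for (int i = 0; i < a.length - 1; i++)
                for (int j = 0; j < a.length - 1 - i; j++)
                    if (a[j] > a[j + 1]) {
                        int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                    }
        }

        public static void main(String[] args) {
            int[] tiny = {5, 3, 9, 1, 7, 2};  // a half-dozen elements

            long t0 = System.nanoTime();
            bubbleSort(tiny.clone());
            long bubbleNs = System.nanoTime() - t0;

            t0 = System.nanoTime();
            Arrays.sort(tiny.clone());  // the library's tuned sort
            long libraryNs = System.nanoTime() - t0;

            // Both finish in microseconds on input this small; the asymptotic
            // gap between O(n^2) and O(n log n) simply doesn't show up here.
            System.out.printf("bubble: %d ns, library: %d ns%n", bubbleNs, libraryNs);
        }
    }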

kdgregory
A: 

If we are talking about .NET, then yes and no... FxCop (or the built-in code analysis) has a number of rules that deal with performance concerns. However, that list is fairly short and limited in scope.

Having said that, there is no reason that FxCop could not be extended with a lot more rules (heuristic or otherwise) that catch potential problem areas and flag them. It's simply a fact that nobody (that I know of) has put significant work into this (yet).

jerryjvl
A: 

Generally, no, although from experience I can look at a system I've never seen before and recognize some design approaches that are prone to performance problems:

  • How big is it, in terms of lines of code, or number of classes? This correlates strongly with performance problems caused by over-design.

  • How many layers of abstraction are there? Each layer is a chance to spend more cycles than necessary, and this effect compounds, especially if each operation is perceived as being "pretty efficient".

  • Are there separate data structures that need to be kept in agreement? If so, how is this done? If there is an attempt, through notifications, to keep the data structures tightly in sync, that is a red flag.

  • Of the categories of input information to the system, does some of it change at low frequency? If so, chances are it should be "compiled" rather than "interpreted". This can be a huge win both in performance and ease of development.

  • A common motif is this: Programmer A creates functions that wrap complex operations, like DB access, to collect a good chunk of information. Programmer A considers these very useful to other programmers and expects them to be used with a certain respect, not casually. Programmer B appreciates these powerful functions and uses them a lot, because each one gets so much done with a single line of code. (Programmers A and B can be the same person.) You can see how this causes performance problems, especially when it is spread over multiple layers; a sketch follows this list.
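
Here is a hedged Java sketch of that last motif; all the names (CustomerService, loadFullProfile, ReportBuilder) are invented stand-ins, not anything from a real codebase:

    import java.util.List;

    // Hypothetical illustration: an expensive "fetch everything" wrapper.
    record CustomerProfile(String name) {}

    class CustomerService {
        // Programmer A's convenience wrapper: one call that gathers
        // everything about a customer; imagine several DB round-trips.
        CustomerProfile loadFullProfile(int customerId) {
            simulateExpensiveQueries();  // stand-in for the real DB work
            return new CustomerProfile("customer-" + customerId);
        }

        private void simulateExpensiveQueries() {
            try {
                Thread.sleep(50);  // pretend each call costs ~50 ms of queries
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    class ReportBuilder {
        // Programmer B likes the one-liner and calls it once per row,
        // repeating the full set of queries N times to use only the name.
        static String buildReport(CustomerService svc, List<Integer> ids) {
            StringBuilder report = new StringBuilder();
            for (int id : ids) {
                CustomerProfile p = svc.loadFullProfile(id);  // heavy call in a loop
                report.append(p.name()).append('\n');
            }
            return report.toString();
        }

        public static void main(String[] args) {
            // Three rows here; at 100 rows the hidden queries alone
            // would cost about 5 seconds just to print a list of names.
            System.out.print(buildReport(new CustomerService(), List.of(1, 2, 3)));
        }
    }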

Those are the first things that come to mind.

Mike Dunlavey