I maintain the build process for a large (> 500,000 LOC) Java project. I've just added a Sonar analysis step to the end of the nightly builds. But it takes over three hours to execute ... This isn't a severe problem (it happens overnight), but I'd like to know if I can speed it up (so that I could run it manually during work hours if desired).

Any Sonar, Hudson, Maven or JDK options I can tweak that might improve the situation?

[INFO]  -------------  Analyzing Monolith
[INFO]  Selected quality profile : Sonar way, language=java
[INFO]  Configure maven plugins...
[INFO]  Sensor SquidSensor...
[INFO]  Java AST scan...
[INFO]  Java AST scan done: 103189 ms
[INFO]  Java bytecode scan...
... (snip)
[INFO]  Java bytecode scan done: 19159 ms
[INFO]  Squid extraction...
[INFO]  Package design analysis...
... (over three hour wait here)
[INFO]  Package design analysis done: 12000771 ms
[INFO]  Squid extraction done: 12277075 ms
[INFO]  Sensor SquidSensor done: 12404793 ms

12 million milliseconds = 200 minutes. That's a long time! By comparison, the compile and test steps before the Sonar step take less than 10 minutes. From what I can tell, the process is CPU-bound; a larger heap has no effect. Maybe it has to be this way because of the tangle/duplication analysis; I don't know. Of course, I know that splitting up the project is the best option! But that will take a fair amount of work; if I can tweak some configuration in the meantime, that would be nice.
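One workaround I've seen mentioned (I haven't verified it against our Sonar version, so treat the property name as an assumption and check the analysis-parameter docs for your release) is skipping the design step entirely when you want a quick manual run:

```
# sonar.skipDesign reportedly disables the package design analysis
# (the step taking ~200 minutes above) -- verify it exists in your version
mvn sonar:sonar -Dsonar.skipDesign=true
```

That would trade away the dependency/tangle metrics for a much shorter analysis, which might be acceptable for an on-demand daytime run while the nightly build keeps the full analysis.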

Any ideas?

A: 

Could you send an email to the user mailing list, please?

Simon Brandhof
Okay Simon, I've done so.
Zac Thompson
A: 

From Freddy Mallet on the list:

"... the problem doesn't come from the DB but come from the algorithm to identify all the package dependencies to cut. ... If you manage to cut this project in several modules, then your problem will vanish."

I tested this theory by excluding a relatively large package, and sure enough the analysis time dropped dramatically. In theory the number of package dependencies can grow quadratically with the number of packages, so cutting the project into modules is probably the best that can be done for a codebase this large.
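For reference, the exclusion test above can be reproduced with Sonar's `sonar.exclusions` property in the project's pom.xml (the package path below is a placeholder; the property takes comma-separated path patterns, and the exact pattern syntax depends on your Sonar version):

```xml
<properties>
  <!-- placeholder path: exclude one large package from analysis
       to measure its effect on the design-analysis time -->
  <sonar.exclusions>com/example/bigpackage/**</sonar.exclusions>
</properties>
```

The same property can also be passed on the command line as `-Dsonar.exclusions=...`, which is handy for a one-off timing experiment without touching the pom.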

Zac Thompson