Currently our project has over 3000 unit tests, and "ant testAll" takes well over 20 minutes. Besides getting better hardware, are there ways to speed things up?
Without further knowledge of what is being tested, the only two approaches that readily present themselves are:
- use better hardware (sorry)
- simplify the test logic
You may even want to run a profiler over the test run and see if there are any particularly inefficiently implemented tests.
The same way you'd speed up any other code. Find out which tests take the most time, and see how they can be optimized.
There are plenty of operations that can be slow, and if you do them 3000 times, it adds up. Sometimes, reusing data between tests is worthwhile (even if you're not supposed to do that in unit tests, it may be necessary if that's what it takes to get your tests to run at an acceptable speed).
Time your tests. Usually, 90% of them will execute almost instantly, and the last 10% will take almost all the time. Find those 10% and see what they're doing.
Run the code through a profiler, and note where the time is being spent. Guessing is a waste of time. What you think the test runner is doing is meaningless. Instead of guessing, find out what it is doing. Then you'll know how to speed it up.
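For the timing part, a minimal sketch of a JUnit 4 base class that logs per-test times (assuming JUnit 4.7+ for the TestName rule; the class name is just an example - extend it from the test classes you suspect are slow):
import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;

public abstract class TimedTestBase {

    // Gives access to the name of the currently running test method.
    @Rule
    public TestName testName = new TestName();

    private long startNanos;

    @Before
    public void startTimer() {
        startNanos = System.nanoTime();
    }

    @After
    public void reportTime() {
        long millis = (System.nanoTime() - startNanos) / 1000000L;
        System.out.println(getClass().getSimpleName() + "." + testName.getMethodName() + ": " + millis + " ms");
    }
}
Sort the output by time and you'll usually find the handful of tests worth optimizing.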
Well, I don't know what your unit tests are doing, but you need to ask yourself why they are taking 20 minutes. In my experience there are often a lot of tests that run within a few milliseconds and a few tests that make up the rest of the required time. Most often these are tests involving I/O, network or database work. For example, if you spend a lot of time waiting on network latency, you might consider running your tests in parallel.
You could hunt down those tests and look for improvements. But making your tests faster doesn't make your actual code better. You might want to look out for tests that require a lot of time because the class under test isn't optimal. Pinpointing and improving situations like these will most likely make your product better/faster as well.
Some ideas:
- Use a mocking framework to avoid hitting databases (or making web service calls, etc).
- If you're doing the same or similar set-up for lots of individual tests, try doing it in a test fixture setup, i.e. something that gets done once per fixture rather than once per test (see the sketch after this list).
- I believe some test frameworks allow you to run tests in parallel.
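For the first two points, a minimal sketch assuming JUnit 4 and Mockito are on the test classpath; the OrderDao and OrderService types are made up and defined inline only to keep the sketch self-contained:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.BeforeClass;
import org.junit.Test;

// Hypothetical collaborator and class under test, just for illustration.
interface OrderDao {
    int countOpenOrders();
}

class OrderService {
    private final OrderDao dao;
    OrderService(OrderDao dao) { this.dao = dao; }
    int openOrderCount() { return dao.countOpenOrders(); }
}

public class OrderServiceTest {

    private static OrderDao dao;
    private static OrderService service;

    // Runs once per fixture (test class), not once per test method.
    @BeforeClass
    public static void setUpFixture() {
        dao = mock(OrderDao.class);                 // no real database is touched
        when(dao.countOpenOrders()).thenReturn(3);
        service = new OrderService(dao);
    }

    @Test
    public void reportsOpenOrders() {
        assertEquals(3, service.openOrderCount());
    }
}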
You may want to split your unit tests into suites. Is your application modularized? How often do you really need to run all the tests? Would it be acceptable if your developers only ran the unit tests relevant to their own module, and you had the array of tests run nightly and/or on a CI?
Are there any particularly complex unit tests (I know, I'm slipping into functional and integration testing here, but the line is sometimes fuzzy) that could be run at a sanity level during development and then run in full on the CI?
Edit: Just for kicks, I'll briefly describe the test routines at one of my previous projects
First of all, the growth of the test system was organic, meaning that it was not originally planned out but was modified and changed as it grew. Hence it wasn't perfect, and there were some naming conventions that had become apocryphal with time.
- At a developer level we used a simple two minute test suite called CheckIn, which verified that the code was healthy enough to join the trunk.
- On top of that we ran sanity tests continuously on a CI machine. These were simplified versions of the more complex integration and functional tests, plus all unit tests and all regression tests.
- Complex test suites (taking a number of hours) were run remotely day and night, and the results compiled the next morning.
Automatic testing - it's the mutt's nuts.
To start with:
a) Get stats on the running time of your JUnit tests. You may already be capturing that information in your test reports.
b) Take the top 10 test classes (by time taken) and try to reduce their time. You need to do this on an ongoing basis.
c) Try to reduce the running time by refactoring or even changing the approach of the testing.
One such case I came across was a test class for CRUD test cases. The update test case was first creating the entity and then updating it, but create was already covered in a separate test case. In cases like these you can chain your test cases:
// TestNG's dependsOnMethods chains the CRUD tests so the record created in
// testCreate() is reused, rather than being created again in every test method.
@Test
public void testCreate() throws Exception
{}
@Test(dependsOnMethods = "testCreate")
public void testAmend() throws Exception
{}
@Test(dependsOnMethods = "testAmend")
public void testDelete() throws Exception
{}
So you save on doing duplicate testing.
d) One more instance where I was able to reduce time significantly: we had an (inherited) system where each test case was calling setUp (starting a Spring server, etc.) and shutting down the system resources afterwards. This was very time consuming, so I refactored it to start the common resources once before the test suite and close them after the entire suite is done (see the sketch at the end of this answer).
e) Depending on your project there can be other bottlenecks you may need to iron out.
http://stackoverflow.com/questions/930407/how-to-manage-build-time-in-tdd
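A rough sketch of point d) using TestNG's suite-level hooks; EmbeddedSpringServer is a made-up stand-in, stubbed here only so the sketch compiles:
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

// Stand-in for whatever expensive resource your tests actually share
// (e.g. an embedded Spring server or a test database).
class EmbeddedSpringServer {
    void start() { /* expensive startup work would happen here */ }
    void stop()  { /* clean shutdown */ }
}

public class SharedResources {

    private static EmbeddedSpringServer server;

    @BeforeSuite
    public void startSharedResources() {
        server = new EmbeddedSpringServer();   // started once, not once per test class
        server.start();
    }

    @AfterSuite
    public void stopSharedResources() {
        server.stop();
    }

    public static EmbeddedSpringServer getServer() {
        return server;
    }
}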
Obviously there's something in your tests that takes a long time.
Sometimes you can't get around slow tests. For instance, testing that Spring can read all its configuration files, testing that the Hibernate mapping works, that sort of stuff. The good thing about those tests is that they only need to run once and then you can mock it all out elsewhere, but you can also decide to run them as part of the integration tests and let the build server worry about it.
The rest of the tests are slow either because they are doing IO or because they are overly CPU bound.
IO can be many things. Web service and database calls can be abstracted out and mocked, and the few real calls you have to do can be moved to the integration phase if need be. Logging can really slow things down as well - especially with 3,000 test cases. I'd say just turn off logging entirely and rely on your brain and your debugger when a test fails.
There may be cases where the IO itself is the unit being tested. For instance, if you are testing the part of a database server that writes table data to disk. In that case, try to keep as much IO in memory as possible. In Java, many of the IO abstractions have in-memory implementations.
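For example, a test that exercises stream-writing code can often run entirely in memory; a generic sketch, where writeGreeting is just a stand-in for real production code:
import static org.junit.Assert.assertEquals;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;

import org.junit.Test;

public class ReportWriterTest {

    // Stand-in for whatever production code writes to a stream.
    private void writeGreeting(Writer out) throws IOException {
        out.write("hello");
        out.flush();
    }

    @Test
    public void writesToInMemoryStreamInsteadOfDisk() throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        writeGreeting(new OutputStreamWriter(buffer, "UTF-8"));
        assertEquals("hello", buffer.toString("UTF-8"));
    }
}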
CPU-bound tests come in different flavors as well. Pure performance and throughput tests should be in the integration test phase. If you are spinning up a bunch of threads to try to vet out a concurrency bug, then you might move the big test to the integration phase and keep a 'light' version in your regular test suite.
And lastly, the profiler is your friend. It may well be that parts of your code can be made more efficient and noticeably speed your tests up.
I'm assuming you've gone through all the other usual steps such as mocking database calls, optimising test setup phases etc., and what's taking the test run so long is that you have 3000 tests, not that individual tests are very slow.
If so, one approach is to multi-thread your test run. TestNG supports this very well. Converting your tests from JUnit to TestNG isn't so hard and only needs to be done once.
Tests that have to be run sequentially can be easily marked:
@Test(sequential = true)
public class ATest {
...
On a multi-core machine you'll see huge improvements in run-time. Even on a single core you'll see a good improvement as some threads wait for I/O operations.
See here for details on how to set this up:
http://beust.com/weblog/archives/000407.html
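For a rough idea, here's a minimal programmatic sketch (assuming a TestNG version whose TestNG class exposes setParallel and setThreadCount; FooTest and BarTest are tiny placeholders included only so the sketch is self-contained - list your real test classes instead):
import org.testng.TestNG;
import org.testng.annotations.Test;

public class ParallelTestRunner {

    // Placeholder test classes, just for illustration.
    public static class FooTest {
        @Test
        public void foo() {}
    }

    public static class BarTest {
        @Test
        public void bar() {}
    }

    public static void main(String[] args) {
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] { FooTest.class, BarTest.class });
        testng.setParallel("methods");   // run test methods concurrently
        testng.setThreadCount(4);        // size of the worker thread pool
        testng.run();
    }
}
In practice you'd usually put the same parallel/thread-count settings in your suite configuration rather than a main method, but the idea is identical.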
Hope this helps.
....
Some more suggestions - I can't believe you aren't using continuous integration. Trust me, 30 developers are not going to overload your CI server. Even if you can't get buy-in for CI, install Hudson on your own machine - it will take 10 minutes to set up and the benefits are huge. Ask your manager which is worse: every developer sitting waiting for unit tests to complete, or having a server do it for you. Having a stupid hat to wear for the person who broke the build is usually enough to convince developers to run their unit tests.
If the quality of check-ins is really a massive concern (don't forget a check-in can always be rolled back), consider TeamCity - it runs the tests and doesn't commit the code if the tests fail.
Finally, an option that may also suit your use case is Clover with Bamboo. The latest version keeps a record of which code is tested by which tests, and when a change is made it only runs the relevant tests. This is potentially very powerful.
But remember, clever tools like TestNG, TeamCity and Clover will only get you so far - good tests won't write themselves!
To summarise, my solution is to try all or some of the following steps:
- Optimise the tests - mocks, common setup, etc.
- Run the tests in parallel
- Get something else to run the tests for you - make it an offline task using Hudson or similar
- Run only the tests that need to be run - sort them into packages, or use Clover and Bamboo
I agree with Pablojim. Parallelize your tests. We are using ClearCase, and transferring everything from view servers really slows things down. When we parallelized on a dual core we got a 6-8 times faster test run.
We are using the CppUnit framework, and we just added a Python script to kick off different test suites on different threads.
We have also used clearmake to parallelize the build process. Our next step will probably be to parallelize the tests on the developers' machines.
Move the full test suite into a continuous integration system, so the developers do not have to run it all every time. Such systems have a LOT more patience than developers.
Are you using fork="yes" in your junit call? If so, make sure you set forkMode="once", otherwise the junit task will start a new VM for each TestCase class. With 3000 unit tests, that will make a dramatic difference.
I would address this as you would any other performance issue:
- Don't make assumptions about what the problem is
- Analyze the test execution with a profiler to determine hotspots
- Analyze the hot spots one at a time, retesting after each code change.
You may find that you have to dig into that test runner eventually. You can use a decompilation tool like Cavaj to generate source code from the class files (although it will be harder to read than the original code, obviously). You may find that something in the test runner implementation is affecting performance. For example, you have already mentioned reading XML configuration files as an activity that the test runner performs -- this is something that could potentially affect performance.
Another area where you could eventually find performance issues is in custom 'base' test case classes. These tend to be things that add a lot of convenience, but it can be tough to remember that your convenience-adding behaviors are potentially going to be amortized over 10k tests in a large project, whether each test needs the convenience behavior or not.
I would suggest having two builds (an incremental build which is run on every check-in, and a full build which is run overnight).
The incremental build runs the shorter tests in about 7 minutes, whereas the full build runs all the tests and takes less than 40 minutes.
ClearCase does encourage a branching nightmare, but you should be able to have two builds per developer. I would question the value of having every developer on their own branch, as I believe there is some benefit in having developers work together (in pairs or more) on the same branch.
Note: one continuous integration server can have any number of agents, and if you cannot afford more than one server, you can use PCs as build agents. (You must have at least 30 of those.)
Here is the approach that I would take.
- Review your test cases and look for any redundant tests. With 3000 tests, chances are you are double- and quintuple-covering parts that don't need to be.
- Pick out your "canaries". These are the tests that you want to always run, the ones that will smell out danger in other parts. They are most likely the higher-level test cases that exercise the public API interfaces used between components. If one of those fails, you can then go in and run the full test suite for the component.
- Start migrating to a framework like TestNG and start classifying your test cases, then run only the classification relevant to what you are working on, with full tests nightly (see the sketch after this list).
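For instance, classification with TestNG groups might look roughly like this (the group and class names are just examples):
import org.testng.annotations.Test;

public class PaymentApiTest {

    // "canary" tests run on every developer build; "slow" ones only in the nightly full run.
    @Test(groups = { "canary" })
    public void publicApiAcceptsValidPayment() {
        // fast, high-level check of the public API
    }

    @Test(groups = { "slow" })
    public void reconciliationOverLargeDataSet() {
        // expensive test, excluded from the quick run
    }
}
The quick developer run would then include only the canary group, while the nightly build runs everything.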
The most effective way to speed up a large test suite is to run it incrementally, so that only tests that touch code changed since the last test run are re-executed. After all, the fastest tests will always be those which are not executed. 8^)
The hard part is actually getting this to work. I am currently working on incremental testing for JUnit 4, which is part of the "JMockit Coverage" tool in the JMockit developer testing toolkit. It's still immature, but I believe it will work well.
DB access and network latency might be an area to examine. If you're performing a lot of database access in your integration tests, you may want to explore using an in-memory database like HSQL, H2 or Derby instead of a "real" database like Oracle. If you're using Hibernate, you will also have to change settings in your Hibernate configuration to use the dialect specific to that DB (e.g., HSQLDialect instead of OracleDialect). I was on a project once where each full build would end up having to drop and recreate an entire Oracle schema and perform numerous DB tests over the network; it would sometimes take up to 20 minutes, and then you'd find someone had checked in and things were broken again. :(
Ideally you'd want to have just one DB script that is usable for both databases, but you might end up having to synchronize two different DB creation scripts - one for production, one for the integration tests.
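As a rough illustration, pointing a test at an in-memory HSQLDB is mostly a matter of the JDBC URL (the driver class and URL shown are HSQLDB's; adapt for H2 or Derby, and the table is of course just an example):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.Test;

public class InMemoryDbSmokeTest {

    @Test
    public void schemaCanBeCreatedInMemory() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        // "mem:" keeps the database inside the JVM - no network, no disk.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "")) {
            Statement stmt = conn.createStatement();
            stmt.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name VARCHAR(50))");
            stmt.execute("INSERT INTO customer VALUES (1, 'Alice')");
        }
    }
}
With Hibernate on top, the matching dialect (org.hibernate.dialect.HSQLDialect) would go in the test configuration, as mentioned above.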
DB in same JVM vs DB across the network - might make a difference.