I can think of plenty of good reasons to use continuous integration; however, what are the downsides?

(Apart from buying another server)

What are the advantages of using a daily build instead?

+9  A: 

I don't think there are any downsides to it. But for the sake of argument, here is Eric Minick's article on UrbanCode ("It's about tests, not builds"). He criticises tools based on Martin Fowler's work, saying that they don't leave enough time for tests.

"To be truly successful in CI, Fowler asserts that the build should be self-testing and that these tests include both unit and end-to-end testing. At the same time, the build should be very fast - ideally less than ten minutes - because it should run on every commit. If there are a significant number of end-to-end tests, executing them at build time while keeping the whole process under ten minutes is unrealistic.

Add in the demand for a build on every commit, and the requirements start to feel improbable. The options are either slower feedback or the removal of some tests."

splattne
That's a good point; there's no point building if you don't test it.
TraumaPony
If the tests take too long to run for every commit (because there are a lot of commits), why not just run the builds every 30 or 60 minutes then, instead of per commit? Not doing CI because of tests taking too long is like treating the symptom, not the problem.
matt b
You can also define several levels of tests - have the CI server run 'smoke' tests on every checkin, and schedule runs of the more complete test suite nightly (see the sketch below).
Chris Boran
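A minimal sketch of that two-tier split, assuming pytest and two hypothetical markers, `smoke` and `full` (none of this tooling is from the thread; a JUnit or Test::Harness setup can be split the same way):

```python
import pytest

def add(a, b):
    """Toy function standing in for real application code."""
    return a + b

@pytest.mark.smoke                      # fast check, run on every checkin
def test_add_small_numbers():
    assert add(2, 3) == 5

@pytest.mark.full                       # slower check, run nightly
def test_add_is_commutative_exhaustively():
    for a in range(200):
        for b in range(200):
            assert add(a, b) == add(b, a)
```

The per-commit CI job runs `pytest -m smoke`; the nightly job runs `pytest -m full` (or plain `pytest`). Registering the two markers under `markers` in pytest.ini keeps pytest from warning about unknown marks.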
For the record, I don't think the emphasis on testing is an argument against CI even a little bit; it's an argument for expanding the scope of CI beyond build time.
EricMinick
+19  A: 

(It's worth noting that by "continuous integration" I mean automated integration, with an automated build process that automatically runs the tests and automatically detects failure of each piece.

It's also worth noting that "continuous integration" just means continuously integrating into a trunk or onto a test server. It does not mean "push every change live".

There are plenty of ways to do continuous integration wrong.)


I can't think of any reason not to do continuous integration testing. I guess I'm assuming that "continuous integration" includes testing. Just because it compiles doesn't mean it works.

If your build and/or tests take a long time, then continuous integration can get expensive. In that case, run the tests obviously related to your change before the commit (coverage analysis tools like Devel::CoverX::Covered can help discover which tests go with which code), do your integration testing after the commit using something like SVN::Notify, and alert the developers if it fails. Archive the test results using something like Smolder. That allows developers to work quickly without having to sit around watching test suites run, while still catching mistakes early.
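A rough sketch of that pre-commit selection step in Python (the name-based mapping below is a crude stand-in for the coverage data a tool like Devel::CoverX::Covered provides; the paths, layout, and git commands are all assumptions):

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: run only the tests that plausibly cover the
changed files, and leave the full suite to the post-commit CI run."""
import subprocess
import sys
from pathlib import Path

def changed_files():
    # Files staged for this commit (git shown here; an svn hook would use
    # "svnlook changed" instead).
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True).stdout
    return [Path(p) for p in out.splitlines() if p.endswith(".py")]

def related_tests(paths):
    # Naive mapping: src/foo.py -> tests/test_foo.py. Real coverage-based
    # mapping is far more precise.
    candidates = [Path("tests") / f"test_{p.stem}.py" for p in paths]
    return [t for t in candidates if t.exists()]

if __name__ == "__main__":
    tests = related_tests(changed_files())
    if not tests:
        sys.exit(0)  # nothing mapped; the post-commit run catches the rest
    # A non-zero exit blocks the commit, keeping obvious breakage out.
    sys.exit(subprocess.run(
        [sys.executable, "-m", "pytest", *map(str, tests)]).returncode)
```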

That said, with a little work you can often speed up your build and test process. Many times, slow tests are the result of each test having to do too much setup and teardown, which points at a system that's too tightly coupled, requiring the whole system to be set up just to test one small piece.
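For instance, a pytest fixture scoped to the whole run can amortize expensive setup that would otherwise repeat per test (a sketch with a fake "database"; the two-second sleep just stands in for real startup cost):

```python
import time
import pytest

@pytest.fixture(scope="session")
def database():
    """Expensive setup performed once per test run, not once per test."""
    time.sleep(2)              # stand-in for starting a real database
    db = {"users": ["alice"]}
    yield db
    db.clear()                 # teardown also runs once, at the end

def test_user_exists(database):
    assert "alice" in database["users"]

def test_user_count(database):
    assert len(database["users"]) == 1
```

Shared fixtures only amortize the setup cost, though; they don't remove the coupling.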

Decoupling often helps, breaking out sub-systems into independent projects. The smaller scope makes for easier understanding and faster builds and tests. Each commit can do a full build and test without inconveniencing the programmer. Then all the sub-projects can be collected together to do integration testing.

One of the major advantages of running the test suite on every commit, even if it's after the commit, is you know just what broke the build. Rather than "something we did yesterday broke the build", or worse "four things we did yesterday broke the build in different ways and now we have to untangle it" it's "revision 1234 broke the build". You only have to examine that one revision to find the problem.
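A toy sketch of that per-revision attribution (the revision list, commands, and test runner are all assumptions; a real CI server does this bookkeeping for you):

```python
#!/usr/bin/env python3
"""Test each new revision individually so a failure names exactly one
revision instead of "something we did yesterday"."""
import subprocess
import sys

def revision_is_green(rev: str) -> bool:
    subprocess.run(["git", "checkout", "--quiet", rev], check=True)
    return subprocess.run([sys.executable, "-m", "pytest", "-q"]).returncode == 0

if __name__ == "__main__":
    # Oldest-first list of revisions committed since the last green build.
    for rev in sys.argv[1:]:
        if not revision_is_green(rev):
            print(f"revision {rev} broke the build")
            sys.exit(1)
    print("all revisions green")
```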

The advantage of doing a daily build is that at least you know there's a complete, clean build and test run happening every day. But you should be doing that anyway.

Schwern
A: 

The only good reason not to do continuous integration comes when you've gotten your project working to the point where your integration tests haven't identified a defect in a good long while and they're taking too much time to run on every build. In other words: you've done enough continuous integration to prove to yourself that you no longer need it.

Robert Rossney
This is a special case of the "it's just a one line fix" reason for not running tests. You get confident, you get sloppy, you release with a stupid bug.
Schwern
Not so. If your integration tests take four hours to run, and they're not uncovering defects when you run them, it's time to re-evaluate the tests. There comes a point where you can and must place trust in the stability of your components. You still test, but there are things you stop testing.
Robert Rossney
(I should point out that I'm speaking to the exact same issue that you are in your answer, though perhaps not as clearly.)
Robert Rossney
I agree that if your test suite is not catching defects, something is wrong with your tests, but that's different. Every change has the potential to introduce a defect; for example, a simple typo can break the whole build, so you have to re-run the whole test suite on each change.
Schwern
I understand that, but we're talking about integration. Even though a change in component A could conceivably uncover a defect in component B, you're not going to run a complete test of all components every time you make a change. Otherwise rebuilding the kernel would be part of your daily build.
Robert Rossney
This would only work on a project where no changes happen.
talonx
Like a project that includes an API for interoperating with a stable legacy system. Once every two years, I haul out my test suite and check to make sure that the new release that the vendor grudgingly pushed out didn't break my API. I could run it every day, but what would be the point?
Robert Rossney
A: 

When starting, it takes a while to set everything up.

If you add tests, coverage, static code inspection, duplicate-code detection, documentation builds, and deploys, it can take a long time (weeks) to get it right. After that, maintaining the build can be a problem.

E.g., if you add tests to the solution, you can either have the build detect them automatically based on some criteria, or you have to manually update the build settings. Auto-detection is much harder to get right (a discovery sketch follows below). The same goes for coverage. The same for documentation generation...

bh213
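A sketch of that auto-detection trade-off using Python's built-in unittest, where discovery comes for free as long as tests follow a naming convention (the directory layout is an assumption):

```python
import unittest

if __name__ == "__main__":
    # Pick up every tests/test_*.py automatically, so adding a test file
    # needs no manual change to the build settings.
    suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    # Non-zero exit tells the CI server the build failed.
    raise SystemExit(0 if result.wasSuccessful() else 1)
```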
KISS: any part of the process that fails returns non-zero, which makes build automation simple (see the sketch below). Most existing testing systems (Test::Harness, JUnit, PHPUnit, DejaGnu, etc.) require no human interpretation of the results. YAGNI: get something that works and start using it; add more later if needed.
Schwern
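A minimal sketch of that KISS chain (the step commands are placeholders; any command that exits non-zero on failure slots in):

```python
#!/usr/bin/env python3
"""Run build steps in order; the first non-zero exit fails the whole
build, so no human interpretation of the output is needed."""
import subprocess
import sys

STEPS = [
    ["python", "-m", "compileall", "src"],  # "does it even compile?"
    ["python", "-m", "pytest", "-q"],       # unit tests
]

for step in STEPS:
    print("==>", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit(1)  # fail fast; the CI server reports the broken step
print("build OK")
```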
Yes, yes, but the question was for arguments against CI. I used CI on a couple of projects in different companies and there was always maintenance, especially if your unit tests aren't so "unit", e.g. they touch the database, the file system, etc.
bh213
That would be arguments against doing it wrong. :) (but I see your point)
Schwern
+4  A: 

There are generally two cases where I've seen continuous integration not really make sense. Keep in mind I am a big advocate of CI and try to use it when I can.

The first is when the ROI just doesn't make sense. I currently develop several small internal apps. The applications are normally very trivial, and the whole development lifecycle is about a week or two. Properly setting everything up for CI would probably double that, and I would probably never see that investment back. You can argue that I'll get it back in maintenance, but these apps are as likely to be discarded as they are to be updated. Keep in mind that your job is probably to ship software, not to reach 100% code coverage.

The other scenario I have heard mentioned is that CI doesn't make sense if you're not going to do anything with the results. For example, if your software has to be sent to QA, and the QA staff can only really look at a new version every couple of days, it makes no sense to produce builds every few hours. Likewise, if other developers aren't going to look at code metrics and try to improve them, it makes no sense to track them. Granted, this is not a fault of CI as a technique; it is a failure of the team to embrace CI. Nevertheless, implementing a CI system in such a scenario doesn't make sense.

Jacob Adams
+4  A: 

James Shore had a great series of blog entries on the dangers of thinking that using a CI tool like CruiseControl meant you were doing continuous integration.

One danger of setting up a CI server is goal displacement, thinking that the important thing is to "keep the build passing" as opposed to "ensuring we have high quality software". So people stop caring about how long the tests take to run. Then they take too long to run all of them before checkin. Then the build keeps breaking. Then the build is always broken. So people comment out the tests to make the build pass. And the quality of the software goes down, but hey, the build is passing...

Jeffrey Fredrick