I'm confused about what the various testing facilities in Ruby on Rails are for. I have been using the framework for about 6 months, but I've never understood the testing part of it. The only testing I've used is JUnit 3 in Java, and that only briefly.

Everything I've read about it just shows testing validations. Shouldn't the validations in Rails just work? It seems more like testing the framework rather than testing your code. Why would you need to test validations?

Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?

Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?

I've asked these questions before and I haven't gotten more than "automated testing is automated". I am smart enough to figure out the advantages of automating a task. My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two.

+1  A: 

I haven't really used Rails much, but I would think that these automated tests would be useful as smoke tests to be sure that the thing you just did doesn't break something that you did last week. This will become increasingly important as your project grows.

Also, writing the tests before you write the code (using the Test-Driven Development model) will help you write the code better and faster, since the tests force you to fully think the problem through. It will also help you to know where to break up complex methods into smaller methods that you can test individually.

You are right, writing and maintaining tests takes a lot of time. Sometimes more time than the code itself. However, it can save you time in bug fixing and refactoring for the reasons above.

pkaeding
+2  A: 

Tests should validate your application logic. Personally, I think my most important tests are the ones I run in Selenium. They check that what shows up in the browser is actually what I expect to see. However, if that's all I had, I would find it hard to debug - it helps to have lower-level tests as well, and integration, functional, and unit tests are all useful tools. Unit tests let you check that the model behaves the way you expect it to (and that means every method, not just validations). Validations will certainly Just Work, but only if you get them right. If you get them wrong, they will still Just Work, just not the way you expected. Writing a couple of lines of test is quicker than debugging later on.
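For example, a model unit test in the classic Rails style might look something like this - the User model and its full_name method are invented purely for illustration, but the shape is the same for any model method you care about:

require File.dirname(__FILE__) + '/../test_helper'

class UserTest < Test::Unit::TestCase
  # Hypothetical model and method, shown only to illustrate testing
  # model behaviour beyond validations.
  def test_full_name_joins_first_and_last_name
    user = User.new(:first_name => 'Ada', :last_name => 'Lovelace')
    assert_equal 'Ada Lovelace', user.full_name
  end

  def test_full_name_with_missing_last_name
    user = User.new(:first_name => 'Ada')
    assert_equal 'Ada', user.full_name
  end
end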

A simple example like the one at http://wiseheartdesign.com/2006/01/16/testing-rails-validations just checks validations in a unit test. The O'Reilly article at http://www.oreillynet.com/pub/a/ruby/2007/06/07/rails-testing-not-just-for-the-paranoid.html?page=1 is a bit more complete (though still fairly basic).

Automated testing is particularly useful in regression testing where you change something and run a suite of tests to check that you didn't break anything else.

Tests are a form of repetition, but they don't violate DRY because they express things in a different way. A test says "I did X so Y should happen". Code says "X happened, so now I need to do Z, which happens to cause Y to happen". i.e. a test stimulates a cause and checks an effect, while code responds to a cause, and effects something.

Airsource Ltd
+14  A: 

Shouldn't the validations in Rails just work? It seems more like testing the framework rather than testing your code. Why would you need to test validations?

The validations in Rails do work -- in fact, there are unit tests in the Rails codebase to ensure it. When you test a model's validation, you're testing the specifics of the validation: the length, the accepted values, etc. You're making sure the code was written as intended. Some validations are simple helpers, and you may opt not to test them on the notion that "no one can mess up a validates_numericality_of call." Is that true? Does every developer always remember to write it in the first place? Does every developer never accidentally delete a line in a bad copy-paste? In my opinion, you don't need to test every last combination of values for a Rails validation helper, but you do need a line that tests it's there with the right values passed, just in case some punk changes it in the future without proper forethought.

Further, other validations are more complex, requiring lots of custom code -- they may warrant more thorough testing.
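For instance, that one guarding line can be as small as this - the Account model and its balance attribute are made up for illustration, and errors.on is the old-style API:

require File.dirname(__FILE__) + '/../test_helper'

class AccountTest < Test::Unit::TestCase
  # Assumes a hypothetical Account model with
  # validates_numericality_of :balance declared in it.
  def test_balance_must_be_numeric
    account = Account.new(:balance => 'not a number')
    account.valid?
    assert account.errors.on(:balance)
  end

  def test_balance_accepts_a_number
    account = Account.new(:balance => 100)
    account.valid?
    assert_nil account.errors.on(:balance)
  end
end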

Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?

I don't believe it violates DRY. They're communicating (that's what programming is, communication) two very different things. The test says the code should do something. The code says what it actually does. Testing is extremely important when there is a disconnect between those things.

Test code and application code are intimately linked, obviously. I think of them as two sides of a coin. You wouldn't want a front without a back, or a back without a front. Good test code reinforces good application code, and vice versa. The two together are used to understand the whole problem that you're trying to solve. And well written test code is documentation -- it shows how the application code should be used.

Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?

You've only worked on very small projects, for which that testing is arguably sufficient. However, when you work on a project with several developers, thousands or tens of thousands of lines of code, integration points with web services, third party libraries, multiple databases, months of development and requirements changes, etc, there are a lot of other factors in play. Manual testing is simply not enough. In a project of any real complexity, changes in one place can often have unforeseen results in others. Proper architecture helps mitigate this problem, but automated testing helps as well (and helps identify points where the architecture can be improved) by identifying when a change in one place breaks another.

My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two.

I'll list a few more benefits.

If you test first (Test Driven Development) your code will probably be better. I haven't met a programmer who gave it a solid shot for whom this wasn't the case. Testing first forces you to think about the problem and actually design your solution, instead of hacking it out. Further, it forces you to understand the problem domain well enough that, even if you do have to hack it out, you know your code works within the limitations you've defined.

If you have full test coverage, you can refactor with NO RISK. If a software problem is very complicated (again, real world projects that last for months tend to be complicated) then you may wish to simplify code that has previously been written. So, you can write new code to replace the old code, and if it passes all of your tests, you're done. It does exactly what the old code did with respect to the tests. For a project that plans to use an agile development method, refactoring is absolutely essential. Changes will always need to be made.
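As a toy illustration (Order and its total method are invented here), a test like the one below pins the behaviour down, so the implementation of total can be rewritten (say, from a hand-rolled loop to inject) and you know immediately whether it still does what the old code did, with respect to that test:

require File.dirname(__FILE__) + '/../test_helper'

class OrderTest < Test::Unit::TestCase
  # Order and line_items are hypothetical; the point is that this
  # assertion must keep passing across any rewrite of Order#total.
  def test_total_sums_line_item_prices
    order = Order.new
    order.line_items.build(:price => 300)
    order.line_items.build(:price => 450)
    assert_equal 750, order.total
  end
end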

To sum up, automated testing, especially test driven development, is basically a method of managing the complexity of software development. If your project isn't very complex, the cost may outweigh the benefits (although I doubt it). However, real world projects tend to be very complex, and the results of testing and TDD speak for themselves: they work.

(If you're curious, I find Dan North's article on Behavior Driven Development to be very helpful in understanding a lot of the value in testing: http://dannorth.net/introducing-bdd)

Ian Terrell
Thank you for your excellent response. I think that's actually answered all my questions.
epochwolf
"If you have full test coverage, you can refactor with NO RISK." - That's a fairy tale. Full code coverage does not mean you have tested all possible behaviour, not by a long shot. You'd have to test any piece of code (AND all interactions between them) with any data and state that could possibly be fed into it - and that kind of "behaviour" coverage is flat-out impossible to achieve. You can get to a point where the risk of missing bad side effects of a change is much lower, but never zero.
Michael Borgwardt
While Michael is of course right, rhetorical hyperbole is an easy and largely unnecessary target to attack; it's a sales pitch more than a proof. And the key phrase is "with respect to the tests."
Ian Terrell
+1  A: 

For example: I work on a 25,000+ line project (yes, in Rails 1.2), and last Monday I was asked to make Users disappear from every list except the admin ones if their "leave_date" attribute was set in the past.

You could rewrite every list action (50+) to add a

@users.reject! { |u| u.leave_date && u.leave_date < Date.today }

Or you can override the "find" method (DRY ;-), but only if you have tests (on everything that finds users!) will you know you didn't break anything by overriding User#find!
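For example, a pair of unit tests along these lines (the fixture names and attributes are assumptions) will tell you straight away if the override breaks a plain find:

require File.dirname(__FILE__) + '/../test_helper'

class UserTest < Test::Unit::TestCase
  fixtures :users

  def test_find_hides_users_whose_leave_date_has_passed
    departed = users(:departed_user)  # fixture with leave_date set to last year
    assert !User.find(:all).include?(departed)
  end

  def test_find_still_returns_active_users
    active = users(:active_user)      # fixture with leave_date left nil
    assert User.find(:all).include?(active)
  end
end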

A very good point. If you want to rewrite find I would look into ActsAsParanoid. Good luck with that!
epochwolf
+1  A: 

Everything I've read about it just shows testing validations. Shouldn't the validations in Rails just work? It seems more like testing the framework rather than testing your code. Why would you need to test validations?

There's a good Railscast showing one way to test controllers.
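For reference, the bare-bones shape of a functional (controller) test in classic Rails looks roughly like this - PostsController is just a placeholder name, not something from the Railscast:

require File.dirname(__FILE__) + '/../test_helper'
require 'posts_controller'

class PostsControllerTest < Test::Unit::TestCase
  def setup
    # Standard generated-style wiring for a functional test.
    @controller = PostsController.new
    @request    = ActionController::TestRequest.new
    @response   = ActionController::TestResponse.new
  end

  def test_index_renders_successfully
    get :index
    assert_response :success
    assert_template 'index'
  end
end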

Rich Apodaca
A friend dumped 124 railscasts on me. I'll get around to watching them eventually.
epochwolf
+1  A: 

A lot of the testing tutorials and the sample tests created by the Rails generators are pretty lame, and IMHO that can give the mistaken impression that you're supposed to test stupid stuff like the built-in Rails methods, etc.

Since Rails has its own test suite, there's no point in writing or running tests that only cover built-in Rails functionality. Your tests should exercise the code you're writing! :-)

As for the relative merit of running tests vs. just refreshing your browser: the larger your app gets, the more of a pain in the ass it is to manually run through numerous scenarios and edge cases to make sure nothing in your application has broken. Eventually, you'll stop testing your entire application after each change and just start "spot testing" the areas you think should have been affected. Inevitably, you'll find something that used to work months ago that is now completely broken, and you'll have no idea when it broke or which changes broke it. After that happens enough times... you'll come to value automated testing.... :-)

Bob McCormick