Or maybe with our implementation of it. I'm new to the team that wants to go through this process, and it seemed to confuse several members of the team, including me. We were given what I would consider a user acceptance test: a list of manual actions you would click through to see if everything was working.

I was thinking it would be simpler than that: we'd just divide up the sections of the application and generally go across it looking for huge errors. This would involve no scripting, just clicking each page, maybe filling stuff out, and making sure it submits OK.

And that gets me to my other point: it seems to me that smoke testing is basically worthless. I would think that instead you'd have unit tests, or even automated tests that go through a requirements list to make sure this sort of thing is happening correctly. Even if that wasn't done, at the very least I'd think the developer who checked in the code in the first place would have run a mini smoke test to make sure their own functionality worked. We brought this up in the meeting, so you can imagine the confusion, and it gets me back to my question: what value are we going to get out of this type of smoke testing?
The value of any testing is to increase your confidence in an implementation. A "smoke test" of this sort exists to increase confidence that the application actually got built correctly and that there are no major, embarrassing errors, like the UI not coming up at all. So it's basically a low-effort test to confirm that nothing really major went wrong.
It may not sound very useful, but I've seen heavily tested code fail a "smoke test" because of, for example, an unexpected failure in the build or a corrupted image. Usually when it's being demonstrated to an important customer.
We do this type of smoke testing where I work.
The big value is that we can make sure that all of the pieces fit together before turning it over to the users for user acceptance testing. The app that I'm thinking of lives on three servers, plus talks to a legacy system. Making sure that we can run through the app, and that every piece can talk to the other pieces it needs to, is an easy and important test.
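For what it's worth, that kind of connectivity check is cheap to script. Here's a minimal sketch in Python; the endpoint names and URLs are hypothetical stand-ins for whatever your servers and legacy bridge actually expose:

```python
import sys
import requests  # third-party HTTP client: pip install requests

# Hypothetical health-check endpoints, one per piece of the system.
ENDPOINTS = {
    "web front end": "https://app.example.com/health",
    "services tier": "https://services.example.com/health",
    "reporting server": "https://reports.example.com/health",
    "legacy bridge": "https://legacy-bridge.example.com/ping",
}

def smoke_test() -> bool:
    ok = True
    for name, url in ENDPOINTS.items():
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code == 200:
                print(f"PASS  {name}")
            else:
                print(f"FAIL  {name}: HTTP {resp.status_code}")
                ok = False
        except requests.RequestException as exc:
            print(f"FAIL  {name}: {exc}")
            ok = False
    return ok

if __name__ == "__main__":
    # Non-zero exit lets a build server treat a failed smoke test as a broken build.
    sys.exit(0 if smoke_test() else 1)
```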
If you define smoke test as, "Run through the basic features of the system to make sure they work," then I think there is value in this. Unit tests, while necessary, are not all-inclusive: they don't find integration bugs. As for automating the testing, I have yet to see a system that can be completely tested with automation, and even if it could be, that automation takes time and effort to build.
Basically, the value I see in it is this: let's make sure that the changes we made to the code today didn't break anything that was working yesterday. Depending on how disciplined your coding practices are, this is not always a given.
I'll assume that your meaning of "smoke test" is the same as mine -- i.e. try to blow the program up in any way you can.
I learned the value of this 20+ years ago when I had just finished (or so I thought) a major piece of code for a WYSIWYG editor. I proudly showed it to my officemate (Hey Dbell!) who said, "Neat!". He immediately created a paragraph with about 1000 characters in it, copied it thousands of times, and created a document of about 32MB, deleted everything, undid it, and the program absolutely exploded.
I was completely horrified and said, "You can't do that!". He grinned and said, "Why not?"
Finding and fixing the problem (which turned out to be easy to replicate, once you knew what the threshold events looked like) uncovered a subtle and serious bug in memory management. Simple click-the-button testing would have never uncovered it.
The modern-day equivalent of this is fuzz testing (http://en.wikipedia.org/wiki/Fuzz_testing). It is not a substitute for step-by-step requirements testing, but it will often expose problems that no other method will.
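As a rough illustration (not the editor from the story), here's a tiny fuzz loop in Python that hammers a hypothetical `parse_document` function with random byte strings; the parser here is just a fragile placeholder, and any exception you didn't expect is a candidate bug:

```python
import random

def parse_document(data: bytes) -> None:
    """Stand-in for the code under test; swap in your real parser."""
    data.decode("utf-8").splitlines()  # deliberately fragile example

def fuzz(iterations: int = 10_000, max_len: int = 4096) -> None:
    random.seed(0)  # fixed seed so any failure is reproducible
    for i in range(iterations):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            parse_document(blob)
        except UnicodeDecodeError:
            pass  # expected for random bytes; not interesting
        except Exception as exc:
            # Anything else is the kind of surprise fuzzing is meant to find.
            print(f"iteration {i}: {type(exc).__name__}: {exc!r}")
            print(f"offending input: {blob!r}")
            raise

if __name__ == "__main__":
    fuzz()
```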
Smoke tests exist because putting broken software in the hands of users can be extremely costly: it can waste many people's time, and it can take management involvement, meetings, etc. to fix. Spending a little bit of time on every build to make sure it's not broken can save you a lot of time when a problem does occur.
The general rule is: it's always cheaper to find problems as early as possible.
And, unless you have tried it exactly as a user would, you cannot be sure that it will work exactly as it should. There are always unexpected issues that only a human can catch, e.g. visual issues, installation issues...
Thus, automated tests should always be complemented by a (small) manual test before putting the work in the hands of users.
To answer the question "what is a smoke test?", imagine that you were testing a piece of electrical or electronic equipment.
Plug it in and turn the power on:
- Do you see it switch on (e.g. the screen, or the power indicator)? Good!
- Do you see any smoke coming out of it? Bad!
And that's all! A "smoke test" is a minimal test: not a rigorous test.
The value of a "smoke test" is that it's cheap, or cost-effective: for 1% of the cost of a complete test, it catches 90% of the most likely errors. For example, in a primitive toaster factory you might do:
- Costly test of the initial design
- Costly test of the prototype
- Costly test of the first item off the mass-production line
- Cheap smoke test of every subsequent item off the mass-production line
I'm not sure what place "smoke tests" have these days in software development, now that automated testing has become more popular. In the old days, people advocated "daily build" as a QA measure (specifically, to help with "continuous integration") ... and then, advocated "daily build and smoke test" as a way of doing better than just a daily build (i.e., do a daily build and verify whether it runs at all) ... but these days, better than that might be "daily build and an extensive automated test suite".
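The "daily build and smoke test" loop itself is simple enough to script. A rough sketch follows; the build, deploy, and smoke-test commands are placeholders for whatever your project actually uses:

```python
import subprocess
import sys

# Placeholder commands; substitute your project's real build and smoke-test steps.
STEPS = [
    ("build", ["make", "clean", "all"]),
    ("deploy to test server", ["./deploy.sh", "test"]),
    ("smoke test", ["python", "smoke_test.py"]),
]

def main() -> int:
    for name, cmd in STEPS:
        print(f"==> {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Daily build FAILED at step '{name}'")
            return result.returncode
    print("Daily build and smoke test passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```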
Smoke testing is definitely worth the effort.
Here's an excellent article by Steve McConnell called "Daily Build and Smoke Test". This was what introduced me to the idea.
For more info on Continuous Integration and daily builds take a look at Martin Fowler's excellent intro to the topic.
HTH
cheers,
Rob
In my opinion, a "Smoke Test" just before a final release, whether the release is to a user or a QA team, is absolutely essential. Automated testing can only test as well as the scripts were written, and there is always the potential that something will get missed.
A simple interface flaw could taint the QA team's or user's perception of your app and cause them to suspect or question fundamental base components. Without a once-over before releasing it, you are opening yourself up to having your software perceived as poor quality just because of some small interface bugs.
Like any best practice, you really need to believe in it to follow through on its returns. A smoke test, or Build Verification Test, should be run with every build, at least daily; if you don't have a daily build, then run it every day anyway to check that your environment is up.
Knowing what you're capable of testing on every day of your testing cycle is the benefit. If you don't run a smoke test every day to check basic connectivity or functionality, you won't be able to report that you are blocked on testing, and the development team won't have direction on which bugs are the highest priority to fix.
I would recommend keeping a recorded timeline metric of your smoke tests compared to blocking defects and test-execution rate. This may prevent you from looking like the bottleneck, and it could save your job if anyone takes you seriously enough.