I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it.

Current situation: developers write a web site, operations deploy it. Once deployed, a developer Smoke Tests it, to make sure the deployment went smoothly.

To me this feels wrong; it essentially means it takes two people to deploy an application. In our case those two people are on opposite sides of the planet, and timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow.

The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each.

On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app.

Now, developers are better at documenting instructions for computers than they are at documenting them for people, so the obvious solution seems to be a combination of NUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the WatiN or Selenium APIs to run through the obvious browser steps and call the web service, then explain to the Operations guys how to run those tests. I can do that; I have mostly done it already.
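For concreteness, a stripped-down fixture of the kind I mean might look like the sketch below. I'm showing the Selenium WebDriver bindings purely for illustration; the SMOKE_BASE_URL environment variable, the URLs and the link text are all made up for the example.

    // Minimal smoke test fixture: load the home page against whatever base URL
    // Operations point it at, and check that the page actually rendered.
    using System;
    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class SmokeTests
    {
        private IWebDriver _driver;
        private string _baseUrl;

        [SetUp]
        public void SetUp()
        {
            // Ops (or the automated deploy) supply the environment-specific URL.
            _baseUrl = Environment.GetEnvironmentVariable("SMOKE_BASE_URL")
                       ?? "http://test.example.com";
            _driver = new ChromeDriver();
        }

        [Test]
        public void HomePage_Loads_And_Shows_Login_Link()
        {
            _driver.Navigate().GoToUrl(_baseUrl);
            Assert.That(_driver.Title, Is.Not.Empty, "Home page did not render a title");
            Assert.That(_driver.FindElements(By.LinkText("Log in")).Count, Is.GreaterThan(0),
                        "Expected a 'Log in' link on the home page");
        }

        [TearDown]
        public void TearDown()
        {
            _driver.Quit();
        }
    }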

But wouldn't it be nice if I could make that process simpler still?

At this point, the Operations guys and the computer are going to have to know which set of tests relates to which version of the app and tell the NUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3).

Wouldn't it be nicer if I could just hand the test runner a base URL and let it download, say, a zip file, unpack it, and edit a configuration file automatically before running any test fixtures it found in there?
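To make the wish concrete, here is roughly the bootstrapper I'm imagining. Nothing like this exists as far as I know; every name in it (the package URL, file names, the BaseUrl appSetting, the console runner) is invented purely for illustration.

    // Hypothetical bootstrapper: fetch the version-specific test package,
    // point its config at the target environment, then hand off to NUnit.
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.IO.Compression;
    using System.Net;
    using System.Xml;

    class SmokeTestBootstrapper
    {
        static void Main(string[] args)
        {
            string baseUrl = args[0];      // e.g. http://test.example.com
            string packageUrl = args[1];   // e.g. http://build/smoketests-v3.3.zip

            // 1. Download and unpack the version-specific test package.
            string zipPath = Path.Combine(Path.GetTempPath(), "smoketests.zip");
            string workDir = Path.Combine(Path.GetTempPath(), "smoketests");
            new WebClient().DownloadFile(packageUrl, zipPath);
            if (Directory.Exists(workDir)) Directory.Delete(workDir, true);
            ZipFile.ExtractToDirectory(zipPath, workDir);

            // 2. Point the test configuration at the environment being smoke tested.
            string configPath = Path.Combine(workDir, "SmokeTests.dll.config");
            var config = new XmlDocument();
            config.Load(configPath);
            var setting = config.SelectSingleNode("//appSettings/add[@key='BaseUrl']");
            setting.Attributes["value"].Value = baseUrl;
            config.Save(configPath);

            // 3. Run whatever fixtures the package contains with the normal NUnit runner.
            Process.Start("nunit-console.exe", Path.Combine(workDir, "SmokeTests.dll"))
                   .WaitForExit();
        }
    }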

Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than NUnit, maybe FitNesse?

For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.

A: 

Typically, your NUnit tests are sufficient: if they all pass, the code base should be working fine. If you deploy the code with passing NUnit tests and encounter a failure on the website, then you need to add an additional NUnit test that fails for the same reason. Then, when you fix your code so that the new test passes, you know you have fixed the issue the deployed code had. For this reason, most automatic build systems can be configured to run all the NUnit tests first and 'fail' the build if any of the tests fail.

GWLlosa
Assume that the application has been built and unit tested by the time it gets deployed. It has also been QAed before it goes to production. But there are sometimes deployment issues, usually because a config change wasn't communicated to Operations. For that reason, we need a smoke test to follow deployment. That is what I'm discussing here.
pdr
@pdr: stop talking to others as if you were on some high horse. What people are explaining to you is that you *think* your application has been fully tested but it really hasn't. You have a problem, right? And what is your problem? Some free-floating "test script" that nobody knows what version it's supposed to test. This *should* be in the branch/release of the version to test, and you *should not* be able to deploy an application that passes the unit tests but fails that script. See? We're using Selenium/Mercurial here and we're having exactly **zero** issues. That is what **I** am discussing.
Webinator
+1  A: 

I worked as a smoke test writer for an ASP.NET application. We used QuickTest Pro, and the automation of test runs was done with Quality Center (it used to be called Test Director). This involved writing hundreds of test scripts that automate a web browser interacting with the web application. These tests were used to validate a build before rolling it out on our production servers. Quality Center allows you to define a "pool" of test machines so you can run a large list of test scripts in a multi-threaded manner.

A more simplistic smoke test would be to log all errors/exceptions that the application produces and run a spider against the system. This will not obtain very "deep" code coverage, but smoke tests aren't meant for deep code coverage. This error logging should be a part of the production application to deal with errors as they come up. Bugs will always slip through the cracks, and sadly enough the best testers will be your users.
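This isn't tooling we actually ran, but as a bare-bones illustration of the idea, something like the snippet below would do: it walks a fixed list of pages (a real spider would follow links) and treats any error response as a failed smoke test. The base URL and paths are placeholders.

    // Minimal "spider" smoke check: hit a known set of pages and fail on any
    // error status. Base URL and paths are placeholders, not a real site map.
    using System;
    using System.Net;

    class CrawlSmoke
    {
        static void Main()
        {
            string baseUrl = "http://test.example.com";          // placeholder
            string[] pages = { "/", "/login", "/service.asmx" }; // placeholder paths
            bool failed = false;

            foreach (var page in pages)
            {
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create(baseUrl + page);
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0} -> {1}", page, response.StatusCode);
                    }
                }
                catch (WebException ex)
                {
                    // 4xx/5xx responses and connection failures land here.
                    Console.WriteLine("{0} FAILED: {1}", page, ex.Message);
                    failed = true;
                }
            }

            Environment.Exit(failed ? 1 : 0);
        }
    }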

Rook
Yeah, I'm certainly not looking for deep code coverage; that should all have been done by this point. This is simply: does each app load and go through a couple of pages or service requests without failing horribly because we forgot a version-specific deployment instruction? QuickTest Pro looks like a quality product but heavyweight for this kind of test; I want something the ops guys can install on their home machines, click, and run.
pdr
+1  A: 

I've used Selenium in the past to do this sort of smoke test for web deployments. You can write a suite of test scripts and then run them against the same site in different environments.

Jason
As I say, I've written a test script - I'm actually using WatiN for now, but that's a detail. The question is about how to make it easy for someone to pick up a script, know it's the correct one for the application version, and run it.
pdr
@pdr: Don't ask for help on SO if you think you know it all... +1 to Jason, your (say, Selenium) test suite is supposed to match whatever branch/release you're working on. The problem you have is that you're not practicing continuous integration, and hence you're left with a silly pointless script that you don't bother to maintain. It should be just like unit testing: tests are passing or you ain't shipping. Now it's up to you, as a developer, to figure out how to put the correct test suite in the correct branch/release and to make your test "script" not silly.
Webinator
@WizardOfOdds - I really think you need to read again and understand before getting so uptight.
pdr
A: 

Telerik has some free and not-free UI testing tools that can be run in an automated way by anybody, which might help with this too.

Jaxidian
A: 

I don't know which VCS you're using, but you could write a solution that pulls a version-specific configuration file from the VCS through an intermediary service.

You could write a PowerShell script or an application that would download the config file from a web service or web app, passing the test URL as a parameter. The service or app would be running on a machine with access to the VCS, so it could return the file contents. Once retrieved, the script or app could then initiate the tests.
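As a rough sketch of the client half (the config-service URL and its query parameters are invented, and the intermediary service that actually talks to the VCS isn't shown):

    // Hypothetical fetch step: ask an intermediary service for the config file
    // matching this version and target URL, and drop it beside the test assembly.
    using System;
    using System.IO;
    using System.Net;

    class FetchVersionConfig
    {
        static void Main(string[] args)
        {
            string testUrl = args[0];   // e.g. http://test.example.com
            string version = args[1];   // e.g. 3.3

            string serviceUrl = string.Format(
                "http://buildtools/configservice?version={0}&target={1}",
                Uri.EscapeDataString(version), Uri.EscapeDataString(testUrl));

            string config = new WebClient().DownloadString(serviceUrl);
            File.WriteAllText("SmokeTests.dll.config", config);

            // The tests themselves would then be kicked off in the usual way.
        }
    }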

John Fisher
A: 

After much time wasted trying to come up with an easier solution, we eventually taught the ops team how to use NUnit's GUI runner. This was easier than expected and is working fine.

pdr