views: 494 | answers: 8

I'm curious as to what sort of standards other teams make sure are in place before code ships (or deploys) out the door in major releases.

I'm not looking for specific answers to each, but here's a rough idea of what I'm trying to find out.

  • For server-based apps, do you ensure monitoring is in place? To what degree: just that it responds to ping, that it can hit all of its dependencies at any given moment, or that the logic the app actually provides is sound (e.g., a service that calculates 2+2 actually returns "4")? (A minimal health-check sketch follows this list.)
  • Do you require automated build scripts before code is released? Meaning, any dev can walk onto a new box, yank something from source control, and start developing? Given things like an OS and IDE, of course.
  • How about automated deployment scripts, for server-based apps?
  • What level of documentation do you require for a project to be "done?"
  • Do you make dang sure you have a full-fledged backup plan for all of the major components of the system, if it's server-based?
  • Do you enforce code quality standards? Think StyleCop for .NET or cyclomatic complexity evaluations.
  • Unit testing? Integration tests? Performance load testing?
  • Do you have standards for how your application's error logging is handled? How about error notification?
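
To make the monitoring item above concrete, here is a minimal health-check sketch using only Python's standard library. The port, the dependency URL, and the 2+2 "logic check" are placeholders assumed for illustration, not details from the question; a monitoring system would poll this endpoint and alert whenever it returns 503.

    # Minimal health-check endpoint sketch (Python standard library only).
    # DEPENDENCY_URL, the port, and the 2+2 "logic check" are illustrative
    # placeholders, not details from the question.
    import json
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DEPENDENCY_URL = "http://localhost:8081/ping"  # hypothetical downstream service

    def calculate(a, b):
        """Stand-in for the service's real business logic."""
        return a + b

    def dependency_ok():
        """Can we reach what we depend on right now?"""
        try:
            with urllib.request.urlopen(DEPENDENCY_URL, timeout=2) as resp:
                return resp.status == 200
        except OSError:
            return False

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            report = {
                "alive": True,                   # we answered, so ping-level is fine
                "dependency": dependency_ok(),   # dependencies reachable
                "logic": calculate(2, 2) == 4,   # the app's logic is still sound
            }
            status = 200 if all(report.values()) else 503
            body = json.dumps(report).encode()
            self.send_response(status)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), HealthHandler).serve_forever()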

Again, not looking for a line-by-line punchlist of answers to anything above, necessarily. In short, what non-coding items must a code release have completed before it's officially considered "done" for your team?

+4  A: 

I mostly do web development, so my items may be different from yours. Just off the top of my head...

  • Ensure all web services are up to date
  • Ensure all database scripts/changes/migrations have already been deployed to the production server
  • Minify all JS and CSS files
  • Make sure all unit/functional/integration/Selenium tests are passing (we aim for 95%+ test coverage while we're developing, so these are usually pretty good at flagging a problem; see the sketch below)

There's more, I know there is, but I can't think of any right now.
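
A hedged sketch of the test-and-coverage gate above, assuming a project that can be driven from Python with pytest and coverage.py; the answer doesn't name its stack, so treat every command and the 95% threshold as assumptions.

    # Pre-release gate sketch: run the test suite and enforce a coverage floor.
    # Assumes a Python project with pytest and coverage.py installed; the
    # commands and the 95% bar are illustrative, not from the answer above.
    import subprocess
    import sys

    STEPS = [
        ["coverage", "run", "-m", "pytest"],        # run the whole test suite
        ["coverage", "report", "--fail-under=95"],  # enforce the 95%+ coverage aim
        # a minification step for JS/CSS would go here, using whatever
        # asset pipeline or minifier the project already relies on
    ]

    for cmd in STEPS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit("pre-release gate failed at: " + " ".join(cmd))

    print("all pre-release checks passed")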

Matt Grande
A: 
  1. No visible bugs? OK.
  2. Unit tests pass? OK (well, some are ignored).
  3. Setup works? Yeah, sure. OK.
  4. Error logging? Of course! :-) We need it to fix the bugs!
  5. All of it runs on CruiseControl.NET. Nice.
abmv
+5  A: 

The minimum:

  1. unit tests pass
  2. integration tests pass
  3. deployment to the test stage succeeds
  4. a short manual check on the test stage

Better:

  1. unit tests pass
  2. Checkstyle passes
  3. integration tests pass
  4. metrics pass (e.g., JMeter results and test coverage)
  5. deployment to the test stage succeeds
  6. some manual tests on the test stage

Finally, deploy to the production stage.

All unit and integration tests run automatically, ideally on a continuous integration server such as CruiseControl, driven by Ant or Maven. When developing web services, testing with soapUI works well.

If a database is used, an automatic upgrade is done (with Liquibase, for example) before deployment. When external services are used, additional configuration tests are needed to ensure the URLs are OK (a HEAD request from the application, a database connect, a WSDL GET, ...); a sketch of such a test follows below. When developing webapps, HTML validation of a few pages is useful, and a manual check of the layout (with Browsershots, for example) is useful too.

(All the example tools above are from Java development.)
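
A minimal sketch of such a configuration test (in Python for brevity, since the answer's own examples are Java; the URLs, host, and port here are invented, and a real database check would use the actual driver rather than the bare TCP probe shown).

    # Configuration smoke-test sketch: verify external endpoints before deployment.
    # The URLs and host are placeholders; a real database check would use your
    # actual driver (JDBC, etc.) instead of the bare TCP probe shown here.
    import socket
    import urllib.request

    APP_URL = "http://app.example.com/"              # HEAD request from the application
    WSDL_URL = "http://service.example.com/ws?wsdl"  # WSDL GET for an external service
    DB_HOST, DB_PORT = "db.example.com", 5432        # database connect (TCP level only)

    def head_ok(url):
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.status < 400
        except OSError:
            return False

    def wsdl_ok(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200 and len(resp.read()) > 0
        except OSError:
            return False

    def db_ok(host, port):
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            return False

    checks = {
        "application HEAD": head_ok(APP_URL),
        "WSDL GET": wsdl_ok(WSDL_URL),
        "database connect": db_ok(DB_HOST, DB_PORT),
    }
    for name, ok in checks.items():
        print(name, "OK" if ok else "FAILED")
    if not all(checks.values()):
        raise SystemExit(1)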

And last (but not least): are all acceptance tests still passing? Is the product what the owner wants? Do a live review with the product owner on the test system before going further!

Arne Burmeister
+4  A: 

Each and every project is different; however, as a rule of thumb, here are the core things that I try to have done before letting code go out into the wild.

In no particular order:

1) A version identifier in place where a user can find it later; this must be unique to this release. (Very typically a "version number" associated with the distributable, the libraries, and the executable, or visible to the user from an "about" dialog. It could also be a number at a well-known register or offset in firmware.)

2) A snapshot of the exact code used to produce the release. (a label or a branch of the release in the SCM system is good for this)

3) All the tools necessary to rebuild the release from the source must be noted and archived (the source from step 2 is of limited use without this).

4) An archive of the actual release (a copy of the exact installer released; who knows, in 7 years your tools may not be able to build it, but at least you will have the source code and an installable at your side for investigation purposes). A small tagging-and-archiving sketch of steps 2-4 follows this list.

5) A set of documented changes between this release version and the previous one aka Release Notes (I like to use the style of appending to the list so that all release changes are available in one place for a user).

6) Candidate release test cycle complete. Using the distributable that was created, load and test against the full, vetted test plan to be sure core functionality is operational and all new features are present and operating as intended.

7) Defect tracking shows all outstanding items are flagged as a) fixed, b) not a defect, or c) deferred.
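
As a rough sketch of steps 2-4 (snapshot the sources, note the toolchain, archive the shipped installer): this assumes Git as the SCM and invents the version, file names, and paths, since the answer itself names no particular tools.

    # Release snapshot sketch for steps 2-4. Git, the version string, and all
    # paths are assumptions for illustration only.
    import shutil
    import subprocess
    from pathlib import Path

    VERSION = "1.4.0"                                # the unique release version (step 1)
    INSTALLER = Path("dist/myapp-setup-1.4.0.exe")   # hypothetical built installer
    ARCHIVE_DIR = Path("release-archive") / VERSION

    # Step 2: snapshot the exact code used to produce the release.
    subprocess.run(["git", "tag", "-a", "v" + VERSION, "-m", "Release " + VERSION],
                   check=True)

    # Step 3: note the tools needed to rebuild from that snapshot.
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    toolchain = subprocess.run(["python", "--version"],
                               capture_output=True, text=True, check=True).stdout
    (ARCHIVE_DIR / "toolchain.txt").write_text(toolchain)

    # Step 4: archive the exact installer that shipped.
    shutil.copy2(INSTALLER, ARCHIVE_DIR / INSTALLER.name)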

You can sprinkle in many other steps depending upon domain or development style, but I would state that most software "should be" performing the above steps each and every release. YMMV.

Have fun storming the castle.

another average joe
+1  A: 
  • Code style checks (automated)
  • Automated tests (unit & integration tests)
  • Manual tests (including test and beta stages)
  • White-box penetration testing tool (automated)
  • Black-box penetration testing tool (automated)
  • Manual exception/logging monitoring on the test/beta stages before rollout
  • Ability to revert to the previous version at any time
  • Code review & a check for 'illegal check-ins'
Josef
+1  A: 

For web / internal apps, one thing in addition to the other suggestions:

Make sure to involve the ops/deployment team so you don't deliver software that requires more servers than they have (don't assume the people pushing the requirements have already done so).

LapTop006
+1  A: 
  • Review the checklist: check that all the new features, change requests, and bug fixes planned for the version have been finished.
  • The build (on the build machine) compiles without any warnings or errors in Release mode.
  • All the automated unit tests run without errors.
  • All the messages and images have been approved by the product team.
  • Performance checks are no worse than the former version (see the sketch after this list).
  • The full (manual) test plan has been checked by the test team without errors.
    • The application is tested in as many scenarios as possible (different OSes, database engines, configurations, and third-party applications).
    • All the features of the application are tested: it has happened many times that a change in one feature broke another we thought was unrelated; shit happens, so we have to minimize it.
    • The setup or deployment works in all those scenarios too.
    • The setup is able to upgrade former versions.
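
A minimal sketch of the "no worse than the former version" check: time a representative operation and compare it to a stored baseline. The workload, baseline file, and 10% tolerance are invented for illustration.

    # Performance regression check sketch. The workload, baseline file, and
    # the 10% tolerance are illustrative assumptions.
    import json
    import time
    from pathlib import Path

    BASELINE_FILE = Path("perf_baseline.json")
    TOLERANCE = 1.10  # allow up to 10% slower than the former version

    def representative_workload():
        """Stand-in for whatever operation matters (query, render, import...)."""
        sum(i * i for i in range(1_000_000))

    def measure(repeats=5):
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            representative_workload()
            timings.append(time.perf_counter() - start)
        return min(timings)

    current = measure()
    if BASELINE_FILE.exists():
        baseline = json.loads(BASELINE_FILE.read_text())["seconds"]
        if current > baseline * TOLERANCE:
            raise SystemExit(f"performance regression: {current:.3f}s vs baseline {baseline:.3f}s")
        print(f"performance OK: {current:.3f}s (baseline {baseline:.3f}s)")
    else:
        BASELINE_FILE.write_text(json.dumps({"seconds": current}))  # record former version
        print(f"baseline recorded: {current:.3f}s")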
jmservera
+1  A: 

We did a major release recently, so this is still pretty fresh in my mind. We make a Windows application with a GUI for which we release a binary executable, so my list is necessarily going to be substantially different from that for a web-only release.

  1. Release candidates go out to the testing team. They need at least a few days to play with it. If they find any bugs that we consider show-stoppers, the release is aborted. This presumes you have a testing team. We only clear a release candidate if at least one week has passed since its build date.

  2. All automated testing has to work and pass. Automated testing is considered a supplement to the live testers.

  3. Any bugs marked as "blockers" must be resolved for the final build.

  4. Publicity material has to be ready (in our case, a web-page update and an email newsletter). Resellers are alerted that a release is coming several weeks in advance, so that they can prepare their material as well. This mostly isn't a programmer concern, but we do check marketing claims for accuracy.

  5. Licensing has to be updated to reflect whatever copy-protection we're using. Our beta versions and the release versions use different licensing models, and this change requires programming effort.

  6. The installer and license agreement have to be updated. Since the beta versions have an installer, this is usually just a text change, but it still falls to the programmers to actually update the install script.

  7. Any references to the beta version need to be removed from the application itself. We missed a few of these, embarrassingly.

  8. Help files and manuals had to be brought completely up-to-date and proofread, since they were part of the release package.

  9. If there were bugs that couldn't be fixed in time, we would at least try to mitigate the damage -- for example, detect that such-and-such bug was occurring, and abort the operation with an apologetic error message. This contributes enormously to perceived product stability.
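
A tiny sketch of that mitigation pattern: detect the known-bad condition up front and abort with a friendly message rather than letting the bug do damage. The condition, the limit, and the messages are invented, and the original is a Windows GUI app, so take this only as the general shape.

    # Known-bug mitigation sketch: refuse an operation that would hit an
    # unfixed bug, with an apologetic message. The 10,000-row limit and the
    # wording are invented placeholders.
    class KnownIssueError(Exception):
        """Raised when an operation would trigger a known, unfixed bug."""

    def export_report(rows):
        # Hypothetical known bug: exports over 10,000 rows crash later in a
        # third-party library, so detect that case and bail out early.
        if len(rows) > 10_000:
            raise KnownIssueError(
                "Sorry - exports over 10,000 rows aren't supported in this "
                "release. Please split the export; a fix is planned."
            )
        ...  # normal export path

    try:
        export_report([{"id": i} for i in range(20_000)])
    except KnownIssueError as err:
        print("Operation cancelled:", err)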

Far and away, the difficulties of a major release were not programming problems; they were administrative/marketing problems. Many of these things required programmer attention -- helping with installers, proofreading the feature list to make sure none of it was nonsense, proofreading technical sections of the manual, updating licensing, etc. The main technical difference was the shift from bug-fixing to bug-mitigating.

AHelps