Two main ways to deploy a J2EE/Java Web app (in a very simplistic sense):

Deploy assembled artifacts to production box

Here, we create the .war (or whatever) elsewhere, configure it for production (possibly creating numerous artifacts for numerous boxes) and place the resulting artifacts on the production servers.

  • Pros: No dev tools on production boxes, can re-use artifacts from testing directly, staff doing deployment doesn't need knowledge of build process
  • Cons: Two processes for creating and deploying artifacts; potentially complex configuration of pre-built artifacts can make the process hard to script/automate; binary artifacts have to be versioned

Build the artifacts on the production box

Here, the same process used day-to-day to build and deploy locally on developer boxes is used to deploy to production.

  • Pros: One process to maintain, and it's heavily tested/validated by frequent use. It is potentially easier to customize configuration at artifact-creation time than to customize a pre-built artifact afterward; no versioning of binary artifacts needed.
  • Cons: Potentially complex development tools needed on all production boxes; deployment staff needs to understand build process; you aren't deploying what you tested

I've mostly used the second process, admittedly out of necessity (no time/priority for another deployment process). Personally I don't buy arguments like "the production box has to be clean of all compilers, etc.", but I can see the logic in deploying what you've tested (as opposed to building another artifact).

However, Java Enterprise applications are so sensitive to configuration, it feels like asking for trouble having two processes for configuring artifacts.

Thoughts?

Update

Here's a concrete example:

We use OSCache and enable the disk cache. The configuration file must be inside the .war file, and it references a file path. This path is different in every environment. The build process detects the user's configured location and ensures that the properties file placed in the war is correct for that user's environment.

If we were to use the build process for deployment, it would be a matter of creating the right configuration for the production environment (e.g. production.build.properties).

If we were to follow the "deploy assembled artifacts to the production box" approach, we would need an additional process to extract the (incorrect) OSCache properties file and replace it with one appropriate to the production environment.

This creates two processes to accomplish the same thing.
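As a rough sketch of what the build-time approach looks like, the following reads the cache path from a per-environment properties file and generates the real oscache.properties from a template. All file names, property keys, and paths here are illustrative, not the actual project's:

```shell
# Hypothetical sketch of build-time substitution: read the cache path
# from a per-environment properties file and generate the shipped
# oscache.properties from a template. File names are illustrative.
ENV=production
printf 'cache.path=/prod/cache\n' > "${ENV}.build.properties"   # stand-in config
printf 'cache.path=@CACHE_PATH@\n' > oscache.properties.template

# Pull the environment-specific value out of the properties file.
CACHE_PATH=$(grep '^cache.path=' "${ENV}.build.properties" | cut -d= -f2)

# Fill in the template and drop the result where the WAR build
# will pick it up.
mkdir -p build/classes
sed "s|@CACHE_PATH@|${CACHE_PATH}|" oscache.properties.template \
    > build/classes/oscache.properties
```

Swapping ENV is then the only difference between a developer build and a production build, which is exactly why a second, artifact-patching process feels redundant.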

So, the questions are:

  • Is this avoidable without "compiling on production"?
  • If not, is this worth it? Is the value of "no compiling on production" greater than that of "Don't Repeat Yourself"?
+1  A: 

Configuration services exist, like the heavyweight ZooKeeper, and most containers let you do some configuration via JNDI. These separate the configuration from the build, but they can be overkill. Much depends on your needs.

I've also used a process whereby the artifacts are built with placeholders for config values. When the WAR is deployed, it is exploded and the placeholders replaced with the appropriate values.
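A minimal sketch of that explode-and-replace step might look like the following. The directory layout, property file, and @TOKEN@ names are illustrative, not from any particular project:

```shell
# Hypothetical sketch of the explode-and-replace approach: once the
# WAR is unpacked, @TOKENS@ in its config files are rewritten with
# the target environment's values.
mkdir -p webapp/WEB-INF/classes
printf 'db.url=@DB_URL@\n' > webapp/WEB-INF/classes/app.properties  # as packaged

# Substitute the placeholder with the production value at deploy time.
DB_URL="jdbc:postgresql://prod-db/app"
sed -i "s|@DB_URL@|${DB_URL}|g" webapp/WEB-INF/classes/app.properties
```

The WAR stays environment-neutral, so the artifact you tested is byte-for-byte the one you deploy; only the deploy-time substitution differs per box.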

sblundy
+2  A: 

Most of the places I've worked have used the first method with environment specific configuration information deployed separately (and updated much more rarely) outside of the war/ear.

18Rabbit
+1  A: 

I highly recommend "deploy assembled artifacts to the production box," such as a war file. To that end, our developers use the same build script (Ant, in our case) to construct the war in their development sandboxes as is used to create the final artifact. This way the script is debugged as thoroughly as the code itself, not to mention completely repeatable.

dacracot
I don't understand; if devs are using the same script, how is that not the "produce artifacts on production box" scenario? Do you have one script to compile and create the war and one to deploy it?
davetron5000
A: 

If you are asking this question relative to configuration management, then your answer needs to be based on what you consider to be a managed artifact. From a CM perspective, it is an unacceptable situation to have some collection of source files work in one environment and not in another. CM is sensitive to environment variables, optimization settings, compiler and runtime versions, etc. and you have to account for these things.

If you are asking this question relative to repeatable process creation, then the answer needs to be based on the location and quantity of pain you are willing to tolerate. Using a .war file may involve more up-front pain in order to save effort in test and deployment cycles. Using source files and build tools may save up-front cost, but you will have to endure additional pain in dealing with issues late in the deployment process.

Update for concrete example

Two things to consider relative to your example.

  1. A .war file is just a .zip file with an alternate extension. You could replace the configuration file in place using standard zip utilities.

  2. Potentially reconsider the need to put the configuration file within the .war file. Would it be enough to have it on the classpath, or to specify the properties on the command line at server startup?

Generally, I attempt to keep deployment configuration requirements specific to the deployment location.

+6  A: 

I'm firmly against building on the production box, because it means you're using a different build than you tested with. It also means every deployment machine has a different JAR/WAR file. If nothing else, do a unified build just so that when bug tracking you won't have to worry about inconsistencies between servers.

Also, you don't need to put the builds into version control if you can easily map between a build and the source that created it.
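One way to make that build-to-source mapping trivial (an assumption on my part, not necessarily what this answerer does) is to stamp the revision into the artifact's manifest at build time. The revision number and manifest key below are illustrative:

```shell
# Hypothetical sketch: record the source revision inside the artifact
# so any deployed WAR maps back to the exact source that built it,
# without keeping binaries in version control.
REV=1234                                    # e.g. taken from `svnversion`
mkdir -p META-INF
printf 'Implementation-Version: r%s\n' "$REV" > META-INF/MANIFEST.MF
```

Anyone holding the WAR can then read the revision straight out of the manifest and check out the matching source.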

Where I work, our deployment process is as follows. (This is on Linux, with Tomcat.)

  1. Test changes and check into Subversion. (Not necessarily in that order; we don't require that committed code is tested. I'm the only full-time developer, so the SVN tree is essentially my development branch. Your mileage may vary.)

  2. Copy the JAR/WAR files to a production server in a shared directory named after the Subversion revision number. The web servers only have read access.

  3. The deployment directory contains relative symlinks to the files in the revision-named directories. That way, a directory listing will always show you what version of the source code produced the running version. When deploying, we update a log file which is little more than a directory listing. That makes roll-backs easy. (One gotcha, though; Tomcat checks for new WAR files by the modify date of the real file, not the symlink, so we have to touch the old file when rolling back.)

Our web servers unpack the WAR files onto a local directory. The approach is scalable, since the WAR files are on a single file server; we could have an unlimited number of web servers and only do a single deployment.
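A rough sketch of that revision-named layout, with illustrative paths and revision numbers (the real setup will differ in detail):

```shell
# Hypothetical sketch of the revision-named deployment layout
# described above.
REV=1234                                    # Subversion revision deployed
mkdir -p releases/r$REV deploy
touch releases/r$REV/app.war                # stand-in for the built WAR

# The deploy directory holds a relative symlink, so `ls -l deploy`
# always shows which revision is live.
ln -sfn "../releases/r$REV/app.war" deploy/app.war

# Rolling back is just re-pointing the symlink; touch the real file
# afterwards so Tomcat sees a new modify date on the target.
touch "releases/r$REV/app.war"
```

Because the symlink swap is atomic, a rollback is a single `ln -sfn` plus a `touch`, with no files copied.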

David Leppik
If you are confident that you can recreate the war from source, why not build it on the production box? If the war is treated as sacred, it seems it ought to go in version control...
davetron5000
For one thing, I'm not confident I can create it purely from source. Things don't get checked into SVN properly, or I need a special test-on-production-environment build for hard to reproduce bugs. For another thing, we've got backups for WARs; SVN history is used for debugging, not roll-backs.
David Leppik
You _can_ have automated deployment procedures on the production boxes, but don't build on them (because then you would need to test that build as well, etc.).
Thorbjørn Ravn Andersen
A: 

Updated with a concrete scenario, see above

davetron5000
+1  A: 

I would champion the use of a continuous integration solution that supports distributed builds. Code checked into your SCM can trigger builds (for immediate testing) and you can schedule builds to create artifacts for QA. You can then promote these artifacts to production and have them deployed.

This is currently what I am working on setting up, using AnthillPro.

EDIT: We are now using Hudson. Highly recommend!

Instantsoup
Ditto that answer except that we use the open source build server Hudson instead. Build servers that are automatically building, _testing_, and generating all your build products for testing and deployment are a good thing.
John Munsch
A: 

Using a single packaged war file for deploys is good practice.
We use Ant to replace the values that differ between environments. We check the file in with a @@@ token that gets replaced by our Ant script. The script substitutes the correct value in the file and then updates the war file before the deploy to each environment:

<!-- Replace the @@@ token with the environment-specific value. -->
<replace file="${BUILDS.ROOT}/DefaultWebApp/WEB-INF/classes/log4j.xml"
         token="@@@" value="${LOG4J.WEBSPHERE.LOGS}"/>

<!-- Update the war file; we don't want the source files in it. -->
<war basedir="${BUILDS.ROOT}/DefaultWebApp" destfile="${BUILDS.ROOT}/myThomson.war"
     excludes="WEB-INF/src/**" update="true"/>

To summarize: Ant does it all, and we use Anthill to manage Ant. Ant builds the war file, replaces the file paths, updates the war file, and then deploys to the target environment. One process; in fact, one click of a button in Anthill.

Mike Pone