It is not uncommon to find a development team building their application binary from source every time they deploy a new version to an environment. While this approach works (and, depending on the technology, may be necessary), it introduces significant opportunities for error and makes debugging a failed deployment increasingly difficult. By migrating to a Build Once, Deploy Many approach, your team can simplify the deployment process by making it repeatable and easier to debug.
In my experience, rebuilding the application from source for every deployment makes it harder to isolate deployment issues. Specifically, it is hard to tell whether the problem is the result of a build-time issue (the developer's responsibility) or a runtime environment issue (the sysadmin's responsibility). So when a deployment does fail, the inevitable battle between developers and sysadmins ensues, with fingers pointing in both directions and denial emails flying. Hopefully you work in a collaborative environment where there isn't contention between developers and sysadmins, but I still argue there is a better way.
What Does "Build Once, Deploy Many" Mean?
Rather than compiling and rebuilding your application for every environment you plan to deploy to, build your application binary only once, and then migrate that same binary file from Development to Test and then to Production. By building once and deploying that binary many times, you eliminate half of the aforementioned deploy-time variables that could lead to errors. For example, if your binary file worked fine in the development environment but failed to run in the test environment, it quickly becomes clear that there is a delta between these two environments.
To adopt this approach, you will need a highly automated, repeatable build process capable of producing an official build. To produce official builds, you need to draw your source code from a version control system like Subversion, run an automated build with tools like Ant or Maven, and, lastly, use a dedicated build machine, preferably a continuous integration server.
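As a minimal sketch of such an automated build, an Ant build file might look like the following. The project name, directory layout, and jar name here are illustrative assumptions, not a prescription:

```xml
<!-- build.xml: a minimal repeatable build, assuming sources live in src/ -->
<project name="myapp" default="dist" basedir=".">
    <target name="clean">
        <delete dir="build"/>
    </target>
    <target name="compile" depends="clean">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
    </target>
    <!-- Every official build produces the same artifact from the same source -->
    <target name="dist" depends="compile">
        <mkdir dir="build/dist"/>
        <jar destfile="build/dist/myapp.jar" basedir="build/classes"/>
    </target>
</project>
```

Because the build always starts from a clean state and takes no manual input, running `ant` on the dedicated build machine produces the same artifact every time for a given source revision.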
Most teams already use version control and build automation, so I won't belabor their tenets. By adding a dedicated build machine to the mix, you increase consistency at build time and eliminate variables when producing an official build. I would avoid using a developer's machine as the dedicated build machine, as it is likely an unstable environment with limited accessibility. The dedicated build machine should be available to all team members. Continuous integration servers, like Hudson, provide a simple solution to this requirement and add much more value to your development process...for free!
Once the binary is produced, it should be archived for future retrieval in a binary repository. In Java, this is greatly simplified by tools like Maven and Ant+Ivy, which provide simple mechanisms for archiving binaries at build time. The Maven project has defined a standard repository structure for Java binaries, which can be accessed from Ant through Ivy. These repositories are simple to set up, since they are built on a file system, so the only question is how you provide access to them. You could set up an Apache HTTP server to provide access, but I have found that Maven repository managers, like Nexus, provide search and repository caching features that add further value to an organization.
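With Maven, for example, the archiving step can be as simple as a `distributionManagement` section in your POM. The repository id and URL below are placeholders for your own repository manager:

```xml
<!-- pom.xml fragment: where `mvn deploy` publishes the official binary -->
<distributionManagement>
    <repository>
        <id>internal-releases</id>
        <url>http://nexus.example.com/content/repositories/releases</url>
    </repository>
</distributionManagement>
```

Running `mvn deploy` on the build machine then compiles, tests, packages, and archives the binary in one repeatable step, and every downstream environment pulls that exact artifact from the repository.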
One of the major objections most people raise to this approach concerns environment variables. One of the reasons teams build a new binary per environment is to inject environment-specific values into the source code, producing a binary specific to the target environment. Ant properties and Maven filters make this very easy, but I would like to propose a much better approach.
The goal should be to build an environment-agnostic binary file that discovers any environment-specific information at startup. There are a number of ways to do this, but I find it easiest to do in the database. When your application starts up, it looks into the database and loads data from a key/value pair table that contains environment-specific properties. The reason I prefer the database to, say, a property file, is that you have already set up an environment-specific configuration for your JNDI data source in the application server. You also likely already have some type of deployment process for the database, which you can use to manage the key/value table. If you don't, then I recommend looking at a database change management tool like Liquibase, which I mentioned in my previous post.
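As a sketch of this startup lookup, the class below reads a hypothetical APP_CONFIG table into a `Properties` object. The table name, column names, and class name are illustrative assumptions, not part of any framework; the JDBC `Connection` would typically come from your container-managed JNDI data source:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical sketch: the APP_CONFIG table and its CONFIG_KEY /
// CONFIG_VALUE columns are assumptions -- adapt them to your own schema.
public class EnvironmentConfig {

    // Fetches the key/value rows through a connection obtained from
    // the application server (e.g. via your JNDI data source).
    public static Properties load(Connection conn) throws SQLException {
        Map<String, String> rows = new LinkedHashMap<>();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT config_key, config_value FROM app_config")) {
            while (rs.next()) {
                rows.put(rs.getString("config_key"),
                         rs.getString("config_value"));
            }
        }
        return fromRows(rows);
    }

    // Pure mapping step, kept separate so it can be exercised
    // without a live database.
    public static Properties fromRows(Map<String, String> rows) {
        Properties props = new Properties();
        rows.forEach(props::setProperty);
        return props;
    }
}
```

Because the binary only knows the table's shape, not its contents, the same artifact runs unchanged in every environment; only the rows in each environment's database differ.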
To further simplify this implementation, I recommend using a configuration abstraction framework like Commons Configuration, which allows you to load properties from a variety of sources, including a database.
I would be remiss if I did not mention automated testing. You could implement a Build Once, Deploy Many approach without any automated tests, but you will find that your confidence in your builds increases significantly if you automate your tests and integrate them into your build process. If you are able to achieve test coverage of 80% or higher, you will find you can make changes faster and deploy more often with fewer headaches.
What you will find once you implement this process is that it enables a variety of other process improvements. One of these is automated deployments: your continuous integration server can easily orchestrate the whole process and provide you with nightly and/or push-button deployments.