Deployment Systems - Packaging

Picking back up on the earlier discussion of deployment in non-trivial systems, I’d like to suggest another useful pattern of behavior I’ve observed.

Deploy Complete, Singular Packages

Package up your service and its dependencies into a single file, then always test and deploy that package. When an artifact moves into test, be it automated or exploratory, the binary package is what gets tested, not a tag or a branch. Tags can be used to tell the automated build infrastructure to produce a candidate, or can be used by the build infrastructure to mark what went into a package, but they are used by developers and the build process, not by the deployment process.
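
To make the shape of that pipeline concrete, here is a minimal sketch, assuming a git checkout, a hypothetical test harness, and an scp-based push; none of the script or artifact names are real tooling. The point is only that the tag appears in the build step, and the test and deploy steps receive the same immutable artifact and nothing else.

```python
# Illustrative sketch only: script and artifact names are assumptions.
import subprocess
from pathlib import Path

def build_candidate(tag: str, workdir: Path) -> Path:
    """The only place a tag appears: check it out and produce one artifact."""
    subprocess.run(["git", "checkout", tag], cwd=workdir, check=True)
    subprocess.run(["mvn", "package"], cwd=workdir, check=True)   # or whatever the build is
    return workdir / "target" / f"service-{tag}.tar.gz"           # hypothetical artifact name

def run_tests(artifact: Path) -> None:
    """Tests exercise the packaged artifact itself, never a fresh checkout."""
    subprocess.run(["./run-tests.sh", str(artifact)], check=True)  # hypothetical harness

def deploy(artifact: Path, host: str) -> None:
    """Deployment is handed the exact binary package that was tested."""
    subprocess.run(["scp", str(artifact), f"{host}:/deploy/"], check=True)

if __name__ == "__main__":
    candidate = build_candidate("release-1.2.3", Path("."))
    run_tests(candidate)
    deploy(candidate, "app01.example.com")
```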

At Ning we use a tarball containing the service, the libraries it needs, any containing server it runs in (such as Apache or Jetty), and the service's post-deploy and control (rc) scripts. The project build produces this, interestingly, via a Maven plugin, which we also use to build Apache/PHP based components at this point! Our deployment system, galaxy, defines the contract for this package. Beyond my personal experience, my understanding is that Google statically compiles everything into a monolithic binary for most of their services (go C++). Similarly, my understanding is that Apple deploys cpio bundles.
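
To give the bundle a concrete shape, here is a rough sketch of assembling such a tarball. The directory names are assumptions for illustration, not the actual contract galaxy defines.

```python
# Illustrative only: these paths are assumptions, not galaxy's real layout.
import tarfile
from pathlib import Path

BUNDLE_CONTENTS = [
    "bin",    # the service itself
    "lib",    # every library it depends on, at pinned versions
    "jetty",  # the containing server ships inside the bundle too
    "rc",     # post-deploy and control (rc) scripts
]

def make_bundle(build_dir: Path, out: Path) -> Path:
    """Roll the service and everything it needs into one deployable tarball."""
    with tarfile.open(out, "w:gz") as tar:
        for entry in BUNDLE_CONTENTS:
            tar.add(build_dir / entry, arcname=entry)
    return out
```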

My rule of thumb for what goes into the bundle: if it is configured specifically for your service, if you rely on a specific version of it, or if it is something that is evolving rapidly (cough node cough redis cough), it goes in the bundle. This is why we package up even very stable things like Apache into our tarballs. Operational automation and configuration management (chef, etc.) can handle all of the other dependencies.
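
Stated as code, the rule of thumb might look like the sketch below; the dependency model is hypothetical, the decision is the point.

```python
# Sketch of the rule of thumb above; the fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    configured_for_service: bool  # e.g. Apache carrying our specific config
    version_pinned: bool          # we rely on one specific version of it
    rapidly_evolving: bool        # e.g. node or redis, at the moment

def goes_in_bundle(dep: Dependency) -> bool:
    """Meet any one of the three criteria and it ships in the tarball;
    everything else is left to configuration management (chef, etc.)."""
    return (dep.configured_for_service
            or dep.version_pinned
            or dep.rapidly_evolving)
```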