Python Application Deployment with Native Packages

Speed, reproducibility, easy rollbacks, and predictability are what we strive for when deploying our diverse Python applications. And that’s what we achieved by leveraging virtual environments and Linux system packages.

Preamble & Disclaimer

A Note From 2017

A lot has changed since 2012 when this article was written. However, in the grand scheme of things, it’s still as relevant as ever. So while we’re running a Nomad cluster with Docker containers nowadays, two things haven’t changed:

  1. We still package whole virtualenvs because that’s the closest to a build artifact that you can get with Python (that, and pex), and installing your dependencies into your container’s site-packages leads to problems.
  2. We still have to package some of our apps as Debian packages because as it turns out, containers aren’t a great fit for everything.

Now without further ado, let’s build packages!

To avoid excessive length, I’ll assume you’re somewhat experienced in DevOps/deployment matters and at least loosely familiar with Fabric, so you can follow the examples. I also won’t be explaining configuration management tools like Ansible, Puppet, Chef, or Salt.

To reap all of the benefits, you’ll need to run a private Debian repository server for your packages. That’s not hard, but it takes some effort. Fortunately, you can avoid running your own Debian repository and still gain most of the advantages: a Debian package (or an RPM package, for that matter) can also be installed by hand using dpkg -i your-package.deb (rpm -Uhv your-package.rpm).

If you want to go really light (or don’t have sufficient privileges on the production servers to install packages), you can follow most of the guidelines here but use vanilla tarballs instead of system packages and do the work of the package manager with custom tooling.

The key point I’m trying to make is that the best way to have painless and reproducible deployments is to package whole virtual environments of the application you want to deploy including all dependencies but without configuration.

How you achieve this goal is up to you, your requirements, and use cases.

On the other hand, you can go bananas and package whole containers. How much effort you want to put into it depends very much on your context.

Why Native Packages at All?

Both in public discussions and privately by mail, one of the most frequently asked questions was:

what’s wrong with Fabric+git-pull?

So let me clarify that first.

It doesn’t scale. As soon as you have more than a single deployment target, it quickly becomes a hassle to pull changes, check dependencies and restart the daemon on every single server. A new version of Django is out? Great, fetch it on every single server. A new version of psycopg2? Awesome, compile it on each of n servers.

It’s hard to integrate with configuration management. It’s easy to tell configuration management “on server X, keep package foo-bar up-to-date or keep it at a concrete version!” That’s a one-liner. Try that while babysitting git and pip. Also, setting up a new server is trivial: install the DEB, put the configuration in place. Done. This gets more important as the trend shifts toward immutable servers.

You have to install build tools on target servers. GCC and development files don’t belong on production servers. Not only are lightweight systems easier to manage and faster to set up, it’s also a security feature: every piece of software installed can and will be used against you. That goes doubly for compilers1.

It can leave your app in an inconsistent state. Sometimes git pull fails halfway through because of network problems, or pip times out while installing dependencies because PyPI went away (pre-CDN) or is inconsistent (now). Your app at this point is – put simply – broken. Don’t change anything on your production servers before you can guarantee success.

Rollbacks. Rolling back a git deploy is easy enough, but what about the virtual environment? What if dependencies changed? Re-creating the virtualenv on n servers is both time-consuming and annoying – especially if your app is down and your customers are yelling at you. To avoid that, you’d have to resort to making backups of your source/virtualenv tree or even file-system snapshots.

To summarize: there are too many moving parts. Since deployments are the only stage of development that affects our customers, I want as few moving parts as possible so the process is fast, predictable, and easily reversible.

On the other hand, deploying using self-contained native packages makes the update of an app a near-atomic, predictable operation. Rollbacks can be done easily by installing an older package version. You always know in what state your application is right now. You need to update an app on many servers? Build once, let configuration management deploy everywhere. No compiling of dependencies, no compilers or development packages at all on production servers.

Some of the problems mentioned above can be mitigated by running a private PyPI server – which you should do anyway – or by employing some clever tricks. Nevertheless, in the grand picture, that’s just a short-term hack. Dan Bravender also wrote an article about how they overcame some of these problems; so if you still think Fabric-based deployments are the way to go, learn how to do them properly from him. For me, his approach has way too many moving parts just to avoid building a package, which takes a few minutes if you do it properly. Your mileage may vary, so make it an informed decision.

That said, if you have one app on one server and you know it will never change (although people tend to err here), feel free to keep it simple until you have a real need. That’s the reason why I gave context about my work in the previous article. Some points may be anti-patterns, however you may get away with them if your situation is different from mine.

What a Deployment Looks Like in Practice

Before I dig into the actual packaging code, let me show you how easy it is for me to deploy an application after having put a bit of effort into it. Throughout this article, I’ll use as an example a simple Twisted application – our whois server for ICANN domains (like .com) – that uses a PostgreSQL database as its backend.

Every application we deploy has a YAML file called ‘deploy.yml’ that describes the build process, one ‘requirements.txt’ containing all of its run-time Python requirements, and a Debian/Ubuntu specific ‘postinst’ script that is executed after an installation/update.

At the top level, all I do to build a new Debian package of the app whose directory I’m currently in is run a tool called deploy without any further arguments. It uses Fabric internally to connect to the appropriate build server and triggers a new build. Typically, this run takes between 30 seconds and 1.5 minutes – depending on the number of dependencies that have to be processed before the actual packaging.

Unless told otherwise, the resulting deb package gets copied to our package servers on success. From now on, it can be installed on our servers that carry the necessary apt configuration.

That’s also where it gets picked up as soon as configuration management realizes the packages on the production servers don’t match the configured one. I usually trigger the updates by hand after having verified the new version works in a staging environment.

Let’s start going into more details with deploy.yml:

app_name: whois
project: DOM
build_deps:
    - libpq-dev
run_deps:
    - libpq5
target_platform: precise

That’s all the information required to build a deb package that is ready for deployment. The only non-obvious field is project, which is required because we run Atlassian’s Stash for our repos, and it requires each repo to belong to a project. The rest does exactly what you’d think: in our whois example we need ‘libpq-dev’ for compiling psycopg2 while building and ‘libpq5’ for running the application. target_platform chooses the build server and the target package repository.
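Reading such a file takes nothing beyond PyYAML. Here’s a minimal sketch of how a tool like deploy might load and sanity-check it – load_deploy_config and the validation logic are my own illustration, not the actual implementation:

```python
import yaml  # PyYAML, the same library the deploy tool uses


REQUIRED = ("app_name", "project", "target_platform")


def load_deploy_config(text):
    """Parse a deploy.yml and fail early on missing mandatory fields."""
    cfg = yaml.safe_load(text)
    missing = [f for f in REQUIRED if f not in cfg]
    if missing:
        raise ValueError("deploy.yml is missing: " + ", ".join(missing))
    # The dependency lists are optional and default to empty.
    cfg.setdefault("build_deps", [])
    cfg.setdefault("run_deps", [])
    return cfg


cfg = load_deploy_config("""
app_name: whois
project: DOM
build_deps:
    - libpq-dev
run_deps:
    - libpq5
target_platform: precise
""")
```

Failing early on a broken deploy.yml keeps the error on the developer’s machine instead of halfway through a build.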

Want a more involved example? Here’s a Django app including JavaScript minification, LESS compilation, cache busting, and i18n translation:

app_name: somedjangoapp
project: PROJ
build_deps:
    - gettext
    - libpq-dev
    - lessc
    - yui-compressor
run_deps:
    - libpq5
pipeline:
    less_files:
        - style.less
    css_files:
        - bootstrap.min.css
        - style.css
    add_cache_busting:
        - base.html
    compile_i18n: True
    collect_static:
        - css/styles.css
        - js/html5.js
        - resources/img/favicon.ico
target_platform: trusty

See the vast difference between build dependencies and runtime dependencies? That’s exactly what you get when you prepare your apps in advance. It keeps your production servers lean and clean.

Most of it should be rather obvious. I’d just like to point out add_cache_busting, which looks for a special string in the supplied files and replaces it with the package version. This makes far-future expiration headers for static files like CSS or JavaScript possible.
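The principle boils down to a search and replace over the listed files. As an illustration only – the function and the placeholder name CACHE_BUST are my own invention, not the actual implementation:

```python
def add_cache_busting(text, version, placeholder="CACHE_BUST"):
    """Replace the placeholder with the package version so the URLs
    of static files change with every release."""
    return text.replace(placeholder, str(version))


html = '<link rel="stylesheet" href="/static/style.css?v=CACHE_BUST">'
busted = add_cache_busting(html, 42)
```

Since the URL changes whenever the package version does, browsers can be told to cache the old URL essentially forever.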

So, what about ‘postinst’? Let’s use a simple example:


#!/bin/sh
set -e

# The name of the app this package installs; "whois" in our example.
APP_NAME=whois

case "$1" in
    configure)
        chown -R $APP_NAME.$APP_NAME /vrmd/$APP_NAME/
        sv -v restart $APP_NAME || sv -v start $APP_NAME
        ;;
    abort-upgrade|abort-remove|abort-deconfigure)
        ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
        ;;
esac

exit 0
In this case we have two interesting steps. First we chown the application directory so it belongs completely to the app itself (one could argue that belongs into configuration management, however those tools tend to be terribly slow at this). After that, we try to restart the daemon, and if that fails (probably because it isn’t running yet), we try to start it instead.

Thanks to configuration management, all the necessary configuration files are already in place before this script has been run.

A more involved example would be using uwsgi in emperor mode, which allows graceful restarts without losing any requests/connections:



#!/bin/sh
set -e

APP_NAME=somedjangoapp

case "$1" in
    configure)
        chown -R $APP_NAME.$APP_NAME /vrmd/$APP_NAME
        if sv check $APP_NAME; then
            touch /vrmd/$APP_NAME/etc/production.ini
        else
            sv up $APP_NAME || true
        fi
        ;;
    abort-upgrade|abort-remove|abort-deconfigure)
        ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
        ;;
esac

exit 0

We check whether the uwsgi emperor daemon is already running and if so, touch its configuration file which makes it reload the respective vassals gracefully. Please note that we use one emperor per app and virtualenv although it was originally intended for multi-app deployment.

I’ve whetted your appetite enough, and the article is long enough already. So let’s move on quickly.


Implementation

I’ll show you parts of my implementation and the reasoning behind it. There has also been an attempt to re-implement these principles as an open source project – I’m in no way affiliated with it though.

A Note From 2017

Nowadays we use a CI system to build our packages and containers, but since the approach presented here requires very little setup, it’s a nice way to get started, so I’m keeping it here.


The Build Server

We use dedicated VMs for building packages for the respective platforms (right now Ubuntu 12.04 LTS and 14.04 LTS). On these, we expect a user called ‘pybuilder’ with no special privileges, virtualenv, and fpm (which we currently have to install via gem, unfortunately).

We use ‘vrmd’ as a prefix for the paths of our apps (for example ‘/vrmd/whois’) as well as for packages (for example ‘vrmd-whois’).

Every app has its own user with the same name as the app (with ‘/bin/false’ as its login shell) that owns a home directory in ‘/vrmd/app-name’. This directory contains at least a virtualenv that is folded into the directory itself (i.e. there is no subdirectory called ‘venv’ or so – just ‘bin’, ‘lib’, et cetera), an ‘etc’ directory for application configuration files, and the app itself (for example ‘/vrmd/whois/whois’ – this ‘double whois’ is necessary because some apps need more than one directory for code or static files; it could also be ‘/vrmd/whois/app’).

An example:

└── whois
    ├── bin
    │   ├── python
    │   └── …
    ├── etc
    │   └── production.yml
    ├── …
    └── whois
        └── …

Setting Up

Everything starts in the ‘deploy’ script. Here’s an excerpt from it:

# -*- coding: utf-8 -*-
"""
Variomedia deployment tool.
"""
from __future__ import absolute_import, division, print_function

import os

import click
import yaml

from fabric.api import settings, hide

from deploy import deb, pypi


@click.command()
@click.option("--build-only", "-b", is_flag=True,
              help="Do not push package to server.")
@click.option("--download", "-d", is_flag=True,
              help="Download to local host.")
@click.option("--force-version", metavar='VERSION-TO-USE', type=int,
              help="Don't determine the DEB version automatically.")
def main(build_only, download, force_version):
    if os.path.exists('deploy.yml'):
        p = yaml.safe_load(open('deploy.yml'))
        platform = p["target_platform"]
        with settings(host_string='{}'.format(platform),
                      abort_on_prompts=True), hide('stdout'):
            deploy = deb.Deployment(
                project=p['project'],
                app_name=p['app_name'],
                run_setup_install=os.path.exists('setup.py'),
                build_deps=p.get('build_deps', []),
                run_deps=p.get('run_deps', []),
                python_version=p.get('python_version', '2.7'),
            )
            deploy.prepare_app()

            pl = p.get('pipeline')
            if pl:
                deploy.compile_less(pl.get('less_files', []))
                deploy.compress_css(pl.get('css_files', []))
                if pl.get('compile_i18n'):
                    deploy.compile_i18n()
                deploy.collect_static(pl.get('collect_static', []))
                deploy.add_cache_busting(pl.get('add_cache_busting', []))

            deploy.build_deb(
                version=force_version,
                distro=platform,
                download=download,
                push_to_repo=not build_only,
            )

The instantiation of Deployment sets up various paths and names according to its arguments, but nothing more than that.

Deployment.prepare_app() does most of the heavy lifting with the help of Fabric: it makes sure all build_deps are present, creates the necessary directories on the build server, checks out the app from the repo, creates a virtualenv and populates it with dependencies from requirements.txt.

After that, the sky is the limit on what you can do with the app before packaging it up – like we do with the pipeline steps above.

Now, Deployment.build_deb() takes the whole app including the virtualenv and packages it using fpm. The version of the package is the build number – which is just the latest package version in our Ubuntu repositories plus one. Finally, Deployment.push_to_repo() takes the Debian package and pushes it to our mirrors.

Now let’s take a close look at the actual Deployment class. Its constructor just sets up some instance variables for later usage:

class Deployment(object):
    BASE_PATH = '/vrmd'

    def __init__(self, project, app_name, run_setup_install, build_deps,
                 run_deps, python_version):
        self.project = project
        self.original_name = self.git_repo = app_name
        self.app_name = app_name.replace('_', '').lower()
        self.pkg_name = 'vrmd-' + self.app_name
        self.app_path = os.path.join(self.BASE_PATH, self.app_name)
        self.src_path = os.path.join(self.app_path, self.app_name)
        self.run_setup_install = run_setup_install
        self.build_deps = build_deps
        self.run_deps = run_deps
        self.python_version = python_version
        self.git_branch = _current_git_branch()
        self.git_commit = _current_git_commit()

As you can see, this code alone reeks of company conventions and could use some generalization. But – you know – YAGNI. :) And yes, we run Python 3 applications in production; that’s why we need the python_version argument!

_current_git_branch() is nothing more than

local('git symbolic-ref HEAD', capture=True)[11:]

and _current_git_commit() is just:

local('git rev-parse --short HEAD', capture=True)

The latter is used for cache busting and the package description.

I’m just factoring out the git code so I can use libgit2 in the future easily.
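In case the slice looks arbitrary: git symbolic-ref prints the full ref, and the [11:] simply drops the ‘refs/heads/’ prefix, which is exactly 11 characters long:

```python
# What `git symbolic-ref HEAD` prints for the master branch:
ref = "refs/heads/master"
# The prefix to strip is exactly 11 characters long...
prefix_len = len("refs/heads/")
# ...which is where the [11:] comes from:
branch = ref[11:]
```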

Building the Application

Now let’s look at one of the two key methods: the one that builds the whole virtualenv so that it can be packaged later:

    def prepare_app(self):
        with settings(user='root'):
            if self.build_deps:
                run('eatmydata apt-get install -qq {}'
                    .format(' '.join(self.build_deps)))
            run('rm -rf {}'.format(self.app_path))
            run('mkdir -p {}'.format(self.app_path))
            run('chown pybuilder.pybuilder {}'.format(self.app_path))
        if self.python_version == "pypy":
            pv = 'pypy'
        else:
            pv = 'python' + str(self.python_version)
        run('virtualenv -p {} -q {}'
            .format(pv, self.app_path))

        _git_push_current('origin', self.git_branch)
        _git_clone(self.git_repo, self.project, self.git_branch, self.src_path)
        with shell_env(CFLAGS='-pipe -Os -g0'):
            run('{} install -r {}'
                .format(os.path.join(self.app_path, 'bin/pip'),
                        os.path.join(self.src_path, 'requirements.txt')))
            if self.run_setup_install:
                with cd(self.src_path):
                    run('{} setup.py install'
                        .format(os.path.join(self.app_path, 'bin/python')))

I believe this code is mostly self-explanatory. Here are some less obvious points:

  • We don’t use a sandboxed directory for building the packages to keep the application paths identical to the paths in production. Otherwise we’d have to fix the shebangs and refresh the virtualenvs on target servers after installation which costs unnecessary time.
  • _git_push_current() and _git_clone() are again just helper functions that push the current branch from the current host to the repo and then git clone from our repo to the build server.

Speeding Up the Builds

The easiest way to speed up your deployments is to make sure you use the pip cache so you don’t have to download all dependencies on each build. For example by adding the following into your ~/.pip/pip.conf:

[global]
download-cache = ~/.pip/download_cache

But even then, I strongly suggest using a fast PyPI proxy/cache like devpi, because pip still queries PyPI for metadata even when a version that perfectly fulfills the requirements.txt is already in the cache.


Packaging

Now there’s only one thing left: the actual packaging of the deb, which is – thanks to fpm – really, really simple:

    def build_deb(self, version, distro, download, push_to_repo):
        if not version:
            version = _next_version_for(self.pkg_name)

        run('rm -rf {}'.format(os.path.join(self.src_path, '.git')))
        with cd(os.path.join(self.BASE_PATH, 'tmp')):
            run('rm -rf *')
            run('mv {} .'.format(os.path.join(self.src_path, 'debian')))
            deps_str = ('-d ' + ' -d '.join(self.run_deps)
                        if self.run_deps else '')
            hooks_str = ' '.join(
                '{} debian/{}'.format(opt, fname)
                for opt, fname in [
                    ('--before-remove', 'prerm'),
                    ('--after-remove', 'postrm'),
                    ('--before-install', 'preinst'),
                    ('--after-install', 'postinst'),
                ]
                if os.path.exists(os.path.join('debian', fname))
            )
            fpm_output = run(
                'fpm '
                '-s dir '
                '-t deb '
                '-n {self.pkg_name} '
                '-v {version} '
                '-a all '
                '-x "*.bak" -x "*.orig" {hooks} '
                '--description '
                '"Branch: {self.git_branch} Commit: {self.git_commit}" '
                '{deps} {app_path}'
                .format(self=self, hooks=hooks_str, deps=deps_str,
                        app_path=self.app_path, version=version)
            )
            deb_name = os.path.basename(fpm_output.split('"')[-2])
            if download:
                get(deb_name, 'debian/%(basename)s')
            if push_to_repo:
                run('scp {} repo@mirror.local:/vrmd/repo/debs/{}/'
                    .format(deb_name, distro))
                with settings(host_string='repo@mirror.local'), cd('/vrmd/repo'):
                    run(
                        'prm '
                        '-t deb '
                        '-p /vrmd/repo/debs '
                        '-c main,dev '
                        '-r precise,trusty '
                        '-a amd64,i386 '
                    )

One last convention I have to mention: every app has a sub-directory called ‘debian’ in its source directory. It may contain an arbitrary number of the aforementioned ‘prerm’, ‘postrm’, ‘preinst’ and ‘postinst’ shell hooks, and that’s why the debian directory is moved into the build directory before fpm is invoked. These files are automagically detected and later referenced using the appropriate command line switches. By the way, building RPMs is just a matter of changing the fpm call from -t deb to -t rpm and adjusting the shell hooks to Red Hat standards. prm is fpm’s cousin that simplifies the handling of your repos.

The only ‘magic’ in this method is fpm_output.split('"')[-2], which makes total sense if you know that fpm returns a string like

Created deb package {"path":"vrmd-whois_42424_all.deb"}

on success. I could use regular expressions or split and parse JSON – but in this case, just splitting is easier. :)
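To make that concrete, here’s the splitting applied to the example output above – the path is always the second-to-last quote-delimited field:

```python
fpm_output = 'Created deb package {"path":"vrmd-whois_42424_all.deb"}'
# Splitting on double quotes yields:
# ['Created deb package {', 'path', ':', 'vrmd-whois_42424_all.deb', '}']
deb_path = fpm_output.split('"')[-2]
```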

If you’re curious how the next version is computed:

def _next_version_for(pkg_name):
    """
    Computes the next version for *pkg_name* according to APT data.
    """
    with settings(user='root'):
        run('eatmydata apt-get update -qq')
        pkg_info = run('eatmydata apt-cache show {}'.format(pkg_name),
                       warn_only=True)
        return max([int(line.split(' ')[1])
                    for line in pkg_info.split('\n')
                    if line.startswith('Version: ')] or [0]) + 1

We simply query APT for the current version and add one.
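Applied to a sample of apt-cache show output (the values below are made up for illustration), the parsing works like this:

```python
# Shortened, made-up sample of what `apt-cache show` prints when two
# versions of the package are known to APT:
pkg_info = (
    "Package: vrmd-whois\n"
    "Version: 41\n"
    "Architecture: all\n"
    "\n"
    "Package: vrmd-whois\n"
    "Version: 42\n"
    "Architecture: all\n"
)
# Extract all Version lines and take the highest, plus one.
versions = [int(line.split(' ')[1])
            for line in pkg_info.split('\n')
            if line.startswith('Version: ')]
next_version = max(versions or [0]) + 1
```

The `or [0]` covers the very first build, when the package doesn’t exist in the repository yet.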

One particularity I really like is the package description:

Branch: master Commit: deadbeef

Using the commit id, we can later check the git history for the exact specifics of this package, and what has happened since then.


Running the Applications

We use Ansible for configuration management. For running the actual daemons we employ runit, whose run scripts also get installed using Ansible.

Here’s an example for our whois Twisted application, which additionally needs to listen on privileged ports (43 and 80) but doesn’t run with root privileges:

#!/bin/sh -e
export LC_ALL="en_US.UTF-8"
cd /vrmd/whois/whois

# Allow listening on privileged ports.
/sbin/setcap CAP_NET_BIND_SERVICE+ep ../bin/python2.7

exec 2>&1 \
    chpst -u whois:whois \
        ../bin/twistd \
            -n \
            --logger structlog.twisted.plainJSONStdOutLogger \
            whois \
                -c ../etc/production.yml

Our logs are in JSON with the help of structlog and digested by logstash.

Schema Migrations

They are hard. I don’t have a universal solution and handle them case by case. Don’t do them in ‘postinst’. And don’t e-mail me about it – I have no general solution for you. :)

ToDo as of 2013-06-21

There are still a few things I would like to optimize in our process but couldn’t justify spending the time on yet:

  • Investigate using apt (currently not pip-installable and unlikely to change) and libgit2 (not part of Precise, but it landed in Trusty) instead of parsing the output of CLI tools.

P.S. I’m updating this article all the time – I want it to always reflect what we do.


  1. Admittedly, this isn’t the strongest argument anymore due to CFFI. ↩︎