Using Fabric to manage your deployments
I’m using Fabric very successfully to manage deployments of projects on my servers. Fabric is simple enough that it does not add extra layers of complexity to the setup, and easy enough to use that it makes deployments straightforward and much less error-prone. This blog post is an updated and expanded version of a presentation that I gave at the local Plone users group.
What is Fabric?
Fabric is a simple system for writing commands that can be run on remote systems. It is written in Python, and the command sets are written in Python as well. Fabric can be called with a server and a command, and it will run that command on the server and present the output. It also allows for some interactivity, so you can respond to prompts from interactive commands.
Why not just manually run the commands?
Sure, I could manually run the commands on the server, but why should I let a typo ruin my day? With Fabric, the commands are simple and the actions they take are fully documented. This means that I can map the same command to different actions on different servers. For example, one server runs a Django process via init.d, while another runs a Plone process via supervisor. All I need to do to restart either one is to cd into the respective development directory and run fab production restart. I know that Fabric will do the correct, documented thing every time.
Why not Puppet/Chef?
I am detailing my reasons for this in another article. It comes down to ease of use, ease of setup and a high level of direct control.
Setup
The tutorial is based on the following assumptions:
- On the server, developers have ssh access and sudo rights under their own name.
- The server build has a separate user with its own home directory.
- The project uses buildout to configure builds.
- There are 3 separate buildout configurations:
- dev.cfg (Developer config)
- qa.cfg (QA server)
- production.cfg (Production config)
- The project code is in a Git repository.
- The project is already set up on the server and is already built and running.
Getting the newest version of Fabric
Fabric is under active development, so your distribution might not ship the newest version. The version that Ubuntu ships, for example, does not provide an interactive prompt for entering passwords. Since Fabric is a Python project, it is relatively easy to include in the buildout to get a local fab command.
Add the following to dev.cfg:
[buildout]
parts +=
    fabric

[fabric]
recipe = zc.recipe.egg
eggs = fabric
How Fabric works
Fabric uses the Paramiko Python SSH library to execute commands via ssh. It looks for a file called fabfile.py in the current directory. This file is a Python script with functions that can be executed. Each public function maps to a command. Commands can be chained:
./bin/fab qa restart
./bin/fab qa stop pull buildout start
./bin/fab qa pull restart production status
Fabric uses a global environment object, called env, to provide context for each command: the name of the server it should run on, the commands it should use, and so on. The normal usage pattern is to set values on env in a function, and then call that function before the actual commands on the fab command line. In the examples above, ‘qa’ and ‘production’ are functions that set values on the env object.
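A minimal sketch of that pattern (the host name, directory and user below are placeholders, not my real setup):

# fabfile.py (minimal sketch)
from fabric.api import env, cd, sudo

def qa():
    """ Settings for the QA server. """
    env.hosts = ['qa.example.com']              # placeholder host
    env.directory = '/home/plone/instances/qa'  # placeholder path

def restart():
    """ Restart the Zope instance in the configured directory. """
    with cd(env.directory):
        sudo('./bin/instance restart', user='plone')

Running ./bin/fab qa restart calls qa() first to populate env, after which restart() reads those values to decide where and how to run.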
Command-line usage
Running fabric with the -l switch gives you a list of all the available commands. For this tutorial, I use a Plone build.
./bin/fab -l:
Fabric script for deploying Plone consistently.

Available commands:

buildout: Rerun buildout.
extra: Should normally just contain 'pass'. Useful for testing in...
nginx_restart: Restart nginx to load new config.
nginx_test: Test nginx config.
production: Settings for the production server.
pull: Do a git pull.
restart: Restart just the zope instance, not the zeo.
status: Find out the running status of the server and deploy.
update: Update code on the server and restart zope.
The commands are usually chained, with a ‘settings’ command to indicate the server and then one or more other commands.
./bin/fab qa update:
[gogo.clouditto.com] Executing task 'update'
[gogo.clouditto.com] sudo: git pull
[gogo.clouditto.com] out: sudo password:
[gogo.clouditto.com] out:
[gogo.clouditto.com] out: Enter passphrase for key '/home/plone/.ssh/id_rsa':
[gogo.clouditto.com] out: remote: Counting objects: 5, done.
[gogo.clouditto.com] out: remote: Compressing objects: 100% (3/3), done.
[gogo.clouditto.com] out: remote: Total 3 (delta 2), reused 0 (delta 0)
[gogo.clouditto.com] out: Unpacking objects: 100% (3/3), done.
[gogo.clouditto.com] out: From git.assembla.com:simmonds_portal
[gogo.clouditto.com] out: 66ab408..57e222d master -> origin/master
[gogo.clouditto.com] out: Updating 66ab408..57e222d
[gogo.clouditto.com] out: Fast-forward
[gogo.clouditto.com] out: fabfile.py | 5 ++++-
[gogo.clouditto.com] out: 1 files changed, 4 insertions(+), 1 deletions(-)
[gogo.clouditto.com] out:
[gogo.clouditto.com] sudo: ./bin/instance restart
[gogo.clouditto.com] out: . . . . . . . . . . .
[gogo.clouditto.com] out: daemon process restarted, pid=28387
[gogo.clouditto.com] out:

Done. Disconnecting from gogo.clouditto.com... done.
The example above logs in to the server, executes a git pull as the plone user (asking for the ssh key passphrase in the process), and restarts the running zope process.
The fabfile
When invoked, Fabric looks for fabfile.py in the current directory and maps functions in that file to Fabric commands. My fabfile.py usually starts with the following:
# fabfile.py
""" Fabric script for deploying Plone consistently. """
from __future__ import with_statement
from fabric.api import env, cd, sudo, run

try:
    from fab_config import *
except ImportError:
    pass
Some notes:
- The module docstring displays when running fab -l, so make it descriptive of the fabfile.
- The __future__ import of with_statement (needed on Python 2.5) lets us use the cd context manager to run commands in a given directory without having to worry about getting into or out of that directory.
- We try to import another file called fab_config. This is purely a practical point: fab_config.py is used for site-specific settings, so this pattern allows us to reuse the same fabfile.py over many projects without polluting it with project-specific details. It also allows us to keep the site configuration out of version control, if we wish.
A typical fab_config.py will look like this:
# fab_config.py

from fabric.api import env

def qa():
    """ Settings for the qa server. """
    env.buildout_config = 'qa'
    env.hosts = ['myqaserver.mysite.com']
    env.deploy_user = 'plone'
    env.directory = '/home/%s/instances/qa.mysite' % env.deploy_user
This provides all the global variables needed in the other functions. The env object accepts any attribute you assign to it in dotted form. The hosts variable is a list of hosts, each of which will receive the same set of commands. This makes it very easy to control even multiple-machine layouts.
The rest of the variables provide info on where and as which user to deploy.
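For comparison, a production settings function could look like this; the host names are placeholders, and the two-entry host list is only there to illustrate the multiple-machine point:

def production():
    """ Settings for the production servers. """
    env.buildout_config = 'production'
    env.hosts = ['www1.mysite.com', 'www2.mysite.com']  # placeholder hosts
    env.deploy_user = 'plone'
    env.directory = '/home/%s/instances/production.mysite' % env.deploy_user

Every command that follows production on the fab command line then runs on each host in turn.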
Commands and functions
Each command maps to a function. The ones below stop and start a Zope instance and its ZEO server.
def stop():
    """ Shutdown the instance and zeo. """
    with cd(env.directory):
        sudo('./bin/instance stop', user=env.deploy_user)
        sudo('./bin/zeoserver stop', user=env.deploy_user)

def start():
    """ Start up the instance and zeo. """
    with cd(env.directory):
        sudo('./bin/zeoserver start', user=env.deploy_user)
        sudo('./bin/instance start', user=env.deploy_user)
Note:
- The cd context manager changes into a directory for the commands in its scope.
- The sudo command runs its command as root (when no user is specified) or as the given user. This lets you log in under your own name without worrying about running the command as the wrong user.
- The code above is relatively old, since I use supervisor (below) to restart these days.
The following code block lets me bundle a series of commands to run as the deploy user, in the deploy directory:
def _with_deploy_env(commands=[]):
    """ Run a set of commands as the deploy user in the deploy directory. """
    with cd(env.directory):
        for command in commands:
            sudo(command, user=env.deploy_user)
Since the function starts with an underscore, it’s regarded as a private, internal function and does not show up in the list of possible fabric commands.
The following command lets me update code on the server:
def pull(project=None):
    """ Do a git pull. """
    if project:
        _with_deploy_env(['./bin/develop up %s' % project])
    else:
        _with_deploy_env(['git pull'])
With this, I actually combine 2 commands into one. If called like this:
./bin/fab production pull
it updates the buildout directory, and if I use:
./bin/fab production pull:abcd
it will update all mr.developer checkouts with abcd in their name. Very nice and flexible, and allows me to deploy minor changes to production almost immediately.
The code below restarts all supervisor-controlled processes on the server. This is slightly dangerous if there are many processes running on the server.
def restart():
    """ Restart the Zope server via Supervisor. """
    sudo('supervisorctl restart all')
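If the server runs other supervisor-controlled processes as well, a safer variation is to restart only a specific program by name; env.supervisor_program below is a hypothetical setting, not part of my actual config:

def restart_instance():
    """ Restart only the named supervisor program, not all processes. """
    # env.supervisor_program would be set in the settings function,
    # e.g. env.supervisor_program = 'instance' (hypothetical).
    sudo('supervisorctl restart %s' % env.supervisor_program)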
Commands can be combined:
def update():
    """ Update code on the server and restart zope. """
    pull()
    restart()
I don’t use this anymore; it’s easy enough to run these as separate commands.
The following shows a combination of the general status of the server and the current status of the deploy. This gives a very nice early warning of possible problems, and shows how up to date the server code is.
def status():
    """ Find out the running status of the server and deploy. """
    # General health of the server.
    run('cat /proc/loadavg')
    run('uptime')
    run('free')
    run('df -h')

    # Deploy and running status.
    _with_deploy_env(['cat parts/instance/etc/zope.conf |grep address',
                      './bin/instance status',
                      './bin/develop info',
                      './bin/develop status',
                      'git status',
                      'git log -1'])
The following lets me do a buildout on the server, using the correct buildout flags and correct buildout config file every time. This prevents any mistakes when running buildout on the server.
def buildout():
    """ Rerun buildout. """
    with cd(env.directory):
        sudo('./bin/buildout -Nvc %s.cfg' % env.buildout_config, user=env.deploy_user)
The following useful bit of scaffolding lets me refine new commands as I’m developing them, and is usually only included as a convenience function:
def extra():
    """ Should normally just contain 'pass'. Useful for
        testing individual commands before integrating them
        into another function. """
    pass
The future
Some future enhancements that I can think of:
- Get and put files from/to the server. A typical use case would be copying zexp files from the server to the local development instance to aid debugging; a rough sketch follows after this list.
- Do the initial buildout too.
- Make sure all the needed packages are installed on the server.
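For the get/put idea, Fabric already provides a get function; the sketch below is only an illustration, with a made-up remote path and function name (get would also need to be added to the fabric.api import at the top of the fabfile):

def fetch_zexp(filename):
    """ Copy a zexp export from the server to the current local directory.
        Sketch only: var/instance/import is an assumed layout. """
    remote_path = '%s/var/instance/import/%s' % (env.directory, filename)
    get(remote_path, filename)

It would be called like any other command, for example ./bin/fab qa fetch_zexp:export.zexp.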
Links
- Fabric: http://fabfile.org
- Some of the code: https://github.com/jbeyers/projecttools
- rst2pdf for presentations: http://lateral.netmanagers.com.ar/stories/BBS52.html