Below are the available settings for configuring Argyle. Each can be configured by setting it in the Fabric env dictionary.
This setting configures the behavior of the system service_command() for the stop/start/restart tasks. It should be a string which accepts the formatting parameters name and command.
Default: u'/etc/init.d/%(name)s %(command)s'
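The template is expanded with Python %-style named interpolation; a quick sketch of the substitution (the service name and command values here are illustrative):

```python
# The default template, expanded with Python %-style named interpolation.
template = u'/etc/init.d/%(name)s %(command)s'

# Illustrative values; Argyle fills these in from the task arguments.
result = template % {'name': 'nginx', 'command': 'restart'}
print(result)  # /etc/init.d/nginx restart
```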
By default Argyle loads various configuration templates from the templates directory inside the argyle module. However, if you wish to override, extend or include additional templates you can include additional directories using this setting. Jinja will look for templates in any directories included here first before loading from the default template directory. See the upload_template() command for additional information on the usage.
Default: ()
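A minimal fabfile sketch of overriding this setting; the directory name deploy/templates is an illustrative project-local path, not a required layout:

```python
from fabric.api import env

# 'deploy/templates' is an illustrative project-local directory.
# Templates found in directories listed here take precedence over
# Argyle's bundled templates.
env.ARGYLE_TEMPLATE_DIRS = ('deploy/templates',)
```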
Argyle is a collection of Fabric utilities for Django deployment. This project is still in very early phases.
These tasks were written primarily for deployments on Ubuntu Linux. Your mileage may vary with other operating systems and flavors of Linux.
Additional documentation on using argyle is available on Read The Docs.
If you have questions, issues or requests for improvements please let me know on Github.
We love Fabric for deployments. We also use it for our server provisioning scripts. However, I found myself repeatedly writing the same Fabric tasks over and over again without a good way to reuse them from project to project. This was leading to a lot of wasted time and copy-paste errors.
While there are a number of existing projects out there for Django deployments with Fabric, I found most to be very opinionated about project/server layout. Unfortunately I am also very opinionated about server layout. I also found that they did not scale up well to multiple servers (separate app and db servers, multiple app servers behind a load balancer).
In no particular order, my goals in creating this project are:
Tasks for installing, updating and removing packages installed with npm, the package manager for Node.js.
New in version 0.2.
Execute any npm command on the remote server. This requires that npm is installed.
Install a given package from npm. The flags parameter can be used to pass flags such as --tag, --force, --global or --link.
Uninstall a package.
Update a package to the latest version.
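Hypothetical command-line invocations of these tasks; the task names npm_install, npm_update and npm_uninstall are assumptions based on the descriptions above, not verified API:

```shell
# Hypothetical invocations; task names are assumptions based on the
# descriptions above.
fab -H 33.33.33.10 npm_install:express
fab -H 33.33.33.10 npm_install:coffee-script,"--global"
fab -H 33.33.33.10 npm_update:express
fab -H 33.33.33.10 npm_uninstall:express
```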
Mostly a cleanup release. Added docs which were previously missing and a full test suite.
Initial public release
Tasks for configuring processes which will be managed by Supervisord.
This is a simple way to execute a supervisorctl command on the remote server similar to the service_command().
Uploads a configuration to /etc/supervisor/conf.d/<app_name>.conf for managing a new process. If template_name is not given it will look for a template named supervisor/<app_name>.conf; if that is not found it will use the supervisor/base.conf included with this project. The app_name parameter is passed in the context and additional context can be provided with the context parameter.
The default supervisor/base.conf is shown below.
[program:{% block name %}{{ app_name }}{% endblock %}]
{% block main %}
command={% block command %}{% endblock %}
{%- if directory -%}directory={{ directory }}{% endif %}
{%- if run_user -%}user={{ run_user }}{% endif %}
{% endblock %}
{% block logging %}
stdout_logfile={{ log_dir|default('/var/log') }}/%(program_name)s.log
redirect_stderr=true
stderr_logfile={{ log_dir|default('/var/log') }}/%(program_name)s.error.log
{% endblock %}
{% block additional %}
{% endblock %}
Deletes the /etc/supervisor/conf.d/<app_name>.conf configuration.
A wrapper around upload_supervisor_app_conf() for managing a Celery process such as celeryd or celerybeat. The app_name defaults to the command and both are passed in the context. A default supervisor/celery.conf is included which will be used instead of supervisor/base.conf if supervisor/<app_name>.conf is not found.
The default supervisor/celery.conf is shown below.
{% extends "supervisor/base.conf" %}
{% block name %}{{ command }}{% endblock %}
{% block command %}{{ bin_dir|default('/usr/local/bin') }}/{{ command }} {{ args }}{% endblock %}
{% block additional %}
stopwaitsecs=60
{% endblock %}
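As a sketch of how this wrapper might be invoked from a fabfile (the import path, function name and keyword arguments here are assumptions inferred from the description above, not verified API):

```python
from argyle.supervisor import upload_celery_conf  # assumed import path

# Hypothetical usage; function and argument names are assumptions
# based on the description, not verified API.
upload_celery_conf(command=u'celeryd', args=u'-l info')
```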
A wrapper around upload_supervisor_app_conf() for managing a Gunicorn process such as gunicorn or gunicorn_django. The app_name defaults to the command and both are passed in the context. A default supervisor/gunicorn.conf is included which will be used instead of supervisor/base.conf if supervisor/<app_name>.conf is not found.
The default supervisor/gunicorn.conf is shown below.
{% extends "supervisor/base.conf" %}
{% block name %}{{ app_name|default('gunicorn') }}{% endblock %}
{% block command %}{{ bin_dir|default('/usr/local/bin') }}/{{ command|default('gunicorn') }} {{ args }}{% endblock %}
argyle.system contains functions for managing system packages and users. Most of these functions require sudo permissions and should be run by a user with sufficient permissions on the remote server.
This function installs a list of packages using apt-get. You can use this function from both the command-line:
fab -H 33.33.33.10 install_packages:nginx
fab -H 33.33.33.10 install_packages:git-core,subversion,mercurial
or from another Fabric task
def install_python_packages():
    package_list = [
        'python2.6', 'python-all-dev', 'python-setuptools'
    ]
    install_packages(*package_list)
A common use case for installing packages is to install a list of packages from a file. install_packages_from_file is a thin wrapper around install_packages which takes a filename and installs the listed packages. The file should contain a single package name per line and is read from the local filesystem, not the remote.
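A minimal sketch of the expected file format handling; note that skipping blank lines is an assumption for robustness, not a documented behavior of install_packages_from_file:

```python
def read_package_list(text):
    # One package name per line; blank lines are skipped.
    # (Skipping blanks is an assumption, not a documented behavior.)
    return [line.strip() for line in text.splitlines() if line.strip()]

packages = read_package_list("nginx\ngit-core\n\nsubversion\n")
# packages == ['nginx', 'git-core', 'subversion']
```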
Not much to say here. It just runs apt-get update to reload the sources.
Again this is pretty simple. It runs apt-get upgrade on the remote system.
Adds a personal package archive (PPA) and updates the sources. This requires that python-software-properties is installed on the system.
A common use case for adding PPAs is to add a list of PPAs from a file. add_ppas_from_file is a thin wrapper around add_ppa which takes a filename and adds the listed PPAs. The file should contain a single PPA per line and is read from the local filesystem, not the remote.
Adds a source to /etc/apt/sources.list. There is an optional key parameter which is the url to the key. If given it will be fetched and added via apt-key add.
A wrapper around add_apt_source which parses a list of source/key pairs from a file. The format is:
deb http://example.com/deb lucid main (http://example.com/key)
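One way to parse that format, as a simplified stand-in for whatever Argyle does internally (this is a sketch, not the library's actual parsing code):

```python
import re

# Parse lines of the form:
#   deb http://example.com/deb lucid main (http://example.com/key)
# where the parenthesized key URL is optional.
SOURCE_RE = re.compile(r'^(?P<source>.+?)\s*(?:\((?P<key>[^)]+)\))?$')

def parse_source_line(line):
    match = SOURCE_RE.match(line.strip())
    return match.group('source'), match.group('key')

source, key = parse_source_line(
    'deb http://example.com/deb lucid main (http://example.com/key)')
# source == 'deb http://example.com/deb lucid main'
# key == 'http://example.com/key'
```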
This is used to create a new user on the remote server. You can optionally pass a list of groups and the user will be added to them. In addition you can pass the location of a key file. If given, the public key will be added to the newly created user's authorized_keys. All users are created without passwords.
This is a task for calling service commands (such as init.d). This takes the name of the service and the command to run:
fab -H 33.33.33.10 service_command:apache2,reload
By default this will call sudo /etc/init.d/name command. You can configure this by setting env.ARGYLE_SERVICE_COMMAND_TEMPLATE.
from fabric.api import env
env.ARGYLE_SERVICE_COMMAND_TEMPLATE = u'invoke-rc.d %(name)s %(command)s'
start_service, stop_service and restart_service are wrappers around service_command to call start, stop and restart commands for a particular service. As such they are also impacted by setting ARGYLE_SERVICE_COMMAND_TEMPLATE.
Tasks for managing vhosts, users and configurations for RabbitMQ.
This is a simple way to execute a rabbitmqctl command on the remote server similar to the service_command().
Creates a new RabbitMQ user with the given name and password.
Creates a new vhost on the remote RabbitMQ server.
Grants the given permissions to the user on the given vhost. By default all permissions are granted.
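Granting all permissions corresponds roughly to the underlying rabbitmqctl invocation below; the user and vhost names are illustrative:

```shell
# Grant configure, write and read permissions to 'myuser' on 'myvhost'.
# User and vhost names are illustrative.
rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
```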
Uploads the RabbitMQ environment configuration to /etc/rabbitmq/rabbitmq-env.conf. This looks for a template named rabbitmq/rabbitmq-env.conf if template_name is not given. No default for this template is included with Argyle.
Uploads the RabbitMQ configuration to /etc/rabbitmq/rabbitmq.config. This looks for a template named rabbitmq/rabbitmq.config if template_name is not given. No default for this template is included with Argyle.
Tasks for configuring sites running under the Nginx webserver.
Nginx installs with a default server listening on 80 defined in /etc/nginx/sites-enabled/default.conf. This task removes that configuration.
This task uploads a new configuration to /etc/nginx/sites-available/<site_name>. It looks for a template named nginx/<site_name>.conf and, if not found, uploads the default nginx/site.conf unless template_name is given. By default this site configuration will also be enabled by linking it into /etc/nginx/sites-enabled/.
Enables a site in /etc/nginx/sites-available/<site_name> and links it to /etc/nginx/sites-enabled/<site_name>.
Disables a site in /etc/nginx/sites-enabled/ by the name. The configuration in /etc/nginx/sites-available/ is not touched.
Tasks for managing clusters, databases, users and configurations for a Postgres server.
Creates a database user and sets a password if given. You can pass additional creation flags with the flags parameter.
Return True if the database user already exists.
Execute a SQL query on the remote server. You can specify the DB and additional flags with the db and flags parameter. If use_sudo is True then this will be executed as the postgres user. Additional **kwargs are passed to sudo or run.
Creates a new database with a given owner (if given) and encoding. You can specify which database is copied to create the new one with the template parameter. Using 'template0' as the template will allow creating a database with a different encoding from the default, which can’t be done from template1.
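On the command line this corresponds roughly to a createdb invocation like the one below; the database, owner and encoding values are illustrative:

```shell
# Create 'myproject' owned by 'myuser' with UTF8 encoding, copied from
# template0 so the encoding can differ from the cluster default.
# Names and values are illustrative.
createdb -O myuser -E UTF8 -T template0 myproject
```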
Return True if the database already exists.
Uploads a configuration to /etc/postgresql/<version>/<cluster>/pg_hba.conf from a template. If not given the Postgres version will be detected on the server. The default template name is postgres/pg_hba.conf.
Drops and restores a given cluster. This is mainly used for provisioning a new server to ensure the cluster has the desired default encoding.
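On Debian/Ubuntu this corresponds roughly to the pg_dropcluster and pg_createcluster commands; the version and cluster name below are examples:

```shell
# Drop the existing cluster and recreate it with a UTF-8 locale.
# The version (9.1) and cluster name (main) are examples.
pg_dropcluster --stop 9.1 main
pg_createcluster --start --locale en_US.UTF-8 9.1 main
```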
argyle.base contains functions which extend the Fabric api to make the rest of the Argyle framework work.
Fabric's underlying SSH library, paramiko, does not support SSH agent forwarding, and hence neither does Fabric. However, agent forwarding can be helpful for accessing private repositories (such as hg/git) over ssh without having to generate an ssh key for the server and authorize it with the repository host. To use this function you must enable agent forwarding in your SSH configuration:
ForwardAgent yes
This code is based on code from Lincoln Loop. For more info you can see the Fabric issue and paramiko issue for adding support for agent forwarding.
Note
This issue has been fixed in Fabric 1.4.
Fabric comes with support for uploading files using a template. See fabric.contrib.files.upload_template. With this you can use either Python string formatting or Jinja2. Argyle uses this same idea but with a few differences:
- All templates use Jinja2.
- The current Fabric environment is always passed into the template context.
- You can pass a list of template names. The first matched template will be used.
Argyle ships with templates which are loaded using jinja2.PackageLoader. You can override these templates by defining env.ARGYLE_TEMPLATE_DIRS as a tuple of template locations.
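The first-matched-template behavior can be sketched as follows; this is a simplified stand-in for Jinja2's template lookup, not Argyle's actual loader code:

```python
def select_template(names, available):
    # Return the first candidate name that exists; mirrors the
    # "first matched template will be used" behavior described above.
    # (Simplified stand-in, not Argyle's actual implementation.)
    for name in names:
        if name in available:
            return name
    raise LookupError('None of %r could be found' % (names,))

available = {'supervisor/base.conf': '...'}
chosen = select_template(
    ['supervisor/myapp.conf', 'supervisor/base.conf'], available)
# chosen == 'supervisor/base.conf'
```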