Welcome to Dhub’s documentation!

Contents:

Ansible

Playbooks

Playbook examples

Every playbook contains at least one play. The purpose of a play is to map a group of hosts to a set of defined roles.

The most basic task simply invokes an Ansible module:

---
- hosts: web
  name: install httpd
  gather_facts: true

  tasks:
    - name: install httpd
      yum: name={{ item }} state=installed
      with_items:
        - httpd
        - httpd-devel
      when: ansible_os_family == "RedHat"
      notify:
        - restart httpd

    - name: install apache2
      apt: name={{ item }} state=installed
      with_items:
        - apache2
        - apache2-dev
      when: ansible_os_family == "Debian"
      notify:
        - restart apache2

  handlers:
    - name: restart httpd
      service: name=httpd state=restarted
      when: ansible_os_family == "RedHat"

    - name: restart apache2
      service: name=apache2 state=restarted
      when: ansible_os_family == "Debian"

Hosts and Users

---
- hosts: webservers
  remote_user: root

The hosts line is a list of one or more groups or host patterns, separated by colons.

The remote_user is just the name of the user account.

Remote users can also be defined per task:

---
- hosts: webserver
  remote_user: root
  tasks:
    - name: test connection
      ping:
      remote_user: localuser

Support for running things as another user is also available; you can become root or any other user:

---
- hosts: webserver
  remote_user: root
  tasks:
    - name: restart httpd
      service: name=httpd state=restarted
      remote_user: localuser
      become: yes
      become_method: sudo

    - name: restart mysql
      service: name=mysqld state=restarted
      remote_user: localuser
      become: yes
      become_user: mysql
      become_method: su

Tasks list

Within a play, all hosts are going to get the same task directives. It is the purpose of a play to map a selection of hosts to tasks.

Hosts with failed tasks are taken out of the rotation for the entire playbook. If things fail, simply correct the playbook file and rerun.

The goal of each task is to execute a module with very specific arguments.

Modules are ‘idempotent’ (幂等) , meaning if you run them again, they will make only the changes they must in order to bring the system to the desired state. This makes it very safe to rerun the same playbook multiple times.
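For example, a task like the following (package name chosen for illustration) installs the package only if it is missing; a second run of the same playbook reports no change:

```yaml
tasks:
  - name: make sure ntp is installed
    yum: name=ntp state=installed
```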

Every task should have a name, which is included in the output from running the playbook. If no name is provided, the string fed to ‘action’ will be used for output.

tasks:
  - name: make sure apache is running
    service: name=httpd state=running
  - name: disable selinux
    command: /sbin/setenforce 0

The command and shell modules care about return codes, so if you have a command whose successful exit code is not zero, you may wish to do this:

tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand || /bin/true

Or this:

tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand
    ignore_errors: True

Variables can be used in action lines:

tasks:
  - name: create a virtual host file for {{ vhost }}
    template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}

Handlers: Running Operations On Change

Playbooks recognize changes and have a basic event system that can be used to respond to change.

These notify actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks:

- name: template configuration file
  template: src=template.j2 dest=/etc/foo.conf
  notify:
    - restart memcached
    - restart apache

The things listed in the notify section of a task are called handlers.

Handlers are lists of tasks, not really any different from regular tasks, that are referenced by name:

handlers:
  - name: restart memcached
    service: name=memcached state=restarted
  - name: restart apache
    service: name=apache state=restarted

Handlers are best used to restart services and trigger reboots.

Handlers are automatically processed between the ‘pre_tasks’, ‘roles’, ‘tasks’, and ‘post_tasks’ sections. If you ever want to flush all handler commands immediately, do this:

tasks:
  - shell: some tasks go here
  - meta: flush_handlers
  - shell: some other tasks

Executing A Playbook

ansible-playbook playbook.yml -f 10

Ansible-Pull

ansible-pull is a small script that checks out a repo of configuration instructions from git and then runs ansible-playbook against that content.

Tips and Tricks

If you ever want to see detailed output from successful modules as well as unsuccessful ones, use the --verbose flag.

To see what hosts would be affected by a playbook before you run it, you can do this:

ansible-playbook playbook.yml --list-hosts

Ansible quick start

install

yum install ansible # RHEL/CentOS/Fedora

apt-get install ansible # Debian/Ubuntu

emerge -avt ansible  # Gentoo/Funtoo

pip install ansible # will also install  paramiko PyYAML jinja2

Inventory

The inventory file defines the hosts to be managed. The default location is /etc/ansible/hosts; use the -i option to specify a different location.

Managed machines are specified by IP address or domain name. Ungrouped machines are kept at the top of the hosts file; groups are declared with [], and nesting is supported:

[miss]
dn0[1:3]

[lvs:children]
web
db

Per-host parameters such as port, default user, and private key can also be set in the inventory:

[db]
orcl01.example.com ansible_ssh_user=oracle

Try it out

ansible -i hosts all -m ping -u root -f 3

  • -i specifies the inventory file (here, the hosts file in the current directory); if omitted, the default location is used.

  • all runs against every host defined in the inventory; a group name or pattern can be given instead.

  • -m specifies the module to use; if omitted, the default module command is used.

  • -u specifies the user on the remote machine.

  • -a passes arguments to the module; Ansible accepts arguments as key-value pairs and returns results as JSON.

  • -f limits execution to the given number of machines at a time.

Ad-Hoc: Simple Tasks

Running commands

ansible -i hosts all -m raw -a 'yum install -y python-simplejson'

The raw module can be used on machines where Python is version 2.4 or where Python is not installed at all; it is similar to running a shell command directly on the remote host.

ansible-doc raw

Transferring files

ansible all -m copy -a "src=/etc/hosts dest=/tmp/hosts"

Modules

setup

ansible -i hosts -m setup -a 'filter="ansible_processor_count"'

The setup module gathers system information from the hosts and stores it in built-in variables (facts) that other modules can reference.

user

ansible -i hosts -m user -a 'name="test" shell="/bin/false"'

file

ansible -i hosts -m file -a 'path=/etc/fstab'

ansible -i hosts -m file -a 'path=/tmp/ksops state=directory mode=0755 owner=nobody'

ansible -i hosts -m file -a 'path=/tmp/file state=absent'

copy

ansible -i hosts -m copy -a 'src=vimrc dest=/root/.vimrc mode=644 owner=root'

command

ansible -i hosts -m command -a 'rm -rfv /tmp/test removes=/tmp/test'

ansible -i hosts -m command -a 'touch /tmp/test creates=/tmp/test'

With the command module, the return value alone does not tell you whether the command succeeded. If the creates attribute is defined, the command is skipped when the file already exists; if the removes attribute is defined, the command is skipped when the file does not exist.

shell

ansible -i hosts -m shell -a '/home/monitor.sh > /home/applog/test.log creates=/home/applog/test.log'

The shell module supports redirection, pipes, background jobs, and other shell features.

Playbook: Complex Tasks

Create a new user:

cat user.yml

---
- name: create user       # playbook name
  hosts: db               # target host group
  user: root              # run as the specified user
  gather_facts: false     # do not gather facts about the remote machines

  vars:                   # define the variable "user"
  - user: "toy"
  tasks:                  # tasks to execute
  - name: create {{user}} on miss
    user: name="{{user}}"

Playbook Roles

Task Include Files And Encouraging Reuse

The goal of a play in a playbook is to map a group of systems into multiple roles:

---
# possibly saved as tasks/foo.yml
- name: foo
  command: /bin/foo
- name: bar
  command: /bin/bar

---
tasks:
  - include: tasks/foo.yml

You can also pass variables into includes:

tasks:
  - include: wordpress.yml wp_user=timmy
  - include: wordpress.yml wp_user=alice
  - include: wordpress.yml wp_user=bob

Includes can also be used in the ‘handlers’ section:

---
# handlers/handlers.yml
- name: restart apache
  service: name=apache state=restarted

---
# in your main playbook file
handlers:
  - include: handlers/handlers.yml

Roles

Roles are ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure. Grouping content by roles also allows easy sharing of roles with other users.

Example project structure:

site.yml
webservers.yml
fooservers.yml
roles/
   common/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/
   webservers/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/

In a playbook, it would look like this:

---
- hosts: webservers
  roles:
    - common
    - webservers
    - { role: foo_app_instance, dir: '/opt/a', port: 5000 }
    - { role: some_role, when: "ansible_os_family == 'RedHat'" }
    - { role: foo, tags: ["bar", "baz"] }

main.yml in tasks/ handlers/ vars/ will all be added to the play, and main.yml in meta/ will add any role dependencies to the list of roles.

copy and script tasks can reference files in roles/xxx/files/ without having to use relative or absolute paths.

template tasks can reference files in roles/xxx/templates/ without having to use relative or absolute paths.

include tasks can reference files in roles/x/tasks/ without having to use relative or absolute paths.

Role Default Variables

Role default variables allow you to set default variables for included or dependent roles (see below). To create defaults, simply add a defaults/main.yml file in your role directory. These variables will have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables.
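A sketch of such a defaults file (variable names hypothetical):

```yaml
# file: roles/apache/defaults/main.yml
# lowest-priority defaults; any inventory or play variable overrides these
apache_port: 80
apache_max_clients: 200
```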

Role Dependencies

Role dependencies are stored in the meta/main.yml file contained within the role directory:

---
dependencies:
  - { role: common, some_parameter: 3 }
  - { role: apache, port: 80 }
  - { role: postgres, dbname: blarg, other_parameter: 12 }
  - { role: '/path/to/common/roles/foo', x: 1 }

Roles dependencies are always executed before the role that includes them, and are recursive.

By default, roles can also only be added as a dependency once - if another role also lists it as a dependency it will not be run again. This behavior can be overridden by adding allow_duplicates: yes to the meta/main.yml file.
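For example (a sketch; role names hypothetical):

```yaml
# file: roles/wheel/meta/main.yml
allow_duplicates: yes
dependencies:
  - { role: tire }
  - { role: brake }
```

With allow_duplicates: yes, the wheel role (and its dependencies) run once per listing rather than once per playbook.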

Embedding Modules In Roles

You may distribute a custom module as part of a role: add a directory named ‘library’ inside the role and place the module in it. The module will then be usable in the role itself, as well as in any roles called after it.

Playbook Best Practices

Content Organization

Roles are great!

Directory Layout

The top level of the directory would contain files and directories like so:

production                # inventory file for production servers
stage                     # inventory file for stage environment

group_vars/
   group1                 # here we assign variables to particular groups
   group2                 # ""
host_vars/
   hostname1              # if systems need specific variables, put them here
   hostname2              # ""

library/                  # if any custom modules, put them here (optional)
filter_plugins/           # if any custom filter plugins, put them here (optional)

site.yml                  # master playbook
webservers.yml            # playbook for webserver tier
dbservers.yml             # playbook for dbserver tier

roles/
    common/               # this hierarchy represents a "role"
        tasks/            #
            main.yml      #  <-- tasks file can include smaller files if warranted
        handlers/         #
            main.yml      #  <-- handlers file
        templates/        #  <-- files for use with the template resource
            ntp.conf.j2   #  <------- templates end in .j2
        files/            #
            bar.txt       #  <-- files for use with the copy resource
            foo.sh        #  <-- script files for use with the script resource
        vars/             #
            main.yml      #  <-- variables associated with this role
        defaults/         #
            main.yml      #  <-- default lower priority variables for this role
        meta/             #
            main.yml      #  <-- role dependencies

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""

Use Dynamic Inventory With Clouds

If you are using a cloud provider, you should not be managing your inventory in a static file. See Dynamic Inventory.

If you have another system maintaining a canonical list of systems in your infrastructure, usage of dynamic inventory is a great idea in general.

How to Differentiate Stage vs Production

It is suggested that you define groups based on purpose of the hosts and also geography or datacenter location:

# file: production

[atlanta-webservers]
www-atl-1.example.com
www-atl-2.example.com

[boston-webservers]
www-bos-1.example.com
www-bos-2.example.com

[atlanta-dbservers]
db-atl-1.example.com
db-atl-2.example.com

[boston-dbservers]
db-bos-1.example.com

# webservers in all geos
[webservers:children]
atlanta-webservers
boston-webservers

# dbservers in all geos
[dbservers:children]
atlanta-dbservers
boston-dbservers

# everything in the atlanta geo
[atlanta:children]
atlanta-webservers
atlanta-dbservers

# everything in the boston geo
[boston:children]
boston-webservers
boston-dbservers
Group And Host Variables

Group variables and host variables assign different values to different groups or hosts.

For instance, the atlanta-webservers use a different http port than the boston-webservers:

---
# file: group_vars/atlanta-webservers
http_port: 8080

---
# file: group_vars/boston-webservers
http_port: 80

Top Level Playbooks Are Separated By Role

In site.yml, we include a playbook that defines our entire infrastructure, just including some other playbooks:

---
# file: site.yml
- include: webservers.yml
- include: dbservers.yml

In a file like webservers.yml, we simply map the configuration of the webservers group to the roles performed by the webservers group:

---
# file: webservers.yml
- hosts: webservers
  roles:
    - common
    - webtier

The idea here is that we can choose to configure our whole infrastructure by running site.yml, or we could choose to run a subset by running webservers.yml:

ansible-playbook site.yml --limit webservers
ansible-playbook webservers.yml

Task And Handler Organization For A Role

Here is an example tasks file that explains how a role works:

---
# file: roles/common/tasks/main.yml

- name: be sure ntp is installed
  yum: pkg=ntp state=installed
  tags: ntp

- name: be sure ntp is configured
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  notify:
    - restart ntpd
  tags: ntp

- name: be sure ntpd is running and enabled
  service: name=ntpd state=running enabled=yes
  tags: ntp

The common role just sets up NTP. Handlers are only fired when certain tasks report changes, and are run at the end of each play:

---
# file: roles/common/handlers/main.yml
- name: restart ntpd
  service: name=ntpd state=restarted

Always Mention The State

The ‘state’ parameter is optional to a lot of modules. Whether ‘state=present’ or ‘state=absent’, it’s always best to leave that parameter in your playbooks to make it clear.
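For example, write state explicitly even where it is the default (package names chosen for illustration):

```yaml
- name: be sure ntp is installed
  yum: name=ntp state=present

- name: be sure telnet is removed
  yum: name=telnet state=absent
```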

Group By Roles

This allows playbooks to target machines based on role, as well as to assign role specific variables using the group variable system.

Operating System and Distribution Variance

When dealing with a parameter that is different between two different operating systems, a great way to handle this is by using the group_by module:

---
#talk to all hosts just so we can learn about them
- hosts: all
  tasks:
    - group_by: key=os_{{ ansible_distribution }}

- hosts: os_CentOS
  gather_facts: False
  tasks:
    - # tasks that only happen on CentOS go here

This will throw all systems into a dynamic group based on the os name.

If group-specific settings are needed, this can also be done. For example:

---
# file: group_vars/all
applist: 20

---
# file: group_vars/os_CentOS
applist: 10

Bundling Ansible Modules With Playbooks

If a playbook has a ‘./library’ directory relative to its YAML file, this directory can be used to add ansible modules that will automatically be in the ansible module path.

Whitespace and Comments

Generous use of whitespace to break things up, and use of comments (which start with ‘#’), is encouraged.

Always Name Tasks

This name is shown when the playbook is run.

Keep It Simple

If something feels complicated, it probably is, and may be a good opportunity to simplify things.

Version Control

Use version control.

Inventory

Multiple inventory files can be used at the same time and also pull inventory from dynamic or cloud sources as described in Dynamic Inventory .

Hosts and Groups

/etc/ansible/hosts is an INI-like hosts file. To make things explicit, you could describe hosts like this:

[webserver]
ws01.bj.example.com
ws02.bj.example.com:3322

[dbserver]
ws01.bj.example.com

[ops]
jumper ansible_ssh_port=6222 ansible_ssh_host=192.168.1.80

Numeric and alphabetic ranges can be used to define large numbers of hosts; Ansible's range syntax is [start:end]:

[lbserver]
lvs[a:c][00:21].example.com

Host Variables

It is easy to assign variables to hosts that will be used later in playbooks:

[vast]
server01 app_port=3876 Maxrequests=300
server03 app_port=3876 Maxrequests=500

Group Variables

However, variables can also be applied to an entire group at once:

[vast]
server01
server02
server03

[vast:vars]
proxy=proxy.local.com
nextJump=next.local.com

Groups of Groups, and Group Variables

It is possible to make groups of groups and to assign variables to groups. These variables can be used by ansible-playbook, but not by ansible.

[region01]
server01
server02

[region02]
serverA
serverB

[region03]
serverX
serverY

[large:children]
region01
region02

[large:vars]
hostDistance=2
allowFailure=False

Splitting Out Host and Group Specific Data

The preferred practice in Ansible is actually not to store variables in the main inventory file. Host and group variables can be stored in individual files relative to the inventory file.

Assuming the inventory file path is:

/etc/ansible/hosts

You can assign variables in /etc/ansible/group_vars/region02 and /etc/ansible/host_vars/server01.
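Such a file is a plain YAML dictionary of variables (variable names hypothetical):

```yaml
# file: /etc/ansible/group_vars/region02
ntp_server: ntp.region02.example.com
proxy: proxy.region02.example.com
```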

List of Behavioral Inventory Parameters

ansible_ssh_host
  The name of the host to connect to, if different from the alias you wish to give to it.
ansible_ssh_port
  The ssh port number, if not 22
ansible_ssh_user
  The default ssh user name to use.
ansible_ssh_pass
  The ssh password to use (this is insecure, we strongly recommend using --ask-pass or SSH keys)
ansible_sudo
  The boolean to decide if sudo should be used for this host. Defaults to false.
ansible_sudo_pass
  The sudo password to use (this is insecure, we strongly recommend using --ask-sudo-pass)
ansible_sudo_exe (new in version 1.8)
  The sudo command path.
ansible_connection
  Connection type of the host. Candidates are local, ssh or paramiko.  The default is paramiko before Ansible 1.2, and 'smart' afterwards which detects whether usage of 'ssh' would be feasible based on whether ControlPersist is supported.
ansible_ssh_private_key_file
  Private key file used by ssh.  Useful if using multiple keys and you don't want to use SSH agent.
ansible_shell_type
  The shell type of the target system. Commands are formatted using 'sh'-style syntax by default. Setting this to 'csh' or 'fish' will cause commands executed on target systems to follow those shell's syntax instead.
ansible_python_interpreter
  The target host python path. This is useful for systems with more
  than one Python or not located at "/usr/bin/python" such as *BSD, or where /usr/bin/python
  is not a 2.X series Python.  We do not use the "/usr/bin/env" mechanism as that requires the remote user's
  path to be set right and also assumes the "python" executable is named python, where the executable might
  be named something like "python26".
ansible_*_interpreter
  Works for anything such as ruby or perl and works just like ansible_python_interpreter.
  This replaces shebang of modules which will run on that host.
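Several of these parameters combined on one inventory line (host name, user, and paths hypothetical):

```ini
jumper ansible_ssh_host=192.168.1.80 ansible_ssh_port=6222 ansible_ssh_user=ops ansible_ssh_private_key_file=/home/ops/.ssh/id_rsa
```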

Dynamic Inventory

Ansible provides a basic text-based system, as described in Inventory, and also supports all of these options via an external inventory system.

Example: AWS EC2 External Inventory Script

You can use this script in one of two ways. The easiest is to use Ansible’s -i command line option and specify the path to the script after marking it executable:

ansible -i ec2.py -u ubuntu us-east-1d -m ping

The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini

Using Multiple Inventory Sources

If the location given to -i in Ansible is a directory, Ansible can use multiple inventory sources at the same time.

Static Groups of Dynamic Groups

When defining a static group of dynamic child groups, define the dynamic groups as empty in the static inventory file:

[foo]

[bar]

[cluster:children]
foo
bar

Developing Dynamic Inventory Sources

You just need to create a script or program that can return JSON in the right format when fed the proper arguments.

Script Conventions

When the external node script is called with the single argument --list, the script must return a JSON dictionary of all the groups to be managed.

{
  "databases":{
    "hosts": ["host01", "host02"],
    "vars": {
      "foo": "bar",
      "flag": true
    }
  },
  "webservers":["web01", "web02"],
  "apps":{
    "hosts":["backend01", "backend02"],
    "vars":{
      "local": false
    },
    "children":["databases","webservers"]
  }
}

When called with the arguments --host <hostname>, the script must return either an empty JSON dictionary, or a dictionary of variables to make available to templates and playbooks.
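A minimal inventory script following these conventions might look like this (a sketch: host names and variables are hypothetical, and the data is hard-coded where a real script would query an external system):

```python
#!/usr/bin/env python
# Sketch of a dynamic inventory script; hosts and vars are hypothetical.
import json
import sys

# Group structure returned for --list
GROUPS = {
    "databases": {
        "hosts": ["host01", "host02"],
        "vars": {"foo": "bar", "flag": True},
    },
    "webservers": ["web01", "web02"],
}

# Per-host variables returned for --host <hostname>
HOSTVARS = {
    "host01": {"local": False},
}


def list_groups():
    """All groups to be managed (the --list payload)."""
    return GROUPS


def host_vars(name):
    """Variables for one host (the --host payload); empty dict if unknown."""
    return HOSTVARS.get(name, {})


if __name__ == "__main__":
    if len(sys.argv) == 2 and sys.argv[1] == "--list":
        print(json.dumps(list_groups()))
    elif len(sys.argv) == 3 and sys.argv[1] == "--host":
        print(json.dumps(host_vars(sys.argv[2])))
    else:
        sys.stderr.write("usage: --list | --host <hostname>\n")
        sys.exit(1)
```

Mark it executable and pass it to -i like any other inventory source.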

Variables

Variables in Ansible are how we deal with differences between systems.

What Makes A Valid Variable Name

Variable names should be letters, numbers and underscores. Variables should always start with a letter.

Variable Defined in Inventory

See Inventory document for multiple ways on how to define variables in inventory.

Variables Defined in a Playbook

- hosts: webservers
  vars:
    http_port: 80

In a playbook, variables are defined directly inline, as above.

Variables defined from include files and roles

Usage of roles is preferred as it provides a nice organizational system. Variables can also be included in the playbook via include files.

Using Variables: About Jinja2

Ansible allows you to reference variables in your playbooks using the Jinja2 templating system.

This is the basic form of variable substitution: My amp goes to {{ max_amp_value }}

And it’s also valid directly in playbooks: template: src=foo.cfg dest={{ remote_install_path }}/foo.cfg

YAML Quote

YAML syntax requires that if you start a value with {{ foo }}, you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary.

This won’t work:

- hosts: backend
  vars:
    srv_path: {{ base_path }}/source

Do it like this and you’ll be fine:

- hosts: backend
  vars:
    srv_path: "{{ base_path }}/source"

Information Discovered From Systems: Facts

Facts are information derived from speaking with your remote systems.

Try it out: ansible -i hosts all -m setup and ansible -i hosts all -m setup -a filter=ansible_mounts

So, facts may be referenced in a template or playbook as {{ ansible_devices.sda.model }}

Facts are frequently used in Conditionals and also in templates.

Turning Off Facts

- hosts: lbservers
  gather_facts: false

Local Facts (Facts.d)

If a remotely managed system has an /etc/ansible/facts.d directory, any files in this directory ending in .fact can be JSON, INI, or executable files returning JSON, and these can supply local facts in Ansible.

For instance, assume a /etc/ansible/facts.d/preferences.fact containing:

[general]
asdf=1
bar=2

ansible localhost -m setup -a "filter=ansible_local"

You will see the following fact added:

"ansible_local": {
    "preferences": {
      "general": {
        "asdf": "1",
        "bar": "2"
      }
    }
}

and this data can be accessed in a template/playbook as:

{{ ansible_local.preferences.general.asdf }}

Fact Caching

It is possible for one server to reference variables about another, like so:

{{ hostvars['asdf.example.com']['ansible_os_family'] }}

Ansible must already have talked to ‘asdf.example.com’ in the current play, or in another play earlier in the playbook run.

To avoid this, Ansible can save facts between playbook runs. With fact caching enabled, it is not necessary to “hit” all servers to reference variables and information about them.

To enable fact caching, configure ansible.cfg as follows:

[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400

At the time of writing, redis is the only supported fact-caching engine:

yum install redis
service redis start
pip install redis

Registered Variables

Registered variables save the result of a command. Use -v when executing playbooks to see possible values for the results.

Here’s a quick syntax example:

- hosts: localhost
  tasks:
    - name: add user ansible_test
      user: name=ansible_test state=present shell=/bin/false
      register: user_result
      ignore_errors: True

    - name: copy vimrc files
      copy: src=.vimrc dest=/home/ansible_test/.vimrc
      when: user_result.state == "present"

Accessing Complex Variable Data

Different ways to access variable data:

{{ ansible_eth0["ipv4"]["address"] }}
{{ ansible_eth0.ipv4.address }}
{{ ansible_eth0[0] }}

Magic Variables, and How To Access Information About Other Hosts

Ansible provides a few variables for you automatically, most notably hostvars, group_names, and groups.

  • hostvars

    hostvars lets you ask about the variables of another host including facts that have been gathered about that host.

    {{ hostvars['node1.server.com']['ansible_os_family'] }}
    
  • group_names

    group_names is a list of all the groups the current host is in.

    {% if 'webserver' in group_names %}
      # some part of a configuration file that only applies to webservers
    {% endif %}
    
  • groups

    groups is a list of all the groups in the inventory.

    {% for host in groups['webserver'] %}
      # something that applies to all app servers.
      {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }} # this walks all IP addresses in the group
    {% endfor %}
    
  • play_hosts

    play_hosts is available as a list of hostnames that are in scope for the current play.

  • delegate_to

    delegate_to is the inventory hostname of the host that the current task has been delegated to using ‘delegate_to’.

  • inventory_dir and inventory_file

    inventory_dir is the pathname of the directory holding Ansible’s inventory host file.

    inventory_file is the pathname and filename of Ansible’s inventory host file.
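For example, a template that writes one line per host in the current play, pulling a fact from hostvars (file name and fact keys are illustrative and assume eth0 facts were gathered):

```jinja
# peers.conf.j2
{% for host in play_hosts %}
peer {{ host }} {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
```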

Variable File Separation

To keep certain important variables private, or just to keep certain information in different files, you can use external variables files:

---
- hosts: all
  remote_user: root
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml

  tasks:
  - name: create a file
    copy: dest=/tmp/ansible_test content="{{ external_vars }}"

Using external variable files removes the risk of exposing sensitive data when you share your playbook source with others.

The content of each variables file is a simple YAML dictionary, like this:

---
somevar: somevalue
password: sensitive

Passing Variables On The Command Line

ansible-playbook release.yml --extra-vars "hosts=lbs user=tdm"

---
- hosts: '{{ hosts }}'
  remote_user: '{{ user }}'
  tasks:
    - name: update apps
      git: repo=git@github.com:gituser/somerepo.git dest='/home/{{user}}'

Extra vars can be loaded as quoted JSON or from a JSON file with “@”:

--extra-vars '{"hosts": "lbs", "user": "tdm"}'
--extra-vars "@some_file.json"

Variable Precedence: Where Should I Put A Variable?

  • extra vars (-e on the command line) always win
  • then come connection variables defined in inventory (ansible_ssh_user, etc.)
  • then comes “most everything else” (command line switches, vars in play, included vars, role vars, etc.)
  • then come the rest of the variables defined in inventory
  • then come facts discovered about a system
  • then “role defaults”, which are the most “defaulty” and lose in priority to everything else

Site wide defaults should be defined as a ‘group_vars/all’ setting.
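For example (variable names hypothetical):

```yaml
# file: group_vars/all
ntp_server: ntp.example.com
backup_dir: /var/backup
```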

The official documentation provides some examples.

Conditionals

The When Statement

With the when clause, which contains a Jinja2 expression, it is easy to do something particular on a particular host:

tasks:
  - name: "shutdown CentOS 6 and 7 system"
    command: /sbin/shutdown -t now
    when: ansible_distribution == "CentOS" and
          (ansible_distribution_major_version == "6" or ansible_distribution_major_version == "7")

A number of Jinja2 filters can be used in when statements.

Suppose we want to ignore the error of one statement and then decide to do something conditionally based on its success or failure:

tasks:
  - command: /bin/false
    register: result
    ignore_errors: True

  - command: /bin/something
    when: result|failed
  - command: /bin/something_else
    when: result|success
  - command: /bin/still/something_else
    when: result|skipped

If you have a variable that is a string and you want to do a math comparison on it:

tasks:
  - shell: echo "only on RedHat 6, derivatives, and later"
    when: ansible_os_family == "RedHat" and ansible_lsb.major_release|int >= 6

Note

The above example requires the lsb_release package on the target host in order to return the ansible_lsb.major_release fact.

tasks:
  - command: echo {{item}}
    with_items: [1, 2, 3, 4, 5]
    when: item > 3

When combining when with with_items , be aware that the when statement is processed separately for each item.

Applying ‘When’ to roles and includes

Note that if you have several tasks that all share the same conditional statement, you can affix the conditional to a task include statement as below. Note this does not work with playbook includes, just task includes. All the tasks get evaluated, but the conditional is applied to each and every task:

- include: tasks/sometasks.yml
  when: ansible_os_family == "RedHat"


- hosts: webservers
  roles:
    - { role: debian_stock_config, when: ansible_os_family == "Debian" }

Conditional Imports

Here is an example:

---
- hosts: all
  remote_user: root
  vars_files:
    - "vars/common.yml"
    - [ "vars/{{ansible_os_family}}.yml", "vars/os_defaults.yml" ]

  tasks:
    - name: make sure apache is running
      service: name={{ apache }} state=running

Note

Ansible’s approach to configuration - separating variables from tasks - keeps your playbooks from turning into arbitrary code with ugly nested ifs and conditionals, and results in more streamlined and auditable configuration rules, especially because there are a minimum of decision points to track.

Selecting Files and Templates Based On Variables

- name: template a file
  template: src={{item}} dest=/etc/apps/foo.conf
  with_first_found:
    - files:
        - "{{ansible_os_family}}.conf"
        - defaults.conf
      path:
        - search_path_one/somedir/
        - /opt/other/dir/

Note

Sometimes a configuration file you want to copy, or a template you will use may depend on a variable. This example construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than putting a lot of if conditionals in a template.

Register Variables

The register keyword decides what variable to save a result in. The result can be used in templates, action lines, or when statements.

---
- hosts: all
  remote_user: root
  gather_facts: true

  tasks:
    - name: register results
      command: cat /run/mysqld/mysqld.pid
      register: result
      ignore_errors: True

    - name: result failed
      service: name=mysqld state=running
      when: result|failed

    - name: result success
      shell: cat /run/mysqld/mysqld.pid > /root/logfile
      when: result.stdout != ''

A typical registered result looks like this:

{u'changed': True, u'end': u'2015-06-10 12:51:43.636894', u'stdout': u'1351', u'cmd': [u'cat', u'/run/sshd.pid'], u'start': u'2015-06-10 12:51:43.630985', u'delta': u'0:00:00.005909', u'stderr': u'', u'rc': 0, 'invocation': {'module_name': 'command', 'module_args': 'cat /run/sshd.pid'}, 'stdout_lines': [u'1351']}

The registered result can be used in the with_items of a task if it is converted into a list as shown below:

tasks:
  - name: get home dirs
    command: ls /home
    register: home_dirs

  - name: backup all home dirs
    file: path=/mnt/backup/{{item}} src=/home/{{item}} state=link
    with_items: home_dirs.stdout_lines
       # same as with_items: home_dirs.stdout.split()

Loops

Standard Loops

- name: common packages
  apt: name={{item.name}} state={{item.state}}
  with_items:
    - { name: "vim-enhanced", state: "present" }
    - { name: "build-essential", state: "present" }
    - { name: "firefox", state: "absent" }

Loop statements save the typing of repeated sections. They work fine with a YAML list in a variables file or the vars section.

Nested Loops

- name: make three copies of original file
  copy: src={{item[0]}} dest={{item[1]}}
  with_nested:
    - ["foo.conf", "bar.conf"]
    - ["/etc/app.d/", "/usr/share/app.d/", "/usr/local/app.d/"]

Looping over Hashes

Suppose you have the following variable:

---
users:
  alice:
    name: Alice Brown
    tele: 32323232
  bob:
    name: Bob Green
    tele: 1010100101

You can loop through the elements of a hash using with_dict like this:

tasks:
- name: print info
  debug: msg="User {{item.key}} is {{item.value.name}} {{item.value.tele}}"
  with_dict: "{{users}}"

Looping with Fileglobs

with_fileglob matches all files in a single directory, non-recursively, that match a pattern:

---
- hosts: apps
  tasks:
    - file: dest=/etc/apps state=directory
    - copy: src={{item}} dest=/etc/apps owner=root mode=600
      with_fileglob:
        - /playbooks/files/apps/*.conf

Note

When using a relative path with with_fileglob in a role, Ansible resolves the path relative to the roles/<rolename>/files directory.

Looping over Integer Sequences

tasks:
- name: sequence loops
  file: dest=/tmp/loops.{{item}} state=touch
  with_sequence: start=4 end=23 stride=3

Random Choices

random_choice is not a load balancer; it can be used to add chaos and excitement to otherwise predictable automation environments:

tasks:
  - name: delete files by random choice
    file: path={{item}} state=absent
    with_random_choice:
      - "/root/.bashrc"
      - "/etc/default/grub"
      - "/var/log/messages"

Do-Until Loops

tasks:
  - action: shell /usr/bin/foo
    register: result
    until: result.stdout.find("all systems go") != -1
    retries: 5
    delay: 10

Do-Until loops retry a task until a certain condition is met.

Playbooks Special Topics

Ansible Privilege Escalation

Directives:

become
    equivalent to adding ‘sudo:’ or ‘su:’ to a play or task
become_user
    equivalent to adding ‘sudo_user:’ or ‘su_user:’ to a play or task
become_method
    at play or task level, overrides the default method set in ansible.cfg; set to ‘sudo’, ‘su’, ‘pbrun’ or ‘pfexec’

Ansible variables:

ansible_become
    equivalent to ansible_sudo or ansible_su; allows forcing privilege escalation
ansible_become_method
    allows setting the privilege escalation method
ansible_become_user
    equivalent to ansible_sudo_user or ansible_su_user; allows setting the user you become through privilege escalation
ansible_become_pass
    equivalent to ansible_sudo_pass or ansible_su_pass; allows setting the privilege escalation password

Command line options:

--ask-become-pass
    ask for privilege escalation password
-b, --become
    run operations with become
--become-method=BECOME_METHOD
    privilege escalation method to use (default=sudo), choices: [sudo|su|pbrun|pfexec]
--become-user=BECOME_USER
    run operations as this user (default=root)

Asynchronous Action and Polling

By default, tasks in playbooks block, meaning the connections stay open until the task is done on each node. But you may be running operations that take longer than the SSH timeout.

To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status:

---
- hosts: all
  remote_user: root
  tasks:
   - name: run a command that takes a very long time
     command: /bin/sleep 30
     async: 45
     poll: 5

Note

There is no default for the async time limit. If you leave off the ‘async’ keyword, the task runs synchronously, which is Ansible’s default. If you don’t need to wait on the task to complete, you may “fire and forget” by specifying a poll value of 0.

If you would like to perform a variation of the “fire and forget” where you “fire and forget, check on it later” you can perform a task similar to the following:

---
- hosts: localhost
  remote_user: root
  tasks:
   - name: run a task that takes a very long time
     apt: name=docker.io state=installed
     async: 40
     poll: 3
     register: result

   - name: result
     debug: msg={{result.ansible_job_id}}

   - name: jobid
     async_status: jid={{result.ansible_job_id}}
     register: jobresult
     until: jobresult.finished
     retries: 30
     delay: 2

   - name: async mesg
     debug: msg={{jobresult}}

Check Mode

When ansible-playbook is executed with --check it will not make any changes on remote systems. Any module instrumented to support ‘check mode’ will report what changes it would have made rather than make them.

Other modules that do not support check mode will also take no action.

Check mode is just a simulation, and if you have steps that use conditionals that depend on the results of prior commands, it may be less useful for you.

Sometimes you may want a task to be executed even in check mode. Use the always_run clause on the task:

tasks:
  - name: this task is run even in check mode
    command: /something/need/to/run/even/in/check/mode
    always_run: yes

A task with a when clause evaluated to false will still be skipped even if it has an always_run clause evaluated to true.

Error Handling In Playbooks

Generally playbooks will stop executing any more steps on a host that has a failure. Sometimes, you want to continue on:

- name: this will not be counted as a failure
  command: /bin/false
  ignore_errors: yes

Handlers and Failure

When a task fails on a host, handlers which were previously notified will not be run on that host. This can lead to cases where an unrelated failure can leave a host in an unexpected state.

For example, a task could update a configuration file and notify a handler to restart some service. If a task later on in the same play fails, the service will not be restarted despite the configuration change.

You can change this behavior with the --force-handlers command-line option, or by including force_handlers: True in a play, or force_handlers = True in ansible.cfg.

When handlers are forced, they will run when notified even if a task fails on that host. (Note that certain errors could still prevent the handler from running, such as a host becoming unreachable.)
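A minimal sketch of forcing handlers at the play level (the template, file path, and service name are illustrative):

```yaml
---
- hosts: webservers
  force_handlers: True
  tasks:
    - template: src=app.conf.j2 dest=/etc/app.conf
      notify:
        - restart app
    # even though this later task fails, the notified handler still runs
    - command: /bin/false
  handlers:
    - name: restart app
      service: name=app state=restarted
```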

Controlling What Defines Failure

Suppose the error code of a command is meaningless, and what really matters for telling whether there is a failure is the output of the command, for instance whether a particular string is in the output.

- name: fail when any filesystem is 100% full
  command: /usr/bin/df -h
  register: r
  failed_when: "'100%' in r.stdout"

Overriding The Changed Result

Sometimes you will know, based on the return code or output that it did not make any changes, and wish to override the “changed” result such that it does not appear in report output or does not cause handlers to fire:

tasks:
  - name: this will never report 'changed' status
    command: file /bin/sh
    changed_when: False

  - name: this will report 'changed' status
    command: touch /var/log/event.log
    register: r
    ignore_errors: true
    changed_when: "r.rc!=2"

Lookups

Lookup plugins allow access of data in Ansible from outside sources. These plugins are evaluated on the Ansible control machine, and can include reading the filesystem but also contacting external datastores and services. These values are then made available using the standard templating system in Ansible, and are typically used to load variables or templates with information from those systems.

File Lookup

Contents can be read off the filesystem as follows:

- hosts: all
  vars:
    contents: "{{lookup('file', '/etc/foo.txt')}}"

  tasks:
    - debug: msg="the value of foo.txt is {{contents}}"

Password Lookup

The password lookup generates a random plaintext password and stores it in a file at a given filepath.

More details can be found in the password lookup documentation.
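A minimal sketch of the password lookup, assuming you want a random password stored on the control machine under credentials/ (the path, length, and variable name are illustrative):

```yaml
---
- hosts: dbservers
  vars:
    # generated on first run; subsequent runs re-read the same file,
    # so the password stays stable across plays
    mysql_root_pwd: "{{ lookup('password', 'credentials/mysql_root length=15') }}"
  tasks:
    - debug: msg="mysql root password is {{ mysql_root_pwd }}"
```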

The CSV File Lookup

The csvfile lookup reads the contents of a file in CSV format. The lookup looks for the row where the first column matches keyname, and by default returns the value in the second column (col=1).

The csvfile lookup supports several arguments:

lookup('csvfile', 'key arg1=val1 arg2=val2 ...')

The first value in the argument is the key, which must be an entry that appears exactly once in column 0 of the table. All other arguments are optional.

Field      Default       Description
file       ansible.csv   Name of the file to load
delimiter  TAB           Delimiter used by CSV file
col        1             The column to output, indexed by 0
default    empty string  Return value if the key is not in the csv file

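For instance, assuming a comma-delimited file sites.csv with rows like web01,192.0.2.10 (the file name and contents are illustrative), the lookup returns the second column for the matching key:

```yaml
# looks up the row keyed 'web01' and returns its second column
- debug: msg="{{ lookup('csvfile', 'web01 file=sites.csv delimiter=,') }}"
```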
More Lookups

Here are more examples:

---
- hosts: all
  tasks:
    - debug: msg="{{lookup('env','HOME')}} is an environment variable"

    - debug: msg="{{item}} is a line from the result of this command"
      with_lines:
        - cat /etc/motd

    - debug: msg="{{lookup('pipe','date')}} is the raw result of running this command"

    # redis_kv lookup requires the Python redis package
    - debug: msg="{{lookup('redis_kv','redis://localhost:6379,somekey')}} is the value in Redis for somekey"

    # dnstxt lookup requires the Python dnspython package
    - debug: msg="{{ lookup('dnstxt', 'example.com') }} is a DNS TXT record for example.com"

    - debug: msg="{{ lookup('template', './some_template.j2') }} is a value from evaluation of this template"

    - debug: msg="{{ lookup('etcd', 'foo') }} is a value from a locally running etcd"

    # The following lookups were added in 1.9
    - debug: msg="{{item}}"
      with_url:
           - 'http://github.com/gremlin.keys'

    # outputs the cartesian product of the supplied lists
    - debug: msg="{{item}}"
      with_cartesian:
           - list1
           - list2
           - list3

Delegation, Rolling Updates, and Local Actions

Rolling Update Batch Size

For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword:

- name: rolling updates
  hosts: webservers
  serial: 3
  # or serial: "30%"

The serial keyword can also be specified as a percentage.

Maximum Failure Percentage

It may be desirable to abort the play when a certain threshold of failures have been reached. To achieve this, you can set a maximum failure percentage on a play as follows:

- hosts: webservers
  max_fail_percentage: 30
  serial: 10

If more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.

Note

The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort when 2 of the systems failed, the percentage should be set at 49 rather than 50.

Delegation

If you want to perform a task on one host with reference to other hosts, use the delegate_to keyword on a task. Using this with the serial keyword to control the numbers of hosts executing at one time is also a good idea:

---
- hosts: webservers
  serial: 5
  tasks:
    - name: task out of load balancer pool
      command: /usr/bin/task_out_of_pool {{inventory_hostname}}
      delegate_to: 127.0.0.1

    - name: actual steps would go here
      yum: name=acme-web-stack state=latest

    - name: add back to load balancer pool
      command: /usr/bin/add_back_to_pool {{inventory_hostname}}
      delegate_to: 127.0.0.1

There is also a shorthand syntax that you can use on a per-task basis: local_action . A common pattern is to use a local action to call rsync to recursively copy files to the managed servers:

---
# ...
  tasks:
    - name: recursively copy file from management server to target
      local_action: command rsync -a /path/to/files {{inventory_hostname}}:/path/to/target/

Run Once

Configure run_once on a task to run it only one time and only on one host.

---
#...
  tasks:
    - name: run once
      command: /opt/app/update_db.py
      run_once: true

This can be optionally paired with delegate_to to specify an individual host to execute on. When run_once is not used with delegate_to it will execute on the first host:

---
#...
  tasks:
    - name: run once
      command: /opt/app/update_db.py
      run_once: true
      delegate_to: web1.app.com

Local Playbooks

To run an entire playbook locally rather than by connecting over SSH, just set the hosts: line to hosts: 127.0.0.1 and then run the playbook like so:

ansible-playbook playbook.yml --connection=local

Alternatively, a local connection can be used in a single playbook play:

- hosts: 127.0.0.1
  connection: local

Setting the Environment

It is easy to configure your environment by using the environment keyword:

- hosts: all
  remote_user: root
  tasks:
    - apt: name=cobbler state=installed
      environment:
        http_proxy: http://proxy.example.com:8080

The environment can also be stored in a variable, and accessed like so:

- hosts: all
  remote_user: root
  vars:
    proxy_env:
      http_proxy: http://proxy.example.com:8080

  tasks:
    - apt: name=cobbler state=installed
      environment: proxy_env

The most logical place to define an environment hash might be a group_vars file:

---
# file: group_vars/hf
net_server: ntp.hf.example.com
backup: bak.hf.example.com
proxy_env:
  http_proxy: http://proxy.hf.example.com:8080

Prompts

When running a playbook, you may wish to prompt the user for certain input, and can do so with the vars_prompt section:

---
- hosts: all
  remote_user: root
  vars_prompt:
    name: "what is your name? "
    quest: "what is your quest? "

You can set a default argument:

---
- hosts: all
  remote_user: root
  vars_prompt:
    - name: "release_version"
      prompt: "Input Version"
      default: "1.0"

An alternative form of vars_prompt allows for hiding input from the user:

vars_prompt:
  - name: "u_pwd"
    prompt: "Enter your password"
    private: yes

If passlib is installed, vars_prompt can also crypt the entered value so you can use it to define a password:

vars_prompt:
  - name: "passwd"
    prompt: "Input Your Password"
    private: yes
    confirm: yes
    encrypt: "sha512_crypt"
    salt_size: 7

You can supply your own salt using salt , or have one generated automatically using salt_size . If nothing is specified, a salt of size 8 will be generated.

Here are some crypt schemes supported by passlib:

* des_crypt - DES Crypt
* bsdi_crypt - BSDi Crypt
* bigcrypt - BigCrypt
* crypt16 - Crypt16
* md5_crypt - MD5 Crypt
* bcrypt - BCrypt
* sha1_crypt - SHA-1 Crypt
* sun_md5_crypt - Sun MD5 Crypt
* sha256_crypt - SHA-256 Crypt
* sha512_crypt - SHA-512 Crypt
* apr_md5_crypt - Apache’s MD5-Crypt variant
* phpass - PHPass’ Portable Hash
* pbkdf2_digest - Generic PBKDF2 Hashes
* cta_pbkdf2_sha1 - Cryptacular’s PBKDF2 hash
* dlitz_pbkdf2_sha1 - Dwayne Litzenberger’s PBKDF2 hash
* scram - SCRAM Hash
* bsd_nthash - FreeBSD’s MCF-compatible nthash encoding
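Putting this together, here is a sketch of feeding a crypted prompt value to the user module, which expects an already-hashed password (the ‘deploy’ user name is illustrative):

```yaml
---
- hosts: all
  remote_user: root
  vars_prompt:
    - name: "user_password"
      prompt: "Password for the deploy user"
      private: yes
      confirm: yes
      # requires passlib on the control machine
      encrypt: "sha512_crypt"
      salt_size: 7
  tasks:
    # the prompt value is already crypted, so it can be used directly
    - user: name=deploy password={{ user_password }}
```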

Tags

If you have a large playbook it may become useful to be able to run a specific part of the configuration without running the whole playbook.

Both plays and tasks support a “tags:” attribute for this reason:

tasks:
  - yum: name="{{item}}" state=installed
    with_items:
      - httpd
      - memcached
    tags:
      - packages

  - template: src=templates/src.j2 dest=/etc/foo.conf
    tags:
      - configuration

If you wanted to just run the “configuration” and “packages” part of a very long playbook, you could do this:

ansible-playbook site.yml --tags "configuration,packages"

On the other hand, if you want to run a playbook without certain tasks, you would do this:

ansible-playbook site.yml --skip-tags "notification"

You may also apply tags to roles:

roles:
  - { role: webserver, port: 5000, tags: ['web','foo'] }

And you may also tag basic include statements:

- include: foo.yml tags=web,foo

There is a special always tag that will always run a task, unless specifically skipped (--skip-tags always):

tasks:
  - debug: msg="Always runs"
    tags:
      - always
  - debug: msg="runs when you use tag1"
    tags:
      - tag1

There are three more special keywords for tags: all, tagged, and untagged:

ansible-playbook site.yml --tags all
ansible-playbook site.yml --tags tagged
ansible-playbook site.yml --tags untagged

Start and Step

If you want to start executing your playbook at a particular task, you can do so with the --start-at-task option:

ansible-playbook playbook.yml --start-at-task="install packages"

The above will start executing your playbook at the task named “install packages”.

Playbooks can also be executed interactively with --step:

ansible-playbook playbook.yml --step

Answering ‘y’ will execute the task, ‘n’ will skip the task, and ‘c’ will continue executing all the remaining tasks without asking.

Tips and Tricks

Reboot a server and wait for it to come back

- name: restart machine
  command: shutdown -r now "Ansible updates triggered"
  async: 0
  poll: 0
  ignore_errors: true

This task uses the command module to send the shutdown command to the host. By using async=0 and poll=0 , we background the process. The ignore_errors keeps the play going even though the command may terminate the SSH connection prematurely.

- name: waiting for server to come back
  local_action: wait_for host={{inventory_hostname}}
        state=started
        delay=30
        timeout=600
        connect_timeout=30
  sudo: false

The second task waits for the server to come back online. This is a local action that was delegated to run on the Ansible control node.

handlers:
  - name: restart server
    command: shutdown -r now 'Reboot triggered by Ansible'
    async: 0
    poll: 0
    ignore_errors: true

  - name: wait for server to restart
    local_action: wait_for host={{inventory_hostname}}
          state=started
          delay=3
          timeout=600
    sudo: false


tasks:
  - name: Set hostname
    hostname: name=host01
    notify:
      - restart server
      - wait for server to restart

How do I split an action into a multi-line format

To split a long task line into multiple lines, such as “action: copy src=httpd.conf dest=/etc/httpd/httpd.conf”, you could format it as follows (note the indentation):

- name: Update the Apache config
  copy:
    src: httpd.conf
    dest: /etc/httpd/httpd.conf

Or, conversely, using the old ‘action’ syntax:

- name: Update the Apache config
  action:
    module: copy
    src: httpd.conf
    dest: /etc/httpd/httpd.conf

Use YAML line continuations:

- name: Update the Apache config
  copy: >
    src=httpd.conf
    dest=/etc/httpd/httpd.conf

Or:

- name: Update the Apache config
  action: copy >
    src=httpd.conf
    dest=/etc/httpd/httpd.conf

Continuous Delivery and Rolling Upgrades

This document describes how to achieve continuous delivery and zero-downtime updates, using one of Ansible’s most complete example playbooks as a template: lamp_haproxy .

This playbook deploys Apache, PHP, MySQL, Nagios and HAProxy to a CentOS-based set of servers.

Site Deployment

Let us start with site.yml

---
# This playbook deploys the whole application stack in this site.

# Apply common configuration to all hosts
- hosts: all

  roles:
  - common

# Configure and deploy database servers.
- hosts: dbservers

  roles:
  - db

# Configure and deploy the web servers. Note that we include two roles
# here, the 'base-apache' role which simply sets up Apache, and 'web'
# which includes our example web application.

- hosts: webservers

  roles:
  - base-apache
  - web

# Configure and deploy the load balancer(s).
- hosts: lbservers

  roles:
  - haproxy

# Configure and deploy the Nagios monitoring node(s).
- hosts: monitoring

  roles:
  - base-apache
  - nagios

In this playbook we have 5 plays. The first one targets all hosts and applies the common role to all of the hosts. This is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply to all of the servers.

The next four plays run against specific host groups and apply specific roles to those servers. Along with the roles for Nagios monitoring, the database, and the web application, we’ve implemented a base-apache role that installs and configures a basic Apache setup. This is used by both the sample web application and the Nagios hosts.

Reusable Content: Roles

Roles are a way to organize content: tasks, handlers, templates, and files, into reusable components.

This example has six roles: common , base-apache , db , haproxy , nagios , and web . Most sites will have one or more common roles that are applied to all systems, and then a series of application-specific roles that install and configure particular parts of the site.

Roles can have variables and dependencies and you can pass in parameters to roles to modify their behavior.

Configuration: Group Variables

Group variables are variables that are applied to groups of servers. They can be used in templates and in playbooks to customize behavior and to provide easily-changed settings and parameters.

They are stored in a directory called group_vars in the same location as your inventory. These variables can be used in a variety of places, such as playbooks and templates.
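For example, a group_vars file for the webservers group might look like this (the variable names and values are illustrative):

```yaml
---
# file: group_vars/webservers
# applies to every host in the webservers inventory group
httpd_port: 80
ntpserver: ntp1.example.com
```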

The Rolling Upgrade

rolling_upgrade.yml is made up of two plays.

- hosts: monitoring
  tasks: []

This play forces a fact-gathering step on our monitoring servers.

The next part is the update play, whose first part looks like this:

- hosts: webservers
  user: root
  serial: 1

This is a play definition, operating on the webservers group. The serial keyword tells Ansible how many servers to operate on at once. If it is not specified, Ansible will parallelize these operations up to the default ‘forks’ limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once.

Here is the next part of the update play:

pre_tasks:
- name: disable nagios alerts for this host webserver service
  nagios: action=disable_alerts host={{inventory_hostname}} services=webserver
  delegate_to: "{{item}}"
  with_items: groups.monitoring

- name: disable the server in haproxy
  shell: echo "disable server myapplb/{{inventory_hostname}} "| socat stdio /var/lib/haproxy/stats
  delegate_to: "{{item}}"
  with_items: groups.lbservers

The pre_tasks keyword just lets you list tasks to run before the roles are called. The delegate_to and with_items arguments, used together, cause Ansible to loop over each monitoring server and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, “on behalf” of the webserver.

In programming terms, the outer loop is the list of web servers, and the inner loop is the list of monitoring servers.

In the post_tasks section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool:

post_tasks:
- name: Enable the server in haproxy
  shell: echo "enable server myapplb/{{inventory_hostname}}"| socat stdio /var/lib/haproxy/stats
  delegate_to: "{{item}}"
  with_items: groups.lbservers
- name: re-enable nagios alerts
  nagios: action=enable_alerts host={{inventory_hostname}} services=webserver
  delegate_to: "{{item}}"
  with_items: groups.monitoring

Continuous Delivery End-To-End

Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of organizations use a continuous integration tool like Jenkins or Atlassian Bamboo to tie the development, test, release, and deploy steps together. You may also want to use a tool like Gerrit to add a code review step to commits to either the application code itself, or to your Ansible playbooks, or both.

Depending on your environment, you might be deploying continuously to a test environment, running an integration test battery against that environment, and then deploying automatically into production. Or you could keep it simple and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you.

This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers, for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to easily manage complicated environments and automate common operations.

Test Strategies

Check Mode as A Drift Test

Ansible’s --check mode can be used as a layer of testing as well. Using the --check flag with the ansible command will report whether Ansible thinks it would have had to make any changes to bring the system into a desired state.

Ordinarily scripts and commands don’t run in check mode, so if you want certain steps to always execute in check mode, add the always_run flag:

tasks:
  - script: verify.sh
    always_run: True

Modules That Are Useful for Testing

Certain playbook modules are particularly good for testing:

tasks:
  - wait_for: host={{inventory_hostname}} port=22
    delegate_to: localhost
    register: wait
    ignore_errors: true

  - fail: msg="port start failed."
    when: wait.failed

>WAIT_FOR

Waiting for a port to become available is useful for when services are not immediately available after their init scripts return - which is true of certain Java application servers. It is also useful when starting guests with the [virt] module and needing to pause until they are ready. This module can also be used to wait for a file to be available on the filesystem, or, with a regex match, for a string to be present in a file.

tasks:
  - uri: url=http://www.baidu.com return_content=yes
    register: webpage

  - fail: msg='service is not started'
    when: "'something' not in webpage.content"

It is easy to push an arbitrary script on a remote host and the script will automatically fail if it has a non-zero return code:

tasks:
  - script: test_script
    register: r_code

  - debug: msg="{{r_code}}"

The assert module makes it very easy to validate various kinds of truth:

tasks:
  - shell: /usr/bin/some-command
    register: cmd_result

  - assert:
      that:
        - "'not ready' not in cmd_result.stderr"
        - "'gizmo enabled' in cmd_result.stdout"

>ASSERT

This module asserts that a given expression is true and can be a simpler alternative to the ‘fail’ module in some cases.

You could use the stat module to test for existence of files that are not declaratively set by your Ansible configuration:

tasks:
  - stat: path=/path/to/test/
    register: f
  - assert:
      that:
        - f.stat.exists and f.stat.isdir

>STAT

Retrieves facts for a file similar to the linux/unix ‘stat’ command.

There is no need to check things like the return codes of commands, since Ansible checks them automatically. Rather than checking whether a user exists, consider using the user module to make it exist.
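A sketch of that declarative alternative (the ‘app_user’ name is illustrative):

```yaml
tasks:
  # rather than testing whether the user exists, just ensure it does;
  # the task reports 'changed' only if it had to create the user
  - user: name=app_user state=present
```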

Integrating Testing With Rolling Updates

This is the great culmination of embedded tests:

---
- hosts: webservers
  serial: 5
  pre_tasks:
    - name: take out of load balancer pool
      command: /usr/bin/task_out_of_pool
      delegate_to: 127.0.0.1

  roles:
    - common
    - webserver
    - apply_testing_checks

  # or, instead of the apply_testing_checks role, a testing task:
  tasks:
    - script: /srv/qa_team/app_testing_script.sh
      delegate_to: testing_server

  post_tasks:
    - name: add back
      command: /usr/bin/add_back_to_pool
      delegate_to: 127.0.0.1

If the “apply_testing_checks” step is not performed the machine will not go back into the pool.

Achieving Continuous Deployment

The workflow may look like this:

- Write and use automation to deploy local development VMs
- Have a CI system like Jenkins deploy to a stage environment on every code change
- The deploy job calls testing scripts to pass/fail a build on every deploy
- If the deploy job succeeds, it runs the same deploy playbook against production inventory

Conclusion

Ansible believes you should not need another framework to validate that basic things about your infrastructure are true. Ansible is an order-based system that will fail immediately on unhandled errors for a host, and prevent further configuration of that host.

Since Ansible is designed as a multi-tier orchestration system, it makes it very easy to incorporate tests into the end of a playbook run, either using loose tasks or roles.

Finally, because Ansible errors propagate all the way up to the return code of the Ansible program itself, and Ansible by default runs in an easy push-based mode, Ansible is a great step to put into a build environment if you wish to use it to roll out systems as part of a Continuous Integration/Continuous Delivery pipeline, as is covered in sections above.

The focus should not be on infrastructure testing, but on application testing. Obviously at the development stage, unit tests are great too. Ansible describes states of resources declaratively, so you don’t have to unit test your playbooks.

In all, testing is a very organizational and site-specific thing. Everybody should be doing it, but what makes the most sense for your environment will vary with what you are deploying and who is using it – but everyone benefits from a more robust and reliable deployment system.

Playbooks Filters

Filters For Formatting Data

The following filters will take a data structure in a template and render it in a slightly different format. These are occasionally useful for debugging:

{{some_variable|to_nice_json}}
{{some_variable|to_nice_yaml}}

Filters Often Used With Conditionals

The following tasks are illustrative of how filters can be used with conditionals:

tasks:
  - shell: /usr/bin/foo
    register: result
    ignore_errors: true
  - debug: msg="previous failed"
    when: result|failed
  - debug: msg="previous changed"
    when: result|changed
  - debug: msg="previous succeeded"
    when: result|success
  - debug: msg="previous skipped"
    when: result|skipped

Forcing Variables To Be Defined

The default behavior of Ansible (configurable in ansible.cfg) is to fail if variables are undefined, but you can turn this off.

This allows an explicit check with this feature off:

{{ variable | mandatory }}

The variable value will be used as is, but the template evaluation will raise an error if it is undefined.

Defaulting Undefined Variables

Jinja2 provides a useful default filter that is often a better approach than failing if a variable is not defined:

{{ some_variable|default(5)}}

Omitting Undefined Variables and Parameters

It is possible to use the default filter to omit variables and module parameters using the special omit variable:

- name: touch files with an optional mode
  file: dest={{item.path}} state=touch mode={{item.mode|default(omit)}}
  with_items:
    - path: /tmp/foo
    - path: /tmp/bar
      mode: "0444"

Only the final file will receive the mode=0444 option.

List Filters

These filters all operate on list variables.

To get the minimum value from a list of numbers:

{{list|min}}

To get the maximum value from a list of numbers:

{{list|max}}

Set Theory Filters

All these functions return a unique set from sets or lists.

To get a unique set from a list:

{{ list1|unique }}

To get a union of two lists:

{{ list1|union(list2) }}

To get the intersection of 2 lists:

{{ list1|intersect(list2) }}

To get the difference of 2 lists:

{{ list1|difference(list2) }}

To get the symmetric difference of 2 lists:

{{ list1|symmetric_difference(list2) }}
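
These set filters are easy to try out in a single debug task; a minimal sketch (the list1 and list2 values are invented for illustration):

- hosts: localhost
  vars:
    list1: [1, 2, 3, 4]
    list2: [3, 4, 5]
  tasks:
    # union merges both lists, intersect keeps shared items,
    # difference keeps items only in list1
    - debug: msg="union={{ list1|union(list2) }} intersect={{ list1|intersect(list2) }} difference={{ list1|difference(list2) }}"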

Version Comparison Filters

version_compare

{{ ansible_distribution_version|version_compare('12.04','>=') }}

version_compare filter accepts the following operators:

<, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne

This filter also accepts a 3rd parameter, strict, which defines whether strict version parsing should be used. The default is False; if set to True, more strict version parsing is used:

{{ sample_version_ver|version_compare('1.0',operator='lt',strict=True) }}
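
In practice version_compare usually appears in a when clause to guard a task to newer distribution releases; a sketch (the command itself is illustrative):

- name: use a feature only available from 12.04 onwards
  command: /usr/bin/foo --new-flag
  when: ansible_distribution_version|version_compare('12.04', '>=')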

Random Number Filter

To get a random item from a list:

{{ ['a','b','c']|random}}

To get a random number from 0 to the supplied end:

{{ 59|random }}

To get a random number from 0 to the supplied end, in steps of 10:

{{ 100|random(step=10) }}

To get a random number from 30 to the supplied end, in steps of 10:

{{ 100|random(start=30,step=10) }}
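
A common use of random is staggering scheduled jobs so all hosts do not fire at once; a sketch (the cron job is invented, and note the minute is re-rolled on every playbook run):

- name: run the nightly job at a random minute per host
  cron: name="nightly foo" minute="{{ 59|random }}" hour=2 job="/usr/local/bin/foo"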

Shuffle Filter

This filter will randomize an existing list, giving a different order every invocation.

To get a random list from an existing list:

{{ ['a','b','c']|shuffle }}

Math

To see if something is actually a number:

{{ myvar|isnan }}

Get the logarithm (default is e):

{{ myvar|log }}

Get the base 10 logarithm:

{{ myvar|log(10) }}

Power of 2:

{{ myvar|pow(2) }}

Square root by default, or the nth root (here, the cube root):

{{ myvar|root(3) }}

IP Address Filter

To test if a string is a valid IP address:

{{ myvar|ipaddr }}
{{ myvar|ipv4 }}
{{ myvar|ipv6 }}

Get IP address from a CIDR:

{{ '192.0.2.1/24'|ipaddr('address') }}

Hashing filters

To get the sha1 hash of a string:

{{ 'test'|hash('sha1') }}

To get the md5 hash of a string:

{{ 'test'|hash('md5') }}

Get a string checksum:

{{ 'test'|checksum }}

To get a sha512 password hash (random salt):

{{ 'passwordsaresecret'|password_hash('sha512') }}

To get a sha256 password hash with a specific salt:

{{ 'secretpassword'|password_hash('sha256', 'mysecretsalt') }}

Hash types available depend on the master system running ansible: hash depends on hashlib, while password_hash depends on crypt.
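
password_hash is typically combined with the user module, since that module expects an already-hashed password; a sketch (the account name and password are placeholders):

- name: create a user with a hashed password
  user:
    name: deploy
    # password_hash generates a salted sha512 crypt string the user module accepts
    password: "{{ 'secretpassword'|password_hash('sha512') }}"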

Other Useful Filters

To use one value on true and another on false:

{{ (name == 'john')|ternary('Mr','Ms') }}

To concatenate a list into a string:

{{ list|join(" ") }}

To get the last name of a file path:

{{ path|basename }}

To get the directory name from a path:

{{ path|dirname }}

To expand a path containing a tilde (~) character:

{{ path|expanduser }}

To work with Base64 encoded strings:

{{ encoded|b64decode }}
{{ decoded|b64encode }}

To create a UUID from a string:

{{ hostname|to_uuid }}

To match strings against a regex, use the “match” or “search” filter. match will require a complete match in the string, while search will require a match inside of the string.

 vars:
   url: "http://example.com/users/foo/resources/bar"
 tasks:
   - debug: "msg='matched pattern 1'"
     when: url | match("http://example.com/users/.*/resources/.*")
   - debug: "msg='matched pattern 2'"
     when: url | search("/users/.*/resources/.*")

To replace text in a string with a regex, use the regex_replace filter:

  # convert ansible to able
  {{ 'ansible'|regex_replace('^a.*i(.*)$','a\\1') }}

  # convert foobar to bar
  {{ 'foobar'|regex_replace('^f.*o(.*)$','\\1') }}

Note

If the “regex_replace” filter is used with variables inside YAML arguments (as opposed to simpler ‘key=value’ arguments), then you need to escape backreferences (e.g. \1) with 4 backslashes (\\\\) instead of 2 (\\).
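
For example, the same substitution written both ways; a sketch, with the extra escaping only in the YAML-argument form:

# key=value arguments: two backslashes
- shell: echo {{ 'ansible'|regex_replace('^a.*i(.*)$', 'a\\1') }}

# YAML arguments: four backslashes
- shell: "echo {{ 'ansible'|regex_replace('^a.*i(.*)$', 'a\\\\1') }}"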

Quick reference

Forked from ansible-quickref
Thanks, Lorin

Quick reference to parameters and special variables.

Facts

See facts.

Built-in variables

These are variables that are always defined by ansible.

Parameter Description
hostvars A dict whose keys are Ansible host names and values are dicts that map variable names to values
group_names A list of all groups that the current host is a member of
groups A dict whose keys are Ansible group names and values are list of hostnames that are members of the group. Includes all and ungrouped groups: {"all": [...], "web": [...], "ungrouped": [...]}
inventory_hostname Name of the current host as known by ansible.
play_hosts A list of inventory hostnames that are active in the current play (or current batch if running serial)
ansible_version A dict with ansible version info: {"full": "1.8.1", "major": 1, "minor": 8, "revision": 1, "string": "1.8.1"}

These can be useful if you want to use a variable associated with a different host. For example, if you are using the EC2 dynamic inventory and have a single host with the tag “Name=foo”, and you want to access the instance id in a different play, you can do something like this:

- hosts: tag_Name_foo
  tasks:
    - action: ec2_facts

  ...

- hosts: localhost
  vars:
    instance_id: "{{ hostvars[groups['tag_Name_foo'][0]]['ansible_ec2_instance_id'] }}"
  tasks:
    - name: print out the instance id for the foo instance
      debug: msg="instance-id is {{ instance_id }}"

Internal variables

These are used internally by Ansible.

Variable Description
playbook_dir Directory that contains the playbook being executed
inventory_dir Directory that contains the inventory
inventory_file Host file or script path (?)

Play parameters

Parameter Description
any_errors_fatal  
gather_facts Specify whether to gather facts or not
handlers List of handlers
hosts Hosts in the play (e.g., webservers).
include Include a playbook defined in another file
max_fail_percentage When serial is set on a play, and some hosts fail on a task, if the percentage of hosts that fail exceeds this number, Ansible will fail the whole play. (e.g., 20).
name Name of the play, displayed when play runs (e.g., Deploy a foo).
pre_tasks List of tasks to execute before roles.
port Remote ssh port to connect to
post_tasks List of tasks to execute after roles.
remote_user Alias for user.
role_names  
roles List of roles.
serial Integer that indicates how many hosts Ansible should manage at a single time.
su  
su_user  
sudo Boolean that indicates whether ansible should use sudo (e.g., True).
sudo_user If sudo’ing, user to sudo as. Defaults: root.
tasks List of tasks.
user User to ssh as. Default: root (unless overridden in config file)
no_log  
vars Dictionary of variables.
vars_files List of files that contain dictionary of variables.
vars_prompt Description of vars that user will be prompted to specify.
vault_password  
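
A minimal play pulling several of these parameters together (the hosts, user, file and role names are illustrative):

- hosts: webservers
  name: Deploy a foo
  remote_user: deploy
  serial: 5                  # manage 5 hosts at a time
  max_fail_percentage: 20    # abort the play if more than 20% of a batch fails
  vars_files:
    - vars/common.yml
  roles:
    - foo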

Task parameters

Parameter Description
name Name of the task, displayed when task runs (e.g., Ensure foo is present).
action Name of module to specify. Legacy format, prefer specifying module name directly instead
args A dictionary of arguments. See docs for ec2_tag for an example.
include Name of a separate YAML file that includes additional tasks.
register Record the result to the specified variable (e.g., result)
delegate_to Run task on specified host instead.
local_action Equivalent to: delegate_to: 127.0.0.1.
user User to ssh as for this task
sudo Boolean that indicates whether ansible should use sudo on this task
sudo_user If sudo’ing, user to sudo as.
when Boolean. Only run task when this evaluates to True. Default: True
ignore_errors Boolean. If True, ansible will treat task as if it has succeeded even if it returned an error, Default: False
module More verbose notation for specifying module parameters. See docs for ec2 for an example.
environment Mapping that contains environment variables to pass
failed_when Specify criteria for identifying task has failed (e.g., "'FAILED' in command_result.stderr")
changed_when Specify criteria for identifying task has changed server state
with_items List of items to iterate over
with_nested List of list of items to iterate over in nested fashion
with_fileglob List of local files to iterate over, described using shell fileglob notation (e.g., /playbooks/files/fooapp/*)
with_first_found Return the first file or path, in the given list, that exists on the control machine
with_together Dictionary of lists to iterate over in parallel
with_random_choice List of items to be selected from at random
with_dict Loop through the elements of a hash
until Boolean, task will retry until this evaluates true or until retries is exhausted
retries Used with “until”, number of times to retry. Default: 3
delay Used with “until”, seconds to wait between retries. Default: 10
run_once If true, runs task on only one of the hosts
always_run If true, runs task even when in --check mode
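
For example, until, retries and delay combine to poll until a condition holds; a sketch (the health-check URL is invented):

- name: wait for the app to answer
  command: curl -sf http://localhost:8080/health
  register: health
  until: health.rc == 0   # retry while curl exits non-zero
  retries: 5
  delay: 10               # seconds between attempts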

Complex args

There are three ways to specify complex arguments:

just pass them:

- ec2_tag:
    resource: vol-abcdefg
    tags:
      Name: my-volume

action/module parameter:

- action:
    module: ec2_tag
    resource: vol-abcdefg
    tags:
      Name: my-volume

args parameter:

- ec2_tag: resource=vol-abcdefg
  args:
    tags:
      Name: my-volume

Host variables that modify ansible behavior

Parameter Description
ansible_ssh_host hostname to connect to for a given host
ansible_ssh_port ssh port to connect to for a given host
ansible_ssh_user ssh user to connect as for a given host
ansible_ssh_pass ssh password to connect as for a given host
ansible_ssh_private_key_file ssh private key file to connect as for a given host
ansible_connection connection type to use for a given host (e.g. local)
ansible_python_interpreter python interpreter to use
ansible_*_interpreter interpreter to use

Variables returned by setup

These are the same as the output of Facts described in a previous section. Currently, this just has one variable defined.

Parameter Description Example
ansible_date_time Dictionary that contains date info {"date": "2013-10-02", "day": "02", "epoch": "1380756810", "hour": "19","iso8601": "2013-10-02T23:33:30Z","iso8601_micro": "2013-10-02T23:33:30.036070Z","minute": "33","month": "10","second": "30","time": "19:33:30","tz": "EDT","year": "2013"}

Return value of a loop

If you register a variable with a task that has an iteration, e.g.:

- command: echo {{ item }}
  with_items:
    - foo
    - bar
    - baz
  register: echos

Then the result is a dictionary with the following values:

Field name Description
changed boolean, true if anything has changed
msg a message such as “All items completed”
results a list that contains the return value for each loop iteration

For example, the echos variable would have the following value:

{
    "changed": true,
    "msg": "All items completed",
    "results": [
        {
            "changed": true,
            "cmd": [
                "echo",
                "foo"
            ],
            "delta": "0:00:00.002780",
            "end": "2014-06-08 16:57:52.843478",
            "invocation": {
                "module_args": "echo foo",
                "module_name": "command"
            },
            "item": "foo",
            "rc": 0,
            "start": "2014-06-08 16:57:52.840698",
            "stderr": "",
            "stdout": "foo"
        },
        {
            "changed": true,
            "cmd": [
                "echo",
                "bar"
            ],
            "delta": "0:00:00.002736",
            "end": "2014-06-08 16:57:52.911243",
            "invocation": {
                "module_args": "echo bar",
                "module_name": "command"
            },
            "item": "bar",
            "rc": 0,
            "start": "2014-06-08 16:57:52.908507",
            "stderr": "",
            "stdout": "bar"
        },
        {
            "changed": true,
            "cmd": [
                "echo",
                "baz"
            ],
            "delta": "0:00:00.003050",
            "end": "2014-06-08 16:57:52.979928",
            "invocation": {
                "module_args": "echo baz",
                "module_name": "command"
            },
            "item": "baz",
            "rc": 0,
            "start": "2014-06-08 16:57:52.976878",
            "stderr": "",
            "stdout": "baz"
        }
    ]
}
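
Each entry in results carries the registered fields plus the item that produced it, so the list can itself be looped over; a sketch:

- name: show what each iteration printed
  debug: msg="{{ item.item }} printed {{ item.stdout }}"
  with_items: echos.results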

EC2 stuff

Values returned by ec2 module

This assumes you’re using tagging.

If the instances don’t exist yet:

{
       "changed": false,
       "instance_ids": [
          "i-db2fd037",
       ],
       "instances": [
           {
               "ami_launch_index": "0",
               "architecture": "x86_64",
               "dns_name": "ec2-54-173-62-41.compute-1.amazonaws.com",
               "ebs_optimized": false,
               "hypervisor": "xen",
               "id": "i-db2fd037",
               "image_id": "ami-9aaa1cf2",
               "instance_type": "t2.micro",
               "kernel": null,
               "key_name": "mykey",
               "launch_time": "2014-11-16T03:53:31.000Z",
               "placement": "us-east-1d",
               "private_dns_name": "ip-10-0-0-17.ec2.internal",
               "private_ip": "10.0.0.17",
               "public_dns_name": "ec2-54-173-62-41.compute-1.amazonaws.com",
               "public_ip": "54.173.62.41",
               "ramdisk": null,
               "region": "us-east-1",
               "root_device_name": "/dev/sda1",
               "root_device_type": "ebs",
               "state": "running",
               "state_code": 16,
               "virtualization_type": "hvm"
           }
       ],
       "invocation": {
           "module_args": "",
           "module_name": "ec2"
       },
       "tagged_instances": [
           {
               "ami_launch_index": "0",
               "architecture": "x86_64",
               "dns_name": "ec2-54-173-62-41.compute-1.amazonaws.com",
               "ebs_optimized": false,
               "hypervisor": "xen",
               "id": "i-db2fd037",
               "image_id": "ami-9aaa1cf2",
               "instance_type": "t2.micro",
               "kernel": null,
               "key_name": "mykey",
               "launch_time": "2014-11-16T03:53:31.000Z",
               "placement": "us-east-1d",
               "private_dns_name": "ip-10-0-0-17.ec2.internal",
               "private_ip": "10.0.0.17",
               "public_dns_name": "ec2-54-173-62-41.compute-1.amazonaws.com",
               "public_ip": "54.173.62.41",
               "ramdisk": null,
               "region": "us-east-1",
               "root_device_name": "/dev/sda1",
               "root_device_type": "ebs",
               "state": "running",
               "state_code": 16,
               "virtualization_type": "hvm"
           }
       ]
   }

If the instances already exist:

{
       "changed": false,
       "instance_ids": null,
       "instances": null,
       "invocation": {
           "module_args": "",
           "module_name": "ec2"
       },
       "tagged_instances": [
           {
               "ami_launch_index": "0",
               "architecture": "x86_64",
               "dns_name": "ec2-54-173-62-41.compute-1.amazonaws.com",
               "ebs_optimized": false,
               "hypervisor": "xen",
               "id": "i-db2fd037",
               "image_id": "ami-9aaa1cf2",
               "instance_type": "t2.micro",
               "kernel": null,
               "key_name": "mykey",
               "launch_time": "2014-11-16T03:53:31.000Z",
               "placement": "us-east-1d",
               "private_dns_name": "ip-10-0-0-17.ec2.internal",
               "private_ip": "10.0.0.17",
               "public_dns_name": "ec2-54-173-62-41.compute-1.amazonaws.com",
               "public_ip": "54.173.62.41",
               "ramdisk": null,
               "region": "us-east-1",
               "root_device_name": "/dev/sda1",
               "root_device_type": "ebs",
               "state": "running",
               "state_code": 16,
               "virtualization_type": "hvm"
           }
       ]
   }

Parameter Description
instance_ids List of instance ids for new instances
instances List of instance dicts for new instances (see table below)
tagged_instances List of instance dicts that already exist if exact_count is used

EC2 instance dicts

Parameter Description
id instance id
ami_launch_index instance index within a reservation (between 0 and N-1) if N launched
private_ip internal IP address (not routable outside of EC2)
private_dns_name internal DNS name (not routable outside of EC2)
public_ip public IP address
public_dns_name public DNS name
state_code reason code for the state change
architecture CPU architecture
image_id AMI
key_name keypair name
placement location where the instance was launched
kernel AKI
ramdisk ARI
launch_time time instance was launched
instance_type instance type
root_device_type type of root device (ephemeral, EBS)
root_device_name name of root device
state state of instance
hypervisor hypervisor type

Values returned by ec2_vpc module

Example output:

{
  "changed": false,
  "invocation": {
    "module_args": "",
    "module_name": "ec2_vpc"
  },
  "subnets": [
    {
      "az": "us-east-1d",
      "cidr": "10.0.0.0/24",
      "id": "subnet-30d30549",
      "resource_tags": {
        "env": "production",
        "tier": "web"
      }
    },
    {
      "az": "us-east-1d",
      "cidr": "10.0.1.0/24",
      "id": "subnet-43d3054a",
      "resource_tags": {
        "env": "production",
        "tier": "db"
      }
    }
  ],
  "vpc": {
    "cidr_block": "10.0.0.0/16",
    "dhcp_options_id": "dopt-203f5742",
    "id": "vpc-83a135e6",
    "region": "us-east-1",
    "state": "available"
  },
  "vpc_id": "vpc-83a135e6"
}

Parameter Description
subnets List of subnet dicts (see below)
vpc vpc dict (see below)
vpc_id vpc id (e.g. vpc-12345678)

subnet dict

Parameter Description
az availability zone (e.g., us-east-1d)
cidr subnet in CIDR format (e.g., 10.0.0.0/24)
id subnet id (e.g. subnet-12345678)
resource_tags dictionary of resource tags

vpc dict

Parameter Description
cidr_block subnet in CIDR format (e.g. 10.0.0.0/16)
dhcp_options_id e.g. dopt-12345678
id vpc id (e.g., vpc-12345678)
region ec2 region (e.g., us-east-1)
state state of vpc (e.g., available)

hostvars from ec2.py dynamic inventory script

ec2.py defines the following host variables:

Variable Description
ec2__in_monitoring_element  
ec2_ami_launch_index  
ec2_architecture  
ec2_client_token  
ec2_dns_name  
ec2_ebs_optimized  
ec2_eventsSet  
ec2_group_name  
ec2_hypervisor  
ec2_id instance id
ec2_image_id  
ec2_instance_profile  
ec2_instance_type  
ec2_ip_address  
ec2_item  
ec2_kernel  
ec2_key_name  
ec2_launch_time  
ec2_monitored  
ec2_monitoring  
ec2_monitoring_state  
ec2_persistent  
ec2_placement  
ec2_platform  
ec2_previous_state  
ec2_previous_state_code  
ec2_private_dns_name  
ec2_private_ip_address  
ec2_public_dns_name  
ec2_ramdisk  
ec2_reason  
ec2_region  
ec2_requester_id  
ec2_root_device_name  
ec2_root_device_type  
ec2_security_group_ids  
ec2_security_group_names  
ec2_spot_instance_request_id  
ec2_state  
ec2_state_code  
ec2_state_reason  
ec2_subnet_id  
ec2_tag_Name  
ec2_tag_env  
ec2_virtualization_type  
ec2_vpc_id  

Values returned by ec2_facts module

This will connect to the EC2 metadata service and set the variables, prefixed with ansible_ec2_. Any variable that has a dash (-) or colon (:) in the name will also have a copied version of that variable with underscores instead (e.g., ansible_ec2_ami-id and ansible_ec2_ami_id).

Here we just show the underscore-replaced versions.

Parameter Description
ansible_ec2_ami_launch_index ? (e.g., 0)
ansible_ec2_ami_manifest_path ? (e.g., (unknown))
ansible_ec2_hostname hostname
ansible_ec2_instance_action tbd
ansible_ec2_instance_id instance id
ansible_ec2_instance_type instance type
ansible_ec2_kernel_id AKI
ansible_ec2_local_hostname internal hostname
ansible_ec2_local_ipv4 internal IP address
ansible_ec2_mac MAC address (e.g., 22:00:0a:1f:b2:34)
ansible_ec2_network_interfaces_macs_XX_XX_XX_XX_XX_XX_device_number device number (e.g., 0)
ansible_ec2_network_interfaces_macs_XX_XX_XX_XX_XX_XX_local_hostname internal hostname for interface (e.g., ip-10-31-178-52.ec2.internal)
ansible_ec2_network_interfaces_macs_XX_XX_XX_XX_XX_XX_local_ipv4s internal IP for interface (e.g., 10.31.178.52)
ansible_ec2_network_interfaces_macs_XX_XX_XX_XX_XX_XX_mac MAC address (e.g., 22:00:0a:1f:b2:34)
ansible_ec2_network_interfaces_macs_XX_XX_XX_XX_XX_XX_owner_id Owner ID (e.g., 635425997824)
ansible_ec2_network_interfaces_macs_XX_XX_XX_XX_XX_XX_public_hostname public hostname (e.g., ec2-107-20-42-224.compute-1.amazonaws.com)
ansible_ec2_network_interfaces_macs_XX_XX_XX_XX_XX_XX_public_ipv4s public IP (e.g., 107.20.42.224)
ansible_ec2_public_hostname public hostname (e.g., ec2-107-20-42-224.compute-1.amazonaws.com)
ansible_ec2_public_key ssh public key
ansible_ec2_public_ipv4 public IP address (e.g., 107.20.42.224)
ansible_ec2_reservation_id reservation id
ansible_ec2_security_groups comma-delimited list of security groups (e.g., ssh,ping)
ansible_ec2_instance_type instance type (e.g., t1.micro)
ansible_ec2_placement_availability_zone availability zone (e.g., us-east-1b)
ansible_ec2_placement_region region (e.g., us-east-1)
ansible_ec2_profile profile (e.g. default-paravirtual)
ansible_ec2_user_data user data

Values returned by ec2_ami module

Parameter Description
image_id AMI id
state state of the image

Values returned by ec2_vol module

Parameter Description
volume_id volume id
device device name

Values returned by ec2_key module

Parameter Description
key.fingerprint SSH public key fingerprint
key.name SSH keypair name
key.private_key SSH private key string (only if creating new key)

Facts

Example facts generated from a Vagrant machine.

Facts that start with facter_ will only be collected if the facter program is present. Facts that start with ohai_ will only be present if the ohai program is present:

{
    "ansible_all_ipv4_addresses": [
        "10.0.2.15",
        "192.168.33.10"
    ],
    "ansible_all_ipv6_addresses": [
        "fe80::a00:27ff:fefe:1e4d",
        "fe80::a00:27ff:fe23:ae8e"
    ],
    "ansible_architecture": "x86_64",
    "ansible_bios_date": "12/01/2006",
    "ansible_bios_version": "VirtualBox",
    "ansible_cmdline": {
        "BOOT_IMAGE": "/boot/vmlinuz-3.13.0-35-generic",
        "console": "ttyS0",
        "ro": true,
        "root": "UUID=6e008225-f9bd-4b27-ad0d-e7d323b7a780"
    },
    "ansible_date_time": {
        "date": "2014-12-08",
        "day": "08",
        "epoch": "1418007742",
        "hour": "03",
        "iso8601": "2014-12-08T03:02:22Z",
        "iso8601_micro": "2014-12-08T03:02:22.861624Z",
        "minute": "02",
        "month": "12",
        "second": "22",
        "time": "03:02:22",
        "tz": "UTC",
        "tz_offset": "+0000",
        "weekday": "Monday",
        "year": "2014"
    },
    "ansible_default_ipv4": {
        "address": "10.0.2.15",
        "alias": "eth0",
        "gateway": "10.0.2.2",
        "interface": "eth0",
        "macaddress": "08:00:27:fe:1e:4d",
        "mtu": 1500,
        "netmask": "255.255.255.0",
        "network": "10.0.2.0",
        "type": "ether"
    },
    "ansible_default_ipv6": {},
    "ansible_devices": {
        "sda": {
            "holders": [],
            "host": "SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA
                     Controller [AHCI mode] (rev 02)",
            "model": "VBOX HARDDISK",
            "partitions": {
                "sda1": {
                    "sectors": "83884032",
                    "sectorsize": 512,
                    "size": "40.00 GB",
                    "start": "2048"
                }
            },
            "removable": "0",
            "rotational": "1",
            "scheduler_mode": "deadline",
            "sectors": "83886080",
            "sectorsize": "512",
            "size": "40.00 GB",
            "support_discard": "0",
            "vendor": "ATA"
        }
    },
    "ansible_distribution": "Ubuntu",
    "ansible_distribution_major_version": "14",
    "ansible_distribution_release": "trusty",
    "ansible_distribution_version": "14.04",
    "ansible_domain": "",
    "ansible_env": {
        "HOME": "/home/vagrant",
        "LANG": "en_US.UTF-8",
        "LC_CTYPE": "en_US.UTF-8",
        "LOGNAME": "vagrant",
        "MAIL": "/var/mail/vagrant",
        "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:
                 /usr/local/games",
        "PWD": "/home/vagrant",
        "SHELL": "/bin/bash",
        "SHLVL": "1",
        "SSH_AUTH_SOCK": "/tmp/ssh-FTJFi2Tu1j/agent.12316",
        "SSH_CLIENT": "192.168.33.1 58426 22",
        "SSH_CONNECTION": "192.168.33.1 58426 192.168.33.10 22",
        "USER": "vagrant",
        "XDG_RUNTIME_DIR": "/run/user/1000",
        "XDG_SESSION_ID": "5",
        "_": "/bin/sh"
    },
    "ansible_eth0": {
        "active": true,
        "device": "eth0",
        "ipv4": {
            "address": "10.0.2.15",
            "netmask": "255.255.255.0",
            "network": "10.0.2.0"
        },
        "ipv6": [
            {
                "address": "fe80::a00:27ff:fefe:1e4d",
                "prefix": "64",
                "scope": "link"
            }
        ],
        "macaddress": "08:00:27:fe:1e:4d",
        "module": "e1000",
        "mtu": 1500,
        "promisc": false,
        "type": "ether"
    },
    "ansible_eth1": {
        "active": true,
        "device": "eth1",
        "ipv4": {
            "address": "192.168.33.10",
            "netmask": "255.255.255.0",
            "network": "192.168.33.0"
        },
        "ipv6": [
            {
                "address": "fe80::a00:27ff:fe23:ae8e",
                "prefix": "64",
                "scope": "link"
            }
        ],
        "macaddress": "08:00:27:23:ae:8e",
        "module": "e1000",
        "mtu": 1500,
        "promisc": false,
        "type": "ether"
    },
    "ansible_fips": false,
    "ansible_form_factor": "Other",
    "ansible_fqdn": "vagrant-ubuntu-trusty-64",
    "ansible_hostname": "vagrant-ubuntu-trusty-64",
    "ansible_interfaces": [
        "lo",
        "eth1",
        "eth0"
    ],
    "ansible_kernel": "3.13.0-35-generic",
    "ansible_lo": {
        "active": true,
        "device": "lo",
        "ipv4": {
            "address": "127.0.0.1",
            "netmask": "255.0.0.0",
            "network": "127.0.0.0"
        },
        "ipv6": [
            {
                "address": "::1",
                "prefix": "128",
                "scope": "host"
            }
        ],
        "mtu": 65536,
        "promisc": false,
        "type": "loopback"
    },
    "ansible_lsb": {
        "codename": "trusty",
        "description": "Ubuntu 14.04.1 LTS",
        "id": "Ubuntu",
        "major_release": "14",
        "release": "14.04"
    },
    "ansible_machine": "x86_64",
    "ansible_memfree_mb": 101,
    "ansible_memtotal_mb": 994,
    "ansible_mounts": [
        {
            "device": "/dev/sda1",
            "fstype": "ext4",
            "mount": "/",
            "options": "rw",
            "size_available": 38925029376,
            "size_total": 42241163264
        }
    ],
    "ansible_nodename": "vagrant-ubuntu-trusty-64",
    "ansible_os_family": "Debian",
    "ansible_pkg_mgr": "apt",
    "ansible_processor": [
        "GenuineIntel",
        "Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz"
    ],
    "ansible_processor_cores": 1,
    "ansible_processor_count": 1,
    "ansible_processor_threads_per_core": 1,
    "ansible_processor_vcpus": 1,
    "ansible_product_name": "VirtualBox",
    "ansible_product_serial": "NA",
    "ansible_product_uuid": "NA",
    "ansible_product_version": "1.2",
    "ansible_python_version": "2.7.6",
    "ansible_selinux": false,
    "ansible_ssh_host_key_dsa_public":
      "AAAAB3NzaC1kc3MAAACBAJ7d5+Srn6T30vRnMBNnfQNcfSB...",
    "ansible_ssh_host_key_ecdsa_public":
      "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTY...",
    "ansible_ssh_host_key_rsa_public":
      "AAAAB3NzaC1yc2EAAAADAQABAAABAQDK0HsEEopBN2+N801...",
    "ansible_swapfree_mb": 0,
    "ansible_swaptotal_mb": 0,
    "ansible_system": "Linux",
    "ansible_system_vendor": "innotek GmbH",
    "ansible_user_id": "vagrant",
    "ansible_userspace_architecture": "x86_64",
    "ansible_userspace_bits": "64",
    "ansible_virtualization_role": "guest",
    "ansible_virtualization_type": "virtualbox",
    "facter_architecture": "amd64",
    "facter_augeasversion": "1.2.0",
    "facter_blockdevice_sda_model": "VBOX HARDDISK",
    "facter_blockdevice_sda_size": 42949672960,
    "facter_blockdevice_sda_vendor": "ATA",
    "facter_blockdevices": "sda",
    "facter_facterversion": "1.7.5",
    "facter_filesystems": "ext2,ext3,ext4,vfat",
    "facter_hardwareisa": "x86_64",
    "facter_hardwaremodel": "x86_64",
    "facter_hostname": "vagrant-ubuntu-trusty-64",
    "facter_id": "vagrant",
    "facter_interfaces": "eth0,eth1,lo",
    "facter_ipaddress": "10.0.2.15",
    "facter_ipaddress_eth0": "10.0.2.15",
    "facter_ipaddress_eth1": "192.168.33.10",
    "facter_ipaddress_lo": "127.0.0.1",
    "facter_sshdsakey":
      "AAAAB3NzaC1kc3MAAACBAJ7d5+Srn6T30vRnMBNnfQNcfSB...",
    "facter_sshecdsakey":
      "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTY...",
    "facter_sshfp_dsa":
      "SSHFP 2 1 6b52b74c0ea2bbd5276f7148509bfa0318e55...",
    "facter_sshfp_ecdsa":
      "SSHFP 3 1 d7be510097620ad9f6705c7641ba0d695b73d...",
    "facter_sshfp_rsa":
      "SSHFP 1 1 100db6f684fe47130dfdef5bd1b4e4cda28cb...",
    "facter_sshrsakey":
      "AAAAB3NzaC1yc2EAAAADAQABAAABAQDK0HsEEopBN2+N801...",
    "facter_swapfree": "0.00 MB",
    "facter_swapfree_mb": "0.00",
    "facter_swapsize": "0.00 MB",
    "facter_swapsize_mb": "0.00",
    "facter_timezone": "UTC",
    "facter_uniqueid": "000a0f02",
    "facter_uptime": "0:10 hours",
    "facter_uptime_days": 0,
    "facter_uptime_hours": 0,
    "facter_uptime_seconds": 637,
    "facter_virtual": "virtualbox",
    "module_setup": true,
    "ohai_block_device": {
        "loop0": {
            "removable": "0",
            "size": "0"
        },
        "loop1": {
            "removable": "0",
            "size": "0"
        },
        "loop2": {
            "removable": "0",
            "size": "0"
        },
        "loop3": {
            "removable": "0",
            "size": "0"
        },
        "loop4": {
            "removable": "0",
            "size": "0"
        },
        "loop5": {
            "removable": "0",
            "size": "0"
        },
        "loop6": {
            "removable": "0",
            "size": "0"
        },
        "loop7": {
            "removable": "0",
            "size": "0"
        },
        "ram0": {
            "removable": "0",
            "size": "131072"
        },
        "ram1": {
            "removable": "0",
            "size": "131072"
        },
        "ram10": {
            "removable": "0",
            "size": "131072"
        },
        "ram11": {
            "removable": "0",
            "size": "131072"
        },
        "ram12": {
            "removable": "0",
            "size": "131072"
        },
        "ram13": {
            "removable": "0",
            "size": "131072"
        },
        "ram14": {
            "removable": "0",
            "size": "131072"
        },
        "ram15": {
            "removable": "0",
            "size": "131072"
        },
        "ram2": {
            "removable": "0",
            "size": "131072"
        },
        "ram3": {
            "removable": "0",
            "size": "131072"
        },
        "ram4": {
            "removable": "0",
            "size": "131072"
        },
        "ram5": {
            "removable": "0",
            "size": "131072"
        },
        "ram6": {
            "removable": "0",
            "size": "131072"
        },
        "ram7": {
            "removable": "0",
            "size": "131072"
        },
        "ram8": {
            "removable": "0",
            "size": "131072"
        },
        "ram9": {
            "removable": "0",
            "size": "131072"
        },
        "sda": {
            "model": "VBOX HARDDISK",
            "removable": "0",
            "rev": "1.0",
            "size": "83886080",
            "state": "running",
            "timeout": "30",
            "vendor": "ATA"
        }
    },
    "ohai_chef_packages": {
        "chef": {
            "chef_root": "/usr/lib/ruby/vendor_ruby",
            "version": "11.8.2"
        },
        "ohai": {
            "ohai_root": "/usr/lib/ruby/vendor_ruby/ohai",
            "version": "6.14.0"
        }
    },
    "ohai_command": {
        "ps": "ps -ef"
    },
    "ohai_counters": {
        "network": {
            "interfaces": {
                "eth0": {
                    "rx": {
                        "bytes": "87120229",
                        "drop": "0",
                        "errors": "0",
                        "overrun": "0",
                        "packets": "95129"
                    },
                    "tx": {
                        "bytes": "2411491",
                        "carrier": "0",
                        "collisions": "0",
                        "drop": "0",
                        "errors": "0",
                        "packets": "38200",
                        "queuelen": "1000"
                    }
                },
                "eth1": {
                    "rx": {
                        "bytes": "342365",
                        "drop": "0",
                        "errors": "0",
                        "overrun": "0",
                        "packets": "430"
                    },
                    "tx": {
                        "bytes": "36139",
                        "carrier": "0",
                        "collisions": "0",
                        "drop": "0",
                        "errors": "0",
                        "packets": "218",
                        "queuelen": "1000"
                    }
                },
                "lo": {
                    "rx": {
                        "bytes": "761691",
                        "drop": "0",
                        "errors": "0",
                        "overrun": "0",
                        "packets": "2740"
                    },
                    "tx": {
                        "bytes": "761691",
                        "carrier": "0",
                        "collisions": "0",
                        "drop": "0",
                        "errors": "0",
                        "packets": "2740"
                    }
                }
            }
        }
    },
    "ohai_cpu": {
        "0": {
            "cache_size": "6144 KB",
            "core_id": "0",
            "cores": "1",
            "family": "6",
            "flags": [
                "fpu",
                "vme",
                "de",
                "pse",
                "tsc",
                "msr",
                "pae",
                "mce",
                "cx8",
                "apic",
                "sep",
                "mtrr",
                "pge",
                "mca",
                "cmov",
                "pat",
                "pse36",
                "clflush",
                "mmx",
                "fxsr",
                "sse",
                "sse2",
                "syscall",
                "nx",
                "rdtscp",
                "lm",
                "constant_tsc",
                "rep_good",
                "nopl",
                "pni",
                "monitor",
                "ssse3",
                "lahf_lm"
            ],
            "mhz": "2591.391",
            "model": "70",
            "model_name": "Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz",
            "physical_id": "0",
            "stepping": "1",
            "vendor_id": "GenuineIntel"
        },
        "real": 1,
        "total": 1
    },
    "ohai_current_user": "vagrant",
    "ohai_dmi": {
        "dmidecode_version": "2.12"
    },
    "ohai_domain": null,
    "ohai_etc": {
        "group": {
            "adm": {
                "gid": 4,
                "members": [
                    "syslog",
                    "ubuntu"
                ]
            },
            "admin": {
                "gid": 110,
                "members": []
            },
            "audio": {
                "gid": 29,
                "members": [
                    "ubuntu"
                ]
            },
            "backup": {
                "gid": 34,
                "members": []
            },
            "bin": {
                "gid": 2,
                "members": []
            },
            "cdrom": {
                "gid": 24,
                "members": [
                    "ubuntu"
                ]
            },
            "crontab": {
                "gid": 103,
                "members": []
            },
            "daemon": {
                "gid": 1,
                "members": []
            },
            "dialout": {
                "gid": 20,
                "members": [
                    "ubuntu"
                ]
            },
            "dip": {
                "gid": 30,
                "members": [
                    "ubuntu"
                ]
            },
            "disk": {
                "gid": 6,
                "members": []
            },
            "fax": {
                "gid": 21,
                "members": []
            },
            "floppy": {
                "gid": 25,
                "members": [
                    "ubuntu"
                ]
            },
            "fuse": {
                "gid": 105,
                "members": []
            },
            "games": {
                "gid": 60,
                "members": []
            },
            "gnats": {
                "gid": 41,
                "members": []
            },
            "irc": {
                "gid": 39,
                "members": []
            },
            "kmem": {
                "gid": 15,
                "members": []
            },
            "landscape": {
                "gid": 109,
                "members": []
            },
            "libuuid": {
                "gid": 101,
                "members": []
            },
            "list": {
                "gid": 38,
                "members": []
            },
            "lp": {
                "gid": 7,
                "members": []
            },
            "mail": {
                "gid": 8,
                "members": []
            },
            "man": {
                "gid": 12,
                "members": []
            },
            "memcache": {
                "gid": 113,
                "members": []
            },
            "messagebus": {
                "gid": 106,
                "members": []
            },
            "mlocate": {
                "gid": 107,
                "members": []
            },
            "netdev": {
                "gid": 102,
                "members": [
                    "ubuntu"
                ]
            },
            "news": {
                "gid": 9,
                "members": []
            },
            "nogroup": {
                "gid": 65534,
                "members": []
            },
            "operator": {
                "gid": 37,
                "members": []
            },
            "plugdev": {
                "gid": 46,
                "members": [
                    "ubuntu"
                ]
            },
            "postgres": {
                "gid": 115,
                "members": []
            },
            "proxy": {
                "gid": 13,
                "members": []
            },
            "puppet": {
                "gid": 112,
                "members": []
            },
            "root": {
                "gid": 0,
                "members": []
            },
            "sasl": {
                "gid": 45,
                "members": []
            },
            "shadow": {
                "gid": 42,
                "members": []
            },
            "src": {
                "gid": 40,
                "members": []
            },
            "ssh": {
                "gid": 108,
                "members": []
            },
            "ssl-cert": {
                "gid": 114,
                "members": [
                    "postgres"
                ]
            },
            "staff": {
                "gid": 50,
                "members": []
            },
            "sudo": {
                "gid": 27,
                "members": [
                    "ubuntu"
                ]
            },
            "sys": {
                "gid": 3,
                "members": []
            },
            "syslog": {
                "gid": 104,
                "members": []
            },
            "tape": {
                "gid": 26,
                "members": []
            },
            "tty": {
                "gid": 5,
                "members": []
            },
            "ubuntu": {
                "gid": 1001,
                "members": []
            },
            "users": {
                "gid": 100,
                "members": []
            },
            "utmp": {
                "gid": 43,
                "members": []
            },
            "uucp": {
                "gid": 10,
                "members": []
            },
            "vagrant": {
                "gid": 1000,
                "members": []
            },
            "vboxsf": {
                "gid": 111,
                "members": []
            },
            "video": {
                "gid": 44,
                "members": [
                    "ubuntu"
                ]
            },
            "voice": {
                "gid": 22,
                "members": []
            },
            "www-data": {
                "gid": 33,
                "members": []
            }
        },
        "passwd": {
            "backup": {
                "dir": "/var/backups",
                "gecos": "backup",
                "gid": 34,
                "shell": "/usr/sbin/nologin",
                "uid": 34
            },
            "bin": {
                "dir": "/bin",
                "gecos": "bin",
                "gid": 2,
                "shell": "/usr/sbin/nologin",
                "uid": 2
            },
            "daemon": {
                "dir": "/usr/sbin",
                "gecos": "daemon",
                "gid": 1,
                "shell": "/usr/sbin/nologin",
                "uid": 1
            },
            "games": {
                "dir": "/usr/games",
                "gecos": "games",
                "gid": 60,
                "shell": "/usr/sbin/nologin",
                "uid": 5
            },
            "gnats": {
                "dir": "/var/lib/gnats",
                "gecos": "Gnats Bug-Reporting System (admin)",
                "gid": 41,
                "shell": "/usr/sbin/nologin",
                "uid": 41
            },
            "irc": {
                "dir": "/var/run/ircd",
                "gecos": "ircd",
                "gid": 39,
                "shell": "/usr/sbin/nologin",
                "uid": 39
            },
            "landscape": {
                "dir": "/var/lib/landscape",
                "gecos": "",
                "gid": 109,
                "shell": "/bin/false",
                "uid": 103
            },
            "libuuid": {
                "dir": "/var/lib/libuuid",
                "gecos": "",
                "gid": 101,
                "shell": "",
                "uid": 100
            },
            "list": {
                "dir": "/var/list",
                "gecos": "Mailing List Manager",
                "gid": 38,
                "shell": "/usr/sbin/nologin",
                "uid": 38
            },
            "lp": {
                "dir": "/var/spool/lpd",
                "gecos": "lp",
                "gid": 7,
                "shell": "/usr/sbin/nologin",
                "uid": 7
            },
            "mail": {
                "dir": "/var/mail",
                "gecos": "mail",
                "gid": 8,
                "shell": "/usr/sbin/nologin",
                "uid": 8
            },
            "man": {
                "dir": "/var/cache/man",
                "gecos": "man",
                "gid": 12,
                "shell": "/usr/sbin/nologin",
                "uid": 6
            },
            "memcache": {
                "dir": "/nonexistent",
                "gecos": "Memcached,,,",
                "gid": 113,
                "shell": "/bin/false",
                "uid": 108
            },
            "messagebus": {
                "dir": "/var/run/dbus",
                "gecos": "",
                "gid": 106,
                "shell": "/bin/false",
                "uid": 102
            },
            "news": {
                "dir": "/var/spool/news",
                "gecos": "news",
                "gid": 9,
                "shell": "/usr/sbin/nologin",
                "uid": 9
            },
            "nobody": {
                "dir": "/nonexistent",
                "gecos": "nobody",
                "gid": 65534,
                "shell": "/usr/sbin/nologin",
                "uid": 65534
            },
            "pollinate": {
                "dir": "/var/cache/pollinate",
                "gecos": "",
                "gid": 1,
                "shell": "/bin/false",
                "uid": 105
            },
            "postgres": {
                "dir": "/var/lib/postgresql",
                "gecos": "PostgreSQL administrator,,,",
                "gid": 115,
                "shell": "/bin/bash",
                "uid": 109
            },
            "proxy": {
                "dir": "/bin",
                "gecos": "proxy",
                "gid": 13,
                "shell": "/usr/sbin/nologin",
                "uid": 13
            },
            "puppet": {
                "dir": "/var/lib/puppet",
                "gecos": "Puppet configuration management daemon,,,",
                "gid": 112,
                "shell": "/bin/false",
                "uid": 107
            },
            "root": {
                "dir": "/root",
                "gecos": "root",
                "gid": 0,
                "shell": "/bin/bash",
                "uid": 0
            },
            "sshd": {
                "dir": "/var/run/sshd",
                "gecos": "",
                "gid": 65534,
                "shell": "/usr/sbin/nologin",
                "uid": 104
            },
            "statd": {
                "dir": "/var/lib/nfs",
                "gecos": "",
                "gid": 65534,
                "shell": "/bin/false",
                "uid": 106
            },
            "sync": {
                "dir": "/bin",
                "gecos": "sync",
                "gid": 65534,
                "shell": "/bin/sync",
                "uid": 4
            },
            "sys": {
                "dir": "/dev",
                "gecos": "sys",
                "gid": 3,
                "shell": "/usr/sbin/nologin",
                "uid": 3
            },
            "syslog": {
                "dir": "/home/syslog",
                "gecos": "",
                "gid": 104,
                "shell": "/bin/false",
                "uid": 101
            },
            "ubuntu": {
                "dir": "/home/ubuntu",
                "gecos": "Ubuntu",
                "gid": 1001,
                "shell": "/bin/bash",
                "uid": 1001
            },
            "uucp": {
                "dir": "/var/spool/uucp",
                "gecos": "uucp",
                "gid": 10,
                "shell": "/usr/sbin/nologin",
                "uid": 10
            },
            "vagrant": {
                "dir": "/home/vagrant",
                "gecos": "",
                "gid": 1000,
                "shell": "/bin/bash",
                "uid": 1000
            },
            "www-data": {
                "dir": "/var/www",
                "gecos": "www-data",
                "gid": 33,
                "shell": "/usr/sbin/nologin",
                "uid": 33
            }
        }
    },
    "ohai_filesystem": {
        "/dev/disk/by-uuid/6e008225-f9bd-4b27-ad0d-e7d323b7a780": {
            "fs_type": "ext4",
            "mount": "/",
            "mount_options": [
                "rw",
                "relatime",
                "data=ordered"
            ]
        },
        "/dev/sda1": {
            "fs_type": "ext4",
            "kb_available": "38012724",
            "kb_size": "41251136",
            "kb_used": "1502500",
            "label": "cloudimg-rootfs",
            "mount": "/",
            "mount_options": [
                "rw"
            ],
            "percent_used": "4%",
            "uuid": "6e008225-f9bd-4b27-ad0d-e7d323b7a780"
        },
        "devpts": {
            "fs_type": "devpts",
            "mount": "/dev/pts",
            "mount_options": [
                "rw",
                "noexec",
                "nosuid",
                "gid=5",
                "mode=0620"
            ]
        },
        "none": {
            "fs_type": "pstore",
            "kb_available": "102400",
            "kb_size": "102400",
            "kb_used": "0",
            "mount": "/sys/fs/pstore",
            "mount_options": [
                "rw"
            ],
            "percent_used": "0%"
        },
        "proc": {
            "fs_type": "proc",
            "mount": "/proc",
            "mount_options": [
                "rw",
                "noexec",
                "nosuid",
                "nodev"
            ]
        },
        "rootfs": {
            "fs_type": "rootfs",
            "mount": "/",
            "mount_options": [
                "rw"
            ]
        },
        "rpc_pipefs": {
            "fs_type": "rpc_pipefs",
            "mount": "/run/rpc_pipefs",
            "mount_options": [
                "rw"
            ]
        },
        "sysfs": {
            "fs_type": "sysfs",
            "mount": "/sys",
            "mount_options": [
                "rw",
                "noexec",
                "nosuid",
                "nodev"
            ]
        },
        "systemd": {
            "fs_type": "cgroup",
            "mount": "/sys/fs/cgroup/systemd",
            "mount_options": [
                "rw",
                "noexec",
                "nosuid",
                "nodev",
                "none",
                "name=systemd"
            ]
        },
        "tmpfs": {
            "fs_type": "tmpfs",
            "kb_available": "101416",
            "kb_size": "101788",
            "kb_used": "372",
            "mount": "/run",
            "mount_options": [
                "rw",
                "noexec",
                "nosuid",
                "size=10%",
                "mode=0755"
            ],
            "percent_used": "1%"
        },
        "udev": {
            "fs_type": "devtmpfs",
            "kb_available": "503952",
            "kb_size": "503964",
            "kb_used": "12",
            "mount": "/dev",
            "mount_options": [
                "rw",
                "mode=0755"
            ],
            "percent_used": "1%"
        },
        "vagrant": {
            "fs_type": "vboxsf",
            "kb_available": "305450008",
            "kb_size": "487385240",
            "kb_used": "181935232",
            "mount": "/vagrant",
            "mount_options": [
                "uid=1000",
                "gid=1000",
                "rw"
            ],
            "percent_used": "38%"
        }
    },
    "ohai_fqdn": "vagrant-ubuntu-trusty-64",
    "ohai_hostname": "vagrant-ubuntu-trusty-64",
    "ohai_idletime": "9 minutes 26 seconds",
    "ohai_idletime_seconds": 566,
    "ohai_ipaddress": "10.0.2.15",
    "ohai_kernel": {
        "machine": "x86_64",
        "modules": {
            "ahci": {
                "refcount": "1",
                "size": "25819"
            },
            "auth_rpcgss": {
                "refcount": "1",
                "size": "59338"
            },
            "dm_crypt": {
                "refcount": "0",
                "size": "23177"
            },
            "e1000": {
                "refcount": "0",
                "size": "145174"
            },
            "fscache": {
                "refcount": "1",
                "size": "63988"
            },
            "libahci": {
                "refcount": "1",
                "size": "32716"
            },
            "lockd": {
                "refcount": "2",
                "size": "93977"
            },
            "nfs": {
                "refcount": "0",
                "size": "236636"
            },
            "nfs_acl": {
                "refcount": "1",
                "size": "12837"
            },
            "nfsd": {
                "refcount": "2",
                "size": "280289"
            },
            "parport": {
                "refcount": "2",
                "size": "42348"
            },
            "parport_pc": {
                "refcount": "0",
                "size": "32701"
            },
            "ppdev": {
                "refcount": "0",
                "size": "17671"
            },
            "psmouse": {
                "refcount": "0",
                "size": "106678"
            },
            "serio_raw": {
                "refcount": "0",
                "size": "13462"
            },
            "sunrpc": {
                "refcount": "6",
                "size": "284939"
            },
            "vboxguest": {
                "refcount": "2",
                "size": "248441"
            },
            "vboxsf": {
                "refcount": "1",
                "size": "43786"
            }
        },
        "name": "Linux",
        "os": "GNU/Linux",
        "release": "3.13.0-35-generic",
        "version": "#62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014"
    },
    "ohai_keys": {
        "ssh": {
            "host_dsa_public": "AAAAB3NzaC1kc3MAAACBAJ7d5+Srn6T30...",
            "host_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDK0..."
        }
    },
    "ohai_languages": {
        "perl": {
            "archname": "x86_64-linux-gnu-thread-multi",
            "version": "5.18.2"
        },
        "python": {
            "builddate": "Mar 22 2014, 22:59:56",
            "version": "2.7.6"
        },
        "ruby": {
            "bin_dir": "/usr/bin",
            "gem_bin": "/usr/bin/gem1.9.1",
            "gems_dir": "/var/lib/gems/1.9.1",
            "host": "x86_64-pc-linux-gnu",
            "host_cpu": "x86_64",
            "host_os": "linux-gnu",
            "host_vendor": "pc",
            "platform": "x86_64-linux",
            "release_date": "2013-11-22",
            "ruby_bin": "/usr/bin/ruby1.9.1",
            "target": "x86_64-pc-linux-gnu",
            "target_cpu": "x86_64",
            "target_os": "linux",
            "target_vendor": "pc",
            "version": "1.9.3"
        }
    },
    "ohai_lsb": {
        "codename": "trusty",
        "description": "Ubuntu 14.04.1 LTS",
        "id": "Ubuntu",
        "release": "14.04"
    },
    "ohai_macaddress": "08:00:27:FE:1E:4D",
    "ohai_memory": {
        "active": "548644kB",
        "anon_pages": "212420kB",
        "bounce": "0kB",
        "buffers": "60088kB",
        "cached": "551148kB",
        "commit_limit": "508928kB",
        "committed_as": "488724kB",
        "dirty": "100kB",
        "free": "91524kB",
        "inactive": "275012kB",
        "mapped": "29856kB",
        "nfs_unstable": "0kB",
        "page_tables": "6036kB",
        "slab": "82572kB",
        "slab_reclaimable": "72836kB",
        "slab_unreclaim": "9736kB",
        "swap": {
            "cached": "0kB",
            "free": "0kB",
            "total": "0kB"
        },
        "total": "1017856kB",
        "vmalloc_chunk": "34359715831kB",
        "vmalloc_total": "34359738367kB",
        "vmalloc_used": "18700kB",
        "writeback": "0kB"
    },
    "ohai_network": {
        "default_gateway": "10.0.2.2",
        "default_interface": "eth0",
        "interfaces": {
            "eth0": {
                "addresses": {
                    "08:00:27:FE:1E:4D": {
                        "family": "lladdr"
                    },
                    "10.0.2.15": {
                        "broadcast": "10.0.2.255",
                        "family": "inet",
                        "netmask": "255.255.255.0",
                        "prefixlen": "24",
                        "scope": "Global"
                    },
                    "fe80::a00:27ff:fefe:1e4d": {
                        "family": "inet6",
                        "prefixlen": "64",
                        "scope": "Link"
                    }
                },
                "arp": {
                    "10.0.2.2": "52:54:00:12:35:02",
                    "10.0.2.3": "52:54:00:12:35:03"
                },
                "encapsulation": "Ethernet",
                "flags": [
                    "BROADCAST",
                    "MULTICAST",
                    "UP",
                    "LOWER_UP"
                ],
                "mtu": "1500",
                "number": "0",
                "routes": [
                    {
                        "destination": "default",
                        "family": "inet",
                        "via": "10.0.2.2"
                    },
                    {
                        "destination": "10.0.2.0/24",
                        "family": "inet",
                        "proto": "kernel",
                        "scope": "link",
                        "src": "10.0.2.15"
                    },
                    {
                        "destination": "fe80::/64",
                        "family": "inet6",
                        "metric": "256",
                        "proto": "kernel"
                    }
                ],
                "state": "up",
                "type": "eth"
            },
            "eth1": {
                "addresses": {
                    "08:00:27:23:AE:8E": {
                        "family": "lladdr"
                    },
                    "192.168.33.10": {
                        "broadcast": "192.168.33.255",
                        "family": "inet",
                        "netmask": "255.255.255.0",
                        "prefixlen": "24",
                        "scope": "Global"
                    },
                    "fe80::a00:27ff:fe23:ae8e": {
                        "family": "inet6",
                        "prefixlen": "64",
                        "scope": "Link"
                    }
                },
                "arp": {
                    "192.168.33.1": "0a:00:27:00:00:00"
                },
                "encapsulation": "Ethernet",
                "flags": [
                    "BROADCAST",
                    "MULTICAST",
                    "UP",
                    "LOWER_UP"
                ],
                "mtu": "1500",
                "number": "1",
                "routes": [
                    {
                        "destination": "192.168.33.0/24",
                        "family": "inet",
                        "proto": "kernel",
                        "scope": "link",
                        "src": "192.168.33.10"
                    },
                    {
                        "destination": "fe80::/64",
                        "family": "inet6",
                        "metric": "256",
                        "proto": "kernel"
                    }
                ],
                "state": "up",
                "type": "eth"
            },
            "lo": {
                "addresses": {
                    "127.0.0.1": {
                        "family": "inet",
                        "netmask": "255.0.0.0",
                        "prefixlen": "8",
                        "scope": "Node"
                    },
                    "::1": {
                        "family": "inet6",
                        "prefixlen": "128",
                        "scope": "Node"
                    }
                },
                "encapsulation": "Loopback",
                "flags": [
                    "LOOPBACK",
                    "UP",
                    "LOWER_UP"
                ],
                "mtu": "65536",
                "state": "unknown"
            }
        },
        "listeners": {
            "tcp": {
                "111": {
                    "address": "*",
                    "pid": 0
                },
                "11211": {
                    "address": "127.0.0.1",
                    "pid": 0
                },
                "22": {
                    "address": "*",
                    "name": "gunicorn: maste",
                    "pid": 0
                },
                "443": {
                    "address": "*",
                    "name": "",
                    "pid": 0
                },
                "49788": {
                    "address": "*",
                    "name": "gunicorn: maste",
                    "pid": 0
                },
                "50583": {
                    "address": "*",
                    "name": "",
                    "pid": 0
                },
                "5432": {
                    "address": "127.0.0.1",
                    "name": "",
                    "pid": 0
                },
                "80": {
                    "address": "*",
                    "name": "",
                    "pid": 0
                },
                "8000": {
                    "address": "127.0.0.1",
                    "name": "gunicorn: maste",
                    "pid": 9601
                }
            }
        }
    },
    "ohai_ohai_time": 1418007743.6774073,
    "ohai_os": "linux",
    "ohai_os_version": "3.13.0-35-generic",
    "ohai_platform": "ubuntu",
    "ohai_platform_family": "debian",
    "ohai_platform_version": "14.04",
    "ohai_uptime": "10 minutes 37 seconds",
    "ohai_uptime_seconds": 637,
    "ohai_virtualization": {
        "role": "guest",
        "system": "vbox"
    }
}

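When the ohai tool is installed on a managed host, Ansible's fact gathering exposes the keys above as top-level variables with an `ohai_` prefix, so they can be referenced directly in tasks without `register`. A minimal sketch (the host group name `web` is assumed):

```yaml
---
- hosts: web
  gather_facts: true

  tasks:
    # ohai facts are injected as ordinary variables during fact gathering
    - name: show the detected platform
      debug:
        msg: "Running {{ ohai_platform }} {{ ohai_platform_version }} on {{ ohai_hostname }}"

    # branch on ohai_platform_family, analogous to ansible_os_family earlier
    - name: install apache on Debian-family hosts only
      apt: name=apache2 state=installed
      when: ohai_platform_family == "debian"
```

The same pattern works for any nested key, e.g. `{{ ohai_memory.total }}` or `{{ ohai_network.default_interface }}`.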
Docker stuff

Values returned by the docker module

The docker module sets facts, so there is no need to use register to access these values. Simply reference the docker_containers variable, a list whose elements look like this:

"docker_containers": [
    {
        "AppArmorProfile": "",
        "Args": [
            "postgres"
        ],
        "Config": {
            "AttachStderr": false,
            "AttachStdin": false,
            "AttachStdout": false,
            "Cmd": [
                "postgres"
            ],
            "CpuShares": 0,
            "Cpuset": "",
            "Domainname": "",
            "Entrypoint": [
                "/docker-entrypoint.sh"
            ],
            "Env": [
                "POSTGRES_PASSWORD=password",
                "POSTGRES_USER=mezzanine",
                "PATH=/usr/lib/postgresql/9.4/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "LANG=en_US.utf8",
                "PG_MAJOR=9.4",
                "PG_VERSION=9.4.0-1.pgdg70+1",
                "PGDATA=/var/lib/postgresql/data"
            ],
            "ExposedPorts": {
                "5432/tcp": {}
            },
            "Hostname": "71f40ec4b58c",
            "Image": "postgres",
            "MacAddress": "",
            "Memory": 0,
            "MemorySwap": 0,
            "NetworkDisabled": false,
            "OnBuild": null,
            "OpenStdin": false,
            "PortSpecs": null,
            "StdinOnce": false,
            "Tty": false,
            "User": "",
            "Volumes": {
                "/var/lib/postgresql/data": {}
            },
            "WorkingDir": ""
        },
        "Created": "2014-12-25T22:59:15.841107151Z",
        "Driver": "aufs",
        "ExecDriver": "native-0.2",
        "HostConfig": {
            "Binds": null,
            "CapAdd": null,
            "CapDrop": null,
            "ContainerIDFile": "",
            "Devices": null,
            "Dns": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "IpcMode": "",
            "Links": null,
            "LxcConf": null,
            "NetworkMode": "",
            "PortBindings": {
                "5432/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": ""
                    }
                ]
            },
            "Privileged": false,
            "PublishAllPorts": false,
            "RestartPolicy": {
                "MaximumRetryCount": 0,
                "Name": ""
            },
            "SecurityOpt": null,
            "VolumesFrom": [
                "data-volume"
            ]
        },
        "HostnamePath": "/mnt/sda1/var/lib/docker/containers/71f40ec4b58c3176030274afb025fbd3eb130fe79d4a6a69de473096f335e7eb/hostname",
        "HostsPath": "/mnt/sda1/var/lib/docker/containers/71f40ec4b58c3176030274afb025fbd3eb130fe79d4a6a69de473096f335e7eb/hosts",
        "Id": "71f40ec4b58c3176030274afb025fbd3eb130fe79d4a6a69de473096f335e7eb",
        "Image": "b58a816df10fb20c956d39724001d4f2fabddec50e0d9099510f0eb579ec8a45",
        "MountLabel": "",
        "Name": "/high_lovelace",
        "NetworkSettings": {
            "Bridge": "docker0",
            "Gateway": "172.17.42.1",
            "IPAddress": "172.17.0.12",
            "IPPrefixLen": 16,
            "MacAddress": "02:42:ac:11:00:0c",
            "PortMapping": null,
            "Ports": {
                "5432/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "49153"
                    }
                ]
            }
        },
        "Path": "/docker-entrypoint.sh",
        "ProcessLabel": "",
        "ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/71f40ec4b58c3176030274afb025fbd3eb130fe79d4a6a69de473096f335e7eb/resolv.conf",
        "State": {
            "Error": "",
            "ExitCode": 0,
            "FinishedAt": "0001-01-01T00:00:00Z",
            "OOMKilled": false,
            "Paused": false,
            "Pid": 9625,
            "Restarting": false,
            "Running": true,
            "StartedAt": "2014-12-25T22:59:16.219732465Z"
        },
        "Volumes": {
            "/var/lib/postgresql/data": "/mnt/sda1/var/lib/docker/vfs/dir/4ccd3150c8d74b9b0feb56df928ac915599e12c3ab573cd4738a18fe3dc6f474"
        },
        "VolumesRW": {
            "/var/lib/postgresql/data": true
        }
    }
]
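Because the variable is an ordinary list of dictionaries, later tasks in the same play can read fields out of it directly; a minimal sketch using the structure shown above:

```yaml
- name: show the IP address of the container started above
  debug:
    msg: "{{ docker_containers[0].NetworkSettings.IPAddress }}"
```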

Ansible Configuration File

The configuration file is processed in the following order:

* ANSIBLE_CONFIG (an environment variable)
* ansible.cfg (in the current directory)
* .ansible.cfg (in the home directory)
* /etc/ansible/ansible.cfg

Ansible will process the above list and use the first file found. Settings in files are not merged.

If you installed Ansible from pip or from source, you may want to create this file in order to override default settings in Ansible.

You may wish to consult the ansible.cfg in source control for the latest list of possible values.
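As a sketch, a small ansible.cfg overriding a few of the defaults described below (the values shown are illustrative, not recommendations):

```ini
# ansible.cfg in the current directory takes precedence over ~/.ansible.cfg
[defaults]
inventory = ./hosts
forks = 20
log_path = /var/log/ansible.log
```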

> ansible_managed

Ansible-managed is a string that can be inserted into files written by Ansible’s config templating system, if you use a string like:

{{ ansible_managed }}

The default configuration shows who modified a file and when:

ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}

This is useful to tell users that a file has been placed by Ansible and manual changes are likely to be overwritten.

Note that if using this feature, and there is a date in the string, the template will be reported changed each time as the date is updated.
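For instance, a Jinja2 template (the file name here is hypothetical) might carry the string in a comment header so the rendered file identifies itself:

```yaml
# templates/app.conf.j2 would begin with a comment line such as:
#   {{ ansible_managed }}
- name: deploy a config file with an Ansible-managed header
  template: src=app.conf.j2 dest=/etc/app.conf
```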

> ask_pass

This controls whether an Ansible playbook should prompt for a password by default. The default behavior is no.

> ask_sudo_pass

Similar to ask_pass, this controls whether an Ansible playbook should prompt for a sudo password by default when sudoing. The default behavior is also no:

ask_sudo_pass=True

Users on platforms where sudo passwords are enabled should consider changing this setting.

> bin_ansible_callbacks

Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for /usr/bin/ansible-playbook if present and cannot be disabled.

> command_warnings

By default since Ansible 1.8, Ansible will warn when usage of the shell and command module appears as if it could be simplified by using a default Ansible module instead. This can include reminders to use the ‘git’ module instead of shell commands to execute ‘git’. Using modules when possible over arbitrary shell commands can lead to more reliable and consistent playbook runs, as well as easier-to-maintain playbooks:

command_warnings=False

These warnings can be silenced by adjusting the following setting or adding warn=yes or warn=no to the end of the command line parameter string, like so:

- name: usage of git that could be replaced with the git module
  shell: git update foo warn=yes

> deprecation_warnings

Allows disabling of deprecation warnings in ansible-playbook output:

deprecation_warnings = True

> display_skipped_hosts

If set to False, ansible will not display any status for a task that is skipped. The default behavior is to display skipped tasks:

display_skipped_hosts=True

Note that Ansible will always show the task header for any task, regardless of whether or not the task is skipped.

> error_on_undefined_vars

On by default since Ansible 1.3, this causes ansible to fail steps that reference variable names that are likely typoed:

error_on_undefined_vars=True

If set to False, any ‘{{ template_expression }}’ that contains undefined variables will be rendered in a template or ansible action line exactly as written.

> force_color

This option forces color mode even when running without a TTY:

force_color=1

> force_handlers

This option causes notified handlers to run on a host even if a failure occurs on that host:

force_handlers=True

The default is False, meaning that handlers will not run if a failure has occurred on a host.

> forks

This is the default number of parallel processes to spawn when communicating with remote hosts. Many users may set this to 50, some set it to 500 or more. The default is very very conservative:

forks=5

> timeout

This is the default SSH timeout to use on connection attempts:

timeout=10

> gathering

The ‘gathering’ setting controls the default policy of facts gathering (variables discovered about remote systems).

The value ‘implicit’ is the default, meaning facts will be gathered per play unless ‘gather_facts: False’ is set in the play. The value ‘explicit’ is the inverse, facts will not be gathered unless directly requested in the play.

The value ‘smart’ means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save fact gathering time.
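A sketch of setting this in ansible.cfg; individual plays can still override it with ‘gather_facts: True’ or ‘gather_facts: False’:

```ini
[defaults]
gathering = smart
```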

> host_key_checking

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected. If a host is not initially in ‘known_hosts’ this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this.
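If you understand the implications and want to disable host key checking, you can do so in ansible.cfg:

```ini
[defaults]
host_key_checking = False
```

The ANSIBLE_HOST_KEY_CHECKING environment variable can also be used for a one-off run.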

> poll_interval

For asynchronous tasks in ansible, this is how often to check back on the status of those tasks when an explicit poll interval is not supplied:

poll_interval=15

> inventory

This is the default location of the inventory file, script, or directory that Ansible will use to determine what hosts it has available to talk to:

inventory=/etc/ansible/hosts

> jinja2_extensions

This is a developer-specific feature that allows enabling additional Jinja2 extensions:

jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

> library

This is the default location Ansible looks to find modules:

library=/usr/share/ansible:~/ansible

It also will look for modules in the ‘./library’ directory alongside a playbook.

> log_path

If present and configured in ansible.cfg, Ansible will log information about executions at the designated location. Be sure the user running Ansible has permissions on the logfile:

log_path=/var/log/ansible.log

This behavior is not on by default. Note that ansible will, without this setting, record module arguments called to the syslog of managed machines. Password arguments are excluded.

> module_lang

This is to set the default language to communicate between the module and the system. By default, the value is ‘C’.

> module_name

This is the default module name (-m) value for /usr/bin/ansible. The default is the ‘command’ module. This module does not support shell variables, pipes, or quotes, so you might wish to change it to ‘shell’:

module_name=command

> nocolor

By default Ansible will try to colorize output to give a better indication of success and failure. If you prefer monochrome output, set this to 1:

nocolor=0

> nocows

By default Ansible will take advantage of cowsay, if installed, to make /usr/bin/ansible-playbook runs more amusing. If you would like to disable this, set nocows to 1:

nocows=0

> pattern

This is the default group of hosts to talk to in a playbook if no “hosts:” stanza is supplied. The default is to talk to all hosts. You may wish to change this to protect yourself from surprises:

pattern=*

This setting is only used by ansible-playbook; /usr/bin/ansible always requires a host pattern on the command line.

> private_key_file

If you are using a pem file to authenticate with machines rather than SSH agent or passwords, you can set the default value here to avoid re-specifying --private-key with every invocation:

private_key_file=/path/to/file.pem

> remote_port

This sets the default SSH port on all of your systems, for systems that didn’t specify an alternative value in inventory. The default is the standard 22:

remote_port=22

> remote_tmp

Ansible works by transferring modules to your remote machines, running them, and then cleaning up after itself. In some cases, you may not wish to use the default location and would like to change the path. You can do so by altering this setting:

remote_tmp=$HOME/.ansible/tmp

> remote_user

This is the default username ansible will connect as for /usr/bin/ansible-playbook. Note that /usr/bin/ansible will always default to the current user if this is not defined:

remote_user=root

> roles_path

The roles path indicates additional directories beyond the ‘roles/’ subdirectory of a playbook project to search for Ansible roles. For instance, if there was a source control repository of common roles and a different repository of playbooks, you might choose to establish a convention to check out roles in /opt/mysite/roles like so:

roles_path=/opt/mysite/roles:/etc/ansible/roles

> sudo_exe

If using an alternative sudo implementation on remote machines, the path to sudo can be replaced here, provided that the implementation matches the CLI flags of the standard sudo:

sudo_exe=sudo

> sudo_flags

Additional flags to pass to sudo when engaging sudo support. The default is ‘-H’ which preserves the $HOME environment variable of the original user. In some situations you may wish to add or remove flags, but in general most users will not need to change this setting:

sudo_flags=-H

> sudo_user

This is the default user to sudo to if --sudo-user is not specified on the command line or ‘sudo_user’ is not specified in an Ansible playbook:

sudo_user=root

> become

The equivalent of adding sudo: or su: to a play or task, set to true/yes to activate privilege escalation. The default behavior is no:

become=True

> become_method

Set the privilege escalation method. The default is sudo; other options are su, pbrun and pfexec:

become_method=su

> become_user

The equivalent of ansible_sudo_user or ansible_su_user, this allows you to set the user you become through privilege escalation. The default is ‘root’:

become_user=root

> become_ask_pass

Ask for privilege escalation password, the default is False:

become_ask_pass=True

> record_host_keys

The default setting of yes will record newly discovered and approved (if host key checking is enabled) hosts in the user’s hostfile. This setting may be inefficient for large numbers of hosts, and in those situations, using the ssh transport is definitely recommended instead. Setting it to False will improve performance and is recommended when host key checking is disabled:

record_host_keys=True

> ssh_args

If set, this will pass a specific set of options to Ansible rather than Ansible’s usual defaults:

ssh_args=-o ControlMaster=auto -o ControlPersist=60s

In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.

> control_path

This is the location to save ControlPath sockets. This defaults to:

control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r

> scp_if_ssh

Occasionally users may be managing a remote system that doesn’t have SFTP enabled. If set to True, we can cause scp to be used to transfer remote files instead:

scp_if_ssh=False

> pipelining

Enabling pipelining reduces the number of SSH operations required to execute a module on the remote server, by executing many ansible modules without actual file transfer. This can result in a very significant performance improvement when enabled, however when using “sudo:” operations you must first disable ‘requiretty’ in /etc/sudoers on all managed hosts.

By default, this option is disabled to preserve compatibility with sudoers configurations that have requiretty, but it is highly recommended that you enable it if you can.
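Enabling it is a one-line change in ansible.cfg; remember that ‘requiretty’ must also be disabled in /etc/sudoers on the managed hosts:

```ini
[ssh_connection]
pipelining = True
```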

> accelerate_port

This is the port to use for accelerated mode:

accelerate_port=5099

> accelerate_timeout

This setting controls the timeout for receiving data from a client. If no data is received during this time, the socket connection will be closed. A keepalive packet is sent back to the controller every 15 seconds, so this timeout should not be set lower than 15 (by default, the timeout is 30 seconds):

accelerate_timeout=30

> accelerate_connect_timeout

This setting controls the timeout for the socket connect call, and should be kept relatively low. The connection to the accelerate_port will be attempted 3 times before Ansible will fall back to ssh or paramiko (depending on your default connection setting) to try and start the accelerate daemon remotely. The default setting is 1.0 seconds:

accelerate_connect_timeout=1.0

> accelerate_multi_key

If enabled, this setting allows multiple private keys to be uploaded to the daemon. Any clients connecting to the daemon must also enable this option:

accelerate_multi_key=yes

YAML Syntax

Basics

All YAML files should begin with --- .

All members of a list are lines beginning at the same indentation level starting with a - (a dash and a space):

---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango

Dictionaries can also be represented in an abbreviated form if you really want to:

---
{ name: MEM, job: Dev, skill: delta }
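The same dictionary in the more common block form, one key per line:

```yaml
---
name: MEM
job: Dev
skill: delta
```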

Ansible doesn’t really use these too much, but you can also specify a boolean value (true/false) in several forms:

---
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false

Gotchas

While YAML is generally friendly, the following is going to result in a YAML syntax error:

foo: somebody said I should put a colon here: so I did

You will want to quote any hash values that contain colons:

foo: "someone said i should put a colon: so i did"

Further, Ansible uses “{{var}}” for variables. If a value after a colon starts with a “{” , YAML will think it is a dictionary, so you must quote it:

foo: "{{ variable }}"
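The same quoting applies inside tasks; a minimal sketch (the variable and fact names here are illustrative):

```yaml
- name: copy a templated value into a fact
  set_fact:
    foo: "{{ some_variable }}"
```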

Ansible Module Docs

> ACL

Sets and retrieves file ACL information.

Options (= is mandatory):

- default
      if the target is a directory, setting this to yes will make
      it the default acl for entities created inside the
      directory. It causes an error if name is a file. (Choices:
      yes, no)

- entity
      actual user or group that the ACL applies to when matching
      entity types user or group are selected.

- entry
      DEPRECATED. The acl to set or remove.  This must always be
      quoted in the form of '<etype>:<qualifier>:<perms>'.  The
      qualifier may be empty for some types, but the type and
      perms are always required. '-' can be used as a placeholder
      when you do not care about permissions. This is now
      superseded by the entity, etype and permissions fields.

- etype
      the entity type of the ACL to apply. (Choices: user, group,
      mask, other)

- follow
      whether to follow symlinks on the path if a symlink is
      encountered. (Choices: yes, no)

= name
      The full path of the file or object.

- permissions
      Permissions to apply/remove can be any combination of r, w
      and  x (read, write and execute respectively)

- state
      defines whether the ACL should be present or not.  The
      `query' state gets the current acl without changing it, for
      use in `register' operations. (Choices: query, present,
      absent)

Notes:    The "acl" module requires that acls are enabled on the target
      filesystem and that the setfacl and getfacl binaries are
      installed.

# Grant user Joe read access to a file
- acl: name=/etc/foo.conf entity=joe etype=user permissions="r" state=present

# Removes the acl for Joe on a specific file
- acl: name=/etc/foo.conf entity=joe etype=user state=absent

# Sets default acl for joe on foo.d
- acl: name=/etc/foo.d entity=joe etype=user permissions=rw default=yes state=present

# Same as previous but using entry shorthand
- acl: name=/etc/foo.d entry="default:user:joe:rw-" state=present

# Obtain the acl for a specific file
- acl: name=/etc/foo.conf
  register: acl_info
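The registered variable can then be inspected in a following task, for example with the debug module:

```yaml
# Print the ACL information captured by the preceding query
- debug: var=acl_info
```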

> ADD_HOST

Use variables to create new hosts and groups in inventory for use
in later plays of the same playbook. Takes variables so you can
define the new hosts more fully.

Options (= is mandatory):

- groups
      The groups to add the hostname to, comma separated.

= name
      The hostname/ip of the host to add to the inventory, can
      include a colon and a port number.

# add host to group 'just_created' with variable foo=42
- add_host: name={{ ip_from_ec2 }} groups=just_created foo=42

# add a host with a non-standard port local to your machines
- add_host: name={{ new_ip }}:{{ new_port }}

# add a host alias that we reach through a tunnel
- add_host: hostname={{ new_ip }}
            ansible_ssh_host={{ inventory_hostname }}
            ansible_ssh_port={{ new_port }}
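Because add_host modifies inventory for later plays in the same run, a common pattern is a two-play playbook; a sketch assuming an earlier step has set new_ip:

```yaml
---
- hosts: localhost
  tasks:
    - add_host: name={{ new_ip }} groups=just_created

- hosts: just_created
  tasks:
    - ping:
```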

> AIRBRAKE_DEPLOYMENT

Notify airbrake about app deployments (see
http://help.airbrake.io/kb/api-2/deploy-tracking)

Options (= is mandatory):

= environment
      The airbrake environment name, typically 'production',
      'staging', etc.

- repo
      URL of the project repository

- revision
      A hash, number, tag, or other identifier showing what
      revision was deployed

= token
      API token.

- url
      Optional URL to submit the notification to. Use to send
      notifications to Airbrake-compliant tools like Errbit.

- user
      The username of the person doing the deployment

- validate_certs
      If `no', SSL certificates for the target url will not be
      validated. This should only be used on personally controlled
      sites using self-signed certificates. (Choices: yes, no)

Requirements:    urllib, urllib2

- airbrake_deployment: token=AAAAAA
                       environment='staging'
                       user='ansible'
                       revision=4.2

> APT

Manages `apt' packages (such as for Debian/Ubuntu).

Options (= is mandatory):

- cache_valid_time
      If `update_cache' is specified and the last run is less or
      equal than `cache_valid_time' seconds ago, the
      `update_cache' gets skipped.

- default_release
      Corresponds to the `-t' option for `apt' and sets pin
      priorities

- dpkg_options
      Add dpkg options to apt command. Defaults to '-o
      "Dpkg::Options::=--force-confdef" -o "Dpkg::Options
      ::=--force-confold"'Options should be supplied as comma
      separated list

- force
      If `yes', force installs/removes. (Choices: yes, no)

- install_recommends
      Corresponds to the `--no-install-recommends' option for
      `apt', default behavior works as apt's default behavior,
      `no' does not install recommended packages. Suggested
      packages are never installed. (Choices: yes, no)

- pkg
      A package name or package specifier with version, like `foo'
      or `foo=1.0'. Shell like wildcards (fnmatch) like apt* are
      also supported.

- purge
      Will force purging of configuration files if the module
      state is set to `absent'. (Choices: yes, no)

- state
      Indicates the desired package state (Choices: latest,
      absent, present)

- update_cache
      Run the equivalent of `apt-get update' before the operation.
      Can be run as part of the package installation or as a
      separate step (Choices: yes, no)

- upgrade
      If yes or safe, performs an aptitude safe-upgrade.If full,
      performs an aptitude full-upgrade.If dist, performs an apt-
      get dist-upgrade.Note: This does not upgrade a specific
      package, use state=latest for that. (Choices: yes, safe,
      full, dist)

Notes:    Three of the upgrade modes (`full', `safe' and its alias `yes')
      require `aptitude', otherwise `apt-get' suffices.

Requirements:    python-apt, aptitude

# Update repositories cache and install "foo" package
- apt: pkg=foo update_cache=yes

# Remove "foo" package
- apt: pkg=foo state=absent

# Install the package "foo"
- apt: pkg=foo state=present

# Install the version '1.00' of package "foo"
- apt: pkg=foo=1.00 state=present

# Update the repository cache and update package "nginx" to latest version using default release squeeze-backport
- apt: pkg=nginx state=latest default_release=squeeze-backports update_cache=yes

# Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
- apt: pkg=openjdk-6-jdk state=latest install_recommends=no

# Update all packages to the latest version
- apt: upgrade=dist

# Run the equivalent of "apt-get update" as a separate step
- apt: update_cache=yes

# Only run "update_cache=yes" if the last one is more than 3600 seconds ago
- apt: update_cache=yes cache_valid_time=3600

# Pass options to dpkg on run
- apt: upgrade=dist update_cache=yes dpkg_options='force-confold,force-confdef'

> APT_KEY

Add or remove an `apt' key, optionally downloading it

Options (= is mandatory):

- data
      keyfile contents

- file
      keyfile path

- id
      identifier of key

- keyring
      path to specific keyring file in /etc/apt/trusted.gpg.d

- state
      used to specify if key is being added or revoked (Choices:
      absent, present)

- url
      url to retrieve key from.

- validate_certs
      If `no', SSL certificates for the target url will not be
      validated. This should only be used on personally controlled
      sites using self-signed certificates. (Choices: yes, no)

Notes:    Doesn't download the key unless it really needs it. As a
      sanity check, the downloaded key id must match the one
      specified. Best practice is to specify the key id and the
      url.

# Add an Apt signing key, uses whichever key is at the URL
- apt_key: url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=present

# Add an Apt signing key, will not download if present
- apt_key: id=473041FA url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=present

# Remove an Apt signing key, uses whichever key is at the URL
- apt_key: url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=absent

# Remove an Apt signing key by id; a leading 0x is valid
- apt_key: id=0x473041FA state=absent

# Add a key from a file on the Ansible server
- apt_key: data="{{ lookup('file', 'apt.gpg') }}" state=present

# Add an Apt signing key to a specific keyring file
- apt_key: id=473041FA url=https://ftp-master.debian.org/keys/archive-key-6.0.asc keyring=/etc/apt/trusted.gpg.d/debian.gpg state=present

> APT_REPOSITORY

Add or remove APT repositories on Ubuntu and Debian.

Options (= is mandatory):

= repo
      A source string for the repository.

- state
      A source string state. (Choices: absent, present)

- update_cache
      Run the equivalent of `apt-get update' if the repository has
      changed. (Choices: yes, no)

Notes:    This module works on Debian and Ubuntu and requires the
      `python-apt' and `python-pycurl' packages. This module
      supports Debian Squeeze (version 6) as well as its
      successors. This module treats Debian and Ubuntu
      distributions separately, so PPAs can be installed only on
      Ubuntu machines.

Requirements:    python-apt, python-pycurl

# Add specified repository into sources list.
apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=present

# Add source repository into sources list.
apt_repository: repo='deb-src http://archive.canonical.com/ubuntu hardy partner' state=present

# Remove specified repository from sources list.
apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=absent

# On Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On Debian target: adding PPA is not available, so it will fail immediately.
apt_repository: repo='ppa:nginx/stable'

> ARISTA_INTERFACE

Manage physical Ethernet interface resources on Arista EOS network
devices

Options (= is mandatory):

- admin
      controls the operational state of the interface (Choices:
      up, down)

- description
      a single line text string describing the interface

- duplex
      sets the interface duplex setting (Choices: auto, half,
      full)

= interface_id
      the full name of the interface

- logging
      enables or disables the syslog facility for this module
      (Choices: true, false, yes, no)

- mtu
configures the maximum transmission unit for the interface

- speed
      sets the interface speed setting (Choices: auto, 100m, 1g,
      10g)

Notes:    Requires EOS 4.10 or later. The Netdev extension for EOS
      must be installed and active in the available extensions
      (`show extensions' from the EOS CLI). See
      http://eos.aristanetworks.com for details.

Requirements:    Arista EOS 4.10, Netdev extension for EOS

Example playbook entries using the arista_interface module to manage resource
state.  Note that interface names must be the full interface name not shortcut
names (ie Ethernet, not Et1)

    tasks:
    - name: enable interface Ethernet 1
      action: arista_interface interface_id=Ethernet1 admin=up speed=10g duplex=full logging=true

    - name: set mtu on Ethernet 1
      action: arista_interface interface_id=Ethernet1 mtu=1600 speed=10g duplex=full logging=true

    - name: reset changes to Ethernet 1
      action: arista_interface interface_id=Ethernet1 admin=down mtu=1500 speed=10g duplex=full logging=true

> ARISTA_L2INTERFACE

Manage layer 2 interface resources on Arista EOS network devices

Options (= is mandatory):

= interface_id
      the full name of the interface

- logging
      enables or disables the syslog facility for this module
      (Choices: true, false, yes, no)

- state
      describe the desired state of the interface related to the
      config (Choices: present, absent)

- tagged_vlans
      specifies the list of vlans that should be allowed to
      transit this interface

- untagged_vlan
      specifies the vlan that untagged traffic should be placed in
      for transit across a vlan tagged link

- vlan_tagging
      specifies whether or not vlan tagging should be enabled for
      this interface (Choices: enable, disable)

Notes:    Requires EOS 4.10 or later. The Netdev extension for EOS
      must be installed and active in the available extensions
      (`show extensions' from the EOS CLI). See
      http://eos.aristanetworks.com for details.

Requirements:    Arista EOS 4.10, Netdev extension for EOS

Example playbook entries using the arista_l2interface module to manage resource
state. Note that interface names must be the full interface name not shortcut
names (ie Ethernet, not Et1)

    tasks:
    - name: create switchport ethernet1 access port
      action: arista_l2interface interface_id=Ethernet1 logging=true

    - name: create switchport ethernet2 trunk port
      action: arista_l2interface interface_id=Ethernet2 vlan_tagging=enable logging=true

    - name: add vlans to red and blue switchport ethernet2
      action: arista_l2interface interface_id=Ethernet2 tagged_vlans=red,blue logging=true

    - name: set untagged vlan for Ethernet1
      action: arista_l2interface interface_id=Ethernet1 untagged_vlan=red logging=true

    - name: convert access to trunk
      action: arista_l2interface interface_id=Ethernet1 vlan_tagging=enable tagged_vlans=red,blue logging=true

    - name: convert trunk to access
      action: arista_l2interface interface_id=Ethernet2 vlan_tagging=disable untagged_vlan=blue logging=true

    - name: delete switchport ethernet1
      action: arista_l2interface interface_id=Ethernet1 state=absent logging=true

> ARISTA_LAG

Manage port channel interface resources on Arista EOS network
devices

Options (= is mandatory):

= interface_id
      the full name of the interface

- lacp
      enables the use of the LACP protocol for managing link
      bundles (Choices: active, passive, off)

- links
      array of physical interface links to include in this lag

- logging
      enables or disables the syslog facility for this module
      (Choices: true, false, yes, no)

- minimum_links
      the minimum number of physical interfaces that must be
      operationally up to consider the lag operationally up

- state
      describe the desired state of the interface related to the
      config (Choices: present, absent)

Notes:    Requires EOS 4.10 or later. The Netdev extension for EOS
      must be installed and active in the available extensions
      (`show extensions' from the EOS CLI). See
      http://eos.aristanetworks.com for details.

Requirements:    Arista EOS 4.10, Netdev extension for EOS

Example playbook entries using the arista_lag module to manage resource
state.  Note that interface names must be the full interface name not shortcut
names (ie Ethernet, not Et1)

    tasks:
    - name: create lag interface
      action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2 logging=true

    - name: add member links
      action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2,Ethernet3 logging=true

    - name: remove member links
      action: arista_lag interface_id=Port-Channel1 links=Ethernet2,Ethernet3 logging=true

    - name: remove lag interface
      action: arista_lag interface_id=Port-Channel1 state=absent logging=true

> ARISTA_VLAN

Manage VLAN resources on Arista EOS network devices.  This module
requires the Netdev EOS extension to be installed in EOS.  For
detailed instructions for installing and using the Netdev module
please see [link]

Options (= is mandatory):

- logging
      enables or disables the syslog facility for this module
      (Choices: true, false, yes, no)

- name
      a descriptive name for the vlan

- state
      describe the desired state of the vlan related to the config
      (Choices: present, absent)

= vlan_id
      the vlan id

Notes:    Requires EOS 4.10 or later. The Netdev extension for EOS
      must be installed and active in the available extensions
      (`show extensions' from the EOS CLI). See
      http://eos.aristanetworks.com for details.

Requirements:    Arista EOS 4.10, Netdev extension for EOS

Example playbook entries using the arista_vlan module to manage resource
state.

  tasks:
  - name: create vlan 999
    action: arista_vlan vlan_id=999 logging=true

  - name: create / edit vlan 999
    action: arista_vlan vlan_id=999 name=test logging=true

  - name: remove vlan 999
    action: arista_vlan vlan_id=999 state=absent logging=true

> ASSEMBLE

Assembles a configuration file from fragments. Often a particular
program takes a single configuration file and does not support
a `conf.d' style structure where it is easy to build up the
configuration from multiple sources. [assemble] takes a
directory of files that can be local or have already been
transferred to the system, and concatenates them to produce a
destination file. Files are assembled in string sorting order.
Puppet calls this idea `fragments'.

Options (= is mandatory):

- backup
      Create a backup file (if `yes'), including the timestamp
      information so you can get the original file back if you
      somehow clobbered it incorrectly. (Choices: yes, no)

- delimiter
      A delimiter to separate the file contents.

= dest
      A file to create using the concatenation of all of the
      source files.

- others
      all arguments accepted by the [file] module also work here

- regexp
      Assemble files only if `regex' matches the filename. If not
      set, all files are assembled. All "\" (backslash) characters
      must be escaped as "\\" to comply with YAML syntax. Uses
      Python regular expressions; see
      http://docs.python.org/2/library/re.html.

- remote_src
      If False, src is searched for on the originating/master
      machine; if True, src is looked up on the remote/target
      machine. Default is True. (Choices: True, False)

= src
      An already existing directory full of source files.

# Example from Ansible Playbooks
- assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf

# When a delimiter is specified, it will be inserted in between each fragment
- assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf delimiter='### START FRAGMENT ###'
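As a further illustration, fragments can be filtered by name and the previous destination preserved; this is a sketch combining the `regexp' and `backup' options documented above (the fragment naming pattern is hypothetical):

```yaml
# Assemble only fragments whose names start with "fragment-", keeping a
# timestamped backup of the previous destination file
- assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf regexp='^fragment-.*' backup=yes
```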

> ASSERT

This module asserts that a given expression is true and can be a
simpler alternative to the 'fail' module in some cases.

Options (= is mandatory):

= that
      A string expression of the same form that can be passed to
      the 'when' statement

- assert: { that: "ansible_os_family != 'RedHat'" }
- assert: { that: "'foo' in some_command_result.stdout" }
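Assuming the installed version accepts a list for `that' (later Ansible releases do), several conditions can be checked in one task; `health_check' here is a hypothetical registered variable:

```yaml
# Check several conditions at once; all of them must hold
- assert:
    that:
      - "ansible_memtotal_mb >= 512"
      - "'ok' in health_check.stdout"
```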

> AT

Use this module to schedule a command or script to run once in the
future. All jobs are executed in the `a' queue.

Options (= is mandatory):

= action
      The action to take for the job, defaulting to add. Unique
      will verify that there is only one entry in the queue;
      delete will remove all existing queued jobs. (Choices: add,
      delete, unique)

- command
      A command to be executed in the future.

- script_file
      An existing script to be executed in the future.

= unit_count
      The count of units in the future to execute the command or
      script.

= unit_type
      The type of units in the future to execute the command or
      script. (Choices: minutes, hours, days, weeks)

- user
      The user to execute the at command as.

Requirements:    at

# Schedule a command to execute in 20 minutes as root.
- at: command="ls -d / > /dev/null" unit_count=20 unit_type="minutes"

# Schedule a script to execute in 1 hour as the neo user.
- at: script_file="/some/script.sh" user="neo" unit_count=1 unit_type="hours"

# Match a command to an existing job and delete the job.
- at: command="ls -d / > /dev/null" action="delete"

# Schedule a command to execute in 20 minutes making sure it is unique in the queue.
- at: command="ls -d / > /dev/null" action="unique" unit_count=20 unit_type="minutes"

> AUTHORIZED_KEY

Adds or removes authorized keys for particular user accounts

Options (= is mandatory):

= key
      The SSH public key, as a string

- key_options
      A string of ssh key options to be prepended to the key in
      the authorized_keys file

- manage_dir
      Whether this module should manage the directory of the
      authorized_keys file. Make sure to set `manage_dir=no' if
      you are using an alternate directory for authorized_keys set
      with `path', since you could lock yourself out of SSH
      access. See the example below. (Choices: yes, no)

- path
      Alternate path to the authorized_keys file

- state
      Whether the given key (with the given key_options) should or
      should not be in the file (Choices: present, absent)

= user
      The username on the remote host whose authorized_keys file
      will be modified

# Example using key data from a local file on the management machine
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"

# Using alternate directory locations:
- authorized_key: user=charlie
                  key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
                  path='/etc/ssh/authorized_keys/charlie'
                  manage_dir=no

# Using with_file
- name: Set up authorized_keys for the deploy user
  authorized_key: user=deploy
                  key="{{ item }}"
  with_file:
    - public_keys/doe-jane
    - public_keys/doe-john

# Using key_options:
- authorized_key: user=charlie
                  key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
                  key_options='no-port-forwarding,host="10.0.1.1"'
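Removal works the same way via `state=absent'; this sketch reuses the user and key file from the examples above:

```yaml
# Ensure a previously authorized key is removed from charlie's authorized_keys
- authorized_key: user=charlie
                  key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
                  state=absent
```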

> BIGIP_MONITOR_HTTP

Manages F5 BIG-IP LTM monitors via iControl SOAP API

Options (= is mandatory):

- interval
      The interval specifying how frequently the monitor instance
      of this template will run. By default, this interval is used
      for up and down states. The default API setting is 5.

- ip
      IP address part of the ipport definition. The default API
      setting is "0.0.0.0".

= name
      Monitor name

- parent
      The parent template of this monitor template

- parent_partition
      Partition for the parent monitor

- partition
      Partition for the monitor

= password
      BIG-IP password

- port
      Port address part of the ipport definition. The default API
      setting is 0.

= receive
      The receive string for the monitor call

= receive_disable
      The receive disable string for the monitor call

= send
      The send string for the monitor call

= server
      BIG-IP host

- state
      Monitor state (Choices: present, absent)

- time_until_up
      Specifies the amount of time in seconds after the first
      successful response before a node will be marked up. A value
      of 0 will cause a node to be marked up immediately after a
      valid response is received from the node. The default API
      setting is 0.

- timeout
      The number of seconds in which the node or service must
      respond to the monitor request. If the target responds
      within the set time period, it is considered up. If the
      target does not respond within the set time period, it is
      considered down. You can change this number to any number
      you want, however, it should be 3 times the interval number
      of seconds plus 1 second. The default API setting is 16.

= user
      BIG-IP username

Notes:    Requires BIG-IP software version >= 11. F5-developed module
      'bigsuds' required (see http://devcentral.f5.com). Best run
      as a local_action in your playbook. Monitor API documentation:
      https://devcentral.f5.com/wiki/iControl.LocalLB__Monitor.ashx

Requirements:    bigsuds

- name: BIGIP F5 | Create HTTP Monitor
  local_action:
    module:             bigip_monitor_http
    state:              present
    server:             "{{ f5server }}"
    user:               "{{ f5user }}"
    password:           "{{ f5password }}"
    name:               "{{ item.monitorname }}"
    send:               "{{ item.send }}"
    receive:            "{{ item.receive }}"
  with_items: f5monitors
- name: BIGIP F5 | Remove HTTP Monitor
  local_action:
    module:             bigip_monitor_http
    state:              absent
    server:             "{{ f5server }}"
    user:               "{{ f5user }}"
    password:           "{{ f5password }}"
    name:               "{{ monitorname }}"

> BIGIP_MONITOR_TCP

Manages F5 BIG-IP LTM tcp monitors via iControl SOAP API

Options (= is mandatory):

- interval
      The interval specifying how frequently the monitor instance
      of this template will run. By default, this interval is used
      for up and down states. The default API setting is 5.

- ip
      IP address part of the ipport definition. The default API
      setting is "0.0.0.0".

= name
      Monitor name

- parent
      The parent template of this monitor template (Choices: tcp,
      tcp_echo, tcp_half_open)

- parent_partition
      Partition for the parent monitor

- partition
      Partition for the monitor

= password
      BIG-IP password

- port
      Port address part of the ipport definition. The default API
      setting is 0.

= receive
      The receive string for the monitor call

= send
      The send string for the monitor call

= server
      BIG-IP host

- state
      Monitor state (Choices: present, absent)

- time_until_up
      Specifies the amount of time in seconds after the first
      successful response before a node will be marked up. A value
      of 0 will cause a node to be marked up immediately after a
      valid response is received from the node. The default API
      setting is 0.

- timeout
      The number of seconds in which the node or service must
      respond to the monitor request. If the target responds
      within the set time period, it is considered up. If the
      target does not respond within the set time period, it is
      considered down. You can change this number to any number
      you want, however, it should be 3 times the interval number
      of seconds plus 1 second. The default API setting is 16.

- type
      The template type of this monitor template (Choices:
      TTYPE_TCP, TTYPE_TCP_ECHO, TTYPE_TCP_HALF_OPEN)

= user
      BIG-IP username

Notes:    Requires BIG-IP software version >= 11. F5-developed module
      'bigsuds' required (see http://devcentral.f5.com). Best run
      as a local_action in your playbook. Monitor API documentation:
      https://devcentral.f5.com/wiki/iControl.LocalLB__Monitor.ashx

Requirements:    bigsuds


- name: BIGIP F5 | Create TCP Monitor
  local_action:
    module:             bigip_monitor_tcp
    state:              present
    server:             "{{ f5server }}"
    user:               "{{ f5user }}"
    password:           "{{ f5password }}"
    name:               "{{ item.monitorname }}"
    type:               tcp
    send:               "{{ item.send }}"
    receive:            "{{ item.receive }}"
  with_items: f5monitors_tcp
- name: BIGIP F5 | Create TCP half open Monitor
  local_action:
    module:             bigip_monitor_tcp
    state:              present
    server:             "{{ f5server }}"
    user:               "{{ f5user }}"
    password:           "{{ f5password }}"
    name:               "{{ item.monitorname }}"
    type:               tcp
    send:               "{{ item.send }}"
    receive:            "{{ item.receive }}"
  with_items: f5monitors_halftcp
- name: BIGIP F5 | Remove TCP Monitor
  local_action:
    module:             bigip_monitor_tcp
    state:              absent
    server:             "{{ f5server }}"
    user:               "{{ f5user }}"
    password:           "{{ f5password }}"
    name:               "{{ monitorname }}"
  with_flattened:
  - f5monitors_tcp
  - f5monitors_halftcp

> BIGIP_NODE

Manages F5 BIG-IP LTM nodes via iControl SOAP API

Options (= is mandatory):

- description
      Node description. (Choices: )

= host
      Node IP. Required when state=present and node does not
      exist. Error when state=absent. (Choices: )

- name
      Node name (Choices: )

- partition
      Partition (Choices: )

= password
      BIG-IP password (Choices: )

= server
      BIG-IP host (Choices: )

= state
      Pool member state (Choices: present, absent)

= user
      BIG-IP username (Choices: )

Notes:    Requires BIG-IP software version >= 11. F5-developed module
      'bigsuds' required (see http://devcentral.f5.com). Best run
      as a local_action in your playbook.

Requirements:    bigsuds


## playbook task examples:

---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
  - name: Add node
    local_action: >
      bigip_node
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=present
      partition=matthite
      host="{{ ansible_default_ipv4["address"] }}"
      name="{{ ansible_default_ipv4["address"] }}"

# Note that the BIG-IP automatically names the node using the
# IP address specified in previous play's host parameter.
# Future plays referencing this node no longer use the host
# parameter but instead use the name parameter.
# Alternatively, you could have specified a name with the
# name parameter when state=present.

  - name: Modify node description
    local_action: >
      bigip_node
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=present
      partition=matthite
      name="{{ ansible_default_ipv4["address"] }}"
      description="Our best server yet"

  - name: Delete node
    local_action: >
      bigip_node
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=absent
      partition=matthite
      name="{{ ansible_default_ipv4["address"] }}"

> BIGIP_POOL

Manages F5 BIG-IP LTM pools via iControl SOAP API

Options (= is mandatory):

- host
      Pool member IP (Choices: )

- lb_method
      Load balancing method (Choices: round_robin, ratio_member,
      least_connection_member, observed_member, predictive_member,
      ratio_node_address, least_connection_node_address,
      fastest_node_address, observed_node_address,
      predictive_node_address, dynamic_ratio,
      fastest_app_response, least_sessions, dynamic_ratio_member,
      l3_addr, unknown, weighted_least_connection_member,
      weighted_least_connection_node_address, ratio_session,
      ratio_least_connection_member,
      ratio_least_connection_node_address)

- monitor_type
      Monitor rule type when monitors > 1 (Choices: and_list,
      m_of_n)

- monitors
      Monitor template name list. Always use the full path to the
      monitor. (Choices: )

= name
      Pool name (Choices: )

- partition
      Partition of pool/pool member (Choices: )

= password
      BIG-IP password (Choices: )

- port
      Pool member port (Choices: )

- quorum
      Monitor quorum value when monitor_type is m_of_n (Choices: )

= server
      BIG-IP host (Choices: )

- service_down_action
      Sets the action to take when node goes down in pool
      (Choices: none, reset, drop, reselect)

- slow_ramp_time
      Sets the ramp-up time (in seconds) to gradually ramp up the
      load on newly added or freshly detected up pool members
      (Choices: )

- state
      Pool/pool member state (Choices: present, absent)

= user
      BIG-IP username (Choices: )

Notes:    Requires BIG-IP software version >= 11. F5-developed module
      'bigsuds' required (see http://devcentral.f5.com). Best run
      as a local_action in your playbook.

Requirements:    bigsuds


## playbook task examples:

---
# file bigip-test.yml
# ...
- hosts: localhost
  tasks:
  - name: Create pool
    local_action: >
      bigip_pool
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=present
      name=matthite-pool
      partition=matthite
      lb_method=least_connection_member
      slow_ramp_time=120

  - name: Modify load balancer method
    local_action: >
      bigip_pool
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=present
      name=matthite-pool
      partition=matthite
      lb_method=round_robin

- hosts: bigip-test
  tasks:
  - name: Add pool member
    local_action: >
      bigip_pool
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=present
      name=matthite-pool
      partition=matthite
      host="{{ ansible_default_ipv4["address"] }}"
      port=80

  - name: Remove pool member from pool
    local_action: >
      bigip_pool
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=absent
      name=matthite-pool
      partition=matthite
      host="{{ ansible_default_ipv4["address"] }}"
      port=80

- hosts: localhost
  tasks:
  - name: Delete pool
    local_action: >
      bigip_pool
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=absent
      name=matthite-pool
      partition=matthite

> BIGIP_POOL_MEMBER

Manages F5 BIG-IP LTM pool members via iControl SOAP API

Options (= is mandatory):

- connection_limit
      Pool member connection limit. Setting this to 0 disables the
      limit. (Choices: )

- description
      Pool member description (Choices: )

= host
      Pool member IP (Choices: )

- partition
      Partition (Choices: )

= password
      BIG-IP password (Choices: )

= pool
      Pool name. This pool must exist. (Choices: )

= port
      Pool member port (Choices: )

- rate_limit
      Pool member rate limit (connections-per-second). Setting
      this to 0 disables the limit. (Choices: )

- ratio
      Pool member ratio weight. Valid values range from 1 through
      100. New pool members -- unless overridden with this value --
      default to 1. (Choices: )

= server
      BIG-IP host (Choices: )

= state
      Pool member state (Choices: present, absent)

= user
      BIG-IP username (Choices: )

Notes:    Requires BIG-IP software version >= 11. F5-developed module
      'bigsuds' required (see http://devcentral.f5.com). Best run
      as a local_action in your playbook. Supersedes bigip_pool
      for managing pool members.

Requirements:    bigsuds


## playbook task examples:

---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
  - name: Add pool member
    local_action: >
      bigip_pool_member
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=present
      pool=matthite-pool
      partition=matthite
      host="{{ ansible_default_ipv4["address"] }}"
      port=80
      description="web server"
      connection_limit=100
      rate_limit=50
      ratio=2

  - name: Modify pool member ratio and description
    local_action: >
      bigip_pool_member
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=present
      pool=matthite-pool
      partition=matthite
      host="{{ ansible_default_ipv4["address"] }}"
      port=80
      ratio=1
      description="nginx server"

  - name: Remove pool member from pool
    local_action: >
      bigip_pool_member
      server=lb.mydomain.com
      user=admin
      password=mysecret
      state=absent
      pool=matthite-pool
      partition=matthite
      host="{{ ansible_default_ipv4["address"] }}"
      port=80

> BOUNDARY_METER

This module manages boundary meters

Options (= is mandatory):

= apiid
      Organization's Boundary API ID

= apikey
      Organization's Boundary API KEY

= name
      meter name

- state
      Whether to create or remove the client from boundary
      (Choices: present, absent)

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Notes:    This module does not yet support boundary tags.

Requirements:    Boundary API access, bprobe is required to send data, but not to
      register a meter, Python urllib2

- name: Create meter
  boundary_meter: apiid=AAAAAA apikey=BBBBBB state=present name="{{ inventory_hostname }}"

- name: Delete meter
  boundary_meter: apiid=AAAAAA apikey=BBBBBB state=absent name="{{ inventory_hostname }}"

> BZR

Manage `bzr' branches to deploy files or software.

Options (= is mandatory):

= dest
      Absolute path of where the branch should be cloned to.

- executable
      Path to bzr executable to use. If not supplied, the normal
      mechanism for resolving binary paths will be used.

- force
      If `yes', any modified files in the working tree will be
      discarded. (Choices: yes, no)

= name
      SSH or HTTP protocol address of the parent branch.

- version
      What version of the branch to clone.  This can be the bzr
      revno or revid.

# Example bzr checkout from Ansible Playbooks
- bzr: name=bzr+ssh://foosball.example.org/path/to/branch dest=/srv/checkout version=22
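A second, illustrative checkout showing the `force' option, which discards modified files in the working tree before updating (paths reused from the example above):

```yaml
# Re-deploy the branch, throwing away any local modifications in the checkout
- bzr: name=bzr+ssh://foosball.example.org/path/to/branch dest=/srv/checkout force=yes
```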

> CAMPFIRE

Send a message to Campfire. Messages with newlines will result in
a "Paste" message being sent.

Options (= is mandatory):

= msg
      The message body.

- notify
      Send a notification sound before the message. (Choices: 56k,
      bueller, crickets, dangerzone, deeper, drama, greatjob,
      horn, horror, inconceivable, live, loggins, noooo, nyan,
      ohmy, ohyeah, pushit, rimshot, sax, secret, tada, tmyk,
      trombone, vuvuzela, yeah, yodel)

= room
      Room number to which the message should be sent.

= subscription
      The subscription name to use.

= token
      API token.

Requirements:    urllib2, cgi

- campfire: subscription=foo token=12345 room=123 msg="Task completed."

- campfire: subscription=foo token=12345 room=123 notify=loggins
        msg="Task completed ... with feeling."

> CLOUDFORMATION

Launches an AWS CloudFormation stack and waits for it to complete.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- disable_rollback
      If a stack fails to form, rollback will remove the stack
      (Choices: yes, no)

- region
      The AWS region to use. If not specified then the value of
      the EC2_REGION environment variable, if any, is used.

= stack_name
      name of the cloudformation stack

= state
      If state is "present", the stack will be created. If state
      is "present" and the stack exists and the template has
      changed, it will be updated. If state is "absent", the
      stack will be removed.

- tags
      Dictionary of tags to associate with the stack and its
      resources during stack creation. Cannot be updated later.
      Requires at least Boto version 2.6.0.

= template
      the path of the cloudformation template

- template_parameters
      a list of hashes of all the template variables for the stack

Requirements:    boto

# Basic task example
tasks:
- name: launch ansible cloudformation example
  action: cloudformation >
    stack_name="ansible-cloudformation" state=present
    region=us-east-1 disable_rollback=yes
    template=files/cloudformation-example.json
  args:
    template_parameters:
      KeyName: jmartin
      DiskType: ephemeral
      InstanceType: m1.small
      ClusterSize: 3
    tags:
      Stack: ansible-cloudformation
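Tearing the stack down again is a matter of `state=absent'; a sketch reusing the stack name above (the template is still supplied since this module marks it mandatory):

```yaml
# Remove the stack created in the previous task
tasks:
- name: remove ansible cloudformation example
  action: cloudformation >
    stack_name="ansible-cloudformation" state=absent
    region=us-east-1 template=files/cloudformation-example.json
```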

> COMMAND

The [command] module takes the command name followed by a list of
space-delimited arguments. The given command will be executed on
all selected nodes. It will not be processed through the shell, so
variables like `$HOME' and operations like `"<"', `">"', `"|"',
and `"&"' will not work (use the [shell] module if you need these
features).

Options (= is mandatory):

- chdir
      cd into this directory before running the command

- creates
      a filename; when it already exists, this step will *not* be
      run.

- executable
      change the shell used to execute the command. Should be an
      absolute path to the executable.

= free_form
      the command module takes a free form command to run

- removes
      a filename; when it does not exist, this step will *not* be
      run.

Notes:    If you want to run a command through the shell (say you are using
      `<', `>', `|', etc), you actually want the [shell] module
      instead. The [command] module is much more secure as it's
      not affected by the user's environment. `creates',
      `removes', and `chdir' can be specified after the command.
      For instance, if you only want to run a command if a certain
      file does not exist, use this.

# Example from Ansible Playbooks
- command: /sbin/shutdown -t now

# Run the command if the specified file does not exist
- command: /usr/bin/make_database.sh arg1 arg2 creates=/path/to/database
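By the same token, `removes' and `chdir' can gate and relocate a run; the script and paths below are illustrative:

```yaml
# Run the cleanup script from /opt/app, but only while the lock file still exists
- command: ./cleanup.sh chdir=/opt/app removes=/opt/app/cleanup.lock
```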

> COPY

The [copy] module copies a file on the local box to remote
locations.

Options (= is mandatory):

- backup
      Create a backup file including the timestamp information so
      you can get the original file back if you somehow clobbered
      it incorrectly. (Choices: yes, no)

- content
      When used instead of 'src', sets the contents of a file
      directly to the specified value.

= dest
      Remote absolute path where the file should be copied to. If
      src is a directory, this must be a directory too.

- directory_mode
      When doing a recursive copy, set the mode for the
      directories. If this is not set, the system defaults are
      used.

- force
      the default is `yes', which will replace the remote file
      when contents are different than the source.  If `no', the
      file will only be transferred if the destination does not
      exist. (Choices: yes, no)

- others
      all arguments accepted by the [file] module also work here

- src
      Local path to a file to copy to the remote server; can be
      absolute or relative. If path is a directory, it is copied
      recursively. In this case, if path ends with "/", only
      inside contents of that directory are copied to destination.
      Otherwise, if it does not end with "/", the directory itself
      with all contents is copied. This behavior is similar to
      Rsync.

- validate
      The validation command to run before copying into place.
      The path to the file to validate is passed in via '%s' which
      must be present as in the visudo example below.

Notes:    The [copy] module's recursive copy facility does not scale to
      lots (>hundreds) of files. For an alternative, see the
      synchronize module, which is a wrapper around rsync.

# Example from Ansible Playbooks
- copy: src=/srv/myfiles/foo.conf dest=/etc/foo.conf owner=foo group=foo mode=0644

# Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
- copy: src=/mine/ntp.conf dest=/etc/ntp.conf owner=root group=root mode=644 backup=yes

# Copy a new "sudoers" file into place, after passing validation with visudo
- copy: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
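The `content' option can stand in for `src' when the payload is short; the file path and text here are illustrative:

```yaml
# Write an inline string directly to a remote file instead of copying a local one
- copy: content='ENVIRONMENT=production' dest=/etc/app/environment mode=0644
```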

> CRON

Use this module to manage crontab entries. This module allows you
to create, update, or delete named crontab entries. The module
includes one line with the description of the crontab entry,
`"#Ansible: <name>"', corresponding to the "name" passed to the
module, which is used by future ansible/module calls to find/check
the state.

Options (= is mandatory):

- backup
      If set, create a backup of the crontab before it is
      modified. The location of the backup is returned in the
      `backup' variable by this module.

- cron_file
      If specified, uses this file in cron.d instead of an
      individual user's crontab.

- day
      Day of the month the job should run ( 1-31, *, */2, etc )

- hour
      Hour when the job should run ( 0-23, *, */2, etc )

- job
      The command to execute. Required if state=present.

- minute
      Minute when the job should run ( 0-59, *, */2, etc )

- month
      Month of the year the job should run ( 1-12, *, */2, etc )

- name
      Description of a crontab entry.

- reboot
      If the job should be run at reboot. This option is
      deprecated. Users should use special_time. (Choices: yes,
      no)

- special_time
      Special time specification nickname. (Choices: reboot,
      yearly, annually, monthly, weekly, daily, hourly)

- state
      Whether to ensure the job is present or absent. (Choices:
      present, absent)

- user
      The specific user whose crontab should be modified.

- weekday
      Day of the week that the job should run ( 0-7 for Sunday -
      Saturday, *, etc )

Requirements:    cron

# Ensure a job that runs at 2 and 5 exists.
# Creates an entry like "* 5,2 * * * ls -alh > /dev/null"
- cron: name="check dirs" hour="5,2" job="ls -alh > /dev/null"

# Ensure an old job is no longer present. Removes any job that is prefixed
# by "#Ansible: an old job" from the crontab
- cron: name="an old job" state=absent

# Creates an entry like "@reboot /some/job.sh"
- cron: name="a job for reboot" special_time=reboot job="/some/job.sh"

# Creates a cron file under /etc/cron.d
- cron: name="yum autoupdate" weekday="2" minute=0 hour=12
        user="root" job="YUMINTERACTIVE=0 /usr/sbin/yum-autoupdate"
        cron_file=ansible_yum-autoupdate

# Removes a cron file from under /etc/cron.d
- cron: cron_file=ansible_yum-autoupdate state=absent

> DATADOG_EVENT

Posts events to the DataDog (www.datadoghq.com) service. Uses the
http://docs.datadoghq.com/api/#events API.

Options (= is mandatory):

- aggregation_key
      An arbitrary string to use for aggregation.

- alert_type
      Type of alert. (Choices: error, warning, info, success)

= api_key
      Your DataDog API key.

- date_happened
      POSIX timestamp of the event. Default value is now.

- priority
      The priority of the event. (Choices: normal, low)

- tags
      Comma separated list of tags to apply to the event.

= text
      The body of the event.

= title
      The event title.

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Requirements:    urllib2

# Post an event with low priority
datadog_event: title="Testing from ansible" text="Test!" priority="low"
               api_key="6873258723457823548234234234"
# Post an event with several tags
datadog_event: title="Testing from ansible" text="Test!"
               api_key="6873258723457823548234234234"
               tags=aa,bb,cc
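An error-level event can be posted the same way via the `alert_type' option documented above; the title, text, and API key below are placeholders:

```yaml
# Post an error event so it stands out in the DataDog event stream
datadog_event: title="Deploy failed" text="Rollback initiated" alert_type="error"
               api_key="6873258723457823548234234234"
```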

> DEBUG

This module prints statements during execution and can be useful
for debugging variables or expressions without necessarily halting
the playbook. Useful for debugging together with the 'when:'
directive.

Options (= is mandatory):

- msg
      The customized message that is printed. If omitted, prints a
      generic message.

- var
      A variable name to debug.  Mutually exclusive with the 'msg'
      option.

# Example that prints the system UUID and gateway for each host
- debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

- debug: msg="System {{ inventory_hostname }} has gateway {{ ansible_default_ipv4.gateway }}"
  when: ansible_default_ipv4.gateway is defined

- shell: /usr/bin/uptime
  register: result
- debug: var=result

> DIGITAL_OCEAN

Create/delete a droplet in DigitalOcean and optionally wait for
it to be 'running', or deploy an SSH key.

Options (= is mandatory):

- api_key
      Digital Ocean api key.

- client_id
      Digital Ocean manager id.

- command
      Which target you want to operate on. (Choices: droplet, ssh)

- id
      Numeric, the droplet id you want to operate on.

- image_id
      Numeric, this is the id of the image you would like the
      droplet created with.

- name
      String, this is the name of the droplet - must be formatted
      by hostname rules - or the name of an SSH key.

- private_networking
      Bool, add an additional, private network interface to
      droplet for inter-droplet communication (Choices: yes, no)

- region_id
      Numeric, this is the id of the region in which you would
      like your server created.

- size_id
      Numeric, this is the id of the size you would like the
      droplet created at.

- ssh_key_ids
      Optional, comma separated list of ssh_key_ids that you would
      like to be added to the server

- ssh_pub_key
      The public SSH key you want to add to your account.

- state
      Indicate desired state of the target. (Choices: present,
      active, absent, deleted)

- unique_name
      Bool, require unique hostnames.  By default, digital ocean
      allows multiple hosts with the same name.  Setting this to
      "yes" allows only one host per name.  Useful for
      idempotence. (Choices: yes, no)

- virtio
      Bool, turn on virtio driver in droplet for improved network
      and storage I/O (Choices: yes, no)

- wait
      Wait for the droplet to be in state 'running' before
      returning.  If wait is "no" an ip_address may not be
      returned. (Choices: yes, no)

- wait_timeout
      How long before wait gives up, in seconds.

Notes:    Two environment variables can be used, DO_CLIENT_ID and
      DO_API_KEY.

Requirements:    dopy

# Ensure an SSH key is present
# If a key matches this name, will return the ssh key id and changed = False
# If no existing key matches this name, a new key is created, the ssh key id is returned and changed = True

- digital_ocean: >
      state=present
      command=ssh
      name=my_ssh_key
      ssh_pub_key='ssh-rsa AAAA...'
      client_id=XXX
      api_key=XXX

# Create a new Droplet
# Will return the droplet details including the droplet id (used for idempotence)

- digital_ocean: >
      state=present
      command=droplet
      name=mydroplet
      client_id=XXX
      api_key=XXX
      size_id=1
      region_id=2
      image_id=3
      wait_timeout=500
  register: my_droplet
- debug: msg="ID is {{ my_droplet.droplet.id }}"
- debug: msg="IP is {{ my_droplet.droplet.ip_address }}"

# Ensure a droplet is present
# If a droplet with the id already exists, will return the droplet details and changed = False
# If no droplet matches the id, a new droplet will be created and the droplet details (including the new id) are returned, changed = True.

- digital_ocean: >
      state=present
      command=droplet
      id=123
      name=mydroplet
      client_id=XXX
      api_key=XXX
      size_id=1
      region_id=2
      image_id=3
      wait_timeout=500

# Create a droplet with ssh keys
# The ssh key ids can be passed as an argument at droplet creation (see ssh_key_ids).
# Several keys can be added to ssh_key_ids as id1,id2,id3
# The keys are used to connect as root to the droplet.

- digital_ocean: >
      state=present
      ssh_key_ids=id1,id2
      name=mydroplet
      client_id=XXX
      api_key=XXX
      size_id=1
      region_id=2
      image_id=3
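
A droplet can likewise be removed by passing state=absent with its numeric id; a minimal sketch (the id and credentials are placeholders):

# Delete a droplet by id

- digital_ocean: >
      state=absent
      command=droplet
      id=123
      client_id=XXX
      api_key=XXX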

> DJANGO_MANAGE

Manages a Django application using the `manage.py' application
frontend to `django-admin'. With the `virtualenv' parameter, all
management commands will be executed by the given `virtualenv'
installation.

Options (= is mandatory):

= app_path
      The path to the root of the Django application where
      *manage.py* lives.

- apps
      A space-delimited list of apps to target. Used by the 'test'
      command.

- cache_table
      The name of the table used for database-backed caching. Used
      by the 'createcachetable' command.

= command
      The name of the Django management command to run. Allowed
      commands are cleanup, createcachetable, flush, loaddata,
      syncdb, test, validate. (Choices: cleanup, flush, loaddata,
      runfcgi, syncdb, test, validate, migrate, collectstatic)

- database
      The database to target. Used by the 'createcachetable',
      'flush', 'loaddata', and 'syncdb' commands.

- failfast
      Fail the command immediately if a test fails. Used by the
      'test' command. (Choices: yes, no)

- fixtures
      A space-delimited list of fixture file names to load in the
      database. *Required* by the 'loaddata' command.

- link
      Will create links to the files instead of copying them. This
      parameter can only be used with the 'collectstatic' command.

- merge
      Will run out-of-order or missing migrations as they are not
      rollback migrations. This parameter can only be used with
      the 'migrate' command.

- pythonpath
      A directory to add to the Python path. Typically used to
      include the settings module if it is located external to the
      application directory.

- settings
      The Python path to the application's settings module, such
      as 'myapp.settings'.

- skip
      Will skip over out-of-order missing migrations. This
      parameter can only be used with the 'migrate' command.

- virtualenv
      An optional path to a `virtualenv' installation to use while
      running the manage application.

Notes:    `virtualenv' (http://www.virtualenv.org) must be installed on the
      remote host if the virtualenv parameter is specified. This
      module will create a virtualenv if the virtualenv parameter
      is specified and one does not already exist at the given
      location. This module assumes English error messages for the
      'createcachetable' command to detect table existence,
      unfortunately. To be able to use the migrate command, you
      must have south installed and added as an app in your
      settings. To be able to use the collectstatic command, you
      must have staticfiles enabled in your settings.

Requirements:    virtualenv, django

# Run cleanup on the application installed in 'django_dir'.
- django_manage: command=cleanup app_path={{ django_dir }}

# Load the initial_data fixture into the application
- django_manage: command=loaddata app_path={{ django_dir }} fixtures={{ initial_data }}

#Run syncdb on the application
- django_manage: >
      command=syncdb
      app_path={{ django_dir }}
      settings={{ settings_app_name }}
      pythonpath={{ settings_dir }}
      virtualenv={{ virtualenv_dir }}

#Run the SmokeTest test case from the main app. Useful for testing deploys.
- django_manage: command=test app_path=django_dir apps=main.SmokeTest
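
The 'collectstatic' command, optionally with the link parameter, follows the same pattern; a hedged sketch reusing the path variables above:

# Collect static files for the application, linking instead of copying
- django_manage: command=collectstatic app_path={{ django_dir }} settings={{ settings_app_name }} link=yes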

> DNSMADEEASY

Manages DNS records via the v2 REST API of the DNS Made Easy
service.  It handles records only; there is no manipulation of
domains or monitor/account support yet. See:
http://www.dnsmadeeasy.com/services/rest-api/

Options (= is mandatory):

= account_key
      Account API Key.

= account_secret
      Account Secret Key.

= domain
      Domain to work with. Can be the domain name (e.g.
      "mydomain.com") or the numeric ID of the domain in DNS Made
      Easy (e.g. "839989") for faster resolution.

- record_name
      Record name to get/create/delete/update. If record_name is
      not specified, all records for the domain will be returned
      in "result" regardless of the state argument.

- record_ttl
      record's "Time to live".  Number of seconds the record
      remains cached in DNS servers.

- record_type
      Record type. (Choices: A, AAAA, CNAME, HTTPRED, MX, NS, PTR,
      SRV, TXT)

- record_value
      Record value. HTTPRED: <redirection URL>, MX: <priority>
      <target name>, NS: <name server>, PTR: <target name>, SRV:
      <priority> <weight> <port> <target name>, TXT: <text
      value>. If record_value is not specified, no changes will be
      made and the record will be returned in 'result' (in other
      words, this module can be used to fetch a record's current
      id, type, and ttl)

= state
      whether the record should exist or not (Choices: present,
      absent)

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Notes:    The DNS Made Easy service requires that machines interacting with
      the API have the proper time and timezone set. Be sure you
      are within a few seconds of actual time by using NTP. This
      module returns record(s) in the "result" element when
      'state' is set to 'present'. This value can be registered
      and used in your playbooks.

Requirements:    urllib, urllib2, hashlib, hmac

# fetch my.com domain records
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present
  register: response

# create / ensure the presence of a record
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test" record_type="A" record_value="127.0.0.1"

# update the previously created record
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test" record_value="192.168.0.1"

# fetch a specific record
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test"
  register: response

# delete a record / ensure it is absent
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=absent record_name="test"
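
Compound record values such as MX use the "<priority> <target name>" format described above; a hedged sketch with placeholder hostnames:

# create an MX record with priority 10
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="mail" record_type="MX" record_value="10 mail.my.com."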

> DOCKER

Manage the life cycle of docker containers.

Options (= is mandatory):

- command
      Set command to run in a container on startup

- count
      Set number of containers to run

- detach
      Enable detached mode on start up, leaves container running
      in background

- dns
      Set custom DNS servers for the container

- docker_url
      URL of docker host to issue commands to

- env
      Set environment variables (e.g.
      env="PASSWORD=sEcRe7,WORKERS=4")

- expose
      Set container ports to expose for port mappings or links.
      (If the port is already exposed using EXPOSE in a
      Dockerfile, you don't need to expose it again.)

- hostname
      Set container hostname

= image
      Set container image to use

- links
      Link container(s) to other container(s) (e.g.
      links=redis,postgresql:db)

- lxc_conf
      LXC config parameters,  e.g. lxc.aa_profile:unconfined

- memory_limit
      Set RAM allocated to container

- name
      Set the name of the container (cannot use with count)

- password
      Set remote API password

- ports
      Set private to public port mapping specification using
      docker CLI-style syntax [([<host_interface>:[host_port]])|(<
      host_port>):]<container_port>[/udp]

- privileged
      Set whether the container should run in privileged mode

- publish_all_ports
      Publish all exposed ports to the host interfaces

- state
      Set the state of the container (Choices: present, stopped,
      absent, killed, restarted)

- username
      Set remote API username

- volumes
      Set volume(s) to mount on the container

- volumes_from
      Set shared volume(s) from another container

Requirements:    docker-py >= 0.3.0

Start one docker container running tomcat in each host of the web group and bind tomcat's listening port to 8080
on the host:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat servers
    docker: image=centos command="service tomcat6 start" ports=8080

The tomcat server's port is NAT'ed to a dynamic port on the host, but you can determine which port the server was
mapped to using docker_containers:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat servers
    docker: image=centos command="service tomcat6 start" ports=8080 count=5
  - name: Display IP address and port mappings for containers
    debug: msg={{inventory_hostname}}:{{item['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}}
    with_items: docker_containers

Just as in the previous example, but iterates over the list of docker containers with a sequence:

- hosts: web
  sudo: yes
  vars:
    start_containers_count: 5
  tasks:
  - name: run tomcat servers
    docker: image=centos command="service tomcat6 start" ports=8080 count={{start_containers_count}}
  - name: Display IP address and port mappings for containers
    debug: msg="{{inventory_hostname}}:{{docker_containers[{{item}}]['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}}"
    with_sequence: start=0 end={{start_containers_count - 1}}

Stop and remove all of the running tomcat containers, and list the exit codes from the stopped containers:

- hosts: web
  sudo: yes
  tasks:
  - name: stop tomcat servers
    docker: image=centos command="service tomcat6 start" state=absent
  - name: Display return codes from stopped containers
    debug: msg="Returned {{inventory_hostname}}:{{item}}"
    with_items: docker_containers

Create a named container:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat server
    docker: image=centos name=tomcat command="service tomcat6 start" ports=8080

Create multiple named containers:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat servers
    docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
    with_items:
      - crookshank
      - snowbell
      - heathcliff
      - felix
      - sylvester

Create containers named in a sequence:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat servers
    docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
    with_sequence: start=1 end=5 format=tomcat_%d.example.com

Create two linked containers:

- hosts: web
  sudo: yes
  tasks:
  - name: ensure redis container is running
    docker: image=crosbymichael/redis name=redis

  - name: ensure redis_ambassador container is running
    docker: image=svendowideit/ambassador ports=6379:6379 links=redis:redis name=redis_ambassador_ansible

Create containers with options specified as key-value pairs and lists:

- hosts: web
  sudo: yes
  tasks:
  - docker:
        image: namespace/image_name
        links:
          - postgresql:db
          - redis:redis


Create containers with options specified as strings and lists as comma-separated strings:

- hosts: web
  sudo: yes
  tasks:
  - docker: image=namespace/image_name links=postgresql:db,redis:redis
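
Options such as env, volumes, and memory_limit, described above but not shown in the examples, can be combined in one task; a hedged sketch (the image name and paths are placeholders):

- hosts: web
  sudo: yes
  tasks:
  - name: run app container with environment variables and a host volume
    docker: image=namespace/app env="WORKERS=4,DEBUG=0" volumes=/host/data:/data memory_limit=256MB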

> DOCKER_IMAGE

Create, check and remove docker images

Options (= is mandatory):

- docker_url
      URL of docker host to issue commands to

= name
      Image name to work with

- nocache
      Do not use cache with building

- path
      Path to directory with Dockerfile

- state
      Set the state of the image (Choices: present, absent, build)

- tag
      Image tag to work with

- timeout
      Set image operation timeout

Requirements:    docker-py

Build the docker image if required. The path should contain a Dockerfile to build the image from:

- hosts: web
  sudo: yes
  tasks:
  - name: check or build image
    docker_image: path="/path/to/build/dir" name="my/app" state=present

Build new version of image:

- hosts: web
  sudo: yes
  tasks:
  - name: check or build image
    docker_image: path="/path/to/build/dir" name="my/app" state=build

Remove image from local docker storage:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat servers
    docker_image: name="my/app" state=absent
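
The tag option narrows operations to a single image tag; a minimal sketch with a placeholder tag:

- hosts: web
  sudo: yes
  tasks:
  - name: remove only the v1 tag of the image
    docker_image: name="my/app" tag="v1" state=absent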

> EASY_INSTALL

Installs Python libraries, optionally in a `virtualenv'

Options (= is mandatory):

- executable
      The explicit executable or a pathname to the executable to
      be used to run easy_install for a specific version of Python
      installed in the system. For example `easy_install-3.3', if
      there are both Python 2.7 and 3.3 installations in the
      system and you want to run easy_install for the Python 3.3
      installation.

= name
      A Python library name

- virtualenv
      an optional `virtualenv' directory path to install into. If
      the `virtualenv' does not exist, it is created automatically

- virtualenv_command
      The command to create the virtual environment with. For
      example `pyvenv', `virtualenv', `virtualenv2'.

- virtualenv_site_packages
      Whether the virtual environment will inherit packages from
      the global site-packages directory.  Note that if this
      setting is changed on an already existing virtual
      environment it will not have any effect, the environment
      must be deleted and newly created. (Choices: yes, no)

Notes:    Please note that the [easy_install] module can only install Python
      libraries. Thus this module is not able to remove libraries.
      It is generally recommended to use the [pip] module, which
      you can first install using [easy_install]. Also note that
      `virtualenv' must be installed on the remote host if the
      `virtualenv' parameter is specified.

Requirements:    virtualenv

# Examples from Ansible Playbooks
- easy_install: name=pip

# Install Bottle into the specified virtualenv.
- easy_install: name=bottle virtualenv=/webapps/myapp/venv
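
The executable parameter selects a specific Python installation's easy_install; a hedged sketch assuming an easy_install-3.3 binary exists on the remote host:

# Install virtualenv using the Python 3.3 installation's easy_install
- easy_install: name=virtualenv executable=easy_install-3.3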

> EC2

Creates or terminates ec2 instances. When creating, optionally
waits for the instances to be 'running'. This module has a
dependency on python-boto >= 2.5.

Options (= is mandatory):

- assign_public_ip
      when provisioning within vpc, assign a public IP address.
      Boto library must be 2.13.0+

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- count
      number of instances to launch

- count_tag
      Used with 'exact_count' to determine how many nodes based on
      a specific tag criteria should be running.  This can be
      expressed in multiple ways and is shown in the EXAMPLES
      section.  For instance, one can request 25 servers that are
      tagged with "class=webserver".

- ec2_url
      Url to use to connect to EC2 or your Eucalyptus cloud (by
      default the module will use EC2 endpoints).  Must be
      specified if region is not used. If not set then the value
      of the EC2_URL environment variable, if any, is used

- exact_count
      An integer value which indicates how many instances that
      match the 'count_tag' parameter should be running. Instances
      are either created or terminated based on this value.

- group
      security group (or list of groups) to use with the instance

- group_id
      security group id (or list of ids) to use with the instance

- id
      identifier for this instance or set of instances, so that
      the module will be idempotent with respect to EC2 instances.
      This identifier is valid for at least 24 hours after the
      termination of the instance, and should not be reused for
      another call later on. For details, see the description of
      client token at http://docs.aws.amazon.com/AWSEC2/latest/Use
      rGuide/Run_Instance_Idempotency.html.

= image
      `emi' (or `ami') to use for the instance

- instance_ids
      list of instance ids, currently only used when
      state='absent'

- instance_profile_name
      Name of the IAM instance profile to use. Boto library must
      be 2.5.0+

- instance_tags
      a hash/dictionary of tags to add to the new instance;
      '{"key":"value"}' and '{"key":"value","key":"value"}'

= instance_type
      instance type to use for the instance

- kernel
      kernel `eki' to use for the instance

- key_name
      key pair to use on the instance

- monitoring
      enable detailed monitoring (CloudWatch) for instance

- placement_group
      placement group for the instance when using EC2 Clustered
      Compute

- private_ip
      the private ip address to assign the instance (from the vpc
      subnet)

- ramdisk
      ramdisk `eri' to use for the instance

- region
      The AWS region to use.  Must be specified if ec2_url is not
      used. If not specified then the value of the EC2_REGION
      environment variable, if any, is used.

- state
      create or terminate instances

- user_data
      opaque blob of data which is made available to the ec2
      instance

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

- volumes
      a list of volume dicts, each containing device name and
      optionally ephemeral id or snapshot id. Size and type (and
      number of iops for io device type) must be specified for a
      new volume or a root volume, and may be passed for a
      snapshot volume. For any volume, a volume size less than 1
      will be interpreted as a request not to create the volume.

- vpc_subnet_id
      the subnet ID in which to launch the instance (VPC)

- wait
      wait for the instance to be in state 'running' before
      returning (Choices: yes, no)

- wait_timeout
      how long before wait gives up, in seconds

- zone
      AWS availability zone in which to launch the instance

Requirements:    boto

# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Basic provisioning example
- local_action:
    module: ec2
    key_name: mykey
    instance_type: c1.medium
    image: emi-40603AD1
    wait: yes
    group: webserver
    count: 3

# Advanced example with tagging and CloudWatch
- local_action:
    module: ec2
    key_name: mykey
    group: databases
    instance_type: m1.large
    image: ami-6e649707
    wait: yes
    wait_timeout: 500
    count: 5
    instance_tags:
       db: postgres
    monitoring: yes

# Single instance with additional IOPS volume from snapshot
- local_action:
    module: ec2
    key_name: mykey
    group: webserver
    instance_type: m1.large
    image: ami-6e649707
    wait: yes
    wait_timeout: 500
    volumes:
    - device_name: /dev/sdb
      snapshot: snap-abcdef12
      device_type: io1
      iops: 1000
      volume_size: 100
    monitoring: yes

# Multiple groups example
- local_action:
    module: ec2
    key_name: mykey
    group: ['databases', 'internal-services', 'sshable', 'and-so-forth']
    instance_type: m1.large
    image: ami-6e649707
    wait: yes
    wait_timeout: 500
    count: 5
    instance_tags:
        db: postgres
    monitoring: yes

# Multiple instances with additional volume from snapshot
- local_action:
    module: ec2
    key_name: mykey
    group: webserver
    instance_type: m1.large
    image: ami-6e649707
    wait: yes
    wait_timeout: 500
    count: 5
    volumes:
    - device_name: /dev/sdb
      snapshot: snap-abcdef12
      volume_size: 10
    monitoring: yes

# VPC example
- local_action:
    module: ec2
    key_name: mykey
    group_id: sg-1dc53f72
    instance_type: m1.small
    image: ami-6e649707
    wait: yes
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes

# Launch instances, run some tasks
# and then terminate them


- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    key_name: my_keypair
    instance_type: m1.small
    security_group: my_securitygroup
    image: my_ami_id
    region: us-east-1
  tasks:
    - name: Launch instance
      local_action: ec2 key_name={{ key_name }} group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }}
      register: ec2
    - name: Add new instance to host group
      local_action: add_host hostname={{ item.public_ip }} groupname=launched
      with_items: ec2.instances
    - name: Wait for SSH to come up
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- name: Configure instance(s)
  hosts: launched
  sudo: True
  gather_facts: True
  roles:
    - my_awesome_role
    - my_awesome_test

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      local_action:
        module: ec2
        state: 'absent'
        instance_ids: '{{ ec2.instance_ids }}'

# Start a few existing instances, run some tasks
# and stop the instances

- name: Start sandbox instances
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    instance_ids:
      - 'i-xxxxxx'
      - 'i-xxxxxx'
      - 'i-xxxxxx'
    region: us-east-1
  tasks:
    - name: Start the sandbox instances
      local_action:
        module: ec2
        instance_ids: '{{ instance_ids }}'
        region: '{{ region }}'
        state: running
        wait: True
  roles:
    - do_neat_stuff
    - do_more_neat_stuff

- name: Stop sandbox instances
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    instance_ids:
      - 'i-xxxxxx'
      - 'i-xxxxxx'
      - 'i-xxxxxx'
    region: us-east-1
  tasks:
    - name: Stop the sandbox instances
      local_action:
        module: ec2
        instance_ids: '{{ instance_ids }}'
        region: '{{ region }}'
        state: stopped
        wait: True

#
# Enforce that 5 instances with a tag "foo" are running
#

- local_action:
    module: ec2
    key_name: mykey
    instance_type: c1.medium
    image: emi-40603AD1
    wait: yes
    group: webserver
    instance_tags:
        foo: bar
    exact_count: 5
    count_tag: foo

#
# Enforce that 5 instances named "database" with a "dbtype" of "postgres" are running
#

- local_action:
    module: ec2
    key_name: mykey
    instance_type: c1.medium
    image: emi-40603AD1
    wait: yes
    group: webserver
    instance_tags:
        Name: database
        dbtype: postgres
    exact_count: 5
    count_tag:
        Name: database
        dbtype: postgres

#
# count_tag complex argument examples
#

    # instances with tag foo
    count_tag:
        foo:

    # instances with tag foo=bar
    count_tag:
        foo: bar

    # instances with tags foo=bar & baz
    count_tag:
        foo: bar
        baz:

    # instances with tags foo & bar & baz=bang
    count_tag:
        - foo
        - bar
        - baz: bang

> EC2_AMI

Creates or deletes ec2 images. This module has a dependency on
python-boto >= 2.5

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- delete_snapshot
      Whether or not to delete the associated snapshot while
      deregistering the AMI.

- description
      An optional human-readable string describing the contents
      and purpose of the AMI.

- ec2_url
      Url to use to connect to EC2 or your Eucalyptus cloud (by
      default the module will use EC2 endpoints).  Must be
      specified if region is not used. If not set then the value
      of the EC2_URL environment variable, if any, is used

- image_id
      Image ID to be deregistered.

- instance_id
      instance id of the image to create

- name
      The name of the new image to create

- no_reboot
      An optional flag indicating that the bundling process should
      not attempt to shut down the instance before bundling. If
      this flag is True, the responsibility of maintaining file
      system integrity is left to the owner of the instance. The
      default choice is "no". (Choices: yes, no)

- region
      The AWS region to use.  Must be specified if ec2_url is not
      used. If not specified then the value of the EC2_REGION
      environment variable, if any, is used.

- state
      create or deregister/delete image

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

- wait
      wait for the AMI to be in state 'available' before
      returning. (Choices: yes, no)

- wait_timeout
      how long before wait gives up, in seconds

Requirements:    boto

# Basic AMI Creation
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    instance_id: i-xxxxxx
    wait: yes
    name: newtest
  register: instance

# Basic AMI Creation, without waiting
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    instance_id: i-xxxxxx
    wait: no
    name: newtest
  register: instance

# Deregister/Delete AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: "{{ instance.image_id }}"
    delete_snapshot: True
    state: absent

# Deregister AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: "{{ instance.image_id }}"
    delete_snapshot: False
    state: absent
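
The no_reboot flag images a running instance without stopping it first, leaving file system consistency to the operator; a minimal sketch with placeholder credentials:

# AMI creation without rebooting the source instance
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    instance_id: i-xxxxxx
    no_reboot: yes
    wait: yes
    name: livetest
  register: instance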

> EC2_EIP

This module associates AWS EC2 elastic IP addresses with instances

Options (= is mandatory):

- ec2_access_key
      EC2 access key. If not specified then the EC2_ACCESS_KEY
      environment variable is used.

- ec2_secret_key
      EC2 secret key. If not specified then the EC2_SECRET_KEY
      environment variable is used.

- ec2_url
      URL to use to connect to EC2-compatible cloud (by default
      the module will use EC2 endpoints)

- in_vpc
      allocate an EIP inside a VPC or not

- instance_id
      The EC2 instance id

- public_ip
      The elastic IP address to associate with the instance. If
      absent, allocate a new address.

- region
      the EC2 region to use

- state
      If present, associate the IP with the instance. If absent,
      disassociate the IP from the instance. (Choices: present,
      absent)

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

Notes:    This module will return `public_ip' on success, which will contain
      the public IP address associated with the instance. There
      may be a delay between the time the Elastic IP is assigned
      and when the cloud instance is reachable via the new
      address. Use wait_for and pause to delay further playbook
      execution until the instance is reachable, if necessary.

Requirements:    boto

- name: associate an elastic IP with an instance
  ec2_eip: instance_id=i-1212f003 ip=93.184.216.119

- name: disassociate an elastic IP from an instance
  ec2_eip: instance_id=i-1212f003 ip=93.184.216.119 state=absent

- name: allocate a new elastic IP and associate it with an instance
  ec2_eip: instance_id=i-1212f003

- name: allocate a new elastic IP without associating it to anything
  ec2_eip:
  register: eip
- name: output the IP
  debug: msg="Allocated IP is {{ eip.public_ip }}"

- name: provision new instances with ec2
  ec2: keypair=mykey instance_type=c1.medium image=emi-40603AD1 wait=yes group=webserver count=3
  register: ec2
- name: associate new elastic IPs with each of the instances
  ec2_eip: "instance_id={{ item }}"
  with_items: ec2.instance_ids

- name: allocate a new elastic IP inside a VPC in us-west-2
  ec2_eip: region=us-west-2 in_vpc=yes
  register: eip
- name: output the IP
  debug: msg="Allocated IP inside a VPC is {{ eip.public_ip }}"

> EC2_ELB

This module de-registers or registers an AWS EC2 instance with the
ELBs that it belongs to. It returns the fact "ec2_elbs", a list of
ELBs attached to the instance, if state=absent is passed as an
argument. It will be marked changed only if there are ELBs found
to operate on.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- ec2_elbs
      List of ELB names, required for registration. The ec2_elbs
      fact should be used if there was a previous de-register.

- enable_availability_zone
      Whether to enable the availability zone of the instance on
      the target ELB if the availability zone has not already been
      enabled. If set to no, the task will fail if the
      availability zone is not enabled on the ELB. (Choices: yes,
      no)

= instance_id
      EC2 Instance ID

- region
      The AWS region to use. If not specified then the value of
      the EC2_REGION environment variable, if any, is used.

= state
      register or deregister the instance (Choices: present,
      absent)

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

- wait
      Wait for instance registration or deregistration to complete
      successfully before returning. (Choices: yes, no)

Requirements:    boto

# basic pre_task and post_task example
pre_tasks:
  - name: Gathering ec2 facts
    ec2_facts:
  - name: Instance De-register
    local_action: ec2_elb
    args:
      instance_id: "{{ ansible_ec2_instance_id }}"
      state: 'absent'
roles:
  - myrole
post_tasks:
  - name: Instance Register
    local_action: ec2_elb
    args:
      instance_id: "{{ ansible_ec2_instance_id }}"
      ec2_elbs: "{{ item }}"
      state: 'present'
    with_items: ec2_elbs

> EC2_ELB_LB

Creates or destroys an Amazon ELB.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- health_check
      An associative array of health check configuration settings
      (see example)

- listeners
      List of ports/protocols for this ELB to listen on (see
      example)

= name
      The name of the ELB

- purge_listeners
      Purge existing listeners on ELB that are not found in
      listeners

- purge_zones
      Purge existing availability zones on ELB that are not found
      in zones

- region
      The AWS region to use. If not specified then the value of
      the EC2_REGION environment variable, if any, is used.

= state
      Create or destroy the ELB

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

- zones
      List of availability zones to enable on this ELB

Requirements:    boto

# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Basic provisioning example
- local_action:
    module: ec2_elb_lb
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1a
      - us-east-1d
    listeners:
      - protocol: http # options are http, https, ssl, tcp
        load_balancer_port: 80
        instance_port: 80
      - protocol: https
        load_balancer_port: 443
        instance_protocol: http # optional, defaults to value of protocol setting
        instance_port: 80
        # ssl certificate required for https or ssl
        ssl_certificate_id: "arn:aws:iam::123456789012:server-certificate/company/servercerts/ProdServerCert"

# Configure a health check
- local_action:
    module: ec2_elb_lb
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1d
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
    health_check:
        ping_protocol: http # options are http, https, ssl, tcp
        ping_port: 80
        ping_path: "/index.html" # not required for tcp or ssl
        response_timeout: 5 # seconds
        interval: 30 # seconds
        unhealthy_threshold: 2
        healthy_threshold: 10

# Ensure ELB is gone
- local_action:
    module: ec2_elb_lb
    name: "test-please-delete"
    state: absent

# Normally, this module will purge any listeners that exist on the ELB
# but aren't specified in the listeners parameter. If purge_listeners is
# false it leaves them alone
- local_action:
    module: ec2_elb_lb
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1a
      - us-east-1d
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
    purge_listeners: no

# Normally, this module will leave availability zones that are enabled
# on the ELB alone. If purge_zones is true, then any extraneous zones
# will be removed
- local_action:
    module: ec2_elb_lb
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1a
      - us-east-1d
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
    purge_zones: yes

> EC2_FACTS

This module fetches data from the metadata servers in EC2 (AWS).
Eucalyptus cloud provides a similar service, and this module
should work with that cloud provider as well.

Options (= is mandatory):

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Notes:    Parameters to filter on ec2_facts may be added later.

# Conditional example
- name: Gather facts
  action: ec2_facts

- name: Conditional
  action: debug msg="This instance is a t1.micro"
  when: ansible_ec2_instance_type == "t1.micro"

> EC2_GROUP

Maintains EC2 security groups. This module has a dependency on
python-boto >= 2.5

Options (= is mandatory):

= description
      Description of the security group.

- ec2_access_key
      EC2 access key

- ec2_secret_key
      EC2 secret key

- ec2_url
      Url to use to connect to EC2 or your Eucalyptus cloud (by
      default the module will use EC2 endpoints)

= name
      Name of the security group.

- region
      the EC2 region to use

= rules
      List of firewall rules to enforce in this group (see
      example).

- state
      create or delete security group

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

- vpc_id
      ID of the VPC to create the group in.

Requirements:    boto

- name: example ec2 group
  local_action:
    module: ec2_group
    name: example
    description: an example EC2 group
    vpc_id: 12345
    region: eu-west-1
    ec2_secret_key: SECRET
    ec2_access_key: ACCESS
    rules:
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 0.0.0.0/0
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 10.0.0.0/8
      - proto: udp
        from_port: 10050
        to_port: 10050
        cidr_ip: 10.0.0.0/8
      - proto: udp
        from_port: 10051
        to_port: 10051
        group_id: sg-12345678
      - proto: all
        # the containing group name may be specified here
        group_name: example

> EC2_KEY

Maintains EC2 key pairs. This module has a dependency on
python-boto >= 2.5

Options (= is mandatory):

- ec2_access_key
      EC2 access key

- ec2_secret_key
      EC2 secret key

- ec2_url
      Url to use to connect to EC2 or your Eucalyptus cloud (by
      default the module will use EC2 endpoints)

- key_material
      Public key material.

= name
      Name of the key pair.

- region
      the EC2 region to use

- state
      create or delete keypair

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

Requirements:    boto

# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Creates a new ec2 key pair named `example` if not present, returns generated
# private key
- name: example ec2 key
  local_action:
    module: ec2_key
    name: example

# Creates a new ec2 key pair named `example` if not present using provided key
# material
- name: example2 ec2 key
  local_action:
    module: ec2_key
    name: example2
    key_material: 'ssh-rsa AAAAxyz...== me@example.com'
    state: present

# Creates a new ec2 key pair named `example` if not present using provided key
# material
- name: example3 ec2 key
  local_action:
    module: ec2_key
    name: example3
    key_material: "{{ item }}"
  with_file: /path/to/public_key.id_rsa.pub

# Removes ec2 key pair by name
- name: remove example key
  local_action:
    module: ec2_key
    name: example
    state: absent
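
# When a new key pair is created, the module returns the generated
# private key. A minimal sketch of capturing it with register (the
# task and variable names below are illustrative):
- name: example4 ec2 key, keeping the result
  local_action:
    module: ec2_key
    name: example4
  register: example4_key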

> EC2_SNAPSHOT

Creates an EC2 snapshot from an existing EBS volume.

Options (= is mandatory):

- description
      description to be applied to the snapshot

- device_name
      device name of a mounted volume to be snapshotted

- ec2_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- ec2_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- ec2_url
      Url to use to connect to EC2 or your Eucalyptus cloud (by
      default the module will use EC2 endpoints).  Must be
      specified if region is not used. If not set then the value
      of the EC2_URL environment variable, if any, is used

- instance_id
      instance that has the required volume to snapshot mounted

- region
      The AWS region to use. If not specified then the value of
      the EC2_REGION environment variable, if any, is used.

- volume_id
      volume from which to take the snapshot

Requirements:    boto

# Simple snapshot of volume using volume_id
- local_action:
    module: ec2_snapshot
    volume_id: vol-abcdef12
    description: snapshot of /data from DB123 taken 2013/11/28 12:18:32

# Snapshot of volume mounted on device_name attached to instance_id
- local_action:
    module: ec2_snapshot
    instance_id: i-12345678
    device_name: /dev/sdb1
    description: snapshot of /data from DB123 taken 2013/11/28 12:18:32

> EC2_TAG

Creates and removes tags from any EC2 resource.  The resource is
referenced by its resource id (e.g. an instance being i-XXXXXXX).
It is designed to be used with complex args (tags), see the
examples.  This module has a dependency on python-boto.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- ec2_url
      Url to use to connect to EC2 or your Eucalyptus cloud (by
      default the module will use EC2 endpoints).  Must be
      specified if region is not used. If not set then the value
      of the EC2_URL environment variable, if any, is used.

- region
      region in which the resource exists.

= resource
      The EC2 resource id.

- state
      Whether the tags should be present or absent on the
      resource. (Choices: present, absent)

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

Requirements:    boto

# Basic example of adding tag(s)
tasks:
- name: tag a resource
  local_action: ec2_tag resource=vol-XXXXXX region=eu-west-1 state=present
  args:
    tags:
      Name: ubervol
      env: prod

# Playbook example of adding tag(s) to spawned instances
tasks:
- name: launch some instances
  local_action: ec2 keypair={{ keypair }} group={{ security_group }} instance_type={{ instance_type }} image={{ image_id }} wait=true region=eu-west-1
  register: ec2

- name: tag my launched instances
  local_action: ec2_tag resource={{ item.id }} region=eu-west-1 state=present
  with_items: ec2.instances
  args:
    tags:
      Name: webserver
      env: prod
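
# Hedged sketch: tags can be removed the same way with state=absent
# (resource id and tag names are placeholders)
- name: remove tags from a resource
  local_action: ec2_tag resource=vol-XXXXXX region=eu-west-1 state=absent
  args:
    tags:
      env: prod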

> EC2_VOL

Creates an EBS volume and optionally attaches it to an instance.
If both an instance ID and a device name are given and the
instance already has a device at that name, then no volume is
created and no attachment is made.  This module has a dependency
on python-boto.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- device_name
      device id to override device mapping. Assumes /dev/sdf for
      Linux/UNIX and /dev/xvdf for Windows.

- ec2_url
      Url to use to connect to EC2 or your Eucalyptus cloud (by
      default the module will use EC2 endpoints).  Must be
      specified if region is not used. If not set then the value
      of the EC2_URL environment variable, if any, is used

- instance
      instance ID if you wish to attach the volume.

- iops
      the provisioned IOPs you want to associate with this volume
      (integer).

- region
      The AWS region to use. If not specified then the value of
      the EC2_REGION environment variable, if any, is used.

- snapshot
      snapshot ID on which to base the volume

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

= volume_size
      size of volume (in GB) to create.

- zone
      zone in which to create the volume, if unset uses the zone
      the instance is in (if set)

Requirements:    boto

# Simple attachment action
- local_action:
    module: ec2_vol
    instance: XXXXXX
    volume_size: 5
    device_name: sdd

# Example using custom iops params
- local_action:
    module: ec2_vol
    instance: XXXXXX
    volume_size: 5
    iops: 200
    device_name: sdd

# Example using snapshot id
- local_action:
    module: ec2_vol
    instance: XXXXXX
    snapshot: "{{ snapshot }}"

# Playbook example combined with instance launch
- local_action:
    module: ec2
    keypair: "{{ keypair }}"
    image: "{{ image }}"
    wait: yes
    count: 3
  register: ec2
- local_action:
    module: ec2_vol
    instance: "{{ item.id }}"
    volume_size: 5
  with_items: ec2.instances
  register: ec2_vol

> EC2_VPC

Creates or terminates AWS virtual private clouds.  This module has
a dependency on python-boto.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

= cidr_block
      The cidr block representing the VPC, e.g. 10.0.0.0/16

- dns_hostnames
      toggles the "Enable DNS hostname support for instances" flag
      (Choices: yes, no)

- dns_support
      toggles the "Enable DNS resolution" flag (Choices: yes, no)

- instance_tenancy
      The supported tenancy options for instances launched into
      the VPC. (Choices: default, dedicated)

- internet_gateway
      Toggle whether there should be an Internet gateway attached
      to the VPC (Choices: yes, no)

- region
      region in which the resource exists.

- route_tables
      A dictionary array of route tables to add of the form: {
      subnets: [172.22.2.0/24, 172.22.3.0/24,], routes: [{ dest:
      0.0.0.0/0, gw: igw},] }. The subnets list holds the subnets
      the route table should be associated with, and the routes
      list is a list of routes to be in the table.  The special
      keyword igw for gw specifies that the route should go
      through the internet gateway attached to the VPC. gw also
      accepts instance IDs in addition to igw. This module is
      currently unable to affect the 'main' route table due to
      some limitations in boto, so you must explicitly define the
      associated subnets or they will be attached to the main
      table implicitly.

= state
      Create or terminate the VPC

- subnets
      A dictionary array of subnets to add of the form: { cidr:
      ..., az: ... }. Where az is the desired availability zone of
      the subnet, but it is not required. All VPC subnets not in
      this list will be removed.

- validate_certs
      When set to "no", SSL certificates will not be validated for
      boto versions >= 2.6.0. (Choices: yes, no)

- vpc_id
      A VPC id to terminate when state=absent

- wait
      wait for the VPC to be in state 'available' before returning
      (Choices: yes, no)

- wait_timeout
      how long before wait gives up, in seconds

Requirements:    boto

# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Basic creation example:
      local_action:
        module: ec2_vpc
        state: present
        cidr_block: 172.23.0.0/16
        region: us-west-2
# Full creation example with subnets and optional availability zones.
# The absence or presence of subnets deletes or creates them respectively.
      local_action:
        module: ec2_vpc
        state: present
        cidr_block: 172.22.0.0/16
        subnets:
          - cidr: 172.22.1.0/24
            az: us-west-2c
          - cidr: 172.22.2.0/24
            az: us-west-2b
          - cidr: 172.22.3.0/24
            az: us-west-2a
        internet_gateway: True
        route_tables:
          - subnets:
              - 172.22.2.0/24
              - 172.22.3.0/24
            routes:
              - dest: 0.0.0.0/0
                gw: igw
          - subnets:
              - 172.22.1.0/24
            routes:
              - dest: 0.0.0.0/0
                gw: igw
        region: us-west-2
      register: vpc

# Removal of a VPC by id
      local_action:
        module: ec2_vpc
        state: absent
        vpc_id: vpc-aaaaaaa
        region: us-west-2
If you have added elements not managed by this module (e.g. instances, NATs, etc.),
then the delete will fail until those dependencies are removed.
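
A minimal sketch of toggling the DNS flags at creation time (the CIDR
block below is illustrative):

      local_action:
        module: ec2_vpc
        state: present
        cidr_block: 172.24.0.0/16
        dns_support: yes
        dns_hostnames: yes
        region: us-west-2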

> EJABBERD_USER

This module provides user management for ejabberd servers

Options (= is mandatory):

= host
      the ejabberd host associated with this username

- logging
      enables or disables the local syslog facility for this
      module (Choices: true, false, yes, no)

- password
      the password to assign to the username

- state
      describe the desired state of the user to be managed
      (Choices: present, absent)

= username
      the name of the user to manage

Notes:    The password parameter is required for state=present only.
      Passwords must be stored in clear text for this release.

Requirements:    ejabberd

Example playbook entries using the ejabberd_user module to manage users state.

    tasks:

    - name: create a user if it does not exist
      action: ejabberd_user username=test host=server password=password

    - name: delete a user if it exists
      action: ejabberd_user username=test host=server state=absent

> ELASTICACHE

Manages cache clusters in Amazon ElastiCache. Returns information
about the specified cache cluster.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- cache_engine_version
      The version number of the cache engine

- cache_port
      The port number on which each of the cache nodes will accept
      connections

- cache_security_groups
      A list of cache security group names to associate with this
      cache cluster

- engine
      Name of the cache engine to be used (memcached or redis)

- hard_modify
      Whether to destroy and recreate an existing cache cluster if
      necessary in order to modify its state (Choices: yes, no)

= name
      The cache cluster identifier

- node_type
      The compute and memory capacity of the nodes in the cache
      cluster

- num_nodes
      The initial number of cache nodes that the cache cluster
      will have

- region
      The AWS region to use. If not specified then the value of
      the EC2_REGION environment variable, if any, is used.

= state
      `absent' or `present' are idempotent actions that will
      create or destroy a cache cluster as needed. `rebooted' will
      reboot the cluster, resulting in a momentary outage.
      (Choices: present, absent, rebooted)

- wait
      Wait for cache cluster result before returning (Choices:
      yes, no)

- zone
      The EC2 Availability Zone in which the cache cluster will be
      created

Requirements:    boto

# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Basic example
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: present
    engine: memcached
    cache_engine_version: 1.4.14
    node_type: cache.m1.small
    num_nodes: 1
    cache_port: 11211
    cache_security_groups:
      - default
    zone: us-east-1d


# Ensure cache cluster is gone
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: absent

# Reboot cache cluster
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: rebooted

> FACTER

Runs the `facter' discovery program
(https://github.com/puppetlabs/facter) on the remote system,
returning JSON data that can be useful for inventory purposes.

Requirements:    facter, ruby-json

# Example command-line invocation
ansible www.example.net -m facter
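
# Hedged example: facter can also be invoked as a playbook task
# (the host group name is illustrative)
- hosts: webservers
  tasks:
    - name: gather facter data
      action: facter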

> FAIL

This module fails the host's progress with a custom message. It
can be useful for bailing out when a certain condition is met
using `when'.

Options (= is mandatory):

- msg
      The customized message used for failing execution. If
      omitted, fail will simply bail out with a generic message.

# Example playbook using fail and when together
- fail: msg="The system may not be provisioned according to the CMDB status."
  when: cmdb_status != "to-be-staged"

> FETCH

This module works like [copy], but in reverse. It is used for
fetching files from remote machines and storing them locally in a
file tree, organized by hostname. Note that this module is written
to transfer log files that might not be present, so a missing
remote file won't be an error unless fail_on_missing is set to
'yes'.

Options (= is mandatory):

= dest
      A directory to save the file into. For example, if the
      `dest' directory is `/backup' a `src' file named
      `/etc/profile' on host `host.example.com', would be saved
      into `/backup/host.example.com/etc/profile'

- fail_on_missing
      Makes the task fail when the source file is missing.
      (Choices: yes, no)

- flat
      Allows you to override the default behavior of prepending
      hostname/path/to/file to the destination.  If dest ends with
      '/', it will use the basename of the source file, similar to
      the copy module.  Obviously this is only handy if the
      filenames are unique.

= src
      The file on the remote system to fetch. This `must' be a
      file, not a directory. Recursive fetching may be supported
      in a later release.

- validate_md5
      Verify that the source and destination md5sums match after
      the files are fetched. (Choices: yes, no)

# Store file into /tmp/fetched/host.example.com/tmp/somefile
- fetch: src=/tmp/somefile dest=/tmp/fetched

# Specifying a path directly
- fetch: src=/tmp/somefile dest=/tmp/prefix-{{ ansible_hostname }} flat=yes

# Specifying a destination path
- fetch: src=/tmp/uniquefile dest=/tmp/special/ flat=yes

# Storing in a path relative to the playbook
- fetch: src=/tmp/uniquefile dest=special/prefix-{{ ansible_hostname }} flat=yes

> FILE

Sets attributes of files, symlinks, and directories, or removes
files/symlinks/directories. Many other modules support the same
options as the [file] module - including [copy], [template], and
[assemble].

Options (= is mandatory):

- force
      force the creation of the symlinks in two cases: the source
      file does not exist (but will appear later); the destination
      exists and is a file (so, we need to unlink the "path" file
      and create symlink to the "src" file in place of it).
      (Choices: yes, no)

- group
      name of the group that should own the file/directory, as
      would be fed to `chown' (Choices: )

- mode
      mode the file or directory should be, such as 0644 as would
      be fed to `chmod' (Choices: )

- owner
      name of the user that should own the file/directory, as
      would be fed to `chown' (Choices: )

= path
      path to the file being managed.  Aliases: `dest', `name'

- recurse
      recursively set the specified file attributes (applies only
      to state=directory) (Choices: yes, no)

- selevel
      level part of the SELinux file context. This is the MLS/MCS
      attribute, sometimes known as the `range'. `_default'
      feature works as for `seuser'. (Choices: )

- serole
      role part of SELinux file context, `_default' feature works
      as for `seuser'. (Choices: )

- setype
      type part of SELinux file context, `_default' feature works
      as for `seuser'. (Choices: )

- seuser
      user part of SELinux file context. Will default to system
      policy, if applicable. If set to `_default', it will use the
      `user' portion of the policy if available (Choices: )

- src
      path of the file to link to (applies only to `state=link').
      Will accept absolute, relative and nonexisting paths.
      Relative paths are not expanded. (Choices: )

- state
      If `directory', all immediate subdirectories will be created
      if they do not exist. If `file', the file will NOT be
      created if it does not exist, see the [copy] or [template]
      module if you want that behavior. If `link', the symbolic
      link will be created or changed. Use `hard' for hardlinks.
      If `absent', directories will be recursively deleted, and
      files or symlinks will be unlinked. If `touch' (new in 1.4),
      an empty file will be created if the c(dest) does not exist,
      while an existing file or directory will receive updated
      file access and modification times (similar to the way
      `touch` works from the command line). (Choices: file, link,
      directory, hard, touch, absent)

Notes:    See also [copy], [template], [assemble]

- file: path=/etc/foo.conf owner=foo group=foo mode=0644
- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link
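
# Hedged sketch of the directory and touch states (paths and ownership
# are illustrative)
- file: path=/srv/app/logs state=directory owner=foo group=foo mode=0755 recurse=yes
- file: path=/srv/app/logs/app.log state=touch mode=0644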

> FILESYSTEM

This module creates a filesystem.

Options (= is mandatory):

= dev
      Target block device.

- force
      If yes, allows creating a new filesystem on a device that
      already has a filesystem. (Choices: yes, no)

= fstype
      File System type to be created.

- opts
      List of options to be passed to mkfs command.

Notes:    uses mkfs command

# Create a ext2 filesystem on /dev/sdb1.
- filesystem: fstype=ext2 dev=/dev/sdb1

# Create a ext4 filesystem on /dev/sdb1 and check disk blocks.
- filesystem: fstype=ext4 dev=/dev/sdb1 opts="-cc"
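
# Hedged sketch: recreating a filesystem on a device that already has
# one requires force=yes and destroys existing data (device name is
# illustrative)
- filesystem: fstype=ext4 dev=/dev/sdc1 force=yes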

> FIREBALL

This module launches an ephemeral `fireball' ZeroMQ message bus
daemon on the remote node which Ansible can use to communicate
with nodes at high speed. The daemon listens on a configurable
port for a configurable amount of time. Starting a new fireball
as a given user terminates any existing user fireballs. Fireball
mode is AES encrypted.

Options (= is mandatory):

- minutes
      The `fireball' listener daemon is started on nodes and will
      stay around for this number of minutes before turning itself
      off.

- port
      TCP port for ZeroMQ

Notes:    See the advanced playbooks chapter for more about using fireball
      mode.

Requirements:    zmq, keyczar

# This example playbook has two plays: the first launches 'fireball' mode on all hosts via SSH, and
# the second actually starts using it for subsequent management over the fireball connection

- hosts: devservers
  gather_facts: false
  connection: ssh
  sudo: yes
  tasks:
      - action: fireball

- hosts: devservers
  connection: fireball
  tasks:
      - command: /usr/bin/anything

> FIREWALLD

This module allows for addition or deletion of services and ports
(either TCP or UDP) in either running or permanent firewalld
rules.

Options (= is mandatory):

= permanent
      Should this configuration be in the running firewalld
      configuration or persist across reboots

- port
      Name of a port to add/remove to/from firewalld; must be in
      the form PORT/PROTOCOL

- rich_rule
      Rich rule to add/remove to/from firewalld

- service
      Name of a service to add/remove to/from firewalld - service
      must be listed in /etc/services

= state
      Should this port accept(enabled) or reject(disabled)
      connections

- timeout
      The amount of time the rule should be in effect for when
      non-permanent

- zone
      The firewalld zone to add/remove to/from (NOTE: default zone
      can be configured per system but "public" is default from
      upstream. Available choices can be extended based on per-
      system configs, listed here are "out of the box" defaults).
      (Choices: work, drop, internal, external, trusted, home,
      dmz, public, block)

Notes:    Not tested on any Debian-based system

Requirements:    firewalld >= 0.2.11

- firewalld: service=https permanent=true state=enabled
- firewalld: port=8081/tcp permanent=true state=disabled
- firewalld: zone=dmz service=http permanent=true state=enabled
- firewalld: rich_rule='rule service name="ftp" audit limit value="1/m" accept' permanent=true state=enabled
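
# Hedged sketch: a temporary (non-permanent) rule using timeout, in
# seconds (port and zone are illustrative)
- firewalld: port=8080/tcp zone=public permanent=false state=enabled timeout=300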

> FLOWDOCK

Send a message to a Flowdock team inbox or chat using the push
API (see https://www.flowdock.com/api/team-inbox and
https://www.flowdock.com/api/chat).

Options (= is mandatory):

- external_user_name
      (chat only - required) Name of the "user" sending the
      message

- from_address
      (inbox only - required) Email address of the message sender

- from_name
      (inbox only) Name of the message sender

- link
      (inbox only) Link associated with the message. This will be
      used to link the message subject in Team Inbox.

= msg
      Content of the message

- project
      (inbox only) Human readable identifier for more detailed
      message categorization

- reply_to
      (inbox only) Email address for replies

- source
      (inbox only - required) Human readable identifier of the
      application that uses the Flowdock API

- subject
      (inbox only - required) Subject line of the message

- tags
      tags of the message, separated by commas

= token
      API token.

= type
      Whether to post to 'inbox' or 'chat' (Choices: inbox, chat)

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Requirements:    urllib, urllib2

- flowdock: type=inbox
            token=AAAAAA
            from_address=user@example.com
            source='my cool app'
            msg='test from ansible'
            subject='test subject'

- flowdock: type=chat
            token=AAAAAA
            external_user_name=testuser
            msg='test from ansible'
            tags=tag1,tag2,tag3

> GC_STORAGE

This module allows users to manage their objects/buckets in Google
Cloud Storage.  It allows upload and download operations and can
set some canned permissions. It also allows retrieval of URLs for
objects for use in playbooks, and retrieval of string contents of
objects.  This module requires setting the default project in GCS
prior to playbook usage.  See
https://developers.google.com/storage/docs/reference/v1/apiversion1
for information about setting the default project.

Options (= is mandatory):

= bucket
      Bucket name.

- dest
      The destination file path when downloading an object/key
      with a GET operation.

- expiration
      Time limit (in seconds) for the URL generated and returned
      by GCS when performing a mode=put or mode=get_url operation.
      This URL is only available when public-read is the ACL for
      the object.

- force
      Forces an overwrite either locally on the filesystem or
      remotely with the object/key. Used with PUT and GET
      operations.

= gcs_access_key
      GCS access key. If not set then the value of the
      GCS_ACCESS_KEY environment variable is used.

= gcs_secret_key
      GCS secret key. If not set then the value of the
      GCS_SECRET_KEY environment variable is used.

= mode
      Switches the module behaviour between upload, download,
      get_url (return download url), get_str (download object as
      string), create (bucket) and delete (bucket). (Choices: get,
      put, get_url, get_str, delete, create)

- object
      Keyname of the object inside the bucket. Can be also be used
      to create "virtual directories" (see examples).

- permission
      This option lets the user set the canned permissions on the
      object/bucket that are created. The permissions that can be
      set are 'private', 'public-read', 'authenticated-read'.

- src
      The source file path when performing a PUT operation.

Requirements:    boto 2.9+

# upload some content
- gc_storage: bucket=mybucket object=key.txt src=/usr/local/myfile.txt mode=put permission=public-read

# download some content
- gc_storage: bucket=mybucket object=key.txt dest=/usr/local/myfile.txt mode=get

# Download an object as a string to use else where in your playbook
- gc_storage: bucket=mybucket object=key.txt mode=get_str

# Create an empty bucket
- gc_storage: bucket=mybucket mode=create

# Create a bucket with key as directory
- gc_storage: bucket=mybucket object=/my/directory/path mode=create

# Delete a bucket and all contents
- gc_storage: bucket=mybucket mode=delete
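
The mode=get_url and expiration options described above can be combined;
a sketch, reusing the hypothetical bucket and assuming a public-read object:

# Get a download URL for an object, valid for 600 seconds
- gc_storage: bucket=mybucket object=key.txt mode=get_url expiration=600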

> GCE

Creates or terminates Google Compute Engine (GCE) instances.  See
https://cloud.google.com/products/compute-engine for an overview.
Full install/configuration instructions for the gce* modules can
be found in the comments of ansible/test/gce_tests.py.

Options (= is mandatory):

- image
      image string to use for the instance

- instance_names
      a comma-separated list of instance names to create or
      destroy

- machine_type
      machine type to use for the instance, use 'n1-standard-1' by
      default

- metadata
      a hash/dictionary of custom data for the instance;
      '{"key":"value",...}'

- name
      identifier when working with a single instance

- network
      name of the network, 'default' will be used if not specified

- persistent_boot_disk
      if set, create the instance with a persistent boot disk

- state
      desired state of the resource (Choices: active, present,
      absent, deleted)

- tags
      a comma-separated list of tags to associate with the
      instance

= zone
      the GCE zone to use (Choices: us-central1-a, us-central1-b,
      us-central2-a, europe-west1-a, europe-west1-b)

Requirements:    libcloud

# Basic provisioning example.  Create a single Debian 7 instance in the
# us-central1-a Zone of n1-standard-1 machine type.
- local_action:
    module: gce
    name: test-instance
    zone: us-central1-a
    machine_type: n1-standard-1
    image: debian-7

# Example using defaults and with metadata to create a single 'foo' instance
- local_action:
    module: gce
    name: foo
    metadata: '{"db":"postgres", "group":"qa", "id":500}'


# Launch instances from a control node, runs some tasks on the new instances,
# and then terminate them
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: foo,bar
    machine_type: n1-standard-1
    image: debian-6
    zone: us-central1-a
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
                    image={{image}} zone={{zone}}
      register: gce
    - name: Wait for SSH to come up
      local_action: wait_for host={{item.public_ip}} port=22 delay=10
                    timeout=60 state=started
      with_items: "{{ gce.instance_data }}"
    - name: Add the new instances to the launched group
      local_action: add_host hostname={{item.public_ip}} groupname=launched
      with_items: "{{ gce.instance_data }}"

- name: Configure instance(s)
  hosts: launched
  sudo: True
  roles:
    - my_awesome_role
    - my_awesome_tasks

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      local_action:
        module: gce
        state: 'absent'
        instance_names: "{{ gce.instance_names }}"

> GCE_LB

This module can create and destroy Google Compute Engine
`loadbalancer' and `httphealthcheck' resources.  The primary LB
resource is the `load_balancer' resource and the health check
parameters are all prefixed with `httphealthcheck'. The full
documentation for Google Compute Engine load balancing is at
https://developers.google.com/compute/docs/load-balancing/.
However, the ansible module simplifies the configuration by
following the libcloud model. Full install/configuration
instructions for the gce* modules can be found in the comments of
ansible/test/gce_tests.py.

Options (= is mandatory):

- external_ip
      the external static IPv4 (or auto-assigned) address for the
      LB

- httphealthcheck_healthy_count
      number of consecutive successful checks before marking a
      node healthy

- httphealthcheck_host
      host header to pass through on HTTP check requests

- httphealthcheck_interval
      the duration in seconds between each health check request

- httphealthcheck_name
      the name identifier for the HTTP health check

- httphealthcheck_path
      the url path to use for HTTP health checking

- httphealthcheck_port
      the TCP port to use for HTTP health checking

- httphealthcheck_timeout
      the timeout in seconds before a request is considered a
      failed check

- httphealthcheck_unhealthy_count
      number of consecutive failed checks before marking a node
      unhealthy

- members
      a list of zone/nodename pairs, e.g ['us-central1-a/www-a',
      ...]

- name
      name of the load-balancer resource

- port_range
      the port (range) to forward, e.g. 80 or 8000-8888; defaults
      to all ports

- protocol
      the protocol used for the load-balancer packet forwarding,
      tcp or udp (Choices: tcp, udp)

- region
      the GCE region where the load-balancer is defined (Choices:
      us-central1, us-central2, europe-west1)

- state
      desired state of the LB (Choices: active, present, absent,
      deleted)

Requirements:    libcloud

# Simple example of creating a new LB, adding members, and a health check
- local_action:
    module: gce_lb
    name: testlb
    region: us-central1
    members: ["us-central1-a/www-a", "us-central1-b/www-b"]
    httphealthcheck_name: hc
    httphealthcheck_port: 80
    httphealthcheck_path: "/up"

> GCE_NET

This module can create and destroy Google Compute Engine networks
and firewall rules
(https://developers.google.com/compute/docs/networking). The
`name' parameter is reserved for referencing a network while the
`fwname' parameter is used to reference firewall rules. IPv4
address ranges must be specified in CIDR notation
(http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
Full install/configuration instructions for the gce* modules can
be found in the comments of ansible/test/gce_tests.py.

Options (= is mandatory):

- allowed
      the protocol:ports to allow ('tcp:80' or 'tcp:80,443' or
      'tcp:80-800')

- fwname
      name of the firewall rule

- ipv4_range
      the IPv4 address range in CIDR notation for the network

- name
      name of the network

- src_range
      the source IPv4 address range in CIDR notation

- src_tags
      the source instance tags for creating a firewall rule

- state
      desired state of the persistent disk (Choices: active,
      present, absent, deleted)

Requirements:    libcloud

# Simple example of creating a new network
- local_action:
    module: gce_net
    name: privatenet
    ipv4_range: '10.240.16.0/24'

# Simple example of creating a new firewall rule
- local_action:
    module: gce_net
    name: privatenet
    allowed: tcp:80,8080
    src_tags: ["web", "proxy"]
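
The state option can also remove resources; a sketch reusing the
hypothetical privatenet network from the examples above:

# Delete a network when it is no longer needed
- local_action:
    module: gce_net
    name: privatenet
    state: 'absent'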

> GCE_PD

This module can create and destroy unformatted GCE persistent
disks
https://developers.google.com/compute/docs/disks#persistentdisks.
It also supports attaching and detaching disks from running
instances but does not support creating boot disks from images or
snapshots.  The 'gce' module supports creating instances with boot
disks. Full install/configuration instructions for the gce*
modules can be found in the comments of ansible/test/gce_tests.py.

Options (= is mandatory):

- detach_only
      do not destroy the disk, merely detach it from an instance
      (Choices: yes, no)

- instance_name
      instance name if you wish to attach or detach the disk

- mode
      GCE mount mode of disk, READ_ONLY (default) or READ_WRITE
      (Choices: READ_WRITE, READ_ONLY)

= name
      name of the disk

- size_gb
      whole integer size of disk (in GB) to create, default is 10
      GB

- state
      desired state of the persistent disk (Choices: active,
      present, absent, deleted)

- zone
      zone in which to create the disk

Requirements:    libcloud

# Simple attachment action to an existing instance
- local_action:
    module: gce_pd
    instance_name: notlocalhost
    size_gb: 5
    name: pd
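
The detach_only option described above detaches a disk without
destroying it; a sketch reusing the hypothetical names from the example:

# Detach the disk from the instance but keep the disk itself
- local_action:
    module: gce_pd
    instance_name: notlocalhost
    name: pd
    detach_only: yes
    state: absent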

> GEM

Manage installation and uninstallation of Ruby gems.

Options (= is mandatory):

- executable
      Override the path to the gem executable

- gem_source
      The path to a local gem used as installation source.

- include_dependencies
      Whether to include dependencies or not. (Choices: yes, no)

= name
      The name of the gem to be managed.

- repository
      The repository from which the gem will be installed

= state
      The desired state of the gem. `latest' ensures that the
      latest version is installed. (Choices: present, absent,
      latest)

- user_install
      Install gem in user's local gems cache or for all users

- version
      Version of the gem to be installed/removed.

# Installs version 1.0 of vagrant.
- gem: name=vagrant version=1.0 state=present

# Installs latest available version of rake.
- gem: name=rake state=latest

# Installs rake version 1.0 from a local gem on disk.
- gem: name=rake gem_source=/path/to/gems/rake-1.0.gem state=present

> GET_URL

Downloads files from HTTP, HTTPS, or FTP to the remote server. The
remote server `must' have direct access to the remote resource. By
default, if an environment variable `<protocol>_proxy' is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see `setting the environment
<http://docs.ansible.com/playbooks_environment.html>`_), or by
using the use_proxy option.

Options (= is mandatory):

= dest
      absolute path of where to download the file to. If `dest' is
      a directory, either the server-provided filename or, if none
      is provided, the base name of the URL on the remote server
      will be used. If `dest' is a directory, `force' has no
      effect: the file will always be downloaded (regardless of
      the force option), but replaced only if the contents
      changed.

- force
      If `yes' and `dest' is not a directory, will download the
      file every time and replace the file if the contents change.
      If `no', the file will only be downloaded if the destination
      does not exist. Generally should be `yes' only for small
      local files. Prior to 0.6, this module behaved as if `yes'
      was the default. (Choices: yes, no)

- others
      all arguments accepted by the [file] module also work here

- sha256sum
      If a SHA-256 checksum is passed to this parameter, the
      digest of the destination file will be calculated after it
      is downloaded to ensure its integrity and verify that the
      transfer completed successfully.

= url
      HTTP, HTTPS, or FTP URL in the form
      (http|https|ftp)://[user[:pass]]@host.domain[:port]/path

- use_proxy
      if `no', it will not use a proxy, even if one is defined in
      an environment variable on the target hosts. (Choices: yes,
      no)

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Notes:    This module doesn't yet support explicit proxy
      configuration beyond the environment-variable behaviour
      described above.

Requirements:    urllib2, urlparse

- name: download foo.conf
  get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440

- name: download file with sha256 check
  get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf sha256sum=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
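
The use_proxy and force options described above can be combined; a
sketch with the same hypothetical URL and destination:

- name: download foo.conf, bypassing any proxy and always re-downloading
  get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf use_proxy=no force=yes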

> GIT

Manage `git' checkouts of repositories to deploy files or
software.

Options (= is mandatory):

- accept_hostkey
      Add the hostkey for the repo url if not already added. If
      ssh_args contains "-o StrictHostKeyChecking=no", this
      parameter is ignored.

- bare
      if `yes', repository will be created as a bare repo,
      otherwise it will be a standard repo with a workspace.
      (Choices: yes, no)

- depth
      Create a shallow clone with a history truncated to the
      specified number of revisions. The minimum possible value is
      `1', otherwise ignored.

= dest
      Absolute path of where the repository should be checked out
      to.

- executable
      Path to git executable to use. If not supplied, the normal
      mechanism for resolving binary paths will be used.

- force
      If `yes', any modified files in the working repository will
      be discarded.  Prior to 0.7, this was always 'yes' and could
      not be disabled. (Choices: yes, no)

- key_file
      Uses the same wrapper method as ssh_opts to pass "-i
      <key_file>" to the ssh arguments used by git

- reference
      Reference repository (see "git clone --reference ...")

- remote
      Name of the remote.

= repo
      git, SSH, or HTTP protocol address of the git repository.

- ssh_opts
      Creates a wrapper script and exports the path as GIT_SSH
      which git then automatically uses to override ssh arguments.
      An example value could be "-o StrictHostKeyChecking=no"

- update
      If `yes', repository will be updated using the supplied
      remote.  Otherwise the repo will be left untouched. Prior to
      1.2, this was always 'yes' and could not be disabled.
      (Choices: yes, no)

- version
      What version of the repository to check out.  This can be
      the full 40-character `SHA-1' hash, the literal string
      `HEAD', a branch name, or a tag name.

Notes:    If the task seems to be hanging, first verify remote host is in
      `known_hosts'. SSH will prompt user to authorize the first
      contact with a remote host.  To avoid this prompt, one
      solution is to add the remote host public key in
      `/etc/ssh/ssh_known_hosts' before calling the git module,
      with the following command: ssh-keyscan remote_host.com >>
      /etc/ssh/ssh_known_hosts.

# Example git checkout from Ansible Playbooks
- git: repo=git://foosball.example.org/path/to/repo.git
       dest=/srv/checkout
       version=release-0.22

# Example read-write git checkout from github
- git: repo=ssh://git@github.com/mylogin/hello.git dest=/home/mylogin/hello

# Example just ensuring the repo checkout exists
- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no

> GITHUB_HOOKS

Adds service hooks and removes service hooks that have an error
status.

Options (= is mandatory):

= action
      This tells the githooks module what you want it to do.
      (Choices: create, cleanall)

- hookurl
      When creating a new hook, this is the url that you want
      github to post to. It is only required when creating a new
      hook.

= oauthkey
      The oauth key provided by github. It can be found/generated
      on github under "Edit Your Profile" >> "Applications" >>
      "Personal Access Tokens"

= repo
      This is the API url for the repository you want to manage
      hooks for. It should be in the form of:
      https://api.github.com/repos/user:/repo:. Note this is
      different than the normal repo url.

= user
      Github username.

- validate_certs
      If `no', SSL certificates for the target repo will not be
      validated. This should only be used on personally controlled
      sites using self-signed certificates. (Choices: yes, no)

# Example creating a new service hook. It ignores duplicates.
- github_hooks: action=create hookurl=http://11.111.111.111:2222 user={{ gituser }} oauthkey={{ oauthkey }} repo=https://api.github.com/repos/pcgentry/Github-Auto-Deploy

# Cleaning all hooks for this repo that had an error on the last update. Since this works for all hooks in a repo it is probably best that this would be called from a handler.
- local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }}

> GLANCE_IMAGE

Add or Remove images from the glance repository.

Options (= is mandatory):

- auth_url
      The keystone url for authentication

- container_format
      The format of the container

- copy_from
      A url from where the image can be downloaded, mutually
      exclusive with file parameter

- disk_format
      The format of the disk that is getting uploaded

- file
      The path to the file which has to be uploaded, mutually
      exclusive with copy_from

- is_public
      Whether the image can be accessed publicly

= login_password
      Password of login user

= login_tenant_name
      The tenant name of the login user

= login_username
      login username to authenticate to keystone

- min_disk
      The minimum disk space required to deploy this image

- min_ram
      The minimum ram required to deploy this image

= name
      Name that has to be given to the image

- owner
      The owner of the image

- region_name
      Name of the region

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- timeout
      The time to wait for the image process to complete in
      seconds

Requirements:    glanceclient, keystoneclient

# Upload an image from an HTTP URL
- glance_image: login_username=admin
                login_password=passme
                login_tenant_name=admin
                name=cirros
                container_format=bare
                disk_format=qcow2
                state=present
                copy_from=http://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
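
The file parameter (mutually exclusive with copy_from, as noted above)
uploads a local image instead; a sketch with a hypothetical path:

# Upload an image from a local file
- glance_image: login_username=admin
                login_password=passme
                login_tenant_name=admin
                name=cirros
                container_format=bare
                disk_format=qcow2
                state=present
                file=/tmp/cirros-0.3.0-x86_64-disk.img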

> GROUP

Manage presence of groups on a host.

Options (= is mandatory):

- gid
      Optional `GID' to set for the group.

= name
      Name of the group to manage.

- state
      Whether the group should be present or not on the remote
      host. (Choices: present, absent)

- system
      If `yes', indicates that the group created is a system
      group. (Choices: yes, no)

Requirements:    groupadd, groupdel, groupmod

# Example group command from Ansible Playbooks
- group: name=somegroup state=present
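
The gid and system options described above can be set explicitly; a
sketch with a hypothetical group name and GID:

# Create a group with a fixed GID
- group: name=somegroup gid=1042 state=present

# Create a system group
- group: name=appgroup system=yes state=present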

> GROUP_BY

Use facts to create ad-hoc groups that can be used later in a
playbook.

Options (= is mandatory):

= key
      The variables whose values will be used as groups

Notes:    Spaces in group names are converted to dashes '-'.

# Create groups based on the machine architecture
-  group_by: key=machine_{{ ansible_machine }}
# Create groups like 'kvm-host'
-  group_by: key=virt_{{ ansible_virtualization_type }}_{{ ansible_virtualization_role }}

> GROVE

The [grove] module sends a message for a service to a Grove.io
channel.

Options (= is mandatory):

= channel_token
      Token of the channel to post to.

- icon_url
      Icon for the service

= message
      Message content

- service
      Name of the service (displayed as the "user" in the message)

- url
      Service URL for the web client

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

- grove: >
    channel_token=6Ph62VBBJOccmtTPZbubiPzdrhipZXtg
    service=my-app
    message=deployed {{ target }}

> HG

Manages Mercurial (hg) repositories. Supports SSH, HTTP/S and
local addresses.

Options (= is mandatory):

= dest
      Absolute path of where the repository should be cloned to.

- executable
      Path to hg executable to use. If not supplied, the normal
      mechanism for resolving binary paths will be used.

- force
      Discards uncommitted changes. Runs `hg update -C'. (Choices:
      yes, no)

- purge
      Deletes untracked files. Runs `hg purge'. (Choices: yes, no)

= repo
      The repository address.

- revision
      Equivalent `-r' option in hg command which could be the
      changeset, revision number, branch name or even tag.

Notes:    If the task seems to be hanging, first verify remote host is in
      `known_hosts'. SSH will prompt user to authorize the first
      contact with a remote host.  To avoid this prompt, one
      solution is to add the remote host public key in
      `/etc/ssh/ssh_known_hosts' before calling the hg module,
      with the following command: ssh-keyscan remote_host.com >>
      /etc/ssh/ssh_known_hosts.

# Ensure the current working copy is on the stable branch and delete untracked files if any.
- hg: repo=https://bitbucket.org/user/repo1 dest=/home/user/repo1 revision=stable purge=yes

> HIPCHAT

Send a message to hipchat

Options (= is mandatory):

- color
      Background color for the message. Default is yellow.
      (Choices: yellow, red, green, purple, gray, random)

- from
      Name the message will appear to be sent from. Maximum 15
      characters; longer names will be truncated.

= msg
      The message body.

- msg_format
      message format. html or text. Default is text. (Choices:
      text, html)

- notify
      notify or not (change the tab color, play a sound, etc)
      (Choices: yes, no)

= room
      ID or name of the room.

= token
      API token.

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Requirements:    urllib, urllib2

- hipchat: token=AAAAAA room=notify msg="Ansible task finished"
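
The optional color, from, and notify parameters described above can be
combined; a sketch with a hypothetical sender name:

- hipchat: token=AAAAAA room=notify from=deploybot color=green notify=yes msg="Deploy finished"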

> HOMEBREW

Manages Homebrew packages

Options (= is mandatory):

- install_options
      options flags to install a package

= name
      name of package to install/remove

- state
      state of the package (Choices: present, absent)

- update_homebrew
      update homebrew itself first (Choices: yes, no)

- homebrew: name=foo state=present
- homebrew: name=foo state=present update_homebrew=yes
- homebrew: name=foo state=absent
- homebrew: name=foo,bar state=absent
- homebrew: name=foo state=present install_options=with-baz,enable-debug

> HOSTNAME

Set the system's hostname. Currently implemented only on Debian,
Ubuntu, RedHat and CentOS.

Options (= is mandatory):

= name
      Name of the host

Requirements:    hostname

- hostname: name=web01

> HTPASSWD

Add and remove username/password entries in a password file using
htpasswd. This is used by web servers such as Apache and Nginx for
basic authentication.

Options (= is mandatory):

- create
      Used with `state=present'. If specified, the file will be
      created if it does not already exist. If set to "no", will
      fail if the file does not exist (Choices: yes, no)

- crypt_scheme
      Encryption scheme to be used. (Choices: apr_md5_crypt,
      des_crypt, ldap_sha1, plaintext)

= name
      User name to add or remove

- password
      Password associated with user. Must be specified if user
      does not exist yet.

= path
      Path to the file that contains the usernames and passwords

- state
      Whether the user entry should be present or not (Choices:
      present, absent)

Notes:    This module depends on the `passlib' Python library, which needs
      to be installed on all target systems. On Debian, Ubuntu, or
      Fedora: install `python-passlib'. On RHEL or CentOS: enable
      EPEL, then install `python-passlib'.

# Add a user to a password file and ensure permissions are set
- htpasswd: path=/etc/nginx/passwdfile name=janedoe password=9s36?;fyNp owner=root group=www-data mode=0640
# Remove a user from a password file
- htpasswd: path=/etc/apache2/passwdfile name=foobar state=absent
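
The crypt_scheme and create options described above can be set
explicitly; a sketch with hypothetical names:

# Add a user with an explicit encryption scheme, creating the file if needed
- htpasswd: path=/etc/nginx/passwdfile name=bob password=s3cret crypt_scheme=apr_md5_crypt create=yes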

> INCLUDE_VARS

Loads variables from a YAML file dynamically during task runtime.
It can work with conditionals, or use host specific variables to
determine the path name to load from.

Options (= is mandatory):

= free-form
      The file name from which variables should be loaded, if
      called from a role it will look for the file in vars/
      subdirectory of the role, otherwise the path would be
      relative to playbook. An absolute path can also be provided.

# Conditionally decide to load in variables when x is 0, otherwise do not.
- include_vars: contingency_plan.yml
  when: x == 0

# Load a variable file based on the OS type, or a default if not found.
- include_vars: "{{ item }}"
  with_first_found:
   - "{{ ansible_distribution }}.yml"
   - "default.yml"

> INI_FILE

Manage (add, remove, change) individual settings in an INI-style
file without having to manage the file as a whole with, say,
[template] or [assemble]. Adds missing sections if they don't
exist. Comments are discarded when the source file is read, and
therefore will not show up in the destination file.

Options (= is mandatory):

- backup
      Create a backup file including the timestamp information so
      you can get the original file back if you somehow clobbered
      it incorrectly. (Choices: yes, no)

= dest
      Path to the INI-style file; this file is created if required

- option
      if set (required for changing a `value'), this is the name
      of the option. May be omitted if adding/removing a whole
      `section'.

- others
      all arguments accepted by the [file] module also work here

= section
      Section name in INI file. This is added if `state=present'
      automatically when a single value is being set.

- value
      the string value to be associated with an `option'. May be
      omitted when removing an `option'.

Notes:    While it is possible to add an `option' without specifying a
      `value', this makes no sense. A section named `default'
      cannot be added by the module, but if it exists, individual
      options within the section can be updated. (This is a
      limitation of Python's `ConfigParser'.) Either use
      [template] to create a base INI file with a `[default]'
      section, or use [lineinfile] to add the missing line.

Requirements:    ConfigParser

# Ensure "fav=lemonade" is in section "[drinks]" in the specified file
- ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes

- ini_file: dest=/etc/anotherconf
            section=drinks
            option=temperature
            value=cold
            backup=yes

> IRC

Send a message to an IRC channel. This is a very simplistic
implementation.

Options (= is mandatory):

= channel
      Channel name

- color
      Text color for the message. Default is black. (Choices:
      yellow, red, green, blue, black)

= msg
      The message body.

- nick
      Nickname

- passwd
      Server password

- port
      IRC server port number

- server
      IRC server name/address

- timeout
      Timeout to use while waiting for successful registration and
      join messages; this prevents an endless loop

Requirements:    socket

- irc: server=irc.example.net channel="#t1" msg="Hello world"

- local_action: irc port=6669
                channel="#t1"
                msg="All finished at {{ ansible_date_time.iso8601 }}"
                color=red
                nick=ansibleIRC

> JABBER

Send a message to jabber

Options (= is mandatory):

- encoding
      message encoding

- host
      host to connect, overrides user info

= msg
      The message body.

= password
      password for user to connect

- port
      port to connect to, overrides default

= to
      user ID or name of the room; when sending to a room, use a
      slash to indicate your nick.

= user
      User as which to connect

Requirements:    xmpp

# send a message to a user
- jabber: user=mybot@example.net
          password=secret
          to=friend@example.net
          msg="Ansible task finished"

# send a message to a room
- jabber: user=mybot@example.net
          password=secret
          to=mychaps@conference.example.net/ansiblebot
          msg="Ansible task finished"

# send a message, specifying the host and port
- jabber: user=mybot@example.net
         host=talk.example.net
         port=5223
         password=secret
         to=mychaps@example.net
         msg="Ansible task finished"

> JBOSS

Deploy applications to JBoss standalone using the filesystem

Options (= is mandatory):

- deploy_path
      The location in the filesystem where the deployment scanner
      listens

= deployment
      The name of the deployment

- src
      The remote path of the application ear or war to deploy

- state
      Whether the application should be deployed or undeployed
      (Choices: present, absent)

Notes:    The JBoss standalone deployment-scanner has to be enabled in
      standalone.xml. Ensure no identically named application is
      deployed through the JBoss CLI.

# Deploy a hello world application
- jboss: src=/tmp/hello-1.0-SNAPSHOT.war deployment=hello.war state=present
# Update the hello world application
- jboss: src=/tmp/hello-1.1-SNAPSHOT.war deployment=hello.war state=present
# Undeploy the hello world application
- jboss: deployment=hello.war state=absent

> KERNEL_BLACKLIST

Add or remove kernel modules from blacklist.

Options (= is mandatory):

- blacklist_file
      If specified, use this blacklist file instead of
      `/etc/modprobe.d/blacklist-ansible.conf'.

= name
      Name of kernel module to black- or whitelist.

- state
      Whether the module should be present in the blacklist or
      absent. (Choices: present, absent)

# Blacklist the nouveau driver module
- kernel_blacklist: name=nouveau state=present
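
The blacklist_file option described above targets a specific modprobe
configuration file; a sketch with a hypothetical filename:

# Blacklist the pcspkr module in a custom blacklist file
- kernel_blacklist: name=pcspkr state=present blacklist_file=/etc/modprobe.d/blacklist-custom.conf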

> KEYSTONE_USER

Manage users, tenants, and roles in OpenStack.

Options (= is mandatory):

- description
      A description for the tenant

- email
      An email address for the user

- endpoint
      The keystone url for authentication

- login_password
      Password of login user

- login_tenant_name
      The tenant login_user belongs to

- login_user
      login username to authenticate to keystone

- password
      The password to be assigned to the user

- role
      The name of the role to be assigned or created

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- tenant
      The tenant name that has to be added/removed

- token
      The token to be used in case the password is not specified

- user
      The name of the user that has to be added/removed from
      OpenStack

Requirements:    python-keystoneclient

# Create a tenant
- keystone_user: tenant=demo tenant_description="Default Tenant"

# Create a user
- keystone_user: user=john tenant=demo password=secrete

# Apply the admin role to the john user in the demo tenant
- keystone_user: role=admin user=john tenant=demo
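
state=absent removes resources; a sketch reusing the hypothetical names
from the examples above:

# Remove the john user
- keystone_user: user=john state=absent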

> LINEINFILE

This module will search a file for a line, and ensure that it is
present or absent. This is primarily useful when you want to
change a single line in a file only. For other cases, see the
[copy] or [template] modules.

Options (= is mandatory):

- backrefs
      Used with `state=present'. If set, line can contain
      backreferences (both positional and named) that will get
      populated if the `regexp' matches. This flag changes the
      operation of the module slightly; `insertbefore' and
      `insertafter' will be ignored, and if the `regexp' doesn't
      match anywhere in the file, the file will be left unchanged.
      If the `regexp' does match, the last matching line will be
      replaced by the expanded line parameter. (Choices: yes, no)

- backup
      Create a backup file including the timestamp information so
      you can get the original file back if you somehow clobbered
      it incorrectly. (Choices: yes, no)

- create
      Used with `state=present'. If specified, the file will be
      created if it does not already exist. By default it will
      fail if the file is missing. (Choices: yes, no)

= dest
      The file to modify.

- insertafter
      Used with `state=present'. If specified, the line will be
      inserted after the specified regular expression. A special
      value is available; `EOF' for inserting the line at the end
      of the file. May not be used with `backrefs'. (Choices: EOF,
      *regex*)

- insertbefore
      Used with `state=present'. If specified, the line will be
      inserted before the specified regular expression. A value is
      available; `BOF' for inserting the line at the beginning of
      the file. May not be used with `backrefs'. (Choices: BOF,
      *regex*)

- line
      Required for `state=present'. The line to insert/replace
      into the file. If `backrefs' is set, may contain
      backreferences that will get expanded with the `regexp'
      capture groups if the regexp matches. The backreferences
      should be double escaped (see examples).

- others
      All arguments accepted by the [file] module also work here.

- regexp
      The regular expression to look for in every line of the
      file. For `state=present', the pattern to replace if found;
      only the last line found will be replaced. For
      `state=absent', the pattern of the line to remove.  Uses
      Python regular expressions; see
      http://docs.python.org/2/library/re.html.

- state
      Whether the line should be there or not. (Choices: present,
      absent)

- validate
      The validation command to run before copying into place. The
      path to the file to validate is passed in via `%s' (see
      examples).

- lineinfile: dest=/etc/selinux/config regexp=^SELINUX= line=SELINUX=disabled

- lineinfile: dest=/etc/sudoers state=absent regexp="^%wheel"

- lineinfile: dest=/etc/hosts regexp='^127\.0\.0\.1' line='127.0.0.1 localhost' owner=root group=root mode=0644

- lineinfile: dest=/etc/httpd/conf/httpd.conf regexp="^Listen " insertafter="^#Listen " line="Listen 8080"

- lineinfile: dest=/etc/services regexp="^# port for http" insertbefore="^www.*80/tcp" line="# port for http by default"

# Add a line to a file if it does not exist, without passing regexp
- lineinfile: dest=/tmp/testfile line="192.168.1.99 foo.lab.net foo"

# Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'"

- lineinfile: dest=/opt/jboss-as/bin/standalone.conf regexp='^(.*)Xms(\d+)m(.*)$' line='\1Xms${xms}m\3' backrefs=yes

# Validate the sudoers file before saving
- lineinfile: dest=/etc/sudoers state=present regexp='^%ADMIN ALL\=' line='%ADMIN ALL=(ALL) NOPASSWD:ALL' validate='visudo -cf %s'

> LINODE

creates / deletes a Linode Public Cloud instance and optionally
waits for it to be 'running'.

Options (= is mandatory):

- api_key
      Linode API key

- datacenter
      datacenter to create an instance in (Linode Datacenter)

- distribution
      distribution to use for the instance (Linode Distribution)

- linode_id
      Unique ID of a linode server

- name
      Name to give the instance (alphanumeric, dashes,
      underscore). To keep sanity on the Linode Web Console, the
      name is prepended with LinodeID_

- password
      root password to apply to a new server (auto generated if
      missing)

- payment_term
      payment term to use for the instance (payment term in
      months) (Choices: 1, 12, 24)

- plan
      plan to use for the instance (Linode plan)

- ssh_pub_key
      SSH public key applied to root user

- state
      Indicate desired state of the resource (Choices: present,
      active, started, absent, deleted, stopped, restarted)

- swap
      swap size in MB

- wait
      wait for the instance to be in state 'running' before
      returning (Choices: yes, no)

- wait_timeout
      how long before wait gives up, in seconds

Notes:    The LINODE_API_KEY environment variable can be used instead

Requirements:    linode-python

# Create a server
- local_action:
     module: linode
     api_key: 'longStringFromLinodeApi'
     name: linode-test1
     plan: 1
     datacenter: 2
     distribution: 99
     password: 'superSecureRootPassword'
     ssh_pub_key: 'ssh-rsa qwerty'
     swap: 768
     wait: yes
     wait_timeout: 600
     state: present

# Ensure a running server (create if missing)
- local_action:
     module: linode
     api_key: 'longStringFromLinodeApi'
     name: linode-test1
     linode_id: 12345678
     plan: 1
     datacenter: 2
     distribution: 99
     password: 'superSecureRootPassword'
     ssh_pub_key: 'ssh-rsa qwerty'
     swap: 768
     wait: yes
     wait_timeout: 600
     state: present

# Delete a server
- local_action:
     module: linode
     api_key: 'longStringFromLinodeApi'
     name: linode-test1
     linode_id: 12345678
     state: absent

# Stop a server
- local_action:
     module: linode
     api_key: 'longStringFromLinodeApi'
     name: linode-test1
     linode_id: 12345678
     state: stopped

# Reboot a server
- local_action:
     module: linode
     api_key: 'longStringFromLinodeApi'
     name: linode-test1
     linode_id: 12345678
     state: restarted

> LVG

This module creates, removes or resizes volume groups.

Options (= is mandatory):

- force
      If yes, allows removal of a volume group that still contains
      logical volumes. (Choices: yes, no)

- pesize
      The size of the physical extent in megabytes. Must be a
      power of 2.

- pvs
      List of comma-separated devices to use as physical devices
      in this volume group. Required when creating or resizing
      volume group.

- state
      Control if the volume group exists. (Choices: present,
      absent)

= vg
      The name of the volume group.

Notes:    The module does not modify the PE size of an already present volume group

# Create a volume group on top of /dev/sda1 with physical extent size = 32MB.
- lvg:  vg=vg.services pvs=/dev/sda1 pesize=32

# Create or resize a volume group on top of /dev/sdb1 and /dev/sdc5.
# If, for example, we already have VG vg.services on top of /dev/sdb1,
# this VG will be extended by /dev/sdc5.  Or if vg.services was created on
# top of /dev/sda5, we first extend it with /dev/sdb1 and /dev/sdc5,
# and then reduce by /dev/sda5.
- lvg: vg=vg.services pvs=/dev/sdb1,/dev/sdc5

# Remove a volume group with name vg.services.
- lvg: vg=vg.services state=absent

> LVOL

This module creates, removes or resizes logical volumes.

Options (= is mandatory):

- force
      Shrink or remove operations on volumes require this switch.
      It ensures that filesystems never get corrupted/destroyed
      by mistake. (Choices: yes, no)

= lv
      The name of the logical volume.

- size
      The size of the logical volume, according to lvcreate(8)
      --size, by default in megabytes or optionally with one of
      [bBsSkKmMgGtTpPeE] units; or according to lvcreate(8)
      --extents as a percentage of [VG|PVS|FREE]; resizing is not
      supported with percentages.

- state
      Control if the logical volume exists. (Choices: present,
      absent)

= vg
      The volume group this logical volume is part of.

Notes:    Filesystems on top of the volume are not resized.

# Create a logical volume of 512m.
- lvol: vg=firefly lv=test size=512

# Create a logical volume of 512g.
- lvol: vg=firefly lv=test size=512g

# Create a logical volume the size of all remaining space in the volume group
- lvol: vg=firefly lv=test size=100%FREE

# Extend the logical volume to 1024m.
- lvol: vg=firefly lv=test size=1024

# Reduce the logical volume to 512m
- lvol: vg=firefly lv=test size=512 force=yes

# Remove the logical volume.
- lvol: vg=firefly lv=test state=absent force=yes

> MACPORTS

Manages MacPorts packages

Options (= is mandatory):

= name
      name of package to install/remove

- state
      state of the package (Choices: present, absent, active,
      inactive)

- update_cache
      update the package db first (Choices: yes, no)

- macports: name=foo state=present
- macports: name=foo state=present update_cache=yes
- macports: name=foo state=absent
- macports: name=foo state=active
- macports: name=foo state=inactive

> MAIL

This module is useful for sending emails from playbooks. One may
wonder: why automate sending emails? In complex environments there
are, from time to time, processes that cannot be automated, either
because you lack the authority to make it so, or because not
everyone agrees on a common approach. If you cannot automate a
specific step, but the step is non-blocking, sending an email to
the responsible party so they can perform their part of the
bargain is an elegant way to put the responsibility in someone
else's lap. Of course, sending out mail can be equally useful as a
way to notify one or more people on a team that a specific action
has been (successfully) taken.

Options (= is mandatory):

- attach
      A space-separated list of pathnames of files to attach to
      the message. Attached files will have their content-type set
      to `application/octet-stream'.

- bcc
      The email-address(es) the mail is being 'blind' copied to.
      This is a comma-separated list, which may contain address
      and phrase portions.

- body
      The body of the email being sent.

- cc
      The email-address(es) the mail is being copied to. This is a
      comma-separated list, which may contain address and phrase
      portions.

- charset
      The character set of the email being sent

- from
      The email-address the mail is sent from. May contain address
      and phrase.

- headers
      A vertical-bar-separated list of headers which should be
      added to the message. Each individual header is specified as
      `header=value' (see example below).

- host
      The mail server

- port
      The mail server port

= subject
      The subject of the email being sent.

- to
      The email-address(es) the mail is being sent to. This is a
      comma-separated list, which may contain address and phrase
      portions.

# Example playbook sending mail to root
- local_action: mail msg='System {{ ansible_hostname }} has been successfully provisioned.'

# Send e-mail to a bunch of users, attaching files
- local_action: mail
                host='127.0.0.1'
                port=2025
                subject="Ansible-report"
                body="Hello, this is an e-mail. I hope you like it ;-)"
                from="jane@example.net (Jane Jolie)"
                to="John Doe <j.d@example.org>, Suzie Something <sue@example.com>"
                cc="Charlie Root <root@localhost>"
                attach="/etc/group /tmp/pavatar2.png"
                headers=Reply-To=john@example.com|X-Special="Something or other"
                charset=utf8

> MODPROBE

Add or remove kernel modules.

Options (= is mandatory):

= name
      Name of kernel module to manage.

- state
      Whether the module should be present or absent. (Choices:
      present, absent)

# Add the 802.1q module
- modprobe: name=8021q state=present
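
Removal is symmetric; a minimal sketch using the same module:

# Remove the 802.1q module
- modprobe: name=8021q state=absent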

> MONGODB_USER

Adds or removes a user from a MongoDB database.

Options (= is mandatory):

= database
      The name of the database to add/remove the user from

- login_host
      The host running the database

- login_password
      The password used to authenticate with

- login_port
      The port to connect to

- login_user
      The username used to authenticate with

- password
      The password to use for the user

- roles
      The database user roles. Valid values are one or more of:
      'read', 'readWrite', 'dbAdmin', 'userAdmin', 'clusterAdmin',
      'readAnyDatabase', 'readWriteAnyDatabase',
      'userAdminAnyDatabase', 'dbAdminAnyDatabase'. This parameter
      requires MongoDB 2.4+ and pymongo 2.5+.

- state
      The database user state (Choices: present, absent)

= user
      The name of the user to add or remove

Notes:    Requires the pymongo Python package on the remote host, version
      2.4.2+. This can be installed using pip or the OS package
      manager. @see
      http://api.mongodb.org/python/current/installation.html

Requirements:    pymongo

# Create 'burgers' database user with name 'bob' and password '12345'.
- mongodb_user: database=burgers name=bob password=12345 state=present

# Delete 'burgers' database user with name 'bob'.
- mongodb_user: database=burgers name=bob state=absent

# Define more users with various specific roles (if not defined, no roles are assigned and the user will be added using the pre-MongoDB 2.2 style)
- mongodb_user: database=burgers name=ben password=12345 roles='read' state=present
- mongodb_user: database=burgers name=jim password=12345 roles='readWrite,dbAdmin,userAdmin' state=present
- mongodb_user: database=burgers name=joe password=12345 roles='readWriteAnyDatabase' state=present

> MONIT

Manage the state of a program monitored via `Monit'

Options (= is mandatory):

= name
      The name of the `monit' program/process to manage

= state
      The state of service (Choices: present, started, stopped,
      restarted, monitored, unmonitored, reloaded)

# Manage the state of program "httpd" to be in "started" state.
- monit: name=httpd state=started

> MOUNT

This module controls active and configured mount points in
`/etc/fstab'.

Options (= is mandatory):

- dump
      dump (see fstab(8))

= fstype
      file-system type

= name
      path to the mount point, e.g. `/mnt/files'

- opts
      mount options (see fstab(8))

- passno
      passno (see fstab(8))

= src
      device to be mounted on `name'.

= state
      If `mounted' or `unmounted', the device will be actively
      mounted or unmounted as well as just configured in `fstab'.
      `absent' and `present' only deal with `fstab'. `mounted'
      will also automatically create the mount point directory if
      it doesn't exist. If `absent' changes anything, it will
      remove the mount point directory. (Choices: present, absent,
      mounted, unmounted)

# Mount DVD read-only
- mount: name=/mnt/dvd src=/dev/sr0 fstype=iso9660 opts=ro state=present

# Mount up device by label
- mount: name=/srv/disk src='LABEL=SOME_LABEL' state=present

# Mount up device by UUID
- mount: name=/home src='UUID=b3e48f45-f933-4c8e-a700-22a159ec9077' opts=noatime state=present
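
As described under `state', `mounted' both writes the fstab entry
and actively mounts the device; a minimal sketch (device and path
are illustrative):

# Write the fstab entry and mount the device right away (illustrative device/path)
- mount: name=/srv/data src=/dev/sdb1 fstype=ext4 opts=defaults state=mounted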

> MQTT

Publish a message on an MQTT topic.

Options (= is mandatory):

- client_id
      MQTT client identifier

- password
      Password for `username' to authenticate against the broker.

= payload
      Payload. The special string `"None"' may be used to send a
      NULL (i.e. empty) payload which is useful to simply notify
      with the `topic' or to clear previously retained messages.

- port
      MQTT broker port number

- qos
      QoS (Quality of Service) (Choices: 0, 1, 2)

- retain
      Setting this flag causes the broker to retain (i.e. keep)
      the message, so that applications that subsequently
      subscribe to the topic can receive the last retained message
      immediately.

- server
      MQTT broker address/name

= topic
      MQTT topic name

- username
      Username to authenticate against the broker.

Notes:    This module requires a connection to an MQTT broker such as
      Mosquitto http://mosquitto.org and the `mosquitto' Python
      module (http://mosquitto.org/python).

Requirements:    mosquitto

- local_action: mqtt
              topic=service/ansible/{{ ansible_hostname }}
              payload="Hello at {{ ansible_date_time.iso8601 }}"
              qos=0
              retain=false
              client_id=ans001

> MYSQL_DB

Add or remove MySQL databases from a remote host.

Options (= is mandatory):

- collation
      Collation mode

- encoding
      Encoding mode

- login_host
      Host running the database

- login_password
      The password used to authenticate with

- login_port
      Port of the MySQL server

- login_unix_socket
      The path to a Unix domain socket for local connections

- login_user
      The username used to authenticate with

= name
      name of the database to add or remove

- state
      The database state (Choices: present, absent, dump, import)

- target
      Location, on the remote host, of the dump file to read from
      or write to. Uncompressed SQL files (`.sql') as well as
      bzip2 (`.bz2') and gzip (`.gz') compressed files are
      supported.

Notes:    Requires the MySQLdb Python package on the remote host. For
      Ubuntu, this is as easy as apt-get install python-mysqldb.
      (See [apt].) Both `login_password' and `login_user' are
      required when you are passing credentials. If none are
      present, the module will attempt to read the credentials
      from `~/.my.cnf', and finally fall back to using the MySQL
      default login of `root' with no password.

Requirements:    ConfigParser

# Create a new database with name 'bobdata'
- mysql_db: name=bobdata state=present

# Copy database dump file to remote host and restore it to database 'my_db'
- copy: src=dump.sql.bz2 dest=/tmp
- mysql_db: name=my_db state=import target=/tmp/dump.sql.bz2
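
The `dump' state works in the opposite direction of `import'; a
minimal sketch (target filename illustrative):

# Dump database 'my_db' to a compressed file on the remote host
- mysql_db: name=my_db state=dump target=/tmp/my_db.sql.gz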

> MYSQL_REPLICATION

Manages MySQL server replication, slave, master status get and
change master host.

Options (= is mandatory):

- login_host
      mysql host to connect to

- login_password
      password used to connect to the mysql host; if defined,
      login_user is also needed.

- login_unix_socket
      unix socket used to connect to the mysql server

- login_user
      username used to connect to the mysql host; if defined,
      login_password is also needed.

- master_connect_retry
      same as mysql variable

- master_host
      same as mysql variable

- master_log_file
      same as mysql variable

- master_log_pos
      same as mysql variable

- master_password
      same as mysql variable

- master_port
      same as mysql variable

- master_ssl
      same as mysql variable

- master_ssl_ca
      same as mysql variable

- master_ssl_capath
      same as mysql variable

- master_ssl_cert
      same as mysql variable

- master_ssl_cipher
      same as mysql variable

- master_ssl_key
      same as mysql variable

- master_user
      same as mysql variable

- mode
      module operating mode. Could be getslave (SHOW SLAVE
      STATUS), getmaster (SHOW MASTER STATUS), changemaster
      (CHANGE MASTER TO), startslave (START SLAVE), stopslave
      (STOP SLAVE) (Choices: getslave, getmaster, changemaster,
      stopslave, startslave)

- relay_log_file
      same as mysql variable

- relay_log_pos
      same as mysql variable

# Stop mysql slave thread
- mysql_replication: mode=stopslave

# Get master binlog file name and binlog position
- mysql_replication: mode=getmaster

# Change master to master server 192.168.1.1 and use binary log 'mysql-bin.000009' with position 4578
- mysql_replication: mode=changemaster master_host=192.168.1.1 master_log_file=mysql-bin.000009 master_log_pos=4578
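
Starting the slave thread again mirrors the stop example above:

# Start mysql slave thread
- mysql_replication: mode=startslave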

> MYSQL_USER

Adds or removes a user from a MySQL database.

Options (= is mandatory):

- append_privs
      Append the privileges defined by priv to the existing ones
      for this user instead of overwriting existing ones.
      (Choices: yes, no)

- check_implicit_admin
      Check if mysql allows login as root/nopassword before trying
      supplied credentials.

- host
      the 'host' part of the MySQL username

- login_host
      Host running the database

- login_password
      The password used to authenticate with

- login_port
      Port of the MySQL server

- login_unix_socket
      The path to a Unix domain socket for local connections

- login_user
      The username used to authenticate with

= name
      name of the user (role) to add or remove

- password
      set the user's password

- priv
      MySQL privileges string in the format:
      `db.table:priv1,priv2'

- state
      Whether the user should exist.  When `absent', removes the
      user. (Choices: present, absent)

Notes:    Requires the MySQLdb Python package on the remote host. For
      Ubuntu, this is as easy as apt-get install python-mysqldb.
      Both `login_password' and `login_user' are required when you
      are passing credentials. If none are present, the module
      will attempt to read the credentials from `~/.my.cnf', and
      finally fall back to using the MySQL default login of 'root'
      with no password. A MySQL server installs with a default
      login_user of 'root' and no password. To secure this user as
      part of an idempotent playbook, you must create at least two
      tasks: the first must change the root user's password
      without providing any login_user/login_password details; the
      second must drop a ~/.my.cnf file containing the new root
      credentials. Subsequent runs of the playbook will then
      succeed by reading the new credentials from the file.

Requirements:    ConfigParser, MySQLdb

# Create database user with name 'bob' and password '12345' with all database privileges
- mysql_user: name=bob password=12345 priv=*.*:ALL state=present

# Creates database user 'bob' and password '12345' with all database privileges and 'WITH GRANT OPTION'
- mysql_user: name=bob password=12345 priv=*.*:ALL,GRANT state=present

# Ensure no user named 'sally' exists, also passing in the auth credentials.
- mysql_user: login_user=root login_password=123456 name=sally state=absent
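
`append_privs' adds to, rather than replaces, the user's existing
grants; a minimal sketch (database name illustrative):

# Add SELECT on reportdb to bob's existing privileges instead of overwriting them
- mysql_user: name=bob append_privs=yes priv=reportdb.*:SELECT state=present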

# Example privileges string format
mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanotherdb.*:ALL

# Example using login_unix_socket to connect to server
- mysql_user: name=root password=abc123 login_unix_socket=/var/run/mysqld/mysqld.sock

# Example .my.cnf file for setting the root password
# Note: don't use quotes around the password, because the mysql_user module
# will include them in the password but the mysql client will not

[client]
user=root
password=n<_665{vS43y
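
The two-task root-securing flow described in the Notes can be
sketched as follows; the template source path is an assumption,
and the password matches the example .my.cnf file above:

# 1. Set the root password; on first run the module logs in as root with no password
- mysql_user: name=root password=n<_665{vS43y host=localhost

# 2. Drop a ~/.my.cnf with the new credentials so later runs can authenticate
- template: src=my.cnf.j2 dest=/root/.my.cnf owner=root mode=0600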

> MYSQL_VARIABLES

Query / Set MySQL variables

Options (= is mandatory):

- login_host
      mysql host to connect to

- login_password
      password used to connect to the mysql host; if defined,
      login_user is also needed.

- login_unix_socket
      unix socket used to connect to the mysql server

- login_user
      username used to connect to the mysql host; if defined,
      login_password is also needed.

- value
      If set, then sets variable value to this

= variable
      Variable name to operate on

# Check for sync_binary_log setting
- mysql_variables: variable=sync_binary_log

# Set read_only variable to 1
- mysql_variables: variable=read_only value=1

> NAGIOS

The [nagios] module has two basic functions: scheduling downtime
and toggling alerts for services or hosts. All actions require the
`host' parameter to be given explicitly. In playbooks you can use
the `{{inventory_hostname}}' variable to refer to the host the
playbook is currently running on. You can specify multiple
services at once by separating them with commas, e.g.,
`services=httpd,nfs,puppet'. When specifying what service to
handle, there is a special service value, `host', which will
handle alerts/downtime for the `host itself', e.g.,
`service=host'. This keyword may not be given together with other
services. Setting alerts/downtime for a host does not affect
alerts/downtime for any of the services running on it. To schedule
downtime for all services on a particular host, use the keyword
"all", e.g., `service=all'. When using the [nagios] module you
will need to specify your Nagios server using the `delegate_to'
parameter.

Options (= is mandatory):

= action
      Action to take. (Choices: downtime, enable_alerts,
      disable_alerts, silence, unsilence, silence_nagios,
      unsilence_nagios, command)

- author
      Author to leave downtime comments as. Only usable with the
      `downtime' action.

- cmdfile
      Path to the nagios `command file' (FIFO pipe). Only required
      if auto-detection fails.

= command
      The raw command to send to nagios, which should not include
      the submitted time header or the line-feed. *Required*
      option when using the `command' action.

- host
      Host to operate on in Nagios.

- minutes
      Minutes to schedule downtime for. Only usable with the
      `downtime' action.

= services
      What to manage downtime/alerts for. Separate multiple
      services with commas. `service' is an alias for `services'.
      *Required* option when using the `downtime',
      `enable_alerts', and `disable_alerts' actions.

Requirements:    Nagios

# set 30 minutes of apache downtime
- nagios: action=downtime minutes=30 service=httpd host={{ inventory_hostname }}

# schedule an hour of HOST downtime
- nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}

# schedule downtime for ALL services on HOST
- nagios: action=downtime minutes=45 service=all host={{ inventory_hostname }}

# schedule downtime for a few services
- nagios: action=downtime services=frob,foobar,qeuz host={{ inventory_hostname }}

# enable SMART disk alerts
- nagios: action=enable_alerts service=smart host={{ inventory_hostname }}

# "two services at once: disable httpd and nfs alerts"
- nagios: action=disable_alerts service=httpd,nfs host={{ inventory_hostname }}

# disable HOST alerts
- nagios: action=disable_alerts service=host host={{ inventory_hostname }}

# silence ALL alerts
- nagios: action=silence host={{ inventory_hostname }}

# unsilence all alerts
- nagios: action=unsilence host={{ inventory_hostname }}

# SHUT UP NAGIOS
- nagios: action=silence_nagios

# ANNOY ME NAGIOS
- nagios: action=unsilence_nagios

# command something
- nagios: action=command command='DISABLE_FAILURE_PREDICTION'

> NETSCALER

Manages Citrix NetScaler server and service entities.

Options (= is mandatory):

- action
      the action you want to perform on the entity (Choices:
      enable, disable)

= name
      name of the entity

= nsc_host
      hostname or ip of your netscaler

- nsc_protocol
      protocol used to access netscaler

= password
      password

- type
      type of the entity (Choices: server, service)

= user
      username

- validate_certs
      If `no', SSL certificates for the target url will not be
      validated. This should only be used on personally controlled
      sites using self-signed certificates. (Choices: yes, no)

Requirements:    urllib, urllib2

# Disable the server
ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass"

# Enable the server
ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass action=enable"

# Disable the service local:8080
ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass name=local:8080 type=service action=disable"
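
The same calls can be made from a playbook; a minimal sketch
using `local_action' so the API call runs from the control
machine (hostname and credentials are illustrative):

# Disable the service local:8080 from a playbook task
- local_action: netscaler nsc_host=nsc.example.com user=apiuser password=apipass name=local:8080 type=service action=disable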

> NEWRELIC_DEPLOYMENT

Notify New Relic about app deployments (see
http://newrelic.github.io/newrelic_api/NewRelicApi/Deployment.html)

Options (= is mandatory):

- app_name
      (one of app_name or application_id are required) The value
      of app_name in the newrelic.yml file used by the application

- application_id
      (one of app_name or application_id are required) The
      application id, found in the URL when viewing the
      application in RPM

- appname
      Name of the application

- changelog
      A list of changes for this deployment

- description
      Text annotation for the deployment - notes for you

- environment
      The environment for this deployment

- revision
      A revision number (e.g., git commit SHA)

= token
      API token.

- user
      The name of the user/process that triggered this deployment

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Requirements:    urllib, urllib2

- newrelic_deployment: token=AAAAAA
                       app_name=myapp
                       user='ansible deployment'
                       revision=1.0

> NOVA_COMPUTE

Create or remove virtual machines from OpenStack.

Options (= is mandatory):

- auth_url
      The keystone url for authentication

- flavor_id
      The id of the flavor in which the new VM has to be created

= image_id
      The id of the image that has to be cloned

- key_name
      The key pair name to be used when creating a VM

= login_password
      Password of login user

= login_tenant_name
      The tenant name of the login user

= login_username
      login username to authenticate to keystone

- meta
      A list of key value pairs that should be provided as a
      metadata to the new VM

= name
      Name that has to be given to the instance

- nics
      A list of network IDs to which the VM's interface should be
      attached

- region_name
      Name of the region

- security_groups
      The name of the security group to which the VM should be
      added

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- wait
      Whether the module should wait for the VM to be created.

- wait_for
      The amount of time the module should wait for the VM to get
      into active state

Requirements:    novaclient

# Creates a new VM and attaches to a network and passes metadata to the instance
- nova_compute:
       state: present
       login_username: admin
       login_password: admin
       login_tenant_name: admin
       name: vm1
       image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
       key_name: ansible_key
       wait_for: 200
       flavor_id: 4
       nics:
         - net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723
       meta:
         hostname: test1
         group: uge_master
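
Deleting the instance mirrors creation; a minimal sketch:

# Remove the VM named vm1
- nova_compute:
       state: absent
       login_username: admin
       login_password: admin
       login_tenant_name: admin
       name: vm1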

> NOVA_KEYPAIR

Add or remove a key pair from Nova.

Options (= is mandatory):

- auth_url
      The keystone url for authentication

= login_password
      Password of login user

= login_tenant_name
      The tenant name of the login user

= login_username
      login username to authenticate to keystone

= name
      Name that has to be given to the key pair

- public_key
      The public key that will be uploaded to Nova and injected
      into VMs upon creation

- region_name
      Name of the region

- state
      Indicate desired state of the resource (Choices: present,
      absent)

Requirements:    novaclient

# Creates a key pair with the running user's public key
- nova_keypair: state=present login_username=admin
                login_password=admin login_tenant_name=admin name=ansible_key
                public_key={{ lookup('file','~/.ssh/id_rsa.pub') }}

# Creates a new key pair; the private key is returned after the run.
- nova_keypair: state=present login_username=admin login_password=admin
                login_tenant_name=admin name=ansible_key

> NPM

Manage node.js packages with Node Package Manager (npm)

Options (= is mandatory):

- executable
      The executable location for npm. This is useful if you are
      using a version manager, such as nvm

- global
      Install the node.js library globally (Choices: yes, no)

- name
      The name of a node.js library to install

- path
      The base path where to install the node.js libraries

- production
      Install dependencies in production mode, excluding
      devDependencies (Choices: yes, no)

- state
      The state of the node.js library (Choices: present, absent,
      latest)

- version
      The version to be installed

# Install "coffee-script" node.js package.
- npm: name=coffee-script path=/app/location

# Install "coffee-script" node.js package at version 1.6.1.
- npm: name=coffee-script version=1.6.1 path=/app/location

# Install "coffee-script" node.js package globally.
- npm: name=coffee-script global=yes

# Remove the globally installed package "coffee-script".
- npm: name=coffee-script global=yes state=absent

# Install packages based on package.json.
- npm: path=/app/location

# Update packages based on package.json to their latest version.
- npm: path=/app/location state=latest

# Install packages based on package.json using the npm installed with nvm v0.10.1.
- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present

> OHAI

Similar to the [facter] module, this runs the `Ohai' discovery
program (http://wiki.opscode.com/display/chef/Ohai) on the remote
host and returns JSON inventory data. `Ohai' data is a bit more
verbose and nested than `facter'.

Requirements:    ohai

# Retrieve (ohai) data from all Web servers and store in one-file per host
ansible webservers -m ohai --tree=/tmp/ohaidata

> OPEN_ISCSI

Discover targets on a given portal, (dis)connect targets, mark
targets for manual or automatic startup, and return device nodes
of connected targets.

Options (= is mandatory):

- auto_node_startup
      whether the target node should be automatically connected at
      startup (Choices: True, False)

- discover
      whether the list of target nodes on the portal should be
      (re)discovered and added to the persistent iscsi database.
      Keep in mind that iscsiadm discovery resets configuration,
      like node.startup, to manual; hence, combined with
      auto_node_startup=yes, this will always return a changed
      state. (Choices: True, False)

- login
      whether the target node should be connected (Choices: True,
      False)

- node_auth
      discovery.sendtargets.auth.authmethod

- node_pass
      discovery.sendtargets.auth.password

- node_user
      discovery.sendtargets.auth.username

- port
      the port on which the iscsi target process listens

- portal
      the ip address of the iscsi target

- show_nodes
      whether the list of nodes in the persistent iscsi database
      should be returned by the module (Choices: True, False)

- target
      the iscsi target name

Requirements:    open_iscsi library and tools (iscsiadm)

Examples:

open_iscsi: show_nodes=yes discover=yes portal=10.1.2.3


open_iscsi: portal={{iscsi_target}} login=yes discover=yes


open_iscsi: login=yes target=iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d


open_iscsi: login=no target=iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
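
The node_auth/node_user/node_pass options above have no example;
a sketch of a CHAP-authenticated discovery and login (portal
address and credentials are placeholders):

open_iscsi: portal=10.1.2.3 discover=yes login=yes node_auth=CHAP node_user=iscsiuser node_pass=secret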

> OPENBSD_PKG

Manage packages on OpenBSD using the pkg tools.

Options (= is mandatory):

= name
      Name of the package.

= state
      `present' will make sure the package is installed. `latest'
      will make sure the latest version of the package is
      installed. `absent' will make sure the specified package is
      not installed. (Choices: present, latest, absent)

# Make sure nmap is installed
- openbsd_pkg: name=nmap state=present

# Make sure nmap is the latest version
- openbsd_pkg: name=nmap state=latest

# Make sure nmap is not installed
- openbsd_pkg: name=nmap state=absent

> OPENVSWITCH_BRIDGE

Manage Open vSwitch bridges

Options (= is mandatory):

= bridge
      Name of bridge to manage

- state
      Whether the bridge should exist (Choices: present, absent)

- timeout
      How long to wait for ovs-vswitchd to respond

Requirements:    ovs-vsctl

# Create a bridge named br-int
- openvswitch_bridge: bridge=br-int state=present

> OPENVSWITCH_PORT

Manage Open vSwitch ports

Options (= is mandatory):

= bridge
      Name of bridge to manage

= port
      Name of port to manage on the bridge

- state
      Whether the port should exist (Choices: present, absent)

- timeout
      How long to wait for ovs-vswitchd to respond

Requirements:    ovs-vsctl

# Creates port eth2 on bridge br-ex
- openvswitch_port: bridge=br-ex port=eth2 state=present
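
Since `state' also accepts `absent', removing a port mirrors the
create example above (a sketch reusing the same names):

# Removes port eth2 from bridge br-ex
- openvswitch_port: bridge=br-ex port=eth2 state=absent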

> OPKG

Manages OpenWrt packages

Options (= is mandatory):

= name
      name of package to install/remove

- state
      state of the package (Choices: present, absent)

- update_cache
      update the package db first (Choices: yes, no)

- opkg: name=foo state=present
- opkg: name=foo state=present update_cache=yes
- opkg: name=foo state=absent
- opkg: name=foo,bar state=absent

> OSX_SAY

makes an OS X computer speak!  Amuse your friends, annoy your
coworkers!

Options (= is mandatory):

= msg
      What to say

- voice
      What voice to use

Notes:    If you like this module, you may also be interested in the osx_say
      callback in the plugins/ directory of the source checkout.

Requirements:    say

- local_action: osx_say msg="{{inventory_hostname}} is all done" voice=Zarvox

> OVIRT

allows you to create new instances, either from scratch or from
an image, in addition to deleting or stopping instances on the
oVirt/RHEV platform

Options (= is mandatory):

- disk_alloc
      define if disk is thin or preallocated (Choices: thin,
      preallocated)

- disk_int
      interface type of the disk (Choices: virtio, ide)

- image
      template to use for the instance

- instance_cores
      define the instance's number of cores

- instance_cpus
      the instance's number of cpu's

- instance_disksize
      size of the instance's disk in GB

- instance_mem
      the instance's amount of memory in MB

= instance_name
      the name of the instance to use

- instance_network
      the logical network the machine should belong to

- instance_nic
      name of the network interface in oVirt/RHEV

- instance_os
      type of Operating System

- instance_type
      define if the instance is a server or desktop (Choices:
      server, desktop)

= password
      password of the user to authenticate with

- region
      the oVirt/RHEV datacenter where you want to deploy to

- resource_type
      whether you want to deploy an image or create an instance
      from scratch. (Choices: new, template)

- sdomain
      the Storage Domain where you want to create the instance's
      disk on.

- state
      create, terminate or remove instances (Choices: present,
      absent, shutdown, started, restarted)

= url
      the url of the oVirt instance

= user
      the user to authenticate with

- zone
      deploy the image to this oVirt cluster

Requirements:    ovirt-engine-sdk

# Basic example provisioning from image.

action: ovirt >
    user=admin@internal
    url=https://ovirt.example.com
    instance_name=ansiblevm04
    password=secret
    image=centos_64
    zone=cluster01
    resource_type=template

# Full example to create new instance from scratch
action: ovirt >
    instance_name=testansible
    resource_type=new
    instance_type=server
    user=admin@internal
    password=secret
    url=https://ovirt.example.com
    instance_disksize=10
    zone=cluster01
    region=datacenter1
    instance_cpus=1
    instance_nic=nic1
    instance_network=rhevm
    instance_mem=1000
    disk_alloc=thin
    sdomain=FIBER01
    instance_cores=1
    instance_os=rhel_6x64
    disk_int=virtio

# stopping an instance
action: ovirt >
    instance_name=testansible
    state=shutdown
    user=admin@internal
    password=secret
    url=https://ovirt.example.com

# starting an instance
action: ovirt >
    instance_name=testansible
    state=started
    user=admin@internal
    password=secret
    url=https://ovirt.example.com

> PACMAN

Manages Archlinux packages

Options (= is mandatory):

= name
      name of package to install, upgrade or remove.

- recurse
      when removing a package, also remove those of its
      dependencies that were not explicitly installed and are not
      required by other packages (Choices: yes, no)

- state
      desired state of the package. (Choices: installed, absent)

- update_cache
      update the package database first (pacman -Syy). (Choices:
      yes, no)

# Install package foo
- pacman: name=foo state=installed

# Remove package foo
- pacman: name=foo state=absent

# Remove packages foo and bar
- pacman: name=foo,bar state=absent

# Recursively remove package baz
- pacman: name=baz state=absent recurse=yes

# Update the package database (pacman -Syy) and install bar (bar will be updated if a newer version exists)
- pacman: name=bar state=installed update_cache=yes

> PAGERDUTY

This module will let you create PagerDuty maintenance windows

Options (= is mandatory):

- desc
      Short description of the maintenance window.

- hours
      Length of the maintenance window in hours.

= name
      PagerDuty unique subdomain.

= passwd
      PagerDuty user password.

- service
      PagerDuty service ID.

= state
      Create a maintenance window or get a list of ongoing
      windows. (Choices: running, started, ongoing)

= user
      PagerDuty user ID.

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

Notes:    This module does not yet have support to end maintenance windows.

Requirements:    PagerDuty API access

# List ongoing maintenance windows.
- pagerduty: name=companyabc user=example@example.com passwd=password123 state=ongoing

# Create a 1 hour maintenance window for service FOO123.
- pagerduty: name=companyabc
             user=example@example.com
             passwd=password123
             state=running
             service=FOO123

# Create a 4 hour maintenance window for service FOO123 with the description "deployment".
- pagerduty: name=companyabc
             user=example@example.com
             passwd=password123
             state=running
             service=FOO123
             hours=4
             desc=deployment

> PAUSE

Pauses playbook execution for a set amount of time, or until a
prompt is acknowledged. All parameters are optional. The default
behavior is to pause with a prompt. You can use `ctrl+c' if you
wish to advance a pause earlier than it is set to expire or if
you need to abort a playbook run entirely. To continue early,
press `ctrl+c' and then `c'. To abort a playbook, press `ctrl+c'
and then `a'. The pause module integrates into async/parallelized
playbooks without any special considerations (see also: Rolling
Updates). When using pauses with the `serial' playbook parameter
(as in rolling updates) you are only prompted once for the
current group of hosts.

Options (= is mandatory):

- minutes
      Number of minutes to pause for.

- prompt
      Optional text to use for the prompt message.

- seconds
      Number of seconds to pause for.

# Pause for 5 minutes to build app cache.
- pause: minutes=5

# Pause until you can verify updates to an application were successful.
- pause:

# A helpful reminder of what to look out for post-update.
- pause: prompt="Make sure org.foo.FooOverload exception is not present"
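
The `seconds' option from the list above works the same way as
`minutes'; a minimal sketch:

# Pause for 30 seconds.
- pause: seconds=30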

> PING

A trivial test module, this module always returns `pong' on
successful contact. It does not make sense in playbooks, but it is
useful from `/usr/bin/ansible'

# Test 'webservers' status
ansible webservers -m ping

> PINGDOM

This module will let you pause/unpause Pingdom alerts

Options (= is mandatory):

= checkid
      Pingdom ID of the check.

= key
      Pingdom API key.

= passwd
      Pingdom user password.

= state
      Define whether or not the check should be running or paused.
      (Choices: running, paused)

= uid
      Pingdom user ID.

Notes:    This module does not yet have support to add/remove checks.

Requirements:    This pingdom python library: https://github.com/mbabineau/pingdom-
      python

# Pause the check with the ID of 12345.
- pingdom: uid=example@example.com
           passwd=password123
           key=apipassword123
           checkid=12345
           state=paused

# Unpause the check with the ID of 12345.
- pingdom: uid=example@example.com
           passwd=password123
           key=apipassword123
           checkid=12345
           state=running

> PIP

Manage Python library dependencies. To use this module, one of the
following keys is required: `name' or `requirements'.

Options (= is mandatory):

- chdir
      cd into this directory before running the command

- executable
      The explicit executable or a pathname to the executable to
      be used to run pip for a specific version of Python
      installed in the system. For example `pip-3.3', if there are
      both Python 2.7 and 3.3 installations in the system and you
      want to run pip for the Python 3.3 installation.

- extra_args
      Extra arguments passed to pip.

- name
      The name of a Python library to install or the url of the
      remote package.

- requirements
      The path to a pip requirements file

- state
      The state of module (Choices: present, absent, latest)

- version
      The version number to install of the Python library
      specified in the `name' parameter

- virtualenv
      An optional path to a `virtualenv' directory to install into

- virtualenv_command
      The command or a pathname to the command to create the
      virtual environment with. For example `pyvenv',
      `virtualenv', `virtualenv2', `~/bin/virtualenv',
      `/usr/local/bin/virtualenv'.

- virtualenv_site_packages
      Whether the virtual environment will inherit packages from
      the global site-packages directory. Note that if this
      setting is changed on an already existing virtual
      environment, it will not have any effect; the environment
      must be deleted and recreated. (Choices: yes, no)

Notes:    Please note that virtualenv (http://www.virtualenv.org/) must be
      installed on the remote host if the virtualenv parameter is
      specified.

Requirements:    virtualenv, pip

# Install (Bottle) python package.
- pip: name=bottle

# Install (Bottle) python package on version 0.11.
- pip: name=bottle version=0.11

# Install (MyApp) using one of the remote protocols (bzr+,hg+,git+,svn+). You do not have to supply '-e' option in extra_args.
- pip: name='svn+http://myrepo/svn/MyApp#egg=MyApp'

# Install (Bottle) into the specified (virtualenv), inheriting none of the globally installed modules
- pip: name=bottle virtualenv=/my_app/venv

# Install (Bottle) into the specified (virtualenv), inheriting globally installed modules
- pip: name=bottle virtualenv=/my_app/venv virtualenv_site_packages=yes

# Install (Bottle) into the specified (virtualenv), using Python 2.7
- pip: name=bottle virtualenv=/my_app/venv virtualenv_command=virtualenv-2.7

# Install specified python requirements.
- pip: requirements=/my_app/requirements.txt

# Install specified python requirements in indicated (virtualenv).
- pip: requirements=/my_app/requirements.txt virtualenv=/my_app/venv

# Install specified python requirements and custom Index URL.
- pip: requirements=/my_app/requirements.txt extra_args='-i https://example.com/pypi/simple'

# Install (Bottle) for Python 3.3 specifically, using the 'pip-3.3' executable.
- pip: name=bottle executable=pip-3.3

> PKGIN

Manages SmartOS packages

Options (= is mandatory):

= name
      name of package to install/remove

- state
      state of the package (Choices: present, absent)

# install package foo
- pkgin: name=foo state=present

# remove package foo
- pkgin: name=foo state=absent

# remove packages foo and bar
- pkgin: name=foo,bar state=absent

> PKGNG

Manage binary packages for FreeBSD using 'pkgng' which is
available in versions after 9.0.

Options (= is mandatory):

- cached
      use local package base or try to fetch an updated one
      (Choices: yes, no)

= name
      name of package to install/remove

- pkgsite
      specify packagesite to use for downloading packages, if not
      specified, use settings from /usr/local/etc/pkg.conf

- state
      state of the package (Choices: present, absent)

Notes:    When using pkgsite, be aware that packages already in the
      cache will not be downloaded again.

# Install package foo
- pkgng: name=foo state=present

# Remove packages foo and bar
- pkgng: name=foo,bar state=absent

> PKGUTIL

Manages CSW packages (SVR4 format) on Solaris 10 and 11. These
were the native packages on Solaris <= 10 and are available as a
legacy feature in Solaris 11. Pkgutil is an advanced packaging
system, which resolves dependencies on installation. It is
designed for CSW packages.

Options (= is mandatory):

= name
      Package name, e.g. (`CSWnrpe')

- site
      Specifies the repository path to install the package from.
      Its global definition is done in
      `/etc/opt/csw/pkgutil.conf'.

= state
      Whether to install (`present'), or remove (`absent') a
      package. The upgrade (`latest') operation will update or
      install the package to the latest version available. Note:
      the module has a limitation that `latest' only works for
      one package, not lists of them. (Choices: present, absent,
      latest)

# Install a package
pkgutil: name=CSWcommon state=present

# Install a package from a specific repository
pkgutil: name=CSWnrpe site=ftp://myinternal.repo/opencsw/kiel state=latest
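
The `absent' state from the choices above removes a package; a
sketch:

# Remove a package
pkgutil: name=CSWcommon state=absent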

> PORTINSTALL

Manage packages for FreeBSD using 'portinstall'.

Options (= is mandatory):

= name
      name of package to install/remove

- state
      state of the package (Choices: present, absent)

- use_packages
      use packages instead of ports whenever available (Choices:
      yes, no)

# Install package foo
- portinstall: name=foo state=present

# Install package security/cyrus-sasl2-saslauthd
- portinstall: name=security/cyrus-sasl2-saslauthd state=present

# Remove packages foo and bar
- portinstall: name=foo,bar state=absent

> POSTGRESQL_DB

Add or remove PostgreSQL databases from a remote host.

Options (= is mandatory):

- encoding
      Encoding of the database

- lc_collate
      Collation order (LC_COLLATE) to use in the database. Must
      match collation order of template database unless
      `template0' is used as template.

- lc_ctype
      Character classification (LC_CTYPE) to use in the database
      (e.g. lower, upper, ...) Must match LC_CTYPE of template
      database unless `template0' is used as template.

- login_host
      Host running the database

- login_password
      The password used to authenticate with

- login_user
      The username used to authenticate with

= name
      name of the database to add or remove

- owner
      Name of the role to set as owner of the database

- port
      Database port to connect to.

- state
      The database state (Choices: present, absent)

- template
      Template used to create the database

Notes:    The default authentication assumes that you are either logging in
      as or sudo'ing to the `postgres' account on the host. This
      module uses `psycopg2', a Python PostgreSQL database
      adapter. You must ensure that psycopg2 is installed on the
      host before using this module. If the remote host is the
      PostgreSQL server (which is the default case), then
      PostgreSQL must also be installed on the remote host. For
      Ubuntu-based systems, install the `postgresql', `libpq-dev',
      and `python-psycopg2' packages on the remote host before
      using this module.

Requirements:    psycopg2

# Create a new database with name "acme"
- postgresql_db: name=acme

# Create a new database with name "acme" and specific encoding and locale
# settings. If a template different from "template0" is specified, encoding
# and locale settings must match those of the template.
- postgresql_db: name=acme
                 encoding='UTF-8'
                 lc_collate='de_DE.UTF-8'
                 lc_ctype='de_DE.UTF-8'
                 template='template0'

> POSTGRESQL_PRIVS

Grant or revoke privileges on PostgreSQL database objects. This
module is basically a wrapper around most of the functionality of
PostgreSQL's GRANT and REVOKE statements, with detection of
changes (GRANT/REVOKE `privs' ON `type' `objs' TO/FROM `roles').

Options (= is mandatory):

= database
      Name of database to connect to. Alias: `db'

- grant_option
      Whether `role' may grant/revoke the specified
      privileges/group memberships to others. Set to `no' to
      revoke GRANT OPTION; leave unspecified to make no changes.
      `grant_option' only has an effect if `state' is `present'.
      Alias: `admin_option' (Choices: yes, no)

- host
      Database host address. If unspecified, connect via Unix
      socket. Alias: `login_host'

- login
      The username to authenticate with. Alias: `login_user'

- objs
      Comma separated list of database objects to set privileges
      on. If `type' is `table' or `sequence', the special value
      `ALL_IN_SCHEMA' can be provided instead to specify all
      database objects of type `type' in the schema specified via
      `schema'. (This also works with PostgreSQL < 9.0.) If
      `type' is `database', this parameter can be omitted, in
      which case privileges are set for the database specified
      via `database'. If `type' is `function', colons (":") in
      object names will be replaced with commas (needed to
      specify function signatures; see examples). Alias: `obj'

- password
      The password to authenticate with. Alias: `login_password'

- port
      Database port to connect to.

- privs
      Comma separated list of privileges to grant/revoke. Alias:
      `priv'

= roles
      Comma separated list of role (user/group) names to set
      permissions for. The special value `PUBLIC' can be provided
      instead to set permissions for the implicitly defined
      PUBLIC group. Alias: `role'

- schema
      Schema that contains the database objects specified via
      `objs'. May only be provided if `type' is `table',
      `sequence', or `function'. Defaults to `public' in these
      cases.

- state
      If `present', the specified privileges are granted, if
      `absent' they are revoked. (Choices: present, absent)

- type
      Type of database object to set privileges on. (Choices:
      table, sequence, function, database, schema, language,
      tablespace, group)

Notes:    Default authentication assumes that postgresql_privs is run by
      the `postgres' user on the remote host (Ansible's `user' or
      sudo user). This module requires the Python package
      `psycopg2' to be installed on the remote host. In the
      default case of the remote host also being the PostgreSQL
      server, PostgreSQL has to be installed there as well. For
      Debian/Ubuntu-based systems, install the `postgresql' and
      `python-psycopg2' packages. Parameters that accept comma
      separated lists (`privs', `objs', `roles') have singular
      alias names (`priv', `obj', `role'). To revoke only `GRANT
      OPTION' for a specific object, set `state' to `present' and
      `grant_option' to `no' (see examples). Note that when
      revoking privileges from a role R, this role may still have
      access via privileges granted to any role R is a member of,
      including `PUBLIC'. Note also that privileges are revoked
      as the user specified via `login'; if R has been granted
      the same privileges by another user as well, R can still
      access database objects via those privileges. When revoking
      privileges, `RESTRICT' is assumed (see the PostgreSQL
      docs).

Requirements:    psycopg2

# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- postgresql_privs: >
    database=library
    state=present
    privs=SELECT,INSERT,UPDATE
    type=table
    objs=books,authors
    schema=public
    roles=librarian,reader
    grant_option=yes

# Same as above leveraging default values:
- postgresql_privs: >
    db=library
    privs=SELECT,INSERT,UPDATE
    objs=books,authors
    roles=librarian,reader
    grant_option=yes

# REVOKE GRANT OPTION FOR INSERT ON TABLE books FROM reader
# Note that role "reader" will be *granted* INSERT privilege itself if this
# isn't already the case (since state=present).
- postgresql_privs: >
    db=library
    state=present
    priv=INSERT
    obj=books
    role=reader
    grant_option=no

# REVOKE INSERT, UPDATE ON ALL TABLES IN SCHEMA public FROM reader
# "public" is the default schema. This also works for PostgreSQL 8.x.
- postgresql_privs: >
    db=library
    state=absent
    privs=INSERT,UPDATE
    objs=ALL_IN_SCHEMA
    role=reader

# GRANT ALL PRIVILEGES ON SCHEMA public, math TO librarian
- postgresql_privs: >
    db=library
    privs=ALL
    type=schema
    objs=public,math
    role=librarian

# GRANT ALL PRIVILEGES ON FUNCTION math.add(int, int) TO librarian, reader
# Note the separation of arguments with colons.
- postgresql_privs: >
    db=library
    privs=ALL
    type=function
    obj=add(int:int)
    schema=math
    roles=librarian,reader

# GRANT librarian, reader TO alice, bob WITH ADMIN OPTION
# Note that group role memberships apply cluster-wide and therefore are not
# restricted to database "library" here.
- postgresql_privs: >
    db=library
    type=group
    objs=librarian,reader
    roles=alice,bob
    admin_option=yes

# GRANT ALL PRIVILEGES ON DATABASE library TO librarian
# Note that here "db=postgres" specifies the database to connect to, not the
# database to grant privileges on (which is specified via the "objs" param)
- postgresql_privs: >
    db=postgres
    privs=ALL
    type=database
    obj=library
    role=librarian

# GRANT ALL PRIVILEGES ON DATABASE library TO librarian
# If objs is omitted for type "database", it defaults to the database
# to which the connection is established
- postgresql_privs: >
    db=library
    privs=ALL
    type=database
    role=librarian

> POSTGRESQL_USER

Add or remove PostgreSQL users (roles) from a remote host and,
optionally, grant the users access to an existing database or
tables. The fundamental function of the module is to create, or
delete, roles from a PostgreSQL cluster. Privilege assignment, or
removal, is an optional step, which works on one database at a
time. This allows the module to be called several times in the
same playbook to modify the permissions on different databases,
or to grant permissions to already existing users. A user cannot
be removed until all privileges have been stripped from it; in
such a situation, an attempt to remove the user will fail. To
prevent this, the fail_on_user option tells the module to try to
remove the user but continue if it cannot; the module then
reports whether changes happened and, separately, whether the
user was removed.

Options (= is mandatory):

- db
      name of database where permissions will be granted

- encrypted
      denotes whether the password is already encrypted
      (boolean).

- expires
      sets the user's password expiration.

- fail_on_user
      if `yes', fail when user can't be removed. Otherwise just
      log and continue (Choices: yes, no)

- login_host
      Host running PostgreSQL.

- login_password
      Password used to authenticate with PostgreSQL

- login_user
      User (role) used to authenticate with PostgreSQL

= name
      name of the user (role) to add or remove

- password
      set the user's password (before 1.4 this was required).
      When passing an encrypted password, it must be generated
      with the format `'str["md5"] + md5[ password + username ]'',
      resulting in a total of 35 characters. An easy way to do
      this is: `echo "md5`echo -n "verysecretpasswordJOE" |
      md5`"'.

- port
      Database port to connect to.

- priv
      PostgreSQL privileges string in the format:
      `table:priv1,priv2'

- role_attr_flags
      PostgreSQL role attributes string in the format:
      CREATEDB,CREATEROLE,SUPERUSER (Choices: [NO]SUPERUSER,
      [NO]CREATEROLE, [NO]CREATEUSER, [NO]CREATEDB, [NO]INHERIT,
      [NO]LOGIN, [NO]REPLICATION)

- state
      The user (role) state (Choices: present, absent)

Notes:    The default authentication assumes that you are either logging
      in as or sudo'ing to the postgres account on the host. This
      module uses psycopg2, a Python PostgreSQL database adapter.
      You must ensure that psycopg2 is installed on the host
      before using this module. If the remote host is the
      PostgreSQL server (which is the default case), then
      PostgreSQL must also be installed on the remote host. For
      Ubuntu-based systems, install the postgresql, libpq-dev,
      and python-psycopg2 packages on the remote host before
      using this module. If you specify PUBLIC as the user, then
      the privilege changes will apply to all users. You may not
      specify password or role_attr_flags when the PUBLIC user is
      specified.

Requirements:    psycopg2

# Create django user and grant access to database and products table
- postgresql_user: db=acme name=django password=ceec4eif7ya priv=CONNECT/products:ALL

# Create rails user, grant privilege to create other databases and demote rails from super user status
- postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER

# Remove test user privileges from acme
- postgresql_user: db=acme name=test priv=ALL/products:ALL state=absent fail_on_user=no

# Remove test user from test database and the cluster
- postgresql_user: db=test name=test priv=ALL state=absent

# Example privileges string format
INSERT,UPDATE/table:SELECT/anothertable:ALL
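
A sketch of applying a string in that format via the priv option
(the database, table, and user names here are hypothetical):

- postgresql_user: db=acme name=django priv=CONNECT/products:SELECT,INSERT/orders:ALL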

# Remove an existing user's password
- postgresql_user: db=test user=test password=NULL
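
The 35-character encrypted password format described under the
`password' option can also be produced with GNU md5sum (the
snippet above uses BSD `md5'); a sketch, with an example password
("verysecretpassword") and username ("JOE"):

```shell
# Compute "md5" + md5(password + username); names are examples only.
HASH=$(printf '%s' "verysecretpasswordJOE" | md5sum | cut -d' ' -f1)
ENCRYPTED="md5${HASH}"
echo "$ENCRYPTED"
```

The result can then be passed to the module's password option
together with encrypted=yes.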

> QUANTUM_FLOATING_IP

Add or Remove a floating IP to an instance

Options (= is mandatory):

- auth_url
      The keystone url for authentication

= instance_name
      The name of the instance to which the IP address should be
      assigned

- internal_network_name
      The name of the network of the port to associate with the
      floating ip. Necessary when the VM has multiple networks.

= login_password
      Password of login user

= login_tenant_name
      The tenant name of the login user

= login_username
      login username to authenticate to keystone

= network_name
      Name of the network from which IP has to be assigned to VM.
      Please make sure the network is an external network

- region_name
      Name of the region

- state
      Indicate desired state of the resource (Choices: present,
      absent)

Requirements:    novaclient, quantumclient, neutronclient, keystoneclient

# Assign a floating ip to the instance from an external network
- quantum_floating_ip: state=present login_username=admin login_password=admin
                       login_tenant_name=admin network_name=external_network
                       instance_name=vm1 internal_network_name=internal_network

> QUANTUM_NETWORK

Add or Remove network from OpenStack.

Options (= is mandatory):

- admin_state_up
      Whether the state should be marked as up or down

- auth_url
      The keystone url for authentication

= login_password
      Password of login user

= login_tenant_name
      The tenant name of the login user

= login_username
      login username to authenticate to keystone

= name
      Name to be assigned to the network

- provider_network_type
      The type of the network to be created (gre, vlan, local).
      Available types depend on the plugin. The Quantum service
      decides if not specified.

- provider_physical_network
      The physical network which would realize the virtual network
      for flat and vlan networks.

- provider_segmentation_id
      The id that has to be assigned to the network, in case of
      vlan networks that would be vlan id and for gre the tunnel
      id

- region_name
      Name of the region

- router_external
      If 'yes', specifies that the virtual network is an external
      network (public).

- shared
      Whether this network is shared or not

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- tenant_name
      The name of the tenant for whom the network is created

Requirements:    quantumclient, neutronclient, keystoneclient

# Create a GRE backed Quantum network with tunnel id 1 for tenant1
- quantum_network: name=t1network tenant_name=tenant1 state=present
                   provider_network_type=gre provider_segmentation_id=1
                   login_username=admin login_password=admin login_tenant_name=admin

# Create an external network
- quantum_network: name=external_network state=present
                   provider_network_type=local router_external=yes
                   login_username=admin login_password=admin login_tenant_name=admin

> QUANTUM_ROUTER

Create or Delete routers from OpenStack

Options (= is mandatory):

- admin_state_up
      desired admin state of the created router.

- auth_url
      The keystone url for authentication

= login_password
      Password of login user

= login_tenant_name
      The tenant name of the login user

= login_username
      login username to authenticate to keystone

= name
      Name to be given to the router

- region_name
      Name of the region

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- tenant_name
      Name of the tenant for which the router has to be created;
      if none is given, the router will be created for the login
      tenant.

Requirements:    quantumclient, neutronclient, keystoneclient

# Creates a router for tenant admin
- quantum_router: state=present
                login_username=admin
                login_password=admin
                login_tenant_name=admin
                name=router1

> QUANTUM_SUBNET

Add or remove a subnet from an OpenStack network

Options (= is mandatory):

- allocation_pool_end
      From the subnet pool the last IP that should be assigned to
      the virtual machines

- allocation_pool_start
      From the subnet pool the starting address from which the IP
      should be allocated

- auth_url
      The keystone URL for authentication

= cidr
      The CIDR block to assign to the subnet

- dns_nameservers
      DNS nameservers for this subnet, comma-separated

- enable_dhcp
      Whether DHCP should be enabled for this subnet.

- gateway_ip
      The IP address to assign to the gateway for this subnet

- ip_version
      The IP version of the subnet (4 or 6)

= login_password
      Password of login user

= login_tenant_name
      The tenant name of the login user

= login_username
      login username to authenticate to keystone

= network_name
      Name of the network to which the subnet should be attached

- region_name
      Name of the region

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- tenant_name
      The name of the tenant for whom the subnet should be created

Requirements:    quantumclient, neutronclient, keystoneclient

# Create a subnet for a tenant with the specified subnet
- quantum_subnet: state=present login_username=admin login_password=admin
                  login_tenant_name=admin tenant_name=tenant1
                  network_name=network1 name=net1subnet cidr=192.168.0.0/24
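
# A fuller illustrative variant that also pins the allocation pool,
# gateway and DNS servers (all addresses are placeholder values)
- quantum_subnet: state=present login_username=admin login_password=admin
                  login_tenant_name=admin tenant_name=tenant1
                  network_name=network1 name=net1subnet cidr=192.168.0.0/24
                  gateway_ip=192.168.0.1 enable_dhcp=true
                  allocation_pool_start=192.168.0.10 allocation_pool_end=192.168.0.100
                  dns_nameservers=8.8.8.8,8.8.4.4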

> RABBITMQ_PARAMETER

Manage dynamic, cluster-wide parameters for RabbitMQ

Options (= is mandatory):

= component
      Name of the component of which the parameter is being set

= name
      Name of the parameter being set

- node
      erlang node name of the rabbit we wish to configure

- state
      Specify if user is to be added or removed (Choices: present,
      absent)

- value
      Value of the parameter, as a JSON term

- vhost
      vhost to apply access privileges.

# Set the federation parameter 'local_username' to a value of 'guest' (in quotes)
- rabbitmq_parameter: component=federation
                      name=local-username
                      value='"guest"'
                      state=present

> RABBITMQ_PLUGIN

Enables or disables RabbitMQ plugins

Options (= is mandatory):

= names
      Comma-separated list of plugin names

- new_only
      Only enable missing plugins; does not disable plugins that
      are not in the names list (Choices: yes, no)

- prefix
      Specify a custom install prefix to a Rabbit

- state
      Specify if plugins are to be enabled or disabled (Choices:
      enabled, disabled)

# Enables the rabbitmq_management plugin
- rabbitmq_plugin: names=rabbitmq_management state=enabled
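
# Enable an additional plugin without disabling plugins that are
# already enabled but absent from `names' (plugin name illustrative)
- rabbitmq_plugin: names=rabbitmq_federation new_only=yes state=enabled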

> RABBITMQ_POLICY

Manage the state of a policy in RabbitMQ.

Options (= is mandatory):

= name
      The name of the policy to manage.

- node
      Erlang node name of the rabbit we wish to configure.

= pattern
      A regex of queues to apply the policy to.

- priority
      The priority of the policy.

- state
      The state of the policy. (Choices: present, absent)

= tags
      A dict or string describing the policy.

- vhost
      The name of the vhost to apply to.

- name: ensure the default vhost contains the HA policy via a dict
  rabbitmq_policy: name=HA pattern='.*'
  args:
    tags:
      "ha-mode": all

- name: ensure the default vhost contains the HA policy
  rabbitmq_policy: name=HA pattern='.*' tags="ha-mode=all"

> RABBITMQ_USER

Add or remove RabbitMQ users and assign permissions

Options (= is mandatory):

- configure_priv
      Regular expression to restrict configure actions on a
      resource for the specified vhost. By default all actions
      are restricted.

- force
      Deletes and recreates the user. (Choices: yes, no)

- node
      erlang node name of the rabbit we wish to configure

- password
      Password of user to add. To change the password of an
      existing user, you must also specify `force=yes'.

- read_priv
      Regular expression to restrict read actions on a resource
      for the specified vhost. By default all actions are
      restricted.

- state
      Specify if user is to be added or removed (Choices: present,
      absent)

- tags
      User tags, specified as a comma-delimited string

= user
      Name of user to add

- vhost
      vhost to apply access privileges.

- write_priv
      Regular expression to restrict write actions on a resource
      for the specified vhost. By default all actions are
      restricted.

# Add user to server and assign full access control
- rabbitmq_user: user=joe
                 password=changeme
                 vhost=/
                 configure_priv=.*
                 read_priv=.*
                 write_priv=.*
                 state=present
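
# Change an existing user's password; `force=yes' is required, and
# because the user is deleted and recreated the permissions are
# re-stated (password value illustrative)
- rabbitmq_user: user=joe
                 password=newsecret
                 force=yes
                 vhost=/
                 configure_priv=.*
                 read_priv=.*
                 write_priv=.*
                 state=present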

> RABBITMQ_VHOST

Manage the state of a virtual host in RabbitMQ

Options (= is mandatory):

= name
      The name of the vhost to manage

- node
      erlang node name of the rabbit we wish to configure

- state
      The state of vhost (Choices: present, absent)

- tracing
      Enable/disable tracing for a vhost (Choices: yes, no)

# Ensure that the vhost /test exists.
- rabbitmq_vhost: name=/test state=present

> RAW

Executes a low-down and dirty SSH command, not going through the
module subsystem. This is useful and should only be done in two
cases. The first case is installing `python-simplejson' on older
(Python 2.4 and before) hosts that need it as a dependency to run
modules, since nearly all core modules require it. Another is
speaking to any devices such as routers that do not have any
Python installed. In any other case, using the [shell] or
[command] module is much more appropriate. Arguments given to
[raw] are run directly through the configured remote shell.
Standard output, error output and return code are returned when
available. There is no change handler support for this module.
This module does not require python on the remote system, much
like the [script] module.

Options (= is mandatory):

- executable
      change the shell used to execute the command. Should be an
      absolute path to the executable.

= free_form
      the raw module takes a free form command to run

Notes:    If you want to execute a command securely and predictably, it may
      be better to use the [command] module instead. Best
      practices when writing playbooks will follow the trend of
      using [command] unless [shell] is explicitly required. When
      running ad-hoc commands, use your best judgement.

# Bootstrap a legacy python 2.4 host
- raw: yum -y install python-simplejson
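
# Run a command under an explicit shell on the remote host
# (shell path is illustrative)
- raw: cat /etc/motd executable=/bin/bash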

> RAX

creates / deletes a Rackspace Public Cloud instance and optionally
waits for it to be 'running'.

Options (= is mandatory):

- api_key
      Rackspace API key (overrides `credentials')

- auth_endpoint
      The URI of the authentication service

- auto_increment
      Whether or not to increment a single number with the name of
      the created servers. Only applicable when used with the
      `group' attribute or meta key.

- count
      number of instances to launch

- count_offset
      number count to start at

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- disk_config
      Disk partitioning strategy (Choices: auto, manual)

- env
      Environment as configured in ~/.pyrax.cfg, see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration

- exact_count
      Explicitly ensure an exact count of instances, used with
      state=active/present

- files
      Files to insert into the instance.
      remotefilename:localcontent

- flavor
      flavor to use for the instance

- group
      host group to assign to server, is also used for idempotent
      operations to ensure a specific number of instances

- identity_type
      Authentication mechanism to use, such as rackspace or
      keystone

- image
      image to use for the instance. Can be an `id', `human_id' or
      `name'

- instance_ids
      list of instance ids, currently only used when
      state='absent' to remove instances

- key_name
      key pair to use on the instance

- meta
      A hash of metadata to associate with the instance

- name
      Name to give the instance

- networks
      The network to attach to the instances. If specified, you
      must include ALL networks including the public and private
      interfaces. Can be `id' or `label'.

- region
      Region to create an instance in

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- tenant_id
      The tenant ID used for authentication

- tenant_name
      The tenant name used for authentication

- username
      Rackspace username (overrides `credentials')

- verify_ssl
      Whether or not to require SSL validation of API endpoints

- wait
      wait for the instance to be in state 'running' before
      returning (Choices: yes, no)

- wait_timeout
      how long before wait gives up, in seconds

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...).

Requirements:    pyrax

- name: Build a Cloud Server
  hosts: local
  gather_facts: False
  tasks:
    - name: Server build request
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: rax-test1
        flavor: 5
        image: b11d9567-e412-4255-96b9-bd63ab23bcfe
        files:
          /root/.ssh/authorized_keys: /home/localuser/.ssh/id_rsa.pub
          /root/test.txt: /home/localuser/test.txt
        wait: yes
        state: present
        networks:
          - private
          - public
      register: rax

- name: Build an exact count of cloud servers with incremented names
  hosts: local
  gather_facts: False
  tasks:
    - name: Server build requests
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: test%03d.example.org
        flavor: performance1-1
        image: ubuntu-1204-lts-precise-pangolin
        state: present
        count: 10
        count_offset: 10
        exact_count: yes
        group: test
        wait: yes
      register: rax
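
# Remove servers by instance id (the id below is a placeholder)
- name: Remove a Cloud Server
  hosts: local
  gather_facts: False
  tasks:
    - name: Server delete request
      local_action:
        module: rax
        credentials: ~/.raxpub
        instance_ids:
          - 00000000-0000-0000-0000-000000000000
        state: absent
        wait: yes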

> RAX_CLB

creates / deletes a Rackspace Public Cloud load balancer.

Options (= is mandatory):

- algorithm
      algorithm for the balancer being created (Choices: RANDOM,
      LEAST_CONNECTIONS, ROUND_ROBIN, WEIGHTED_LEAST_CONNECTIONS,
      WEIGHTED_ROUND_ROBIN)

- api_key
      Rackspace API key (overrides `credentials')

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- meta
      A hash of metadata to associate with the instance

- name
      Name to give the load balancer

- port
      Port for the balancer being created

- protocol
      Protocol for the balancer being created (Choices: DNS_TCP,
      DNS_UDP, FTP, HTTP, HTTPS, IMAPS, IMAPv4, LDAP, LDAPS,
      MYSQL, POP3, POP3S, SMTP, TCP, TCP_CLIENT_FIRST, UDP,
      UDP_STREAM, SFTP)

- region
      Region to create the load balancer in

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- timeout
      timeout for communication between the balancer and the node

- type
      type of interface for the balancer being created (Choices:
      PUBLIC, SERVICENET)

- username
      Rackspace username (overrides `credentials')

- vip_id
      Virtual IP ID to use when creating the load balancer for
      purposes of sharing an IP with another load balancer of
      another protocol

- wait
      wait for the balancer to be in state 'running' before
      returning (Choices: yes, no)

- wait_timeout
      how long before wait gives up, in seconds

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...).

Requirements:    pyrax

- name: Build a Load Balancer
  gather_facts: False
  hosts: local
  connection: local
  tasks:
    - name: Load Balancer create request
      local_action:
        module: rax_clb
        credentials: ~/.raxpub
        name: my-lb
        port: 8080
        protocol: HTTP
        type: SERVICENET
        timeout: 30
        region: DFW
        wait: yes
        state: present
        meta:
          app: my-cool-app
      register: my_lb

> RAX_CLB_NODES

Adds, modifies and removes nodes from a Rackspace Cloud Load
Balancer

Options (= is mandatory):

- address
      IP address or domain name of the node

- api_key
      Rackspace API key (overrides `credentials')

- condition
      Condition for the node, which determines its role within the
      load balancer (Choices: enabled, disabled, draining)

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

= load_balancer_id
      Load balancer id

- node_id
      Node id

- port
      Port number of the load balanced service on the node

- region
      Region to authenticate in

- state
      Indicate desired state of the node (Choices: present,
      absent)

- type
      Type of node (Choices: primary, secondary)

- username
      Rackspace username (overrides `credentials')

- virtualenv
      Path to a virtualenv that should be activated before doing
      anything. The virtualenv has to already exist. Useful if
      installing pyrax globally is not an option.

- wait
      Wait for the load balancer to become active before returning
      (Choices: yes, no)

- wait_timeout
      How long to wait before giving up and returning an error

- weight
      Weight of node

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDENTIALS' and `RAX_REGION'.

Requirements:    pyrax

# Add a new node to the load balancer
- local_action:
    module: rax_clb_nodes
    load_balancer_id: 71
    address: 10.2.2.3
    port: 80
    condition: enabled
    type: primary
    wait: yes
    credentials: /path/to/credentials

# Drain connections from a node
- local_action:
    module: rax_clb_nodes
    load_balancer_id: 71
    node_id: 410
    condition: draining
    wait: yes
    credentials: /path/to/credentials

# Remove a node from the load balancer
- local_action:
    module: rax_clb_nodes
    load_balancer_id: 71
    node_id: 410
    state: absent
    wait: yes
    credentials: /path/to/credentials

> RAX_DNS_RECORD

Manage DNS records on Rackspace Cloud DNS

Options (= is mandatory):

- api_key
      Rackspace API key (overrides `credentials')

- comment
      Brief description of the domain. Maximum length of 160
      characters

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

= data
      IP address for A/AAAA record, FQDN for CNAME/MX/NS, or text
      data for SRV/TXT

= domain
      Domain name to create the record in

= name
      FQDN record name to create

- priority
      Required for MX and SRV records, but forbidden for other
      record types. If specified, must be an integer from 0 to
      65535.

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- ttl
      Time to live of domain in seconds

- type
      DNS record type (Choices: A, AAAA, CNAME, MX, NS, SRV, TXT)

- username
      Rackspace username (overrides `credentials')

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...).

Requirements:    pyrax

- name: Create record
  hosts: all
  gather_facts: False
  tasks:
    - name: Record create request
      local_action:
        module: rax_dns_record
        credentials: ~/.raxpub
        domain: example.org
        name: www.example.org
        data: 127.0.0.1
        type: A
      register: rax_dns_record
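
# MX and SRV records additionally require a priority (values illustrative)
- name: Create MX record
  hosts: all
  gather_facts: False
  tasks:
    - name: MX record create request
      local_action:
        module: rax_dns_record
        credentials: ~/.raxpub
        domain: example.org
        name: example.org
        data: mail.example.org
        type: MX
        priority: 10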

> RAX_FACTS

Gather facts for Rackspace Cloud Servers.

Options (= is mandatory):

- address
      Server IP address to retrieve facts for, will match any IP
      assigned to the server

- api_key
      Rackspace API key (overrides `credentials')

- auth_endpoint
      The URI of the authentication service

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- env
      Environment as configured in ~/.pyrax.cfg, see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration

- id
      Server ID to retrieve facts for

- identity_type
      Authentication mechanism to use, such as rackspace or
      keystone

- name
      Server name to retrieve facts for

- region
      Region to create an instance in

- tenant_id
      The tenant ID used for authentication

- tenant_name
      The tenant name used for authentication

- username
      Rackspace username (overrides `credentials')

- verify_ssl
      Whether or not to require SSL validation of API endpoints

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...).

Requirements:    pyrax

- name: Gather info about servers
  hosts: all
  gather_facts: False
  tasks:
    - name: Get facts about servers
      local_action:
        module: rax_facts
        credentials: ~/.raxpub
        name: "{{ inventory_hostname }}"
        region: DFW
    - name: Map some facts
      set_fact:
        ansible_ssh_host: "{{ rax_accessipv4 }}"

> RAX_FILES

Manipulate Rackspace Cloud Files Containers

Options (= is mandatory):

- api_key
      Rackspace API key (overrides `credentials')

- clear_meta
      Optionally clear existing metadata when applying metadata to
      existing containers. Selecting this option is only
      appropriate when setting type=meta (Choices: yes, no)

= container
      The container to use for container or metadata operations.

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- meta
      A hash of items to set as metadata values on a container

- private
      Used to set a container as private, removing it from the
      CDN.  *Warning!* Private containers, if previously made
      public, can have live objects available until the TTL on
      cached objects expires

- public
      Used to set a container as public, available via the Cloud
      Files CDN

- region
      Region to create an instance in

- ttl
      In seconds, set a container-wide TTL for all objects cached
      on CDN edge nodes. Setting a TTL is only appropriate for
      containers that are public

- type
      Type of object to do work on, i.e. metadata object or a
      container object (Choices: file, meta)

- username
      Rackspace username (overrides `credentials')

- web_error
      Sets an object to be presented as the HTTP error page when
      accessed by the CDN URL

- web_index
      Sets an object to be presented as the HTTP index page when
      accessed by the CDN URL

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...).

Requirements:    pyrax

- name: "Test Cloud Files Containers"
  hosts: local
  gather_facts: no
  tasks:
    - name: "List all containers"
      rax_files: state=list

    - name: "Create container called 'mycontainer'"
      rax_files: container=mycontainer

    - name: "Create container 'mycontainer2' with metadata"
      rax_files:
        container: mycontainer2
        meta:
          key: value
          file_for: someuser@example.com

    - name: "Set a container's web index page"
      rax_files: container=mycontainer web_index=index.html

    - name: "Set a container's web error page"
      rax_files: container=mycontainer web_error=error.html

    - name: "Make container public"
      rax_files: container=mycontainer public=yes

    - name: "Make container public with a 24 hour TTL"
      rax_files: container=mycontainer public=yes ttl=86400

    - name: "Make container private"
      rax_files: container=mycontainer private=yes

- name: "Test Cloud Files Containers Metadata Storage"
  hosts: local
  gather_facts: no
  tasks:
    - name: "Get mycontainer2 metadata"
      rax_files:
        container: mycontainer2
        type: meta

    - name: "Set mycontainer2 metadata"
      rax_files:
        container: mycontainer2
        type: meta
        meta:
          uploaded_by: someuser@example.com

    - name: "Remove mycontainer2 metadata"
      rax_files:
        container: "mycontainer2"
        type: meta
        state: absent
        meta:
          key: ""
          file_for: ""

> RAX_FILES_OBJECTS

Upload, download, and delete objects in Rackspace Cloud Files

Options (= is mandatory):

- api_key
      Rackspace API key (overrides `credentials')

- clear_meta
      Optionally clear existing metadata when applying metadata to
      existing objects. Selecting this option is only appropriate
      when setting type=meta (Choices: yes, no)

= container
      The container to use for file object operations.

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- dest
      The destination of a "get" operation; i.e. a local
      directory, "/home/user/myfolder". Used to specify the
      destination of an operation on a remote object; i.e. a file
      name, "file1", or a comma-separated list of remote objects,
      "file1,file2,file17"

- expires
      Used to set an expiration on a file or folder uploaded to
      Cloud Files. Requires an integer, specifying expiration in
      seconds

- meta
      A hash of items to set as metadata values on an uploaded
      file or folder

- method
      The method of operation to be performed.  For example, put
      to upload files to Cloud Files, get to download files from
      Cloud Files or delete to delete remote objects in Cloud
      Files (Choices: get, put, delete)

- region
      Region in which to work.  Maps to a Rackspace Cloud region,
      i.e. DFW, ORD, IAD, SYD, LON

- src
      Source from which to upload files.  Used to specify a remote
      object as a source for an operation, i.e. a file name,
      "file1", or a comma-separated list of remote objects,
      "file1,file2,file17".  src and dest are mutually exclusive
      on remote-only object operations

- structure
      Used to specify whether to maintain nested directory
      structure when downloading objects from Cloud Files.
      Setting to false downloads the contents of a container to a
      single, flat directory (Choices: yes, no)

- type
      Type of object to do work on: a metadata object or a file
      object (Choices: file, meta)

- username
      Rackspace username (overrides `credentials')

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...).

Requirements:    pyrax

- name: "Test Cloud Files Objects"
  hosts: local
  gather_facts: False
  tasks:
    - name: "Get objects from test container"
      rax_files_objects: container=testcont dest=~/Downloads/testcont

    - name: "Get single object from test container"
      rax_files_objects: container=testcont src=file1 dest=~/Downloads/testcont

    - name: "Get several objects from test container"
      rax_files_objects: container=testcont src=file1,file2,file3 dest=~/Downloads/testcont

    - name: "Delete one object in test container"
      rax_files_objects: container=testcont method=delete dest=file1

    - name: "Delete several objects in test container"
      rax_files_objects: container=testcont method=delete dest=file2,file3,file4

    - name: "Delete all objects in test container"
      rax_files_objects: container=testcont method=delete

    - name: "Upload all files to test container"
      rax_files_objects: container=testcont method=put src=~/Downloads/onehundred

    - name: "Upload one file to test container"
      rax_files_objects: container=testcont method=put src=~/Downloads/testcont/file1

    - name: "Upload one file to test container with metadata"
      rax_files_objects:
        container: testcont
        src: ~/Downloads/testcont/file2
        method: put
        meta:
          testkey: testdata
          who_uploaded_this: someuser@example.com

    - name: "Upload one file to test container with TTL of 60 seconds"
      rax_files_objects: container=testcont method=put src=~/Downloads/testcont/file3 expires=60

    - name: "Attempt to get remote object that does not exist"
      rax_files_objects: container=testcont method=get src=FileThatDoesNotExist.jpg dest=~/Downloads/testcont
      ignore_errors: yes

    - name: "Attempt to delete remote object that does not exist"
      rax_files_objects: container=testcont method=delete dest=FileThatDoesNotExist.jpg
      ignore_errors: yes

- name: "Test Cloud Files Objects Metadata"
  hosts: local
  gather_facts: false
  tasks:
    - name: "Get metadata on one object"
      rax_files_objects:  container=testcont type=meta dest=file2

    - name: "Get metadata on several objects"
      rax_files_objects:  container=testcont type=meta src=file2,file1

    - name: "Set metadata on an object"
      rax_files_objects:
        container: testcont
        type: meta
        dest: file17
        method: put
        meta:
          key1: value1
          key2: value2
        clear_meta: true

    - name: "Verify metadata is set"
      rax_files_objects:  container=testcont type=meta src=file17

    - name: "Delete metadata"
      rax_files_objects:
        container: testcont
        type: meta
        dest: file17
        method: delete
        meta:
          key1: ''
          key2: ''

    - name: "Get metadata on all objects"
      rax_files_objects:  container=testcont type=meta

> RAX_KEYPAIR

Create a keypair for use with Rackspace Cloud Servers

Options (= is mandatory):

- api_key
      Rackspace API key (overrides `credentials')

- auth_endpoint
      The URI of the authentication service

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- env
      Environment as configured in ~/.pyrax.cfg, see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration

- identity_type
      Authentication mechanism to use, such as rackspace or
      keystone

= name
      Name of keypair

- public_key
      Public Key string to upload

- region
      Region to create an instance in

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- tenant_id
      The tenant ID used for authentication

- tenant_name
      The tenant name used for authentication

- username
      Rackspace username (overrides `credentials')

- verify_ssl
      Whether or not to require SSL validation of API endpoints

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...). Keypairs cannot be
      manipulated, only created and deleted; to "update" a
      keypair you must first delete it and then recreate it.

Requirements:    pyrax

- name: Create a keypair
  hosts: local
  gather_facts: False
  tasks:
    - name: keypair request
      local_action:
        module: rax_keypair
        credentials: ~/.raxpub
        name: my_keypair
        region: DFW
      register: keypair
    - name: Create local public key
      local_action:
        module: copy
        content: "{{ keypair.keypair.public_key }}"
        dest: "{{ inventory_dir }}/{{ keypair.keypair.name }}.pub"
    - name: Create local private key
      local_action:
        module: copy
        content: "{{ keypair.keypair.private_key }}"
        dest: "{{ inventory_dir }}/{{ keypair.keypair.name }}"
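
# Upload an existing local public key instead of having one generated
# (key path is illustrative)
- name: Upload a keypair
  hosts: local
  gather_facts: False
  tasks:
    - name: Keypair upload request
      local_action:
        module: rax_keypair
        credentials: ~/.raxpub
        name: my_keypair
        public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        region: DFW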

> RAX_NETWORK

creates / deletes a Rackspace Public Cloud isolated network.

Options (= is mandatory):

- api_key
      Rackspace API key (overrides `credentials')

- cidr
      cidr of the network being created

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- label
      Label (name) to give the network

- region
      Region to create the network in

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- username
      Rackspace username (overrides `credentials')

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS' and `RAX_CREDENTIALS',
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS' point to a
      credentials file appropriate for pyrax. `RAX_USERNAME' and
      `RAX_API_KEY' obviate the use of a credentials file.
      `RAX_REGION' defines a Rackspace Public Cloud region (DFW,
      ORD, LON, ...).

Requirements:    pyrax

- name: Build an Isolated Network
  hosts: local
  gather_facts: False

  tasks:
    - name: Network create request
      local_action:
        module: rax_network
        credentials: ~/.raxpub
        label: my-net
        cidr: 192.168.3.0/24
        state: present

> RAX_QUEUE

creates / deletes a Rackspace Public Cloud queue.

Options (= is mandatory):

- api_key
      Rackspace API key (overrides `credentials')

- credentials
      File to find the Rackspace credentials in (ignored if
      `api_key' and `username' are provided)

- name
      Name to give the queue

- region
      Region to create the load balancer in

- state
      Indicate desired state of the resource (Choices: present,
      absent)

- username
      Rackspace username (overrides `credentials')

Notes:    The following environment variables can be used: `RAX_USERNAME',
      `RAX_API_KEY', `RAX_CREDS_FILE', `RAX_CREDENTIALS' and
      `RAX_REGION'. `RAX_CREDENTIALS' and `RAX_CREDS_FILE' point
      to a credentials file appropriate for pyrax; see
      https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
      `RAX_USERNAME' and `RAX_API_KEY' obviate the use of a
      credentials file. `RAX_REGION' defines a Rackspace Public
      Cloud region (DFW, ORD, LON, ...).

Requirements:    pyrax

- name: Build a Queue
  gather_facts: False
  hosts: local
  connection: local
  tasks:
    - name: Queue create request
      local_action:
        module: rax_queue
        credentials: ~/.raxpub
        client_id: unique-client-name
        name: my-queue
        region: DFW
        state: present
      register: my_queue

> RDS

Creates, deletes, or modifies rds instances.  When creating an
instance it can be either a new instance or a read-only replica of
an existing instance. This module has a dependency on python-boto
>= 2.5. The 'promote' command requires boto >= 2.18.0.

Options (= is mandatory):

- apply_immediately
      Used only when command=modify.  If enabled, the
      modifications will be applied as soon as possible rather
      than waiting for the next preferred maintenance window.
      (Choices: yes, no)

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

- backup_retention
      Number of days backups are retained.  Set to 0 to disable
      backups.  Default is 1 day.  Valid range: 0-35. Used only
      when command=create or command=modify.

- backup_window
      Backup window in format of hh24:mi-hh24:mi.  If not
      specified then a random backup window is assigned. Used only
      when command=create or command=modify.

= command
      Specifies the action to take. (Choices: create, replicate,
      delete, facts, modify, promote, snapshot, restore)

- db_engine
      The type of database.  Used only when command=create.
      (Choices: MySQL, oracle-se1, oracle-se, oracle-ee,
      sqlserver-ee, sqlserver-se, sqlserver-ex, sqlserver-web,
      postgres)

- db_name
      Name of a database to create within the instance.  If not
      specified then no database is created. Used only when
      command=create.

- engine_version
      Version number of the database engine to use. Used only when
      command=create. If not specified then the current Amazon RDS
      default engine version is used.

= instance_name
      Database instance identifier.

- instance_type
      The instance type of the database.  Must be specified when
      command=create. Optional when command=replicate,
      command=modify or command=restore. If not specified then the
      replica inherits the same instance type as the source
      instance. (Choices: db.t1.micro, db.m1.small, db.m1.medium,
      db.m1.large, db.m1.xlarge, db.m2.xlarge, db.m2.2xlarge,
      db.m2.4xlarge)

- iops
      Specifies the number of IOPS for the instance.  Used only
      when command=create or command=modify. Must be an integer
      greater than 1000.

- license_model
      The license model for this DB instance. Used only when
      command=create or command=restore. (Choices: license-
      included, bring-your-own-license, general-public-license)

- maint_window
      Maintenance window in format of ddd:hh24:mi-ddd:hh24:mi.
      (Example: Mon:22:00-Mon:23:15) If not specified then a
      random maintenance window is assigned. Used only when
      command=create or command=modify.

- multi_zone
      Specifies if this is a Multi-availability-zone deployment.
      Can not be used in conjunction with zone parameter. Used
      only when command=create or command=modify. (Choices: yes,
      no)

- new_instance_name
      Name to rename an instance to. Used only when
      command=modify.

- option_group
      The name of the option group to use.  If not specified then
      the default option group is used. Used only when
      command=create.

- parameter_group
      Name of the DB parameter group to associate with this
      instance.  If omitted then the RDS default DBParameterGroup
      will be used. Used only when command=create or
      command=modify.

- password
      Password for the master database username. Used only when
      command=create or command=modify.

- port
      Port number that the DB instance uses for connections.
      Defaults to 3306 for mysql, 1521 for Oracle, 1443 for SQL
      Server. Used only when command=create or command=replicate.

= region
      The AWS region to use. If not specified then the value of
      the EC2_REGION environment variable, if any, is used.

- security_groups
      Comma separated list of one or more security groups.  Used
      only when command=create or command=modify.

- size
      Size in gigabytes of the initial storage for the DB
      instance. Used only when command=create or command=modify.

- snapshot
      Name of snapshot to take. When command=delete, if no
      snapshot name is provided then no snapshot is taken. Used
      only when command=delete or command=snapshot.

- source_instance
      Name of the database to replicate. Used only when
      command=replicate.

- subnet
      VPC subnet group.  If specified then a VPC instance is
      created. Used only when command=create.

- upgrade
      Indicates that minor version upgrades should be applied
      automatically. Used only when command=create or
      command=replicate. (Choices: yes, no)

- username
      Master database username. Used only when command=create.

- vpc_security_groups
      Comma separated list of one or more vpc security groups.
      Used only when command=create or command=modify.

- wait
      When command=create, replicate, modify or restore then wait
      for the database to enter the 'available' state.  When
      command=delete wait for the database to be terminated.
      (Choices: yes, no)

- wait_timeout
      how long before wait gives up, in seconds

- zone
      availability zone in which to launch the instance. Used only
      when command=create, command=replicate or command=restore.

Requirements:    boto

# Basic mysql provisioning example
- rds: >
      command=create
      instance_name=new_database
      db_engine=MySQL
      size=10
      instance_type=db.m1.small
      username=mysql_admin
      password=1nsecure

# Create a read-only replica and wait for it to become available
- rds: >
      command=replicate
      instance_name=new_database_replica
      source_instance=new_database
      wait=yes
      wait_timeout=600

# Delete an instance, but create a snapshot before doing so
- rds: >
      command=delete
      instance_name=new_database
      snapshot=new_database_snapshot

# Get facts about an instance
- rds: >
      command=facts
      instance_name=new_database
  register: new_database_facts

# Rename an instance and wait for the change to take effect
- rds: >
      command=modify
      instance_name=new_database
      new_instance_name=renamed_database
      wait=yes
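
The `promote' command listed in the choices above turns a read
replica into a standalone instance (requires boto >= 2.18.0). A
hedged sketch, reusing the replica name from the earlier example:

# Promote the read-only replica to a standalone instance
- rds: >
      command=promote
      instance_name=new_database_replica
      wait=yes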

> REDHAT_SUBSCRIPTION

Manage registration and subscription to the Red Hat Network
entitlement platform.

Options (= is mandatory):

- activationkey
      supply an activation key for use with registration

- autosubscribe
      Upon successful registration, auto-consume available
      subscriptions

- password
      Red Hat Network password

- pool
      Specify a subscription pool name to consume.  Regular
      expressions accepted.

- rhsm_baseurl
      Specify CDN baseurl

- server_hostname
      Specify an alternative Red Hat Network server

- server_insecure
      Allow traffic over insecure http

- state
      whether to register and subscribe (`present'), or unregister
      (`absent') a system (Choices: present, absent)

- username
      Red Hat Network username

Notes:    In order to register a system, subscription-manager requires
      either a username and password, or an activationkey.

Requirements:    subscription-manager

# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- redhat_subscription: state=present username=joe_user password=somepass autosubscribe=true

# Register with activationkey (1-222333444) and consume subscriptions matching
# the names (Red Hat Enterprise Server) and (Red Hat Virtualization)
- redhat_subscription: state=present
                       activationkey=1-222333444
                       pool='^(Red Hat Enterprise Server|Red Hat Virtualization)$'
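
Unregistering is the `absent' state from the choices above; a
minimal sketch:

# Unregister the system from the Red Hat Network.
- redhat_subscription: state=absent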

> REDIS

Unified utility to interact with redis instances. The 'slave'
command sets a redis instance in slave or master mode; the
'flush' command flushes the whole instance or a specified db.

Options (= is mandatory):

= command
      The selected redis command (Choices: slave, flush)

- db
      The database to flush (used in db mode) [flush command]

- flush_mode
      Type of flush (all the dbs in a redis instance or a specific
      one) [flush command] (Choices: all, db)

- login_host
      The host running the database

- login_password
      The password used to authenticate with (usually not used)

- login_port
      The port to connect to

- master_host
      The host of the master instance [slave command]

- master_port
      The port of the master instance [slave command]

- slave_mode
      the mode of the redis instance [slave command] (Choices:
      master, slave)

Notes:    Requires the redis-py Python package on the remote host. You can
      install it with pip (pip install redis) or with a package
      manager (https://github.com/andymccurdy/redis-py). If the
      redis master instance we are making a slave of is password
      protected, this needs to be set in the redis.conf in the
      masterauth variable.

Requirements:    redis

# Set local redis instance to be slave of melee.island on port 6377
- redis: command=slave master_host=melee.island master_port=6377

# Deactivate slave mode
- redis: command=slave slave_mode=master

# Flush all dbs in the redis instance
- redis: command=flush flush_mode=all

# Flush only one db in a redis instance
- redis: command=flush db=1 flush_mode=db

> RHN_CHANNEL

Adds or removes Red Hat software channels

Options (= is mandatory):

= name
      name of the software channel

= password
      the user's password

- state
      whether the channel should be present or not

= sysname
      name of the system as it is known in RHN/Satellite

= url
      The full url to the RHN/Satellite api

= user
      RHN/Satellite user

Notes:    This module fetches the system id from RHN.

Requirements:    none

- rhn_channel: name=rhel-x86_64-server-v2vwin-6 sysname=server01 url=https://rhn.redhat.com/rpc/api user=rhnuser password=guessme
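
Removal uses the `state' option documented above; a hedged sketch
reusing the same values:

# Remove the channel from the same system.
- rhn_channel: name=rhel-x86_64-server-v2vwin-6 sysname=server01 url=https://rhn.redhat.com/rpc/api user=rhnuser password=guessme state=absent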

> RHN_REGISTER

Manage registration to the Red Hat Network.

Options (= is mandatory):

- activationkey
      supply an activation key for use with registration

- channels
      Optionally specify a list of comma-separated channels to
      subscribe to upon successful registration.

- password
      Red Hat Network password

- server_url
      Specify an alternative Red Hat Network server URL

- state
      whether to register (`present'), or unregister (`absent') a
      system (Choices: present, absent)

- username
      Red Hat Network username

Notes:    In order to register a system, rhnreg_ks requires either a
      username and password, or an activationkey.

Requirements:    rhnreg_ks

# Unregister system from RHN.
- rhn_register: state=absent username=joe_user password=somepass

# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- rhn_register: state=present username=joe_user password=somepass

# Register with activationkey (1-222333444) and enable extended update support.
- rhn_register: state=present activationkey=1-222333444 enable_eus=true

# Register as user (joe_user) with password (somepass) against a satellite
# server specified by (server_url).
- rhn_register:
    state=present
    username=joe_user
    password=somepass
    server_url=https://xmlrpc.my.satellite/XMLRPC

# Register as user (joe_user) with password (somepass) and enable
# channels (rhel-x86_64-server-6-foo-1) and (rhel-x86_64-server-6-bar-1).
- rhn_register: state=present username=joe_user
                password=somepass
                channels=rhel-x86_64-server-6-foo-1,rhel-x86_64-server-6-bar-1

> RIAK

This module can be used to join nodes to a cluster and check the
status of the cluster.

Options (= is mandatory):

- command
      The command you would like to perform against the cluster.
      (Choices: ping, kv_test, join, plan, commit)

- config_dir
      The path to the riak configuration directory

- http_conn
      The ip address and port that is listening for Riak HTTP
      queries

- target_node
      The target node for certain operations (join, ping)

- validate_certs
      If `no', SSL certificates will not be validated. This should
      only be used on personally controlled sites using self-
      signed certificates. (Choices: yes, no)

- wait_for_handoffs
      Number of seconds to wait for handoffs to complete.

- wait_for_ring
      Number of seconds to wait for all nodes to agree on the
      ring.

- wait_for_service
      Waits for a riak service to come online before continuing.
      (Choices: kv)

# Joins a Riak node to another node
- riak: command=join target_node=riak@10.1.1.1

# Wait for handoffs to finish.  Use with async and poll.
- riak: wait_for_handoffs=yes

# Wait for riak_kv service to startup
- riak: wait_for_service=kv
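
The `plan' and `commit' commands from the choices above are the
usual follow-up to a join; a minimal sketch:

# Review the pending cluster changes, then commit them.
- riak: command=plan
- riak: command=commit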

> ROUTE53

Creates and deletes DNS records in Amazon's Route 53 service

Options (= is mandatory):

- aws_access_key
      AWS access key.

- aws_secret_key
      AWS secret key.

= command
      Specifies the action to take. (Choices: get, create, delete)

- overwrite
      Whether an existing record should be overwritten on create
      if values do not match

= record
      The full DNS record to create or delete

- ttl
      The TTL to give the new record

= type
      The type of DNS record to create (Choices: A, CNAME, MX,
      AAAA, TXT, PTR, SRV, SPF, NS)

- value
      The new value when creating a DNS record.  Multiple comma-
      spaced values are allowed.  When deleting a record all
      values for the record must be specified or Route53 will not
      delete it.

= zone
      The DNS zone to modify

Requirements:    boto

# Add new.foo.com as an A record with 3 IPs
- route53: >
      command=create
      zone=foo.com
      record=new.foo.com
      type=A
      ttl=7200
      value=1.1.1.1,2.2.2.2,3.3.3.3

# Retrieve the details for new.foo.com
- route53: >
      command=get
      zone=foo.com
      record=new.foo.com
      type=A
  register: rec

# Delete new.foo.com A record using the results from the get command
- route53: >
      command=delete
      zone=foo.com
      record={{ rec.set.record }}
      type={{ rec.set.type }}
      value={{ rec.set.value }}

# Add an AAAA record.  Note that because there are colons in the value
# that the entire parameter list must be quoted:
- route53: >
      command=create
      zone=foo.com
      record=localhost.foo.com
      type=AAAA
      ttl=7200
      value="::1"

# Add a TXT record. Note that TXT and SPF records must be surrounded
# by quotes when sent to Route 53:
- route53: >
      command=create
      zone=foo.com
      record=localhost.foo.com
      type=TXT
      ttl=7200
      value="\"bar\""

> RPM_KEY

Adds or removes (rpm --import) a gpg key to your rpm database.

Options (= is mandatory):

= key
      Key that will be modified. Can be a url, a file, or a keyid
      if the key already exists in the database.

- state
      Whether the key will be imported or removed from the rpm
      db. (Choices: present, absent)

- validate_certs
      If `no' and the `key' is a url starting with https, SSL
      certificates will not be validated. This should only be used
      on personally controlled sites using self-signed
      certificates. (Choices: yes, no)

# Example action to import a key from a url
- rpm_key: state=present key=http://apt.sw.be/RPM-GPG-KEY.dag.txt

# Example action to import a key from a file
- rpm_key: state=present key=/path/to/key.gpg

# Example action to ensure a key is not present in the db
- rpm_key: state=absent key=DEADB33F

> S3

This module allows the user to dictate the presence of a given
file in an S3 bucket. If or once the key (file) exists in the
bucket, it returns a time-limited download URL. This module has a
dependency on python-boto.

Options (= is mandatory):

- aws_access_key
      AWS access key. If not set then the value of the
      AWS_ACCESS_KEY environment variable is used.

- aws_secret_key
      AWS secret key. If not set then the value of the
      AWS_SECRET_KEY environment variable is used.

= bucket
      Bucket name.

- dest
      The destination file path when downloading an object/key
      with a GET operation.

- expiration
      Time limit (in seconds) for the URL generated and returned
      by S3/Walrus when performing a mode=put or mode=geturl
      operation.

= mode
      Switches the module behaviour between put (upload), get
      (download), geturl (return download url (Ansible 1.3+),
      getstr (download object as string (1.3+)), create (bucket)
      and delete (bucket).

- object
      Keyname of the object inside the bucket. Can be used to
      create "virtual directories", see examples.

- overwrite
      Force overwrite either locally on the filesystem or remotely
      with the object/key. Used with PUT and GET operations.

- s3_url
      S3 URL endpoint. If not specified then the S3_URL
      environment variable is used, if that variable is defined.

- src
      The source file path when performing a PUT operation.

Requirements:    boto

# Simple PUT operation
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put
# Simple GET operation
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get
# GET/download and overwrite local file (trust remote)
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get
# GET/download and do not overwrite local file (trust local)
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get overwrite=no
# PUT/upload and overwrite remote file (trust local)
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put
# PUT/upload and do not overwrite remote file (trust remote)
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put overwrite=no
# Download an object as a string to use elsewhere in your playbook
- s3: bucket=mybucket object=/my/desired/key.txt mode=getstr
# Create an empty bucket
- s3: bucket=mybucket mode=create
# Create a bucket with key as directory
- s3: bucket=mybucket object=/my/directory/path mode=create
# Delete a bucket and all contents
- s3: bucket=mybucket mode=delete
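
The geturl mode documented above returns a time-limited download
URL for an existing key; a hedged sketch using the `expiration'
option:

# Get a download URL for an existing key, valid for 600 seconds
- s3: bucket=mybucket object=/my/desired/key.txt mode=geturl expiration=600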

> SCRIPT

The [script] module takes the script name followed by a list of
space-delimited arguments. The local script at path will be
transferred to the remote node and then executed. The given script
will be processed through the shell environment on the remote
node. This module does not require python on the remote system,
much like the [raw] module.

Options (= is mandatory):

- creates
      a filename, when it already exists, this step will *not* be
      run.

= free_form
      path to the local script file followed by optional
      arguments.

- removes
      a filename, when it does not exist, this step will *not* be
      run.

Notes:    It is usually preferable to write Ansible modules than pushing
      scripts. Convert your script to an Ansible module for bonus
      points!

# Example from Ansible Playbooks
- script: /some/local/script.sh --some-arguments 1234

# Run a script that creates a file, but only if the file is not yet created
- script: /some/local/create_file.sh --some-arguments 1234 creates=/the/created/file.txt

# Run a script that removes a file, but only if the file is not yet removed
- script: /some/local/remove_file.sh --some-arguments 1234 removes=/the/removed/file.txt

> SEBOOLEAN

Toggles SELinux booleans.

Options (= is mandatory):

= name
      Name of the boolean to configure

- persistent
      Set to `yes' if the boolean setting should survive a reboot
      (Choices: yes, no)

= state
      Desired boolean value (Choices: yes, no)

Notes:    Not tested on any Debian-based system

# Set (httpd_can_network_connect) flag on and keep it persistent across reboots
- seboolean: name=httpd_can_network_connect state=yes persistent=yes

> SELINUX

Configures the SELinux mode and policy. A reboot may be required
after usage. Ansible will not issue this reboot but will let you
know when it is required.

Options (= is mandatory):

- conf
      path to the SELinux configuration file, if non-standard

- policy
      name of the SELinux policy to use (example: `targeted');
      required if state is not `disabled'

= state
      The SELinux mode (Choices: enforcing, permissive, disabled)

Notes:    Not tested on any Debian-based system

Requirements:    libselinux-python

- selinux: policy=targeted state=enforcing
- selinux: policy=targeted state=permissive
- selinux: state=disabled

> SERVICE

Controls services on remote hosts.

Options (= is mandatory):

- arguments
      Additional arguments provided on the command line

- enabled
      Whether the service should start on boot. At least one of
      `state' and `enabled' is required. (Choices: yes, no)

= name
      Name of the service.

- pattern
      If the service does not respond to the status command, name
      a substring to look for as would be found in the output of
      the `ps' command as a stand-in for a status result.  If the
      string is found, the service will be assumed to be running.

- runlevel
      For OpenRC init scripts (ex: Gentoo) only.  The runlevel
      that this service belongs to.

- sleep
      If the service is being `restarted' then sleep this many
      seconds between the stop and start command. This helps to
      workaround badly behaving init scripts that exit immediately
      after signaling a process to stop.

- state
      `started'/`stopped' are idempotent actions that will not run
      commands unless necessary.  `restarted' will always bounce
      the service.  `reloaded' will always reload. At least one of
      `state' and `enabled' is required. (Choices: started,
      stopped, restarted, reloaded)

# Example action to start service httpd, if not running
- service: name=httpd state=started

# Example action to stop service httpd, if running
- service: name=httpd state=stopped

# Example action to restart service httpd, in all cases
- service: name=httpd state=restarted

# Example action to reload service httpd, in all cases
- service: name=httpd state=reloaded

# Example action to enable service httpd, and not touch the running state
- service: name=httpd enabled=yes

# Example action to start service foo, based on running process /usr/bin/foo
- service: name=foo pattern=/usr/bin/foo state=started

# Example action to restart network service for interface eth0
- service: name=network state=restarted args=eth0

> SET_FACT

This module allows setting new variables.  Variables are set on a
host-by-host basis just like facts discovered by the setup
module. These variables will survive between plays.

Options (= is mandatory):

= key_value
      The `set_fact' module takes key=value pairs as variables to
      set in the playbook scope. Or alternatively, accepts complex
      arguments using the `args:' statement.

# Example setting host facts using key=value pairs
- set_fact: one_fact="something" other_fact="{{ local_var * 2 }}"

# Example setting host facts using complex arguments
- set_fact:
     one_fact: something
     other_fact: "{{ local_var * 2 }}"

> SETUP

This module is automatically called by playbooks to gather useful
variables about remote hosts that can be used in playbooks. It can
also be executed directly by `/usr/bin/ansible' to check what
variables are available to a host. Ansible provides many `facts'
about the system, automatically.

Options (= is mandatory):

- fact_path
      path used for local ansible facts (*.fact) - files in this
      dir will be run (if executable) and their results added to
      ansible_local facts; if a file is not executable it is read
      instead. File/results format can be json or ini-format

- filter
      if supplied, only return facts that match this shell-style
      (fnmatch) wildcard.

Notes:    More ansible facts will be added with successive releases. If
      `facter' or `ohai' are installed, variables from these
      programs will also be snapshotted into the JSON file for
      usage in templating. These variables are prefixed with
      `facter_' and `ohai_' so it's easy to tell their source. All
      variables are bubbled up to the caller. Using the ansible
      facts and choosing to not install `facter' and `ohai' means
      you can avoid Ruby-dependencies on your remote systems. (See
      also [facter] and [ohai].) The filter option filters only
      the first level subkey below ansible_facts.

# Display facts from all hosts and store them indexed by I(hostname) at C(/tmp/facts).
ansible all -m setup --tree /tmp/facts

# Display only facts regarding memory found by ansible on all hosts and output them.
ansible all -m setup -a 'filter=ansible_*_mb'

# Display only facts returned by facter.
ansible all -m setup -a 'filter=facter_*'

# Display only facts about certain interfaces.
ansible all -m setup -a 'filter=ansible_eth[0-2]'
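
The `fact_path' option above can also be exercised ad hoc; a
hedged sketch assuming custom *.fact files under
/etc/ansible/facts.d:

# Gather local facts from a custom directory.
ansible all -m setup -a 'fact_path=/etc/ansible/facts.d'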

> SHELL

The [shell] module takes the command name followed by a list of
space-delimited arguments. It is almost exactly like the [command]
module but runs the command through a shell (`/bin/sh') on the
remote node.

Options (= is mandatory):

- chdir
      cd into this directory before running the command

- creates
      a filename, when it already exists, this step will *not* be
      run.

- executable
      change the shell used to execute the command. Should be an
      absolute path to the executable.

= free_form
      The shell module takes a free form command to run

- removes
      a filename, when it does not exist, this step will *not* be
      run.

Notes:    If you want to execute a command securely and predictably, it may
      be better to use the [command] module instead. Best
      practices when writing playbooks will follow the trend of
      using [command] unless [shell] is explicitly required. When
      running ad-hoc commands, use your best judgement.

# Execute the command in remote shell; stdout goes to the specified
# file on the remote
- shell: somescript.sh >> somelog.txt
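
The `chdir' and `creates' options above make a shell step
idempotent; a hedged sketch with hypothetical paths:

# Run the script from /opt/app, but skip once the log already exists
- shell: somescript.sh >> somelog.txt chdir=/opt/app creates=/opt/app/somelog.txt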

> SLURP

This module works like [fetch]. It is used for fetching a base64-
encoded blob containing the data in a remote file.

Options (= is mandatory):

= src
      The file on the remote system to fetch. This `must' be a
      file, not a directory.

Notes:    See also: [fetch]

ansible host -m slurp -a 'src=/tmp/xx'
   host | success >> {
      "content": "aGVsbG8gQW5zaWJsZSB3b3JsZAo=",
      "encoding": "base64"
   }
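
Inside a playbook, the registered result can be decoded with the
b64decode filter; a hedged sketch:

- slurp: src=/tmp/xx
  register: xx_contents
- debug: msg="{{ xx_contents.content | b64decode }}"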

> STAT

Retrieves facts for a file similar to the linux/unix 'stat'
command.

Options (= is mandatory):

- follow
      Whether to follow symlinks

- get_md5
      Whether to return the md5 sum of the file

= path
      The full path of the file/object to get the facts of

# Obtain the stats of /etc/foo.conf, and check that the file still belongs
# to 'root'. Fail otherwise.
- stat: path=/etc/foo.conf
  register: st
- fail: msg="Whoops! file ownership has changed"
  when: st.stat.pw_name != 'root'

# Determine if a path exists and is a directory.  Note we need to test
# both that p.stat.isdir actually exists, and also that it's set to true.
- stat: path=/path/to/something
  register: p
- debug: msg="Path exists and is a directory"
  when: p.stat.isdir is defined and p.stat.isdir == true

# Don't do md5 checksum
- stat: path=/path/to/myhugefile get_md5=no

> SUBVERSION

Deploy given repository URL / revision to dest. If dest exists,
update to the specified revision, otherwise perform a checkout.

Options (= is mandatory):

= dest
      Absolute path where the repository should be deployed.

- executable
      Path to svn executable to use. If not supplied, the normal
      mechanism for resolving binary paths will be used.

- force
      If `yes', modified files will be discarded. If `no', module
      will fail if it encounters modified files. (Choices: yes,
      no)

- password
      --password parameter passed to svn.

= repo
      The subversion URL to the repository.

- revision
      Specific revision to checkout.

- username
      --username parameter passed to svn.

Notes:    Requires `svn' to be installed on the client.

# Checkout subversion repository to specified folder.
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout
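
The `revision' and `force' options above pin a checkout and
discard local modifications; a hedged sketch with a hypothetical
revision number:

# Check out a specific revision, discarding any local changes.
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout revision=1234 force=yes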

> SUPERVISORCTL

Manage the state of a program or group of programs running via
`Supervisord'

Options (= is mandatory):

- config
      configuration file path, passed as -c to supervisorctl

= name
      The name of the `supervisord' program/process to manage

- password
      password to use for authentication with server, passed as -p
      to supervisorctl

- server_url
      URL on which supervisord server is listening, passed as -s
      to supervisorctl

= state
      The state of service (Choices: present, started, stopped,
      restarted)

- supervisorctl_path
      Path to supervisorctl executable to use

- username
      username to use for authentication with server, passed as -u
      to supervisorctl

# Manage the state of program to be in 'started' state.
- supervisorctl: name=my_app state=started

# Restart my_app, reading supervisorctl configuration from a specified file.
- supervisorctl: name=my_app state=restarted config=/var/opt/my_project/supervisord.conf

# Restart my_app, connecting to supervisord with credentials and server URL.
- supervisorctl: name=my_app state=restarted username=test password=testpass server_url=http://localhost:9001

> SVR4PKG

Manages SVR4 packages on Solaris 10 and 11. These were the native
packages on Solaris <= 10 and are available as a legacy feature
in Solaris 11. Note that this is a very basic packaging system.
It will not enforce dependencies on install or remove.

Options (= is mandatory):

= name
      Package name, e.g. `SUNWcsr'

- proxy
      HTTP[s] proxy to be used if `src' is a URL.

- response_file
      Specifies the location of a response file to be used if
      package expects input on install. (added in Ansible 1.4)

- src
      Specifies the location to install the package from. Required
      when `state=present'. Can be any path acceptable to the
      `pkgadd' command's `-d' option, e.g. `somefile.pkg',
      `/dir/with/pkgs', `http://server/mypkgs.pkg'. If using a
      file or directory, they must already be accessible by the
      host. See the [copy] module for a way to get them there.

= state
      Whether to install (`present'), or remove (`absent') a
      package. If the package is to be installed, then `src' is
      required. The SVR4 package system doesn't provide an upgrade
      operation; you need to uninstall the old package, then
      install the new one. (Choices: present, absent)

# Install a package from an already copied file
- svr4pkg: name=CSWcommon src=/tmp/cswpkgs.pkg state=present

# Install a package directly from an http site
- svr4pkg: name=CSWpkgutil src=http://get.opencsw.org/now state=present

# Install a package with a response file
- svr4pkg: name=CSWggrep src=/tmp/third-party.pkg response_file=/tmp/ggrep.response state=present

# Ensure that a package is not installed.
- svr4pkg: name=SUNWgnome-sound-recorder state=absent

> SWDEPOT

Installs, upgrades and removes packages with the swdepot package
manager (HP-UX).

Options (= is mandatory):

- depot
      The source repository from which to install or upgrade a
      package.

= name
      Package name.

= state
      Whether to install (`present', `latest'), or remove
      (`absent') a package. (Choices: present, latest, absent)

- swdepot: name=unzip-6.0 state=present depot=repository:/path
- swdepot: name=unzip state=latest depot=repository:/path
- swdepot: name=unzip state=absent

> SYNCHRONIZE

This is a wrapper around rsync. Of course you could just use the
command action to call rsync yourself, but you also have to add a
fair number of boilerplate options and host facts. You still may
need to call rsync directly via `command' or `shell' depending on
your use case. The synchronize action is meant to do common things
with `rsync' easily. It does not provide access to the full power
of rsync, but does make most invocations easier to follow.

Options (= is mandatory):

- archive
      Mirrors the rsync archive flag, enables recursive, links,
      perms, times, owner, group flags and -D. (Choices: yes, no)

- copy_links
      Copy symlinks as the item that they point to (the referent)
      is copied, rather than the symlink. (Choices: yes, no)

- delete
      Delete files that don't exist (after transfer, not before)
      in the `src' path. (Choices: yes, no)

= dest
      Path on the destination machine that will be synchronized
      from the source; The path can be absolute or relative.

- dest_port
      Port number for ssh on the destination host. The
      ansible_ssh_port inventory var takes precedence over this
      value.

- dirs
      Transfer directories without recursing (Choices: yes, no)

- existing_only
      Skip creating new files on receiver. (Choices: yes, no)

- group
      Preserve group (Choices: yes, no)

- links
      Copy symlinks as symlinks. (Choices: yes, no)

- mode
      Specify the direction of the synchronization. In push mode
      the localhost or delegate is the source; in pull mode the
      remote host in context is the source. (Choices: push, pull)

- owner
      Preserve owner (super user only) (Choices: yes, no)

- perms
      Preserve permissions. (Choices: yes, no)

- recursive
      Recurse into directories. (Choices: yes, no)

- rsync_path
      Specify the rsync command to run on the remote machine. See
      `--rsync-path' on the rsync man page.

- rsync_timeout
      Specify a --timeout for the rsync command in seconds.

= src
      Path on the source machine that will be synchronized to the
      destination; The path can be absolute or relative.

- times
      Preserve modification times (Choices: yes, no)

Notes:    Inspect the verbose output to validate that the destination
      user/host/path are what was expected. The remote user for the
      dest path will always be the remote_user, not the sudo_user;
      expect that dest=~/x will be ~<remote_user>/x even if using
      sudo. To exclude files and directories from being
      synchronized, you may add `.rsync-filter' files to the
      source directory.

# Synchronization of src on the control machine to dest on the remote hosts
synchronize: src=some/relative/path dest=/some/absolute/path

# Synchronization without any --archive options enabled
synchronize: src=some/relative/path dest=/some/absolute/path archive=no

# Synchronization with --archive options enabled except for --recursive
synchronize: src=some/relative/path dest=/some/absolute/path recursive=no

# Synchronization without --archive options enabled except use --links
synchronize: src=some/relative/path dest=/some/absolute/path archive=no links=yes

# Synchronization of two paths both on the control machine
local_action: synchronize src=some/relative/path dest=/some/absolute/path

# Synchronization of src on the inventory host to the dest on the localhost in pull mode
synchronize: mode=pull src=some/relative/path dest=/some/absolute/path

# Synchronization of src on delegate host to dest on the current inventory host
synchronize: src=some/relative/path dest=/some/absolute/path
delegate_to: delegate.host

# Synchronize and delete files in dest on the remote host that are not found in src of localhost.
synchronize: src=some/relative/path dest=/some/absolute/path delete=yes

# Synchronize using an alternate rsync command
synchronize: src=some/relative/path dest=/some/absolute/path rsync_path="sudo rsync"

# Example .rsync-filter file in the source directory
- var       # exclude any path whose last part is 'var'
- /var      # exclude any path starting with 'var' starting at the source directory
+ /var/conf # include /var/conf even though it was previously excluded

> SYSCTL

This module manipulates sysctl entries and optionally performs a
`/sbin/sysctl -p' after changing them.

Options (= is mandatory):

- ignoreerrors
      Use this option to ignore errors about unknown keys.
      (Choices: yes, no)

= name
      The dot-separated path (aka `key') specifying the sysctl
      variable.

- reload
      If `yes', performs a `/sbin/sysctl -p' if the `sysctl_file'
      is updated. If `no', does not reload `sysctl' even if the
      `sysctl_file' is updated. (Choices: yes, no)

- state
      Whether the entry should be present or absent in the sysctl
      file. (Choices: present, absent)

- sysctl_file
      Specifies the absolute path to `sysctl.conf', if not
      `/etc/sysctl.conf'.

- sysctl_set
      Verify token value with the sysctl command and set with -w
      if necessary (Choices: yes, no)

- value
      Desired value of the sysctl key.

# Set vm.swappiness to 5 in /etc/sysctl.conf
- sysctl: name=vm.swappiness value=5 state=present

# Remove kernel.panic entry from /etc/sysctl.conf
- sysctl: name=kernel.panic state=absent sysctl_file=/etc/sysctl.conf

# Set kernel.panic to 3 in /tmp/test_sysctl.conf
- sysctl: name=kernel.panic value=3 sysctl_file=/tmp/test_sysctl.conf reload=no

# Set ip forwarding on in /proc and do not reload the sysctl file
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes

# Set ip forwarding on in /proc and in the sysctl file and reload if necessary
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes state=present reload=yes

> TEMPLATE

Templates are processed by the Jinja2 templating language
(http://jinja.pocoo.org/docs/) - documentation on the template
formatting can be found in the Template Designer Documentation
(http://jinja.pocoo.org/docs/templates/). Six additional variables
can be used in templates: `ansible_managed' (configurable via the
`defaults' section of `ansible.cfg') contains a string which can
be used to describe the template name, host, modification time of
the template file and the owner uid, `template_host' contains the
node name of the template's machine, `template_uid' the owner,
`template_path' the absolute path of the template,
`template_fullpath' is the absolute path of the template, and
`template_run_date' is the date that the template was rendered.
Note that including a string that uses a date in the template will
result in the template being marked 'changed' each time.

Options (= is mandatory):

- backup
      Create a backup file including the timestamp information so
      you can get the original file back if you somehow clobbered
      it incorrectly. (Choices: yes, no)

= dest
      Location to render the template to on the remote machine.

- others
      all arguments accepted by the [file] module also work here,
      as well as the [copy] module (except the 'content'
      parameter).

= src
      Path of a Jinja2 formatted template on the local server.
      This can be a relative or absolute path.

- validate
      validation to run before copying into place

Notes:    Since Ansible version 0.9, templates are loaded with
      `trim_blocks=True'. Also, you can override jinja2 settings by
      adding a special header to the template file, e.g.
      `#jinja2:variable_start_string:'[%' ,
      variable_end_string:'%]'' which changes the variable
      interpolation markers to [% var %] instead of {{ var }}.
      This is the best way to prevent evaluation of things that
      look like, but should not be, Jinja2. raw/endraw in Jinja2
      will not work as you expect because templates in Ansible are
      recursively evaluated.

# Example from Ansible Playbooks
- template: src=/mytemplates/foo.j2 dest=/etc/file.conf owner=bin group=wheel mode=0644

# Copy a new "sudoers" file into place, after passing validation with visudo
- action: template src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
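The `backup' option described above can be combined with any template task; a sketch with hypothetical paths:

```yaml
# Render a template, keeping a timestamped backup of any file it replaces
- template: src=/mytemplates/named.conf.j2 dest=/etc/named.conf backup=yes
```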

> UNARCHIVE

The [unarchive] module copies an archive file from the local
machine to a remote and unpacks it.

Options (= is mandatory):

- copy
      Should the file be copied from the local to the remote
      machine? (Choices: yes, no)

= dest
      Remote absolute path where the archive should be unpacked

= src
      Local path to archive file to copy to the remote server; can
      be absolute or relative.

Notes:    Requires the `tar'/`unzip' command on the target host. Can
      handle `gzip', `bzip2' and `xz' compressed as well as
      uncompressed tar files. Detects the type of archive
      automatically. Uses tar's `--diff' arg to calculate whether
      changed or not; if this arg is not supported, it will always
      unpack the archive. Does not detect if a .zip file is
      different from the destination - always unzips. Existing
      files/directories in the destination which are not in the
      archive are not touched (the same behavior as a normal
      archive extraction), and are ignored for purposes of
      deciding if the archive should be unpacked or not.

# Example from Ansible Playbooks
- unarchive: src=foo.tgz dest=/var/lib/foo
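For an archive that is already on the remote host, the `copy' option above can skip the copy step; a sketch assuming the archive path exists on the target:

```yaml
# Unpack an archive that already exists on the remote machine
- unarchive: src=/tmp/foo.tgz dest=/var/lib/foo copy=no
```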

> URI

Interacts with HTTP and HTTPS web services and supports Digest,
Basic and WSSE HTTP authentication mechanisms.

Options (= is mandatory):

- HEADER_
      Any parameter starting with "HEADER_" is sent with your
      request as a header. For example, HEADER_Content-
      Type="application/json" would send the header "Content-Type"
      along with your request with a value of "application/json".

- body
      The body of the http request/response to the web service.

- creates
      A filename; when it already exists, this step will not be
      run.

- dest
      path of where to download the file to (if desired). If
      `dest' is a directory, the basename of the file on the
      remote server will be used.

- follow_redirects
      Whether or not the URI module should follow redirects. `all'
      will follow all redirects. `safe' will follow only "safe"
      redirects, where "safe" means that the client is only doing
      a GET or HEAD on the URI to which it is being redirected.
      `none' will not follow any redirects. Note that `yes' and
      `no' choices are accepted for backwards compatibility, where
      `yes' is the equivalent of `all' and `no' is the equivalent
      of `safe'. `yes' and `no' are deprecated and will be removed
      in some future version of Ansible. (Choices: all, safe,
      none)

- force_basic_auth
      httplib2, the library used by the uri module, only sends
      authentication information when a webservice responds to an
      initial request with a 401 status. Since some basic auth
      services do not properly send a 401, logins will fail. This
      option forces the sending of the Basic authentication header
      upon the initial request. (Choices: yes, no)

- method
      The HTTP method of the request or response. (Choices: GET,
      POST, PUT, HEAD, DELETE, OPTIONS, PATCH)

- others
      all arguments accepted by the [file] module also work here

- password
      password for the module to use for Digest, Basic or WSSE
      authentication.

- removes
      A filename; when it does not exist, this step will not be
      run.

- return_content
      Whether or not to return the body of the request as a
      "content" key in the dictionary result. If the reported
      Content-type is "application/json", then the JSON is
      additionally loaded into a key called `json' in the
      dictionary results. (Choices: yes, no)

- status_code
      A valid, numeric, HTTP status code that signifies success of
      the request.

- timeout
      The socket level timeout in seconds

= url
      HTTP or HTTPS URL in the form
      (http|https)://host.domain[:port]/path

- user
      username for the module to use for Digest, Basic or WSSE
      authentication.

Requirements:    urlparse, httplib2

# Check that you can connect (GET) to a page and it returns a status 200
- uri: url=http://www.example.com

# Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents.
- action: uri url=http://www.example.com return_content=yes
  register: webpage

- action: fail
  when: "'AWESOME' not in webpage.content"


# Create a JIRA issue.
- action: >
        uri url=https://your.jira.example.com/rest/api/2/issue/
        method=POST user=your_username password=your_pass
        body="{{ lookup('file','issue.json') }}" force_basic_auth=yes
        status_code=201 HEADER_Content-Type="application/json"

- action: >
        uri url=https://your.form.based.auth.example.com/index.php
        method=POST body="name=your_username&password=your_password&enter=Sign%20in"
        status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded"
  register: login

# Login to a form based webpage, then use the returned cookie to
# access the app in later tasks.
- action: uri url=https://your.form.based.auth.example.com/dashboard.php
            method=GET return_content=yes HEADER_Cookie="{{login.set_cookie}}"
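The `follow_redirects' and `status_code' options described above can be combined to assert that a redirect happens without following it; a sketch with a placeholder URL:

```yaml
# Expect a 301 redirect and do not follow it
- uri: url=http://www.example.com/old-page method=GET follow_redirects=none status_code=301
```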

> URPMI

Manages packages with `urpmi' (such as for Mageia or Mandriva)

Options (= is mandatory):

- force
      Corresponds to the `--force' option for `urpmi'. (Choices:
      yes, no)

- no-suggests
      Corresponds to the `--no-suggests' option for `urpmi'.
      (Choices: yes, no)

= pkg
      name of package to install, upgrade or remove.

- state
      Indicates the desired package state (Choices: absent,
      present)

- update_cache
      update the package database first `urpmi.update -a'.
      (Choices: yes, no)

# install package foo
- urpmi: pkg=foo state=present
# remove package foo
- urpmi: pkg=foo state=absent
# remove packages foo and bar
- urpmi: pkg=foo,bar state=absent
# update the package database (urpmi.update -a -q) and install bar (bar will be updated if a newer version exists)
- urpmi: pkg=bar state=present update_cache=yes

> USER

Manage user accounts and user attributes.

Options (= is mandatory):

- append
      If `yes', will only add groups, not set them to just the
      list in `groups'.

- comment
      Optionally sets the description (aka `GECOS') of user
      account.

- createhome
      Unless set to `no', a home directory will be made for the
      user when the account is created or if the home directory
      does not exist. (Choices: yes, no)

- force
      When used with `state=absent', behavior is as with `userdel
      --force'. (Choices: yes, no)

- generate_ssh_key
      Whether to generate a SSH key for the user in question. This
      will *not* overwrite an existing SSH key. (Choices: yes, no)

- group
      Optionally sets the user's primary group (takes a group
      name).

- groups
      Puts the user in this comma-delimited list of groups. When
      set to the empty string ('groups='), the user is removed
      from all groups except the primary group.

- home
      Optionally set the user's home directory.

- login_class
      Optionally sets the user's login class for FreeBSD, OpenBSD
      and NetBSD systems.

- move_home
      If set to `yes' when used with `home=', attempt to move the
      user's home directory to the specified directory if it isn't
      there already. (Choices: yes, no)

= name
      Name of the user to create, remove or modify.

- non_unique
      Optionally, when used with the -u option, allows changing
      the user ID to a non-unique value. (Choices: yes, no)

- password
      Optionally set the user's password to this crypted value.
      See the user example in the github examples directory for
      what this looks like in a playbook. The `FAQ
      <http://docs.ansible.com/faq.html#how-do-i-generate-crypted-
      passwords-for-the-user-module>`_ contains details on various
      ways to generate these password values.

- remove
      When used with `state=absent', behavior is as with `userdel
      --remove'. (Choices: yes, no)

- shell
      Optionally set the user's shell.

- ssh_key_bits
      Optionally specify number of bits in SSH key to create.

- ssh_key_comment
      Optionally define the comment for the SSH key.

- ssh_key_file
      Optionally specify the SSH key filename.

- ssh_key_passphrase
      Set a passphrase for the SSH key.  If no passphrase is
      provided, the SSH key will default to having no passphrase.

- ssh_key_type
      Optionally specify the type of SSH key to generate.
      Available SSH key types will depend on implementation
      present on target host.

- state
      Whether the account should exist.  When `absent', removes
      the user account. (Choices: present, absent)

- system
      When creating an account, setting this to `yes' makes the
      user a system account.  This setting cannot be changed on
      existing users. (Choices: yes, no)

- uid
      Optionally sets the `UID' of the user.

- update_password
      `always' will update passwords if they differ.  `on_create'
      will only set the password for newly created users.
      (Choices: always, on_create)

Requirements:    useradd, userdel, usermod

# Add the user 'johnd' with a specific uid and a primary group of 'admin'
- user: name=johnd comment="John Doe" uid=1040 group=admin

# Remove the user 'johnd'
- user: name=johnd state=absent remove=yes

# Create a 2048-bit SSH key for user jsmith
- user: name=jsmith generate_ssh_key=yes ssh_key_bits=2048
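A sketch of the `groups'/`append' interaction described above (user and group names are hypothetical):

```yaml
# Add user 'james' to the 'admins' group without removing existing group memberships
- user: name=james groups=admins append=yes
```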

> VIRT

Manages virtual machines supported by `libvirt'.

Options (= is mandatory):

- command
      in addition to state management, various non-idempotent
      commands are available. See examples (Choices: create,
      status, start, stop, pause, unpause, shutdown, undefine,
      destroy, get_xml, autostart, freemem, list_vms, info,
      nodeinfo, virttype, define)

= name
      name of the guest VM being managed. Note that the VM must be
      previously defined with XML.

- state
      Note that there may be some lag for state requests like
      `shutdown' since these refer only to VM states. After
      starting a guest, it may not be immediately accessible.
      (Choices: running, shutdown)

- uri
      libvirt connection uri

- xml
      XML document used with the define command

Requirements:    libvirt

# a playbook task line:
- virt: name=alpha state=running

# /usr/bin/ansible invocations
ansible host -m virt -a "name=alpha command=status"
ansible host -m virt -a "name=alpha command=get_xml"
ansible host -m virt -a "name=alpha command=create uri=lxc:///"

# a playbook example of defining and launching an LXC guest
tasks:
  - name: define vm
    virt: name=foo
          command=define
          xml="{{ lookup('template', 'container-template.xml.j2') }}"
          uri=lxc:///
  - name: start vm
    virt: name=foo state=running uri=lxc:///

> WAIT_FOR

Waiting for a port to become available is useful for when services
are not immediately available after their init scripts return -
which is true of certain Java application servers. It is also
useful when starting guests with the [virt] module and needing to
pause until they are ready. This module can also be used to wait
for a file to be available on the filesystem or with a regex match
a string to be present in a file.

Options (= is mandatory):

- delay
      number of seconds to wait before starting to poll

- host
      hostname or IP address to wait for

- path
      path to a file on the filesytem that must exist before
      continuing

- port
      port number to poll

- search_regex
      Used with the `path' option to match a string in the file
      before continuing. Defaults to a multiline regex.

- state
      Either `present', `started', or `stopped'. When checking a
      port, `started' will ensure the port is open and `stopped'
      will check that it is closed. When checking for a file or a
      search string, `present' or `started' will ensure that the
      file or string is present before continuing. (Choices:
      present, started, stopped)

- timeout
      maximum number of seconds to wait for

- connect_timeout
      maximum number of seconds to wait for a connection to happen
      before closing and retrying

# wait 300 seconds for port 8000 to become open on the host, don't start checking for 10 seconds
- wait_for: port=8000 delay=10

# wait until the file /tmp/foo is present before continuing
- wait_for: path=/tmp/foo

# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
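Per the `state' option above, `stopped' waits for a port to close, which is useful when draining a service; a sketch with hypothetical values:

```yaml
# wait up to 60 seconds for port 8000 to become closed, e.g. during a graceful shutdown
- wait_for: port=8000 state=stopped timeout=60
```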

> XATTR

Manages user-defined extended attributes on filesystems. Requires
that they are enabled on the target filesystem and that the
setfattr/getfattr utilities are present.

Options (= is mandatory):

- follow
      if yes, dereferences symlinks and sets/gets attributes on
      symlink target, otherwise acts on symlink itself. (Choices:
      yes, no)

- key
      The name of a specific Extended attribute key to
      set/retrieve

= name
      The full path of the file/object to get the facts of

- state
      Defines the operation to perform: `read' retrieves the
      current value for a `key' (default); `present' sets `name'
      to `value' (the default if `value' is set); `all' dumps all
      data; `keys' retrieves all keys; `absent' deletes the key.
      (Choices: read, present, all, keys, absent)

- value
      The value to set the named key to; supplying it
      automatically sets the `state' to `present'.

# Obtain the extended attributes of /etc/foo.conf
- xattr: name=/etc/foo.conf

# Sets the key 'foo' to value 'bar'
- xattr: name=/etc/foo.conf key=user.foo value=bar

# Removes the key 'foo'
- xattr: name=/etc/foo.conf key=user.foo state=absent
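To complement the examples above, `state=read' retrieves a single key's value; a sketch:

```yaml
# Read the current value of the key 'user.foo'
- xattr: name=/etc/foo.conf key=user.foo state=read
```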

> YUM

Installs, upgrades, removes, and lists packages and groups with the
`yum' package manager.

Options (= is mandatory):

- conf_file
      The remote yum configuration file to use for the
      transaction.

- disable_gpg_check
      Whether to disable the GPG checking of signatures of
      packages being installed. Has an effect only if state is
      `present' or `latest'. (Choices: yes, no)

- disablerepo
      `repoid' of repositories to disable for the install/update
      operation. These repos will not persist beyond the
      transaction. Multiple repos separated with a ','.

- enablerepo
      `repoid' of repositories to enable for the install/update
      operation. These repos will not persist beyond the
      transaction. Multiple repos separated with a ','.

- list
      Various (non-idempotent) commands for usage with
      `/usr/bin/ansible' and *not* playbooks. See examples.

= name
      Package name, or package specifier with version, like
      `name-1.0'. When using state=latest, this can be '*' which
      means run: yum -y update. You can also pass a url or a local
      path to a rpm file.

- state
      Whether to install (`present', `latest'), or remove
      (`absent') a package. (Choices: present, latest, absent)

Requirements:    yum, rpm

- name: install the latest version of Apache
  yum: name=httpd state=latest

- name: remove the Apache package
  yum: name=httpd state=removed

- name: install the latest version of Apache from the testing repo
  yum: name=httpd enablerepo=testing state=installed

- name: upgrade all packages
  yum: name=* state=latest

- name: install the nginx rpm from a remote repo
  yum: name=http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm state=present

- name: install nginx rpm from a local file
  yum: name=/usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm state=present

- name: install the 'Development tools' package group
  yum: name="@Development tools" state=present

> ZFS

Manages ZFS file systems on Solaris and FreeBSD. Can manage file
systems, volumes and snapshots. See zfs(1M) for more information
about the properties.

Options (= is mandatory):

- aclinherit
      The aclinherit property. (Choices: discard, noallow,
      restricted, passthrough, passthrough-x)

- aclmode
      The aclmode property. (Choices: discard, groupmask,
      passthrough)

- atime
      The atime property. (Choices: on, off)

- canmount
      The canmount property. (Choices: on, off, noauto)

- casesensitivity
      The casesensitivity property. (Choices: sensitive,
      insensitive, mixed)

- checksum
      The checksum property. (Choices: on, off, fletcher2,
      fletcher4, sha256)

- compression
      The compression property. (Choices: on, off, lzjb, gzip,
      gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7,
      gzip-8, gzip-9, lz4, zle)

- copies
      The copies property. (Choices: 1, 2, 3)

- dedup
      The dedup property. (Choices: on, off)

- devices
      The devices property. (Choices: on, off)

- exec
      The exec property. (Choices: on, off)

- jailed
      The jailed property. (Choices: on, off)

- logbias
      The logbias property. (Choices: latency, throughput)

- mountpoint
      The mountpoint property.

= name
      File system, snapshot or volume name e.g. `rpool/myfs'

- nbmand
      The nbmand property. (Choices: on, off)

- normalization
      The normalization property. (Choices: none, formC, formD,
      formKC, formKD)

- primarycache
      The primarycache property. (Choices: all, none, metadata)

- quota
      The quota property.

- readonly
      The readonly property. (Choices: on, off)

- recordsize
      The recordsize property.

- refquota
      The refquota property.

- refreservation
      The refreservation property.

- reservation
      The reservation property.

- secondarycache
      The secondarycache property. (Choices: all, none, metadata)

- setuid
      The setuid property. (Choices: on, off)

- shareiscsi
      The shareiscsi property. (Choices: on, off)

- sharenfs
      The sharenfs property.

- sharesmb
      The sharesmb property.

- snapdir
      The snapdir property. (Choices: hidden, visible)

= state
      Whether to create (`present'), or remove (`absent') a file
      system, snapshot or volume. (Choices: present, absent)

- sync
      The sync property. (Choices: on, off)

- utf8only
      The utf8only property. (Choices: on, off)

- volblocksize
      The volblocksize property.

- volsize
      The volsize property.

- vscan
      The vscan property. (Choices: on, off)

- xattr
      The xattr property. (Choices: on, off)

- zoned
      The zoned property. (Choices: on, off)

# Create a new file system called myfs in pool rpool
- zfs: name=rpool/myfs state=present

# Create a new volume called myvol in pool rpool.
- zfs: name=rpool/myvol state=present volsize=10M

# Create a snapshot of rpool/myfs file system.
- zfs: name=rpool/myfs@mysnapshot state=present

# Create a new file system called myfs2 with snapdir visible
- zfs: name=rpool/myfs2 state=present snapdir=visible
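Removal follows the same pattern with `state=absent'; a sketch using the snapshot name from the examples above:

```yaml
# Destroy the snapshot of rpool/myfs
- zfs: name=rpool/myfs@mysnapshot state=absent
```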

> ZYPPER

Manage packages on SuSE and openSuSE using the zypper and rpm
tools.

Options (= is mandatory):

- disable_gpg_check
      Whether to disable GPG signature checking of the package
      being installed. Has an effect only if state is `present'
      or `latest'. (Choices: yes, no)

= name
      package name or package specifier with version, `name' or
      `name-1.0'.

- state
      `present' will make sure the package is installed. `latest'
      will make sure the latest version of the package is
      installed. `absent'  will make sure the specified package is
      not installed. (Choices: present, latest, absent)

Requirements:    zypper, rpm

# Install "nmap"
- zypper: name=nmap state=present

# Remove the "nmap" package
- zypper: name=nmap state=absent

> ZYPPER_REPOSITORY

Add or remove Zypper repositories on SUSE and openSUSE

Options (= is mandatory):

- description
      A description of the repository

- disable_gpg_check
      Whether to disable GPG signature checking of all packages.
      Has an effect only if state is `present'. (Choices: yes, no)

= name
      A name for the repository.

= repo
      URI of the repository or .repo file.

- state
      Whether the repository should be present or absent.
      (Choices: absent, present)

Requirements:    zypper

# Add NVIDIA repository for graphics drivers
- zypper_repository: name=nvidia-repo repo='ftp://download.nvidia.com/opensuse/12.2' state=present

# Remove NVIDIA repository
- zypper_repository: name=nvidia-repo repo='ftp://download.nvidia.com/opensuse/12.2' state=absent

Ansible V2 Documentation

About Ansible

Welcome to the Ansible documentation!

Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.

Ansible’s main goals are simplicity and ease-of-use. It also has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with an accelerated socket mode and pull modes as alternatives), and a language that is designed around auditability by humans–even those not familiar with the program.

We believe simplicity is relevant to all sizes of environments, so we design for busy users of all types: developers, sysadmins, release engineers, IT managers, and everyone in between. Ansible is appropriate for managing all environments, from small setups with a handful of instances to enterprise environments with many thousands of instances.

Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized–it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.

This documentation covers the current released version of Ansible (1.9.1) and also some development version features (2.0). For recent features, we note in each section the version of Ansible where the feature was added.

Ansible, Inc. releases a new major release of Ansible approximately every two months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup. However, the community around new modules and plugins being developed and contributed moves very quickly, typically adding 20 or so new modules in each release.

Introduction

Before we dive into the really fun parts – playbooks, configuration management, deployment, and orchestration – we’ll learn how to get Ansible installed and cover some basic concepts. We’ll also go over how to execute ad-hoc commands in parallel across your nodes using /usr/bin/ansible. Additionally, we’ll see what sort of modules are available in Ansible’s core (though you can also write your own, which is also covered later).

Installation
Getting Ansible

You may also wish to follow the GitHub project if you have a GitHub account. This is also where we keep the issue tracker for sharing bugs and feature ideas.

Basics / What Will Be Installed

Ansible by default manages machines over the SSH protocol.

Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there’s no real question about how to upgrade Ansible when moving to a new version.

What Version To Pick?

Because it runs so easily from source and does not require any installation of software on remote machines, many users will actually track the development version.

Ansible’s release cycles are usually about two months long. Due to this short release cycle, minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch. Major bugs will still have maintenance releases when needed, though these are infrequent.

If you wish to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.

For other installation options, we recommend installing via “pip”, which is the Python package manager, though other options are also available.

If you wish to track the development release to use and test the latest features, we will share information about running from source. It’s not necessary to install the program to run from source.

Control Machine Requirements

Currently Ansible can be run from any machine with Python 2.6 or 2.7 installed (Windows isn’t supported for the control machine).

This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.

Managed Node Requirements

On the managed nodes, you only need Python 2.4 or later, but if you are running less than Python 2.5 on the remotes, you will also need:

  • python-simplejson

Note

Ansible’s “raw” module (for executing commands in a quick and dirty way) and the script module don’t even need that. So technically, you can use Ansible to install python-simplejson using the raw module, which then allows you to use everything else. (That’s jumping ahead though.)

Note

If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before using any copy/file/template related functions in Ansible. You can of course still use the yum module in Ansible to install this package on remote systems that do not have it.

Note

Python 3 is a slightly different language than Python 2 and most Python programs (including Ansible) are not switching over yet. However, some Linux distributions (Gentoo, Arch) may not have a Python 2.X interpreter installed by default. On those systems, you should install one, and set the ‘ansible_python_interpreter’ variable in inventory (see Inventory) to point at your 2.X Python. Distributions like Red Hat Enterprise Linux, CentOS, Fedora, and Ubuntu all have a 2.X interpreter installed by default and this does not apply to those distributions. This is also true of nearly all Unix systems. If you need to bootstrap these remote systems by installing Python 2.X, using the ‘raw’ module will be able to do it remotely.

Installing the Control Machine
Running From Source

Ansible is trivially easy to run from a checkout, root permissions are not required to use it and there is no software to actually install for Ansible itself. No daemons or database setup are required. Because of this, many users in our community use the development version of Ansible all of the time, so they can take advantage of new features when they are implemented, and also easily contribute to the project. Because there is nothing to install, following the development version is significantly easier than most open source projects.

Note

If you are intending to use Tower as the Control Machine, do not use a source install. Please use apt/yum/pip for a stable version.

To install from source:

$ git clone git://github.com/ansible/ansible.git --recursive
$ cd ./ansible
$ source ./hacking/env-setup

If you want to suppress spurious warnings/errors, use:

$ source ./hacking/env-setup -q

If you don’t have pip installed in your version of Python, install pip:

$ sudo easy_install pip

Ansible also uses the following Python modules that need to be installed:

$ sudo pip install paramiko PyYAML Jinja2 httplib2 six

Note: when updating Ansible, be sure to update not only the source tree but also the “submodules” in git, which point at Ansible’s own modules (not the same kind of modules, alas):

$ git pull --rebase
$ git submodule update --init --recursive

Once you have run the env-setup script, you will be running from the checkout, and the default inventory file will be /etc/ansible/hosts. You can optionally specify an inventory file (see Inventory) other than /etc/ansible/hosts:

$ echo "127.0.0.1" > ~/ansible_hosts
$ export ANSIBLE_INVENTORY=~/ansible_hosts

Note

ANSIBLE_INVENTORY is available starting with Ansible 1.9 and replaces the deprecated ANSIBLE_HOSTS.

You can read more about the inventory file in later parts of the manual.

Now let’s test things with a ping command:

$ ansible all -m ping --ask-pass

You can also use “sudo make install” if you wish.

Latest Release Via Yum

RPMs are available from yum for EPEL 6, 7, and currently supported Fedora distributions.

Ansible itself can manage earlier operating systems that contain Python 2.4 or higher (so also EL5).

Fedora users can install Ansible directly, though if you are using RHEL or CentOS and have not already done so, configure EPEL:

# install the epel-release RPM if needed on CentOS, RHEL, or Scientific Linux
$ sudo yum install ansible

You can also build an RPM yourself. From the root of a checkout or tarball, use the make rpm command to build an RPM you can distribute and install. Make sure you have rpm-build, make, and python2-devel installed.

$ git clone git://github.com/ansible/ansible.git --recursive
$ cd ./ansible
$ make rpm
$ sudo rpm -Uvh ./rpmbuild/ansible-*.noarch.rpm
Latest Releases Via Apt (Ubuntu)

Ubuntu builds are available in a PPA here.

To configure the PPA on your machine and install ansible run these commands:

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Note

On older Ubuntu distributions, “software-properties-common” is called “python-software-properties”.

Debian/Ubuntu packages can also be built from the source checkout; run:

$ make deb

You may also wish to run from source to get the latest, which is covered above.

Latest Releases Via Portage (Gentoo)
$ emerge -av app-admin/ansible

To install the newest version, you may need to unmask the ansible package prior to emerging:

$ echo 'app-admin/ansible' >> /etc/portage/package.accept_keywords

Note

If you have Python 3 as a default Python slot on your Gentoo nodes (default setting), then you must set ansible_python_interpreter = /usr/bin/python2 in your group or inventory variables.

Latest Releases Via pkg (FreeBSD)
$ sudo pkg install ansible

You may also wish to install from ports; run:

$ sudo make -C /usr/ports/sysutils/ansible install
Latest Releases on Mac OS X

The preferred way to install Ansible on a Mac is via pip.

The instructions can be found in the Latest Releases Via Pip section.

Latest Releases Via OpenCSW (Solaris)

Ansible is available for Solaris as a SysV package from OpenCSW.

# pkgadd -d http://get.opencsw.org/now
# /opt/csw/bin/pkgutil -i ansible
Latest Releases Via Pacman (Arch Linux)

Ansible is available in the Community repository:

$ pacman -S ansible

The AUR has a PKGBUILD for pulling directly from GitHub called ansible-git.

Also see the Ansible page on the ArchWiki.

Note

If you have Python 3 as a default Python slot on your Arch nodes (default setting), then you must set ansible_python_interpreter = /usr/bin/python2 in your group or inventory variables.

Latest Releases Via Pip

Ansible can be installed via “pip”, the Python package manager. If ‘pip’ isn’t already available in your version of Python, you can get pip by:

$ sudo easy_install pip

Then install Ansible with:

$ sudo pip install ansible

If you are installing on OS X Mavericks, you may encounter some noise from your compiler. A workaround is to do the following:

$ sudo CFLAGS=-Qunused-arguments CPPFLAGS=-Qunused-arguments pip install ansible

Readers who use virtualenv can also install Ansible under virtualenv, though we recommend not worrying about it and simply installing Ansible globally. Do not use easy_install to install Ansible directly.

Tarballs of Tagged Releases

Packaging Ansible, or wanting to build a local package yourself without doing a git checkout? Tarballs of releases are available on the Ansible downloads page.

These releases are also tagged in the git repository with the release version.

See also

Introduction To Ad-Hoc Commands
Examples of basic commands
Playbooks
Learning ansible’s configuration management language
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Getting Started
Foreword

Now that you’ve read Installation and installed Ansible, it’s time to dig in and get started with some commands.

What we are showing first are not the powerful configuration/deployment/orchestration features of Ansible. These features are handled by playbooks which are covered in a separate section.

This section is about how to initially get going. Once you have these concepts down, read Introduction To Ad-Hoc Commands for some more detail, and then you’ll be ready to dive into playbooks and explore the most interesting parts!

Remote Connection Information

Before we get started, it’s important to understand how Ansible communicates with remote machines over SSH.

By default, Ansible 1.3 and later will try to use native OpenSSH for remote communication when possible. This enables ControlPersist (a performance feature), Kerberos, and options in ~/.ssh/config such as Jump Host setup. However, when using Enterprise Linux 6 operating systems as the control machine (Red Hat Enterprise Linux and derivatives such as CentOS), the version of OpenSSH may be too old to support ControlPersist. On these operating systems, Ansible will fall back to using a high-quality Python implementation of OpenSSH called ‘paramiko’. If you wish to use features like Kerberized SSH and more, consider using Fedora, OS X, or Ubuntu as your control machine until a newer version of OpenSSH is available for your platform – or engage ‘accelerated mode’ in Ansible. See Accelerated Mode.

In releases up to and including Ansible 1.2, the default was strictly paramiko. Native SSH had to be explicitly selected with the -c ssh option or set in the configuration file.
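
The transport can also be pinned in the configuration file rather than on the command line; a minimal snippet (shown as an illustration; ‘smart’ is the default, which picks OpenSSH when ControlPersist is supported):

```
[defaults]
transport = ssh
```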

Occasionally you’ll encounter a device that doesn’t support SFTP. This is rare, but should it occur, you can switch to SCP mode in the configuration file.

When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged, but password authentication can also be used where needed by supplying the option --ask-pass. If you are using sudo features and sudo requires a password, also supply --ask-sudo-pass.

While it may be common sense, it is worth sharing: Any management system benefits from being run near the machines being managed. If you are running Ansible in a cloud, consider running it from a machine inside that cloud. In most cases this will work better than on the open Internet.

As an advanced topic, Ansible doesn’t just have to connect remotely over SSH. The transports are pluggable, and there are options for managing things locally, as well as managing chroot, lxc, and jail containers. A mode called ‘ansible-pull’ can also invert the system and have systems ‘phone home’ via scheduled git checkouts to pull configuration directives from a central repository.

Your first commands

Now that you’ve installed Ansible, it’s time to get started with some basics.

Edit (or create) /etc/ansible/hosts and put one or more remote systems in it. Your public SSH key should be located in authorized_keys on those systems:

192.168.1.50
aserver.example.org
bserver.example.org

This is an inventory file, which is also explained in greater depth here: Inventory.

We’ll assume you are using SSH keys for authentication. To set up SSH agent to avoid retyping passwords, you can do:

$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa

(Depending on your setup, you may wish to use Ansible’s --private-key option to specify a pem file instead)

Now ping all your nodes:

$ ansible all -m ping

Ansible will attempt to connect to the remote machines using your current user name, just like SSH would. To override the remote user name, use the ‘-u’ parameter.

If you would like to access sudo mode, there are also flags to do that:

# as bruce
$ ansible all -m ping -u bruce
# as bruce, sudoing to root
$ ansible all -m ping -u bruce --sudo
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --sudo --sudo-user batman

(The sudo implementation is changeable in Ansible’s configuration file if you happen to want to use a sudo replacement. Flags passed to sudo (like -H) can also be set there.)

Now run a live command on all of your nodes:

$ ansible all -a "/bin/echo hello"

Congratulations! You’ve just contacted your nodes with Ansible. It’s soon going to be time to: read about some more real-world cases in Introduction To Ad-Hoc Commands, explore what you can do with different modules, and to learn about the Ansible Playbooks language. Ansible is not just about running commands, it also has powerful configuration management and deployment features. There’s more to explore, but you already have a fully working infrastructure!

Host Key Checking

Ansible 1.2.1 and later have host key checking enabled by default.

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected. If a host is not initially in ‘known_hosts’, this will result in prompting for confirmation of the key, which makes for an interactive experience if Ansible is run from, say, cron. You might not want this.

If you understand the implications and wish to disable this behavior, you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg:

[defaults]
host_key_checking = False

Alternatively this can be set by an environment variable:

$ export ANSIBLE_HOST_KEY_CHECKING=False

Also note that host key checking in paramiko mode is reasonably slow, therefore switching to ‘ssh’ is also recommended when using this feature.

Ansible will log some information about module arguments on the remote system in the remote syslog, unless a task or play is marked with a “no_log: True” attribute. This is explained later.

To enable basic logging on the control machine see Configuration file document and set the ‘log_path’ configuration file setting. Enterprise users may also be interested in Ansible Tower. Tower provides a very robust database logging feature where it is possible to drill down and see history based on hosts, projects, and particular inventories over time – explorable both graphically and through a REST API.

See also

Inventory
More information about inventory
Introduction To Ad-Hoc Commands
Examples of basic commands
Playbooks
Learning Ansible’s configuration management language
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Inventory

Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of systems listed in Ansible’s inventory file, which defaults to being saved in the location /etc/ansible/hosts.

Not only is this inventory configurable, but you can also use multiple inventory files at the same time (explained below) and also pull inventory from dynamic or cloud sources, as described in Dynamic Inventory.

Hosts and Groups

The format for /etc/ansible/hosts is an INI-like format and looks like this:

mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

The things in brackets are group names, which are used in classifying systems and deciding what systems you are controlling at what times and for what purpose.

It is ok to put systems in more than one group, for instance a server could be both a webserver and a dbserver. If you do, note that variables will come from all of the groups they are a member of, and variable precedence is detailed in a later chapter.

If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon. Ports listed in your SSH config file won’t be used with the paramiko connection but will be used with the openssh connection.

To make things explicit, it is suggested that you set them if things are not running on the default port:

badwolf.example.com:5309

Suppose you have just static IPs and want to set up some aliases that live in your host file, or you are connecting through tunnels. You can also describe hosts like this:

jumper ansible_ssh_port=5555 ansible_ssh_host=192.168.1.50

In the above example, running Ansible against the host alias “jumper” (which may not even be a real hostname) will contact 192.168.1.50 on port 5555. Note that this is using a feature of the inventory file to define some special variables. Generally speaking this is not the best way to define variables that describe your system policy, but we’ll share suggestions on doing this later. We’re just getting started.

Adding a lot of hosts? If you have a lot of hosts following similar patterns, you can do this rather than listing each hostname:

[webservers]
www[01:50].example.com

For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define alphabetic ranges:

[databases]
db-[a:f].example.com
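
Ansible expands these patterns itself, but bash brace expansion behaves the same way and can be used to preview which hosts a pattern covers (an illustrative sketch, not something Ansible runs):

```shell
# Preview hostnames analogous to the inventory pattern www[01:03].example.com.
# The range is inclusive and keeps the leading zeros, just as in the inventory.
hosts=$(echo www{01..03}.example.com)
echo "$hosts"
```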

You can also select the connection type and user on a per host basis:

[targets]

localhost              ansible_connection=local
other1.example.com     ansible_connection=ssh        ansible_ssh_user=mpdehaan
other2.example.com     ansible_connection=ssh        ansible_ssh_user=mdehaan

As mentioned above, setting these in the inventory file is only a shorthand, and we’ll discuss how to store them in individual files in the ‘host_vars’ directory a bit later on.

Host Variables

As alluded to above, it is easy to assign variables to hosts that will be used later in playbooks:

[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909
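
Such host variables can then be referenced in plays and templates; a minimal hypothetical task using http_port:

```
---
- hosts: atlanta
  tasks:
    # {{ http_port }} resolves per host: 80 on host1, 303 on host2
    - name: show the per-host port (illustrative task)
      debug: msg="listening on {{ http_port }}"
```
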
Group Variables

Variables can also be applied to an entire group at once:

[atlanta]
host1
host2

[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com
Groups of Groups, and Group Variables

It is also possible to make groups of groups using the :children suffix. Just like above, you can apply variables using :vars:

[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast
northeast
southwest
northwest

If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory file, see the next section.

Splitting Out Host and Group Specific Data

The preferred practice in Ansible is actually not to store variables in the main inventory file.

In addition to storing variables directly in the INI file, host and group variables can be stored in individual files relative to the inventory file.

These variable files are in YAML format. See YAML Syntax if you are new to YAML.

Assuming the inventory file path is:

/etc/ansible/hosts

If the host is named ‘foosball’, and in groups ‘raleigh’ and ‘webservers’, variables in YAML files at the following locations will be made available to the host:

/etc/ansible/group_vars/raleigh
/etc/ansible/group_vars/webservers
/etc/ansible/host_vars/foosball

For instance, suppose you have hosts grouped by datacenter, and each datacenter uses some different servers. The data in the groupfile ‘/etc/ansible/group_vars/raleigh’ for the ‘raleigh’ group might look like:

---
ntp_server: acme.example.org
database_server: storage.example.org

It is ok if these files do not exist, as this is an optional feature.

As an advanced use-case, you can create directories named after your groups or hosts, and Ansible will read all the files in these directories. An example with the ‘raleigh’ group:

/etc/ansible/group_vars/raleigh/db_settings
/etc/ansible/group_vars/raleigh/cluster_settings

All hosts that are in the ‘raleigh’ group will have the variables defined in these files available to them. This can be very useful to keep your variables organized when a single file starts to be too big, or when you want to use Ansible Vault on a part of a group’s variables. Note that this only works on Ansible 1.4 or later.
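
For illustration, one of the split files above might contain (values are hypothetical, mirroring the earlier ‘raleigh’ example):

```
# /etc/ansible/group_vars/raleigh/db_settings
---
database_server: storage.example.org
database_port: 5432
```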

Tip: In Ansible 1.2 or later the group_vars/ and host_vars/ directories can exist in either the playbook directory OR the inventory directory. If both paths exist, variables in the playbook directory will override variables set in the inventory directory.

Tip: Keeping your inventory file and variables in a git repo (or other version control) is an excellent way to track changes to your inventory and host variables.

List of Behavioral Inventory Parameters

As alluded to above, setting the following variables controls how Ansible interacts with remote hosts.

Host connection:

ansible_connection
  Connection type to the host. Candidates are local, smart, ssh or paramiko.  The default is smart.

Ssh connection:

ansible_ssh_host
  The name of the host to connect to, if different from the alias you wish to give to it.
ansible_ssh_port
  The ssh port number, if not 22
ansible_ssh_user
  The default ssh user name to use.
ansible_ssh_pass
  The ssh password to use (this is insecure, we strongly recommend using --ask-pass or SSH keys)
ansible_ssh_private_key_file
  Private key file used by ssh.  Useful if using multiple keys and you don't want to use SSH agent.

Privilege escalation (see Ansible Privilege Escalation for further details):

ansible_become
  Equivalent to ansible_sudo or ansible_su; allows you to force privilege escalation
ansible_become_method
  Allows you to set the privilege escalation method
ansible_become_user
  Equivalent to ansible_sudo_user or ansible_su_user; allows you to set the user you become through privilege escalation
ansible_become_pass
  Equivalent to ansible_sudo_pass or ansible_su_pass; allows you to set the privilege escalation password

Remote host environment parameters:

ansible_shell_type
  The shell type of the target system. Commands are formatted using 'sh'-style syntax by default. Setting this to 'csh' or 'fish' will cause commands executed on target systems to follow those shell's syntax instead.
ansible_python_interpreter
  The target host python path. This is useful for systems with more
  than one Python or not located at "/usr/bin/python" such as \*BSD, or where /usr/bin/python
  is not a 2.X series Python.  We do not use the "/usr/bin/env" mechanism as that requires the remote user's
  path to be set right and also assumes the "python" executable is named python, where the executable might
  be named something like "python26".
ansible\_\*\_interpreter
  Works for anything such as ruby or perl and works just like ansible_python_interpreter.
  This replaces shebang of modules which will run on that host.

Examples from a host file:

some_host         ansible_ssh_port=2222     ansible_ssh_user=manager
aws_host          ansible_ssh_private_key_file=/home/example/.ssh/aws.pem
freebsd_host      ansible_python_interpreter=/usr/local/bin/python
ruby_module_host  ansible_ruby_interpreter=/usr/bin/ruby.1.9.3

See also

Dynamic Inventory
Pulling inventory from dynamic sources, such as cloud providers
Introduction To Ad-Hoc Commands
Examples of basic commands
Playbooks
Learning Ansible’s configuration, deployment, and orchestration language.
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Dynamic Inventory

Often a user of a configuration management system will want to keep inventory in a different software system. Ansible provides a basic text-based system as described in Inventory but what if you want to use something else?

Frequent examples include pulling inventory from a cloud provider, LDAP, Cobbler, or a piece of expensive enterprisey CMDB software.

Ansible easily supports all of these options via an external inventory system. The contrib/inventory directory contains some of these already – including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack, examples of some of which will be detailed below.

Ansible Tower also provides a database to store inventory results that is both web and REST Accessible. Tower syncs with all Ansible dynamic inventory sources you might be using, and also includes a graphical inventory editor. By having a database record of all of your hosts, it’s easy to correlate past event history and see which ones have had failures on their last playbook runs.

For information about writing your own dynamic inventory source, see Developing Dynamic Inventory Sources.

Example: The Cobbler External Inventory Script

It is expected that many Ansible users with a reasonable amount of physical hardware may also be Cobbler users. (note: Cobbler was originally written by Michael DeHaan and is now led by James Cammarata, who also works for Ansible, Inc).

While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic layer that allows it to represent data for multiple configuration management systems (even at the same time), and has been referred to as a ‘lightweight CMDB’ by some admins.

To tie Ansible’s inventory to Cobbler (optional), copy this script to /etc/ansible and chmod +x the file. cobblerd will now need to be running when you are using Ansible and you’ll need to use Ansible’s -i command line option (e.g. -i /etc/ansible/cobbler.py). This particular script will communicate with Cobbler using Cobbler’s XMLRPC API.

First test the script by running /etc/ansible/cobbler.py directly. You should see some JSON data output, but it may not have anything in it just yet.

Let’s explore what this does. In Cobbler, assume a scenario somewhat like the following:

cobbler profile add --name=webserver --distro=CentOS6-x86_64
cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"

In the example above, the system ‘foo.example.com’ will be addressable by ansible directly, but will also be addressable when using the group names ‘webserver’ or ‘atlanta’. Since Ansible uses SSH, we’ll try to contact system foo only over ‘foo.example.com’, never just ‘foo’. Similarly, if you try “ansible foo” it wouldn’t find the system... but “ansible ‘foo*’” would, because the system DNS name starts with ‘foo’.

The script doesn’t just provide host and group info. In addition, as a bonus, when the ‘setup’ module is run (which happens automatically when using playbooks), the variables ‘a’, ‘b’, and ‘c’ will all be auto-populated in the templates:

# file: /srv/motd.j2
Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}

Which could be executed just like this:

ansible webserver -m setup
ansible webserver -m template -a "src=/tmp/motd.j2 dest=/etc/motd"

Note

The name ‘webserver’ came from cobbler, as did the variables for the config file. You can still pass in your own variables like normal in Ansible, but variables from the external inventory script will override any that have the same name.

So, with the template above (motd.j2), this would result in the following data being written to /etc/motd for system ‘foo’:

Welcome, I am templated with a value of a=2, b=3, and c=4

And on system ‘bar’ (bar.example.com):

Welcome, I am templated with a value of a=2, b=3, and c=5

And technically, though there is no major reason to do it, this also works:

ansible webserver -m shell -a "echo {{ a }}"

So in other words, you can use those variables in arguments/actions as well.

Example: AWS EC2 External Inventory Script

If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the EC2 external inventory script.

You can use this script in one of two ways. The easiest is to use Ansible’s -i command line option and specify the path to the script after marking it executable:

ansible -i ec2.py -u ubuntu us-east-1d -m ping

The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then you can run ansible as you would normally.

To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a variety of methods available, but the simplest is just to export two environment variables:

export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'

You can test the script by itself to make sure your config is correct:

cd contrib/inventory
./ec2.py --list

After a few moments, you should see your entire EC2 inventory across all regions in JSON.

Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ec2.ini and list only the regions you are interested in. There are other config options in ec2.ini, including cache control and destination variables.

At their heart, inventory files are simply a mapping from some name to a destination address. The default ec2.ini settings are configured for running Ansible from outside EC2 (from your laptop for example) – and this is not the most efficient way to manage EC2.

If you are running Ansible from within EC2, internal DNS names and IP addresses may make more sense than public DNS names. In this case, you can modify the destination_variable in ec2.ini to be the private DNS name of an instance. This is particularly important when running Ansible within a private subnet inside a VPC, where the only way to access an instance is via its private IP address. For VPC instances, vpc_destination_variable in ec2.ini provides a means of using whichever boto.ec2.instance variable makes the most sense for your use case.

The EC2 external inventory provides mappings to instances from several groups:

Global
All instances are in group ec2.
Instance ID
These are groups of one, since instance IDs are unique, e.g. i-00112233, i-a1b1c1d1.
Region
A group of all instances in an AWS region, e.g. us-east-1, us-west-2.
Availability Zone
A group of all instances in an availability zone, e.g. us-east-1a, us-east-1b.
Security Group
Instances belong to one or more security groups. A group is created for each security group, with all characters except alphanumerics and dashes (-) converted to underscores (_). Each group is prefixed by security_group_, e.g. security_group_default, security_group_webservers, security_group_Pete_s_Fancy_Group.
Tags
Each instance can have a variety of key/value pairs associated with it, called Tags. The most common tag key is ‘Name’, though anything is possible. Each key/value pair is its own group of instances, again with special characters converted to underscores, in the format tag_KEY_VALUE, e.g. tag_Name_Web can be used as is; tag_Name_redis-master-001 becomes tag_Name_redis_master_001; tag_aws_cloudformation_logical-id_WebServerGroup becomes tag_aws_cloudformation_logical_id_WebServerGroup.
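
The sanitization described above can be sketched in shell for the security-group case: everything that is not alphanumeric or a dash becomes an underscore, and then the prefix is added. This mirrors the documented behavior, not the ec2.py source itself:

```shell
# "Pete's Fancy Group" -> security_group_Pete_s_Fancy_Group
name="Pete's Fancy Group"
safe=$(printf '%s' "$name" | tr -c '[:alnum:]-' '_')
group="security_group_${safe}"
echo "$group"
```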

When Ansible is interacting with a specific server, the EC2 inventory script is called again with the --host HOST option. This looks up the HOST in the index cache to get the instance ID, and then makes an API call to AWS to get information about that specific instance. It then makes information about that instance available as variables to your playbooks. Each variable is prefixed by ec2_. Here are some of the variables available:

  • ec2_architecture
  • ec2_description
  • ec2_dns_name
  • ec2_id
  • ec2_image_id
  • ec2_instance_type
  • ec2_ip_address
  • ec2_kernel
  • ec2_key_name
  • ec2_launch_time
  • ec2_monitored
  • ec2_ownerId
  • ec2_placement
  • ec2_platform
  • ec2_previous_state
  • ec2_private_dns_name
  • ec2_private_ip_address
  • ec2_public_dns_name
  • ec2_ramdisk
  • ec2_region
  • ec2_root_device_name
  • ec2_root_device_type
  • ec2_security_group_ids
  • ec2_security_group_names
  • ec2_spot_instance_request_id
  • ec2_state
  • ec2_state_code
  • ec2_state_reason
  • ec2_status
  • ec2_subnet_id
  • ec2_tag_Name
  • ec2_tenancy
  • ec2_virtualization_type
  • ec2_vpc_id

Both ec2_security_group_ids and ec2_security_group_names are comma-separated lists of all security groups. Each EC2 tag is a variable in the format ec2_tag_KEY.
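The tag groups and ec2_ variables can be used together in a play. A minimal sketch, assuming your instances are tagged Name=webserver (so the script creates a tag_Name_webserver group):

```yaml
# Hypothetical play: target an EC2 tag group and use ec2_-prefixed variables.
# The group name tag_Name_webserver is an assumption based on the tag rules above.
- hosts: tag_Name_webserver
  gather_facts: false
  tasks:
    - name: show where each instance lives
      debug:
        msg: "{{ ec2_id }} is {{ ec2_state }} in {{ ec2_placement }}"
```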

To see the complete list of variables available for an instance, run the script by itself:

cd contrib/inventory
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com

Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable in ec2.ini. To explicitly clear the cache, you can run the ec2.py script with the --refresh-cache parameter:

# ./ec2.py --refresh-cache
Other inventory scripts

In addition to Cobbler and EC2, inventory scripts are also available for:

BSD Jails
DigitalOcean
Google Compute Engine
Linode
OpenShift
OpenStack Nova
Red Hat's SpaceWalk
Vagrant (not to be confused with the provisioner in vagrant, which is preferred)
Zabbix

Sections on how to use these in more detail will be added over time, but by looking at the “contrib/inventory” directory of the Ansible checkout it should be very obvious how to use them. The process for the AWS inventory script is the same.

If you develop an interesting inventory script that might be general purpose, please submit a pull request – we’d likely be glad to include it in the project.

Using Multiple Inventory Sources

If the location given to -i in Ansible is a directory (or as so configured in ansible.cfg), Ansible can use multiple inventory sources at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same ansible run. Instant hybrid cloud!
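A minimal sketch of such a directory (file names and the group are illustrative): a static INI file and a dynamic script live side by side, and -i points at the directory itself:

```shell
# Build an inventory directory mixing a static source with a dynamic one.
# The file names here are illustrative, not required.
mkdir -p inventory
printf '[dbservers]\ndb1.example.com\n' > inventory/static.ini
# A dynamic script such as ec2.py would sit alongside it, marked executable:
#   cp contrib/inventory/ec2.py inventory/ && chmod +x inventory/ec2.py
ls inventory
# The whole directory is then used as one merged inventory:
#   ansible-playbook -i inventory/ site.yml
```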

Static Groups of Dynamic Groups

When defining groups of groups in the static inventory file, the child groups must also be defined in the static inventory file, or ansible will return an error. If you want to define a static group of dynamic child groups, define the dynamic groups as empty in the static inventory file. For example:

[tag_Name_staging_foo]

[tag_Name_staging_bar]

[staging:children]
tag_Name_staging_foo
tag_Name_staging_bar

See also

Inventory
All about static inventory files
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Patterns

Topics

Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in terms of Playbooks it actually means what hosts to apply a particular configuration or IT process to.

We’ll go over how to use the command line in the Introduction To Ad-Hoc Commands section; basically, it looks like this:

ansible <pattern_goes_here> -m <module_name> -a <arguments>

Such as:

ansible webservers -m service -a "name=httpd state=restarted"

A pattern usually refers to a set of groups (which are sets of hosts) – in the above case, machines in the “webservers” group.

Anyway, to use Ansible, you’ll first need to know how to tell Ansible which hosts in your inventory to talk to. This is done by designating particular host names or groups of hosts.

The following patterns are equivalent and target all hosts in the inventory:

all
*

It is also possible to address a specific host or set of hosts by name:

one.example.com
one.example.com:two.example.com
192.168.1.50
192.168.1.*

The following patterns address one or more groups. Groups separated by a colon indicate an “OR” configuration. This means the host may be in either one group or the other:

webservers
webservers:dbservers

You can exclude groups as well, for instance, all machines must be in the group webservers but not in the group phoenix:

webservers:!phoenix

You can also specify the intersection of two groups. This would mean the hosts must be in the group webservers and the host must also be in the group staging:

webservers:&staging

You can do combinations:

webservers:dbservers:&staging:!phoenix

The above configuration means “all machines in the groups ‘webservers’ and ‘dbservers’ are to be managed if they are in the group ‘staging’ also, but the machines are not to be managed if they are in the group ‘phoenix’” ... whew!

You can also use variables if you want to pass some group specifiers via the “-e” argument to ansible-playbook, but this is uncommonly used:

webservers:!{{excluded}}:&{{required}}

You also don’t have to manage by strictly defined groups. Individual host names, IPs and groups can also be referenced using wildcards:

*.example.com
*.com

It’s also ok to mix wildcard patterns and groups at the same time:

one*.com:dbservers

As an advanced usage, you can also select the numbered server in a group:

webservers[0]

Or a portion of servers in a group:

webservers[0-25]

Most people don’t specify patterns as regular expressions, but you can. Just start the pattern with a ‘~’:

~(web|db).*\.example\.com

While we’re jumping a bit ahead, additionally, you can add an exclusion criteria just by supplying the --limit flag to /usr/bin/ansible or /usr/bin/ansible-playbook:

ansible-playbook site.yml --limit datacenter2

And if you want to read the list of hosts from a file, prefix the file name with ‘@’. Since Ansible 1.2:

ansible-playbook site.yml --limit @retry_hosts.txt

Easy enough. See Introduction To Ad-Hoc Commands and then Playbooks for how to apply this knowledge.

See also

Introduction To Ad-Hoc Commands
Examples of basic commands
Playbooks
Learning ansible’s configuration management language
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Introduction To Ad-Hoc Commands

The following examples show how to use /usr/bin/ansible for running ad hoc tasks.

What’s an ad-hoc command?

An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.

This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language – ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.

Generally speaking, the true power of Ansible lies in playbooks. Why would you use ad-hoc tasks versus playbooks?

For instance, if you wanted to power off all of your lab for Christmas vacation, you could execute a quick one-liner in Ansible without writing a playbook.

For configuration management and deployments, though, you’ll want to pick up on using ‘/usr/bin/ansible-playbook’ – the concepts you will learn here will port over directly to the playbook language.

(See Playbooks for more information about those)

If you haven’t read Inventory already, please look that over a bit first and then we’ll get going.

Parallelism and Shell Commands

Arbitrary example.

Let’s use Ansible’s command line tool to reboot all web servers in Atlanta, 10 at a time. First, let’s set up SSH-agent so it can remember our credentials:

$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa

If you don’t want to use ssh-agent and would rather SSH with a password instead of keys, you can with --ask-pass (-k), but it’s much better to just use ssh-agent.

Now to run the command on all servers in a group, in this case, atlanta, in 10 parallel forks:

$ ansible atlanta -a "/sbin/reboot" -f 10

/usr/bin/ansible will default to running from your user account. If you do not like this behavior, pass in “-u username”. If you want to run commands as a different user, it looks like this:

$ ansible atlanta -a "/usr/bin/foo" -u username

Often you’ll not want to just do things from your user account. If you want to run commands through sudo:

$ ansible atlanta -a "/usr/bin/foo" -u username --sudo [--ask-sudo-pass]

Use --ask-sudo-pass (-K) if you are not using passwordless sudo. This will interactively prompt you for the password to use. Use of passwordless sudo makes things easier to automate, but it’s not required.

It is also possible to sudo to a user other than root using --sudo-user (-U):

$ ansible atlanta -a "/usr/bin/foo" -u username -U otheruser [--ask-sudo-pass]

Note

Rarely, some users have security rules where they constrain their sudo environment to running specific command paths only. This does not work with ansible’s no-bootstrapping philosophy and hundreds of different modules. If doing this, use Ansible from a special account that does not have this constraint. One way of doing this without sharing access to unauthorized users would be gating Ansible with Ansible Tower, which can hold on to an SSH credential and let members of certain organizations use it on their behalf without having direct access.

Ok, so those are basics. If you didn’t read about patterns and groups yet, go back and read Patterns.

The -f 10 in the above specifies the use of 10 simultaneous processes. You can also set this in the Configuration file to avoid setting it again. The default is actually 5, which is really small and conservative. You are probably going to want to talk to a lot more simultaneous hosts, so feel free to crank this up. If you have more hosts than the value set for the fork count, Ansible will still talk to them, but it will take a little longer. Feel free to push this value as high as your system can handle!

You can also select what Ansible “module” you want to run. Normally commands also take a -m for module name, but the default module name is ‘command’, so we didn’t need to specify that all of the time. We’ll use -m in later examples to run some other modules (see About Modules).

Note

The command module does not support shell variables and things like piping. If you want to execute a command through a shell, use the ‘shell’ module instead. Read more about the differences on the About Modules page.

Using the shell module looks like this:

$ ansible raleigh -m shell -a 'echo $TERM'

When running any command with the Ansible ad hoc CLI (as opposed to Playbooks), pay particular attention to shell quoting rules, so the local shell doesn’t eat a variable before it gets passed to Ansible. For example, using double rather than single quotes in the above example would evaluate the variable on the box you were on.
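A quick local demonstration of why this matters, sketched with plain echo (no Ansible involved):

```shell
# Single quotes keep $TERM literal, so Ansible can deliver it to the
# remote shell untouched; double quotes expand it locally first.
echo 'single quotes: $TERM'   # prints the literal string $TERM
echo "double quotes: $TERM"   # prints whatever $TERM is on *this* box
```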

So far we’ve been demoing simple command execution, but most Ansible modules do not work like simple scripts. They make the remote system look like you state, and run the commands necessary to get it there. This is commonly referred to as ‘idempotence’, and is a core design goal of Ansible. However, we also recognize that running arbitrary commands is equally important, so Ansible easily supports both.

File Transfer

Here’s another use case for the /usr/bin/ansible command line. Ansible can SCP lots of files to multiple machines in parallel.

To transfer a file directly to many servers:

$ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"

If you use playbooks, you can also take advantage of the template module, which takes this another step further. (See module and playbook documentation).

The file module allows changing ownership and permissions on files. These same options can be passed directly to the copy module as well:

$ ansible webservers -m file -a "dest=/srv/foo/a.txt mode=600"
$ ansible webservers -m file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"

The file module can also create directories, similar to mkdir -p:

$ ansible webservers -m file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"

As well as delete directories (recursively) and delete files:

$ ansible webservers -m file -a "dest=/path/to/c state=absent"
Managing Packages

There are modules available for yum and apt. Here are some examples with yum.

Ensure a package is installed, but don’t update it:

$ ansible webservers -m yum -a "name=acme state=present"

Ensure a package is installed to a specific version:

$ ansible webservers -m yum -a "name=acme-1.5 state=present"

Ensure a package is at the latest version:

$ ansible webservers -m yum -a "name=acme state=latest"

Ensure a package is not installed:

$ ansible webservers -m yum -a "name=acme state=absent"

Ansible has modules for managing packages under many platforms. If there is no module available for your package manager, you can install packages using the command module or (better!) contribute a module for other package managers. Stop by the mailing list for info/details.

Users and Groups

The ‘user’ module allows easy creation and manipulation of existing user accounts, as well as removal of user accounts that may exist:

$ ansible all -m user -a "name=foo password=<crypted password here>"

$ ansible all -m user -a "name=foo state=absent"

See the About Modules section for details on all of the available options, including how to manipulate groups and group membership.

Deploying From Source Control

Deploy your webapp straight from git:

$ ansible webservers -m git -a "repo=git://foo.example.org/repo.git dest=/srv/myapp version=HEAD"

Since Ansible modules can notify change handlers it is possible to tell Ansible to run specific tasks when the code is updated, such as deploying Perl/Python/PHP/Ruby directly from git and then restarting apache.

Managing Services

Ensure a service is started on all webservers:

$ ansible webservers -m service -a "name=httpd state=started"

Alternatively, restart a service on all webservers:

$ ansible webservers -m service -a "name=httpd state=restarted"

Ensure a service is stopped:

$ ansible webservers -m service -a "name=httpd state=stopped"
Time Limited Background Operations

Long running operations can be backgrounded, and their status can be checked on later. If you kick hosts and don’t want to poll, it looks like this:

$ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"

If you do decide you want to check on the job status later, you can use the async_status module, passing it the job id that was returned when you ran the original job in the background:

$ ansible web1.example.com -m async_status -a "jid=488359678239.2844"

Polling is built-in and looks like this:

$ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"

The above example says “run for 30 minutes max (-B: 30*60=1800), poll for status (-P) every 60 seconds”.

Poll mode is smart so all jobs will be started before polling will begin on any machine. Be sure to use a high enough --forks value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (-B), the process on the remote nodes will be terminated.

Typically you’ll only be backgrounding long-running shell commands or software upgrades. Backgrounding the copy module does not do a background file transfer. Playbooks also support polling, and have a simplified syntax for this.

Gathering Facts

Facts are described in the playbooks section and represent discovered variables about a system. These can be used to implement conditional execution of tasks but also just to get ad-hoc information about your system. You can see all facts via:

$ ansible all -m setup

It’s also possible to filter this output to just export certain facts, see the “setup” module documentation for details.

Read more about facts at Variables once you’re ready to read up on Playbooks.

See also

Configuration file
All about the Ansible config file
About Modules
A list of available modules
Playbooks
Using Ansible for configuration management & deployment
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Configuration file

Certain settings in Ansible are adjustable via a configuration file. The stock configuration should be sufficient for most users, but there may be reasons you would want to change them.

Changes can be made and used in a configuration file which will be processed in the following order:

* ANSIBLE_CONFIG (an environment variable)
* ansible.cfg (in the current directory)
* .ansible.cfg (in the home directory)
* /etc/ansible/ansible.cfg

Prior to 1.5 the order was:

* ansible.cfg (in the current directory)
* ANSIBLE_CONFIG (an environment variable)
* .ansible.cfg (in the home directory)
* /etc/ansible/ansible.cfg

Ansible will process the above list and use the first file found. Settings in files are not merged.
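The lookup can be sketched as a loop that stops at the first existing file; the paths after the environment variable are fixed:

```shell
# Sketch of Ansible's config lookup: the first existing file wins outright;
# settings from lower-priority files are never merged in.
find_config() {
    for candidate in "${ANSIBLE_CONFIG:-}" ./ansible.cfg ~/.ansible.cfg /etc/ansible/ansible.cfg; do
        if [ -n "$candidate" ] && [ -f "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
    return 1
}
touch ansible.cfg   # a config in the current directory...
find_config         # ...wins, unless ANSIBLE_CONFIG points elsewhere
```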

Getting the latest configuration

If installing Ansible from a package manager, the latest ansible.cfg should be present in /etc/ansible, possibly as a “.rpmnew” file (or other) as appropriate in the case of updates.

If you have installed from pip or from source, however, you may want to create this file in order to override default settings in Ansible.

You may wish to consult the ansible.cfg in source control for all of the possible latest values.

Environmental configuration

Ansible also allows configuration of settings via environment variables. If these environment variables are set, they will override any setting loaded from the configuration file. For brevity these variables are not defined here; look in ‘constants.py’ in the source tree if you want to use them. They are mostly considered a legacy system as compared to the config file, but are equally valid.

Explanation of values by section

The configuration file is broken up into sections. Most options are in the “general” section but some sections of the file are specific to certain connection types.

General defaults

In the [defaults] section of ansible.cfg, the following settings are tunable:

action_plugins

Actions are pieces of code in ansible that enable things like module execution, templating, and so forth.

This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:

action_plugins = ~/.ansible/plugins/action_plugins/:/usr/share/ansible_plugins/action_plugins

Most users will not need to use this feature. See Developing Plugins for more details.

ansible_managed

Ansible-managed is a string that can be inserted into files written by Ansible’s config templating system, if you use a string like:

{{ ansible_managed }}

The default configuration shows who modified a file and when:

ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}

This is useful to tell users that a file has been placed by Ansible and manual changes are likely to be overwritten.

Note that if using this feature, and there is a date in the string, the template will be reported changed each time as the date is updated.

ask_pass

This controls whether an Ansible playbook should prompt for a password by default. The default behavior is no:

ask_pass=True

If you are using SSH keys for authentication, you probably do not need to change this setting.

ask_sudo_pass

Similar to ask_pass, this controls whether an Ansible playbook should prompt for a sudo password by default when sudoing. The default behavior is also no:

ask_sudo_pass=True

Users on platforms where sudo passwords are enabled should consider changing this setting.

ask_vault_pass

This controls whether an Ansible playbook should prompt for the vault password by default. The default behavior is no:

ask_vault_pass=True
bin_ansible_callbacks

New in version 1.8.

Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for /usr/bin/ansible-playbook if present and cannot be disabled:

bin_ansible_callbacks=False

Prior to 1.8, callbacks were never loaded for /usr/bin/ansible.

callback_plugins

Callbacks are pieces of code in Ansible that get called on specific events, allowing notifications to be triggered.

This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:

callback_plugins = ~/.ansible/plugins/callback_plugins/:/usr/share/ansible_plugins/callback_plugins

Most users will not need to use this feature. See Developing Plugins for more details

stdout_callback

New in version 2.0.

This setting allows you to override the default stdout callback for ansible-playbook.

callback_whitelist

New in version 2.0.

Ansible now ships with all of its included callback plugins ready to use, but they are disabled by default. This setting lets you enable a list of additional callbacks. It cannot change or override the default stdout callback; use stdout_callback for that.
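For example, a sketch enabling two of the shipped-but-disabled plugins (the names timer and mail are assumed to be present in your install’s plugin set):

```ini
# ansible.cfg sketch: enable extra callbacks alongside the default stdout one
[defaults]
callback_whitelist = timer, mail
```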

command_warnings

New in version 1.8.

Since Ansible 1.8, Ansible will by default warn when usage of the shell or command module could be simplified by using a built-in Ansible module instead. This can include reminders to use the ‘git’ module instead of shell commands to execute ‘git’. Using modules where possible instead of arbitrary shell commands leads to more reliable and consistent playbook runs, and easier-to-maintain playbooks:

command_warnings = False

These warnings can be controlled by adjusting this setting, or per-command by adding warn=yes or warn=no to the end of the command line parameter string, like so:

- name: usage of git that could be replaced with the git module
  shell: git update foo warn=yes
connection_plugins

Connection plugins permit extending the channel used by Ansible to transport commands and files.

This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:

connection_plugins = ~/.ansible/plugins/connection_plugins/:/usr/share/ansible_plugins/connection_plugins

Most users will not need to use this feature. See Developing Plugins for more details

deprecation_warnings

New in version 1.3.

Allows disabling of deprecation warnings in ansible-playbook output:

deprecation_warnings = True

Deprecation warnings indicate usage of legacy features that are slated for removal in a future release of Ansible.

display_skipped_hosts

If set to False, ansible will not display any status for a task that is skipped. The default behavior is to display skipped tasks:

display_skipped_hosts=True

Note that Ansible will always show the task header for any task, regardless of whether or not the task is skipped.

error_on_undefined_vars

On by default since Ansible 1.3, this causes ansible to fail steps that reference variable names that are likely typoed:

error_on_undefined_vars=True

If set to False, any ‘{{ template_expression }}’ that contains undefined variables will be rendered in a template or ansible action line exactly as written.

executable

This indicates the command to use to spawn a shell under a sudo environment. Users may need to change this to /bin/bash in rare instances when sudo is constrained, but in most cases it may be left as is:

executable = /bin/bash
filter_plugins

Filters are specific functions that can be used to extend the template system.

This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:

filter_plugins = ~/.ansible/plugins/filter_plugins/:/usr/share/ansible_plugins/filter_plugins

Most users will not need to use this feature. See Developing Plugins for more details

force_color

This option forces color mode even when running without a TTY:

force_color = 1
force_handlers

New in version 1.9.1.

This option causes notified handlers to run on a host even if a failure occurs on that host:

force_handlers = True

The default is False, meaning that handlers will not run if a failure has occurred on a host. This can also be set per play or on the command line. See Handlers and Failure for more details.

forks

This is the default number of parallel processes to spawn when communicating with remote hosts. Since Ansible 1.3, the fork number is automatically limited to the number of possible hosts, so this is really a limit of how much network and CPU load you think you can handle. Many users may set this to 50, some set it to 500 or more. If you have a large number of hosts, higher values will make actions across all of those hosts complete faster. The default is very very conservative:

forks=5
gathering

New in 1.6, the ‘gathering’ setting controls the default policy of facts gathering (variables discovered about remote systems).

The value ‘implicit’ is the default, which means that the fact cache will be ignored and facts will be gathered per play unless ‘gather_facts: False’ is set. The value ‘explicit’ is the inverse, facts will not be gathered unless directly requested in the play. The value ‘smart’ means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save fact gathering time. Both ‘smart’ and ‘explicit’ will use the fact cache.
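As a config sketch, a site that reuses facts across plays might set:

```ini
# ansible.cfg sketch: gather facts at most once per host per playbook run
[defaults]
gathering = smart
```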

hash_behaviour

Ansible by default will override variables in specific precedence orders, as described in Variables. When a variable of higher precedence wins, it will replace the other value.

Some users prefer that variables that are hashes (aka ‘dictionaries’ in Python terms) are merged. This setting is called ‘merge’. This is not the default behavior and it does not affect variables whose values are scalars (integers, strings) or arrays. We generally recommend not using this setting unless you think you have an absolute need for it, and playbooks in the official examples repos do not use this setting:

hash_behaviour=replace

The valid values are either ‘replace’ (the default) or ‘merge’.

hostfile

This setting has been deprecated since 1.9; see inventory for the replacement setting.

host_key_checking

As described in Getting Started, host key checking is on by default in Ansible 1.3 and later. If you understand the implications and wish to disable it, you may do so here by setting the value to False:

host_key_checking=True
inventory

This is the default location of the inventory file, script, or directory that Ansible will use to determine what hosts it has available to talk to:

inventory = /etc/ansible/hosts

It used to be called hostfile in Ansible before 1.9.

jinja2_extensions

This is a developer-specific feature that allows enabling additional Jinja2 extensions:

jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

If you do not know what these do, you probably don’t need to change this setting :)

library

This is the default location Ansible looks to find modules:

library = /usr/share/ansible

Ansible knows how to look in multiple locations if you feed it a colon-separated path, and it also will look for modules in the “./library” directory alongside a playbook.

log_path

If present and configured in ansible.cfg, Ansible will log information about executions at the designated location. Be sure the user running Ansible has permissions on the logfile:

log_path=/var/log/ansible.log

This behavior is not on by default. Note that ansible will, without this setting, record module arguments called to the syslog of managed machines. Password arguments are excluded.

For Enterprise users seeking more detailed logging history, you may be interested in Ansible Tower.

lookup_plugins

This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:

lookup_plugins = ~/.ansible/plugins/lookup_plugins/:/usr/share/ansible_plugins/lookup_plugins

Most users will not need to use this feature. See Developing Plugins for more details

module_lang

This sets the default language used to communicate between the module and the system. By default, the value is ‘C’.

module_name

This is the default module name (-m) value for /usr/bin/ansible. The default is the ‘command’ module. Remember the command module doesn’t support shell variables, pipes, or quotes, so you might wish to change it to ‘shell’:

module_name = command
nocolor

By default ansible will try to colorize output to give a better indication of failure and status information. If you dislike this behavior you can turn it off by setting ‘nocolor’ to 1:

nocolor=0
nocows

By default ansible will take advantage of cowsay if installed to make /usr/bin/ansible-playbook runs more exciting. Why? We believe systems management should be a happy experience. If you do not like the cows, you can disable them by setting ‘nocows’ to 1:

nocows=0
pattern

This is the default group of hosts to talk to in a playbook if no “hosts:” stanza is supplied. The default is to talk to all hosts. You may wish to change this to protect yourself from surprises:

hosts=*

Note that /usr/bin/ansible always requires a host pattern and does not use this setting, only /usr/bin/ansible-playbook.

poll_interval

For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when something may have completed:

poll_interval=15
private_key_file

If you are using a pem file to authenticate with machines rather than SSH agent or passwords, you can set the default value here to avoid re-specifying --private-key with every invocation:

private_key_file=/path/to/file.pem
remote_port

This sets the default SSH port on all of your systems, for systems that didn’t specify an alternative value in inventory. The default is the standard 22:

remote_port = 22
remote_tmp

Ansible works by transferring modules to your remote machines, running them, and then cleaning up after itself. In some cases, you may not wish to use the default location and would like to change the path. You can do so by altering this setting:

remote_tmp = $HOME/.ansible/tmp

The default is to use a subdirectory of the user’s home directory. Ansible will then choose a random directory name inside this location.

remote_user

This is the default username ansible will connect as for /usr/bin/ansible-playbook. Note that /usr/bin/ansible will always default to the current user if this is not defined:

remote_user = root
roles_path

The roles path indicates additional directories beyond the ‘roles/’ subdirectory of a playbook project to search for Ansible roles. For instance, if there was a source control repository of common roles and a different repository of playbooks, you might choose to establish a convention to check out roles in /opt/mysite/roles like so:

roles_path = /opt/mysite/roles

Additional paths can be provided separated by colon characters, in the same way as other pathstrings:

roles_path = /opt/mysite/roles:/opt/othersite/roles

Roles are searched for first in the playbook directory. Should a role not be found there, Ansible will report all of the possible paths that were searched.

sudo_exe

If using an alternative sudo implementation on remote machines, the path to sudo can be replaced here, provided the implementation matches the CLI flags of standard sudo:

sudo_exe=sudo
sudo_flags

Additional flags to pass to sudo when engaging sudo support. The default is ‘-H’ which preserves the $HOME environment variable of the original user. In some situations you may wish to add or remove flags, but in general most users will not need to change this setting:

sudo_flags=-H
sudo_user

This is the default user to sudo to if --sudo-user is not specified or ‘sudo_user’ is not specified in an Ansible playbook. The default is the most logical: ‘root’:

sudo_user=root
system_warnings

New in version 1.6.

Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts):

system_warnings = True

These may include warnings about 3rd party packages or other conditions that should be resolved if possible.

timeout

This is the default SSH timeout to use on connection attempts:

timeout = 10
transport

This is the default transport to use if “-c <transport_name>” is not specified to /usr/bin/ansible or /usr/bin/ansible-playbook. The default is ‘smart’, which will use ‘ssh’ (OpenSSH based) if the local operating system is new enough to support ControlPersist technology, and then will otherwise use ‘paramiko’. Other transport options include ‘local’, ‘chroot’, ‘jail’, and so on.

Users should usually leave this setting as ‘smart’ and let their playbooks choose an alternate setting when needed with the ‘connection:’ play parameter.

vars_plugins

This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:

vars_plugins = ~/.ansible/plugins/vars_plugins/:/usr/share/ansible_plugins/vars_plugins

Most users will not need to use this feature. See Developing Plugins for more details.

vault_password_file

New in version 1.7.

Configures the path to the Vault password file as an alternative to specifying --vault-password-file on the command line:

vault_password_file = /path/to/vault_password_file

As of 1.7 this file can also be a script. If you are using a script instead of a flat file, ensure that it is marked as executable, and that the password is printed to standard output. If your script needs to prompt for data, prompts can be sent to standard error.

Privilege Escalation Settings

Ansible can use existing privilege escalation systems to allow a user to execute tasks as another. As of 1.9 ‘become’ supersedes the old sudo/su, while still being backwards compatible. Settings live under the [privilege_escalation] header.

become

The equivalent of adding sudo: or su: to a play or task, set to true/yes to activate privilege escalation. The default behavior is no:

become=True
become_method

Set the privilege escalation method. The default is sudo, other options are su, pbrun, pfexec:

become_method=su
become_user

The equivalent of ansible_sudo_user or ansible_su_user, this allows you to set the user you become through privilege escalation. The default is ‘root’:

become_user=root
become_ask_pass

Ask for the privilege escalation password; the default is False:

become_ask_pass=True
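These configuration defaults correspond to keywords that can also be set per play or per task, and play-level settings take precedence over the configuration file. As a sketch (the host group, login account, and target user are illustrative), the playbook-level equivalents look like this:

```yaml
---
- hosts: dbservers          # illustrative group
  remote_user: localuser    # illustrative login account
  become: yes               # same effect as become=True in ansible.cfg
  become_method: su         # same effect as become_method=su
  become_user: postgres     # same effect as become_user, with a non-root target
  tasks:
    - name: confirm which user the tasks run as
      command: whoami
```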
Paramiko Specific Settings

Paramiko is the default SSH connection implementation on Enterprise Linux 6 or earlier, and is not used by default on other platforms. Settings live under the [paramiko] header.

record_host_keys

The default setting of yes will record newly discovered and approved (if host key checking is enabled) hosts in the user’s hostfile. This setting may be inefficient for large numbers of hosts, and in those situations, using the ssh transport is definitely recommended instead. Setting it to False will improve performance and is recommended when host key checking is disabled:

record_host_keys=True
OpenSSH Specific Settings

Under the [ssh_connection] header, the following settings are tunable for SSH connections. OpenSSH is the default connection type for Ansible on OSes that are new enough to support ControlPersist. (This means basically all operating systems except Enterprise Linux 6 or earlier).

ssh_args

If set, this will pass a specific set of options to Ansible rather than Ansible’s usual defaults:

ssh_args = -o ControlMaster=auto -o ControlPersist=60s

In particular, users may wish to raise the ControlPersist time to improve performance. A value of 30 minutes may be appropriate. If ssh_args is set, the default control_path setting is not used.

control_path

This is the location to save ControlPath sockets. This defaults to:

control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r

On some systems with very long hostnames or very long path names (caused by long user names or deeply nested home directories) this can exceed the character limit on file socket names (108 characters for most platforms). In that case, you may wish to shorten the string to something like the below:

control_path = %(directory)s/%%h-%%r

Ansible 1.4 and later will instruct users to run with “-vvvv” in situations where it hits this problem, which makes it easy to tell that the ControlPath filename is too long. This may be frequently encountered on EC2. This setting is ignored if ssh_args is set.

scp_if_ssh

Occasionally users may be managing a remote system that doesn’t have SFTP enabled. If this is set to True, scp will be used to transfer remote files instead:

scp_if_ssh=False

There’s really no reason to change this unless problems are encountered, and then there’s no real drawback to flipping the switch. Most environments support SFTP by default, so this doesn’t usually need to be changed.

pipelining

Enabling pipelining reduces the number of SSH operations required to execute a module on the remote server, by executing many ansible modules without actual file transfer. This can result in a very significant performance improvement when enabled, however when using “sudo:” operations you must first disable ‘requiretty’ in /etc/sudoers on all managed hosts.

By default, this option is disabled to preserve compatibility with sudoers configurations that have requiretty (the default on many distros), but is highly recommended if you can enable it, eliminating the need for Accelerated Mode:

pipelining=False
Accelerated Mode Settings

Under the [accelerate] header, the following settings are tunable for Accelerated Mode. Acceleration is a useful performance feature to use if you cannot enable pipelining in your environment, but is probably not needed if you can.

accelerate_port

New in version 1.3.

This is the port to use for accelerated mode:

accelerate_port = 5099
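Accelerated mode itself is enabled per play rather than in the configuration file alone; a minimal sketch (the host pattern is illustrative):

```yaml
---
- hosts: all
  accelerate: true
  accelerate_port: 5099   # matches the accelerate_port setting above
  tasks:
    - name: run a task through the accelerate daemon
      ping:
```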
accelerate_timeout

New in version 1.4.

This setting controls the timeout for receiving data from a client. If no data is received during this time, the socket connection will be closed. A keepalive packet is sent back to the controller every 15 seconds, so this timeout should not be set lower than 15 (by default, the timeout is 30 seconds):

accelerate_timeout = 30
accelerate_connect_timeout

New in version 1.4.

This setting controls the timeout for the socket connect call, and should be kept relatively low. The connection to the accelerate_port will be attempted 3 times before Ansible will fall back to ssh or paramiko (depending on your default connection setting) to try and start the accelerate daemon remotely. The default setting is 1.0 seconds:

accelerate_connect_timeout = 1.0

Note, this value can be set to less than one second, however it is probably not a good idea to do so unless you’re on a very fast and reliable LAN. If you’re connecting to systems over the internet, it may be necessary to increase this timeout.

accelerate_daemon_timeout

New in version 1.6.

This setting controls the timeout for the accelerated daemon, as measured in minutes. The default daemon timeout is 30 minutes:

accelerate_daemon_timeout = 30

Note, prior to 1.6, the timeout was hard-coded from the time of the daemon’s launch. For version 1.6+, the timeout is now based on the last activity to the daemon and is configurable via this option.

accelerate_multi_key

New in version 1.6.

If enabled, this setting allows multiple private keys to be uploaded to the daemon. Any clients connecting to the daemon must also enable this option:

accelerate_multi_key = yes

New clients first connect to the target node over SSH to upload the key, which is done via a local socket file, so they must have the same access as the user that launched the daemon originally.

Selinux Specific Settings

These are settings that control SELinux interactions.

special_context_filesystems

New in version 1.9.

This is a list of file systems that require special treatment when dealing with security context. The normal behaviour is for operations to copy the existing context or use the user default; this setting changes that to use a file-system-dependent context. The default list is: nfs,vboxsf,fuse,ramfs

Windows Support
Windows: How Does It Work

As you may have already read, Ansible manages Linux/Unix machines using SSH by default.

Starting in version 1.7, Ansible also contains support for managing Windows machines. This uses native PowerShell remoting, rather than SSH.

Ansible will still be run from a Linux control machine, and uses the “winrm” Python module to talk to remote hosts.

No additional software needs to be installed on the remote machines for Ansible to manage them; it retains the agentless properties that make it popular on Linux/Unix.

Note that it is expected you have a basic understanding of Ansible prior to jumping into this section, so if you haven’t written a Linux playbook first, it might be worthwhile to dig in there first.

Installing on the Control Machine

On a Linux control machine:

pip install https://github.com/diyan/pywinrm/archive/master.zip#egg=pywinrm

If you wish to connect to domain accounts published through Active Directory (as opposed to local accounts created on the remote host):

pip install kerberos

Kerberos is installed and configured by default on OS X and many Linux distributions. If your control machine has not already done this for you, you will need to install and configure it yourself.

Inventory

Ansible’s Windows support relies on a few standard variables to indicate the username, password, and connection type (windows) of the remote hosts. These variables are most easily set up in inventory, and are used instead of the SSH keys or passwords normally fed to Ansible:

[windows]
winserver1.example.com
winserver2.example.com

In group_vars/windows.yml, define the following inventory variables:

# it is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml

ansible_ssh_user: Administrator
ansible_ssh_pass: SecretPasswordGoesHere
ansible_ssh_port: 5986
ansible_connection: winrm

Notice that ansible_ssh_port is not actually used for SSH; the variable name is a holdover from Ansible’s SSH-oriented origins. Again, Windows management will not happen over SSH.

If you have installed the kerberos module and ansible_ssh_user contains @ (e.g. username@realm), Ansible will first attempt Kerberos authentication. This method uses the principal with which you are authenticated to Kerberos on the control machine, not ansible_ssh_user. If that fails, either because you are not signed into Kerberos on the control machine or because the corresponding domain account on the remote host is not available, Ansible will fall back to “plain” username/password authentication.
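For example, to trigger the Kerberos-first behavior, group_vars/windows.yml might look like this (the realm, account name, and password are illustrative):

```yaml
# group_vars/windows.yml
ansible_ssh_user: ansible@EXAMPLE.COM   # '@' in the name triggers Kerberos first
ansible_ssh_pass: FallbackPassword      # used only if Kerberos authentication fails
ansible_ssh_port: 5986
ansible_connection: winrm
```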

When running your playbook, don’t forget to specify --ask-vault-pass to provide the password that unlocks the file.

Test your configuration by trying to contact your Windows nodes. Note this is not an ICMP ping, but a test of the Ansible communication channel that leverages Windows remoting:

ansible windows [-i inventory] -m win_ping --ask-vault-pass

If you haven’t done anything to prep your systems yet, this won’t work. Enabling PowerShell remoting, and if necessary upgrading PowerShell to version 3 or higher, is covered in a later section.

You’ll run this command again later though, to make sure everything is working.

Windows System Prep

In order for Ansible to manage your Windows machines, you will have to enable and configure PowerShell remoting.

To automate setup of WinRM, you can run this PowerShell script on the remote machine.

Admins may wish to modify this setup slightly, for instance to increase the timeframe of the certificate.

Note

On Windows 7 and Server 2008 R2 machines, due to a bug in Windows Management Framework 3.0, it may be necessary to install this hotfix http://support.microsoft.com/kb/2842230 to avoid receiving out of memory and stack overflow exceptions. Newly-installed Server 2008 R2 systems which are not fully up to date with Windows updates are known to have this issue.

Windows 8.1 and Server 2012 R2 are not affected by this issue as they come with Windows Management Framework 4.0.

Getting to PowerShell 3.0 or higher

PowerShell 3.0 or higher is needed for most provided Ansible modules for Windows, and is also required to run the above setup script. Note that PowerShell 3.0 is only supported on Windows 7 SP1, Windows Server 2008 SP1, and later releases of Windows.

From an Ansible checkout, copy the examples/scripts/upgrade_to_ps3.ps1 script onto the remote host, open a PowerShell console as an administrator, and run the script. You will then be running PowerShell 3 and can test connectivity again using the win_ping technique referenced above.

What modules are available

Most of the Ansible modules in core Ansible are written for a combination of Linux/Unix machines and arbitrary web services, though there are various Windows modules as listed in the “windows” subcategory of the Ansible module index.

Browse this index to see what is available.

In many cases, it may not be necessary to even write or use an Ansible module.

In particular, the “script” module can be used to run arbitrary PowerShell scripts, allowing Windows administrators familiar with PowerShell a very native way to do things, as in the following playbook:

- hosts: windows
  tasks:
    - script: foo.ps1 --argument --other-argument

Note there are a few other Ansible modules that don’t start with “win” that also function, including “slurp”, “raw”, and “setup” (which is how fact gathering works).
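As a sketch of one of those, here is how “slurp” might be used to read a remote file; it returns the contents base64-encoded (the path is illustrative):

```yaml
- hosts: windows
  tasks:
    - name: read a remote file (contents come back base64-encoded)
      slurp: src=C:/Windows/win.ini
      register: win_ini

    - name: show the decoded contents
      debug: msg="{{ win_ini.content | b64decode }}"
```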

Developers: Supported modules and how it works

Developing Ansible modules is covered in a later section of the documentation, with a focus on Linux/Unix. What if you want to write Windows modules for Ansible though?

For Windows, ansible modules are implemented in PowerShell. Skim those Linux/Unix module development chapters before proceeding.

Windows modules live in a “windows/” subfolder in the Ansible “library/” subtree. For example, if a module is named “library/windows/win_ping”, there will be embedded documentation in the “win_ping” file, and the actual PowerShell code will live in a “win_ping.ps1” file. Take a look at the sources and this will make more sense.

Modules (ps1 files) should start as follows:

#!powershell
# <license>

# WANT_JSON
# POWERSHELL_COMMON

# code goes here, reading in stdin as JSON and outputting JSON

The above magic is necessary to tell Ansible to mix in some common code and also know how to push modules out. The common code contains some nice wrappers around working with hash data structures and emitting JSON results, and possibly a few more useful things. Regular Ansible has this same concept for reusing Python code - this is just the windows equivalent.

What modules you see in windows/ are just a start. Additional modules may be submitted as pull requests to GitHub.

Reminder: You Must Have a Linux Control Machine

Note running Ansible from a Windows control machine is NOT a goal of the project. Refrain from asking for this feature, as it limits what technologies, features, and code we can use in the main project in the future. A Linux control machine will be required to manage Windows hosts.

Cygwin is not supported, so please do not ask questions about Ansible running from Cygwin.

Windows Facts

Just as with Linux/Unix, facts can be gathered for Windows hosts, which will return things such as the operating system version. To see what variables are available for a Windows host, run the following:

ansible winhost.example.com -m setup

Note that this command invocation is exactly the same as the Linux/Unix equivalent.

Windows Playbook Examples

Look to the list of Windows modules for most of what is possible, though some modules such as “raw”, “script”, “fetch”, and “slurp” also work on Windows.

Here is an example of pushing and running a PowerShell script:

- name: test script module
  hosts: windows
  tasks:
    - name: run test script
      script: files/test_script.ps1

Running individual commands uses the ‘raw’ module, as opposed to the shell or command module as is common on Linux/Unix operating systems:

- name: test raw module
  hosts: windows
  tasks:
    - name: run ipconfig
      raw: ipconfig
      register: ipconfig
    - debug: var=ipconfig

And for a final example, here’s how to use the win_stat module to test for file existence. Note that the data returned by the win_stat module is slightly different than what is provided by the Linux equivalent:

- name: test stat module
  hosts: windows
  tasks:
    - name: test stat module on file
      win_stat: path="C:/Windows/win.ini"
      register: stat_file

    - debug: var=stat_file

    - name: check stat_file result
      assert:
          that:
             - "stat_file.stat.exists"
             - "not stat_file.stat.isdir"
             - "stat_file.stat.size > 0"
             - "stat_file.stat.md5"

Again, recall that the Windows modules are all listed in the Windows category of modules, with the exception that the “raw”, “script”, and “fetch” modules are also available. These modules do not start with a “win” prefix.

Windows Contributions

Windows support in Ansible is still very new, and contributions are quite welcome, whether this is in the form of new modules, tweaks to existing modules, documentation, or something else. Please stop by the ansible-devel mailing list if you would like to get involved and say hi.

See also

Developing Modules
How to write modules
Playbooks
Learning ansible’s configuration management language
List of Windows Modules
Windows specific module list, all implemented in PowerShell
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel

Quickstart Video

We’ve recorded a short video that shows how to get started with Ansible that you may like to use alongside the documentation.

The quickstart video is about 30 minutes long and will show you some of the basics about your first steps with Ansible.

Enjoy, and be sure to visit the rest of the documentation to learn more.

Playbooks

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.

If Ansible modules are the tools in your workshop, playbooks are your design plans.

At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts, interacting with monitoring servers and load balancers along the way.

While there’s a lot of information here, there’s no need to learn everything at once. You can start small and pick up more features over time as you need them.

Playbooks are designed to be human-readable and are developed in a basic text language. There are multiple ways to organize playbooks and the files they include, and we’ll offer up some suggestions on that and making the most out of Ansible.

It is recommended to look at Example Playbooks while reading along with the playbook documentation. These illustrate best practices as well as how to put many of the various concepts together.

Intro to Playbooks
About Playbooks

Playbooks are a completely different way to use Ansible than ad-hoc task execution mode, and are particularly powerful.

Simply put, playbooks are the basis for a really simple configuration management and multi-machine deployment system, unlike any that already exist, and one that is very well suited to deploying complex applications.

Playbooks can declare configurations, but they can also orchestrate steps of any manual ordered process, even as different steps must bounce back and forth between sets of machines in particular orders. They can launch tasks synchronously or asynchronously.

While you might run the main /usr/bin/ansible program for ad-hoc tasks, playbooks are more likely to be kept in source control and used to push out your configuration or assure the configurations of your remote systems are in spec.

There are also some full sets of playbooks illustrating a lot of these techniques in the ansible-examples repository. We’d recommend looking at these in another tab as you go along.

There are also many jumping off points after you learn playbooks, so hop back to the documentation index after you’re done with this section.

Playbook Language Example

Playbooks are expressed in YAML format (see YAML Syntax) and have a minimum of syntax, which intentionally tries to not be a programming language or script, but rather a model of a configuration or a process.

Each playbook is composed of one or more ‘plays’ in a list.

The goal of a play is to map a group of hosts to some well defined roles, represented by things ansible calls tasks. At a basic level, a task is nothing more than a call to an ansible module, which you should have learned about in earlier chapters.

By composing a playbook of multiple ‘plays’, it is possible to orchestrate multi-machine deployments, running certain steps on all machines in the webservers group, then certain steps on the database server group, then more commands back on the webservers group, etc.

“plays” are more or less a sports analogy. You can have quite a lot of plays that affect your systems to do different things. It’s not as if you were just defining one particular state or model, and you can run different plays at different times.

For starters, here’s a playbook that contains just one play:

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

We can also break task items out over multiple lines using the YAML dictionary types to supply module arguments. This can be helpful when working with tasks that have really long parameters, or modules that take many parameters, to keep them well structured. Below is another version of the above example, but using YAML dictionaries to supply the modules with their key=value arguments:

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum:
      pkg: httpd
      state: latest
  - name: write the apache config file
    template:
      src: /srv/httpd.j2
      dest: /etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service:
      name: httpd
      state: started
  handlers:
    - name: restart apache
      service:
        name: httpd
        state: restarted

Playbooks can contain multiple plays. You may have a playbook that targets first the web servers, and then the database servers. For example:

---
- hosts: webservers
  remote_user: root

  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf

- hosts: databases
  remote_user: root

  tasks:
  - name: ensure postgresql is at the latest version
    yum: name=postgresql state=latest
  - name: ensure that postgresql is started
    service: name=postgresql state=started

You can use this method to switch between the host group you’re targeting, the username logging into the remote servers, whether to sudo or not, and so forth. Plays, like tasks, run in the order specified in the playbook: top to bottom.

Below, we’ll break down what the various features of the playbook language are.

Basics
Hosts and Users

For each play in a playbook, you get to choose which machines in your infrastructure to target and what remote user to complete the steps (called tasks) as.

The hosts line is a list of one or more groups or host patterns, separated by colons, as described in the Patterns documentation. The remote_user is just the name of the user account:

---
- hosts: webservers
  remote_user: root

Note

The remote_user parameter was formerly called just user. It was renamed in Ansible 1.4 to make it more distinguishable from the user module (used to create users on remote systems).

Remote users can also be defined per task:

---
- hosts: webservers
  remote_user: root
  tasks:
    - name: test connection
      ping:
      remote_user: yourname

Note

The remote_user parameter for tasks was added in 1.4.

Support for running things as another user is also available (see Ansible Privilege Escalation):

---
- hosts: webservers
  remote_user: yourname
  sudo: yes

You can also apply privilege escalation to a particular task instead of the whole play:

---
- hosts: webservers
  remote_user: yourname
  tasks:
    - service: name=nginx state=started
      become: yes
      become_method: sudo

Note

The become syntax deprecates the old sudo/su specific syntax beginning in 1.9.

You can also log in as yourself, and then become a user different from root:

---
- hosts: webservers
  remote_user: yourname
  become: yes
  become_user: postgres

You can also use other privilege escalation methods, like su:

---
- hosts: webservers
  remote_user: yourname
  become: yes
  become_method: su

If you need to specify a password for sudo, run ansible-playbook with --ask-become-pass or, when using the old sudo syntax, --ask-sudo-pass (-K). If you run a become playbook and the playbook seems to hang, it’s probably stuck at the privilege escalation prompt. Just Control-C to kill it and run it again, adding the appropriate password.

Important

When using become_user to a user other than root, the module arguments are briefly written into a random tempfile in /tmp. These are deleted immediately after the command is executed. This only occurs when changing privileges from a user like ‘bob’ to ‘timmy’, not when going from ‘bob’ to ‘root’, or logging in directly as ‘bob’ or ‘root’. If it concerns you that this data is briefly readable (not writable), avoid transferring unencrypted passwords with become_user set. In other cases, ‘/tmp’ is not used and this does not come into play. Ansible also takes care to not log password parameters.

Tasks list

Each play contains a list of tasks. Tasks are executed in order, one at a time, against all machines matched by the host pattern, before moving on to the next task. It is important to understand that, within a play, all hosts are going to get the same task directives. It is the purpose of a play to map a selection of hosts to tasks.

When running the playbook, which runs top to bottom, hosts with failed tasks are taken out of the rotation for the entire playbook. If things fail, simply correct the playbook file and rerun.

The goal of each task is to execute a module, with very specific arguments. Variables, as mentioned above, can be used in arguments to modules.

Modules are ‘idempotent’, meaning if you run them again, they will make only the changes they must in order to bring the system to the desired state. This makes it very safe to rerun the same playbook multiple times. They won’t change things unless they have to change things.

The command and shell modules will typically rerun the same command again, which is totally ok if the command is something like ‘chmod’ or ‘setsebool’, etc. However, there is a ‘creates’ flag available that can be used to make these modules idempotent as well.
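For example, a command task can be guarded with the ‘creates’ argument so that it is skipped once its marker file exists (the script and path here are illustrative):

```yaml
tasks:
  - name: initialize the application database, but only once
    command: /usr/local/bin/initialize_db.sh creates=/var/lib/myapp/.initialized
```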

Every task should have a name, which is included in the output from running the playbook. This is output for humans, so it is nice to have reasonably good descriptions of each task step. If the name is not provided though, the string fed to ‘action’ will be used for output.

Tasks can be declared using the legacy “action: module options” format, but it is recommended that you use the more conventional “module: options” format. This recommended format is used throughout the documentation, but you may encounter the older format in some playbooks.

Here is what a basic task looks like. As with most modules, the service module takes key=value arguments:

tasks:
  - name: make sure apache is running
    service: name=httpd state=started

The command and shell modules are the only modules that just take a list of arguments and don’t use the key=value form. This makes them work as simply as you would expect:

tasks:
  - name: disable selinux
    command: /sbin/setenforce 0

The command and shell modules care about return codes, so if you have a command whose successful exit code is not zero, you may wish to do this:

tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand || /bin/true

Or this:

tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand
    ignore_errors: True

If the action line is getting too long for comfort you can break it on a space and indent any continuation lines:

tasks:
  - name: Copy ansible inventory file to client
    copy: src=/etc/ansible/hosts dest=/etc/ansible/hosts
            owner=root group=root mode=0644

Variables can be used in action lines. Suppose you defined a variable called ‘vhost’ in the ‘vars’ section; you could then do this:

tasks:
  - name: create a virtual host file for {{ vhost }}
    template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}

Those same variables are usable in templates, which we’ll get to later.

Now in a very basic playbook all the tasks will be listed directly in that play, though it will usually make more sense to break up tasks using the ‘include:’ directive. We’ll show that a bit later.
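As a quick preview, breaking tasks out with ‘include:’ is as simple as the following (the file name is illustrative):

```yaml
tasks:
  - include: tasks/webserver_setup.yml
```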

Action Shorthand

New in version 0.8.

Ansible prefers listing modules like this in 0.8 and later:

template: src=templates/foo.j2 dest=/etc/foo.conf

You will notice in earlier versions, this was only available as:

action: template src=templates/foo.j2 dest=/etc/foo.conf

The old form continues to work in newer versions without any plan of deprecation.

Handlers: Running Operations On Change

As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.

These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.

For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config file, but apache will only be bounced once to avoid unnecessary restarts.

Here’s an example of restarting two services when the contents of a file change, but only if the file changes:

- name: template configuration file
  template: src=template.j2 dest=/etc/foo.conf
  notify:
     - restart memcached
     - restart apache

The things listed in the ‘notify’ section of a task are called handlers.

Handlers are lists of tasks, not really any different from regular tasks, that are referenced by a globally unique name. Handlers are what notifiers notify. If nothing notifies a handler, it will not run. Regardless of how many things notify a handler, it will run only once, after all of the tasks complete in a particular play.

Here’s an example handlers section:

handlers:
    - name: restart memcached
      service: name=memcached state=restarted
    - name: restart apache
      service: name=apache state=restarted

Handlers are best used to restart services and trigger reboots. You probably won’t need them for much else.

Note

  • Notify handlers are always run in the order written.
  • Handler names live in a global namespace.
  • If two handler tasks have the same name, only one will run.

Roles are described later on. It’s worthwhile to point out that handlers are automatically processed between ‘pre_tasks’, ‘roles’, ‘tasks’, and ‘post_tasks’ sections. If you ever want to flush all the handler commands immediately though, in 1.2 and later, you can:

tasks:
   - shell: some tasks go here
   - meta: flush_handlers
   - shell: some other tasks

In the above example any queued up handlers would be processed early when the ‘meta’ statement was reached. This is a bit of a niche case but can come in handy from time to time.

Executing A Playbook

Now that you’ve learned playbook syntax, how do you run a playbook? It’s simple. Let’s run a playbook using a parallelism level of 10:

ansible-playbook playbook.yml -f 10

Ansible-Pull

Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead of pushing configuration out to them, you can.

Ansible-pull is a small script that will checkout a repo of configuration instructions from git, and then run ansible-playbook against that content.

Assuming you load balance your checkout location, ansible-pull scales essentially infinitely.

Run ansible-pull --help for details.

There’s also a clever playbook available to configure ansible-pull via a crontab from push mode.
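
As a sketch (the repository URL and checkout directory below are placeholders for your own setup), a typical invocation checks out a configuration repo and runs the playbook it contains against the local machine:

```shell
# Check out the config repo and run its local.yml against this host.
# -U is the repo URL, -d the local checkout directory.
ansible-pull -U https://example.com/ansible-config.git -d /var/lib/ansible/local local.yml
```

Scheduled from cron, a command like this gives each node a periodic pull-based update.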

Tips and Tricks

Look at the bottom of the playbook execution for a summary of the nodes that were targeted and how they performed. General failures and fatal “unreachable” communication attempts are kept separate in the counts.

If you ever want to see detailed output from successful modules as well as unsuccessful ones, use the --verbose flag. This is available in Ansible 0.5 and later.

Ansible playbook output is vastly upgraded if the cowsay package is installed. Try it!

To see what hosts would be affected by a playbook before you run it, you can do this:

ansible-playbook playbook.yml --list-hosts

See also

YAML Syntax
Learn about YAML syntax
Best Practices
Various tips about managing playbooks in the real world
Ansible V2 Documentation
Hop back to the documentation index for a lot of special topics about playbooks
About Modules
Learn about available modules
Developing Modules
Learn how to extend Ansible by writing your own modules
Patterns
Learn about how to select hosts
Github examples directory
Complete end-to-end playbook examples
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
Playbook Roles and Include Statements
Introduction

While it is possible to write a playbook in one very large file (and you might start out learning playbooks this way), eventually you’ll want to reuse files and start to organize things.

At a basic level, including task files allows you to break up bits of configuration policy into smaller files. Task includes pull in tasks from other files. Since handlers are tasks too, you can also include handler files from the ‘handlers:’ section.

See Playbooks if you need a review of these concepts.

Playbooks can also include plays from other playbook files. When that is done, the plays will be inserted into the playbook to form a longer list of plays.

When you start to think about it, tasks, handlers, variables, and so on begin to form larger concepts. You start to think about modeling what something is, rather than how to make something look like something. It’s no longer “apply this handful of THINGS to these hosts”; instead you say “these hosts are dbservers” or “these hosts are webservers”. In programming, we might call that “encapsulating” how things work. For instance, you can drive a car without knowing how the engine works.

Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions – they allow you to focus more on the big picture and only dive down into the details when needed.

We’ll start with understanding includes so roles make more sense, but our ultimate goal should be understanding roles – roles are great and you should use them every time you write playbooks.

See the ansible-examples repository on GitHub for lots of examples of all of this put together. You may wish to have this open in a separate tab as you dive in.

Task Include Files And Encouraging Reuse

Suppose you want to reuse lists of tasks between plays or playbooks. You can use include files to do this. Use of included task lists is a great way to define a role that a system is going to fulfill. Remember, the goal of a play in a playbook is to map a group of systems into multiple roles. Let’s see what this looks like...

A task include file simply contains a flat list of tasks, like so:

---
# possibly saved as tasks/foo.yml

- name: placeholder foo
  command: /bin/foo

- name: placeholder bar
  command: /bin/bar

Include directives look like this, and can be mixed in with regular tasks in a playbook:

tasks:

  - include: tasks/foo.yml

You can also pass variables into includes. We call this a ‘parameterized include’.

For instance, if deploying multiple wordpress instances, I could contain all of my wordpress tasks in a single wordpress.yml file, and use it like so:

tasks:
  - include: wordpress.yml wp_user=timmy
  - include: wordpress.yml wp_user=alice
  - include: wordpress.yml wp_user=bob

Starting in 1.0, variables can also be passed to include files using an alternative syntax, which also supports structured variables:

tasks:

  - include: wordpress.yml
    vars:
        wp_user: timmy
        ssh_keys:
          - keys/one.txt
          - keys/two.txt

Using either syntax, variables passed in can then be used in the included files. We’ll cover them in Variables. You can reference them like this:

{{ wp_user }}

(In addition to the explicitly passed-in parameters, all variables from the vars section are also available for use here as well.)
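
To make the parameterized include concrete, the hypothetical wordpress.yml from the earlier example might reference the passed-in variable like this (a sketch; the task shown is purely illustrative):

```yaml
---
# hypothetical wordpress.yml referenced by the includes above
- name: create the account for this wordpress instance
  user: name={{ wp_user }} state=present
```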

Playbooks can include other playbooks too, but that’s mentioned in a later section.

Note

As of 1.0, task include statements can be used at arbitrary depth. They were previously limited to a single level, so task includes could not include other files containing task includes.

Includes can also be used in the ‘handlers’ section. For instance, if you want to define how to restart apache, you only have to do that once for all of your playbooks. You might make a handlers.yml that looks like:

---
# this might be in a file like handlers/handlers.yml
- name: restart apache
  service: name=apache state=restarted

And in your main playbook file, just include it like so, at the bottom of a play:

handlers:
  - include: handlers/handlers.yml

You can mix in includes along with your regular non-included tasks and handlers.

Includes can also be used to import one playbook file into another. This allows you to define a top-level playbook that is composed of other playbooks.

For example:

- name: this is a play at the top level of a file
  hosts: all
  remote_user: root

  tasks:

  - name: say hi
    tags: foo
    shell: echo "hi..."

- include: load_balancers.yml
- include: webservers.yml
- include: dbservers.yml

Note that you cannot do variable substitution when including one playbook inside another.

Note

You cannot conditionally set the path to an include file the way you can with ‘vars_files’. If you find yourself needing to do this, consider how you can restructure your playbook to be more class/role oriented. That is to say, you cannot use a ‘fact’ to decide which include file to use. All hosts contained within the play are going to get the same tasks. (‘when‘ provides some ability for hosts to conditionally skip tasks).

Roles

New in version 1.2.

Now that you have learned about tasks and handlers, what is the best way to organize your playbooks? The short answer is to use roles! Roles are ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure. Grouping content by roles also allows easy sharing of roles with other users.

Roles are just automation around ‘include’ directives as described above, and really don’t contain much additional magic beyond some improvements to search path handling for referenced files. However, that can be a big thing!

Example project structure:

site.yml
webservers.yml
fooservers.yml
roles/
   common/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/
   webservers/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/

In a playbook, it would look like this:

---
- hosts: webservers
  roles:
     - common
     - webservers

This designates the following behaviors, for each role ‘x’:

  • If roles/x/tasks/main.yml exists, tasks listed therein will be added to the play
  • If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play
  • If roles/x/vars/main.yml exists, variables listed therein will be added to the play
  • If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles (1.3 and later)
  • Any copy tasks can reference files in roles/x/files/ without having to path them relatively or absolutely
  • Any script tasks can reference scripts in roles/x/files/ without having to path them relatively or absolutely
  • Any template tasks can reference files in roles/x/templates/ without having to path them relatively or absolutely
  • Any include tasks can reference files in roles/x/tasks/ without having to path them relatively or absolutely
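
As a sketch of the search-path behavior above (the role name and filenames here are placeholders), a role’s tasks/main.yml can reference bundled files and templates by bare name:

```yaml
---
# hypothetical roles/common/tasks/main.yml
- name: copy a file shipped in roles/common/files/
  copy: src=ntp.conf dest=/etc/ntp.conf

- name: render a template shipped in roles/common/templates/
  template: src=foo.conf.j2 dest=/etc/foo.conf
```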

In Ansible 1.4 and later you can configure a roles_path to search for roles. Use this to check all of your common roles out to one location, and share them easily between multiple playbook projects. See Configuration file for details about how to set this up in ansible.cfg.
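
A minimal ansible.cfg fragment for this might look like the following (the path is a placeholder):

```ini
# ansible.cfg
[defaults]
roles_path = /opt/ansible/shared_roles
```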

Note

Role dependencies are discussed below.

If any files are not present, they are just ignored. So it’s ok to not have a ‘vars/’ subdirectory for the role, for instance.

Note, you are still allowed to list tasks, vars_files, and handlers “loose” in playbooks without using roles, but roles are a good organizational feature and are highly recommended. If there are loose things in the playbook, the roles are evaluated first.

Also, should you wish to parameterize roles by adding variables, you can do so like this:

---

- hosts: webservers
  roles:
    - common
    - { role: foo_app_instance, dir: '/opt/a',  port: 5000 }
    - { role: foo_app_instance, dir: '/opt/b',  port: 5001 }

While it’s probably not something you should do often, you can also conditionally apply roles like so:

---

- hosts: webservers
  roles:
    - { role: some_role, when: "ansible_os_family == 'RedHat'" }

This works by applying the conditional to every task in the role. Conditionals are covered later on in the documentation.

Finally, you may wish to assign tags to the roles you specify. You can do so inline:

---

- hosts: webservers
  roles:
    - { role: foo, tags: ["bar", "baz"] }

If the play still has a ‘tasks’ section, those tasks are executed after roles are applied.

If you want to define certain tasks to happen before AND after roles are applied, you can do this:

---

- hosts: webservers

  pre_tasks:
    - shell: echo 'hello'

  roles:
    - { role: some_role }

  tasks:
    - shell: echo 'still busy'

  post_tasks:
    - shell: echo 'goodbye'

Note

If using tags with tasks (described later as a means of only running part of a playbook), be sure to also tag your pre_tasks and post_tasks and pass those along as well, especially if the pre and post tasks are used for monitoring outage window control or load balancing.

Role Default Variables

New in version 1.3.

Role default variables allow you to set default variables for included or dependent roles (see below). To create defaults, simply add a defaults/main.yml file in your role directory. These variables will have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables.
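
For instance, a role’s defaults/main.yml is just a file of variables (a sketch; the variable names and values are placeholders):

```yaml
---
# hypothetical roles/myapp/defaults/main.yml
app_port: 8080
app_user: myapp
```

Any inventory variable, play variable, or role parameter with the same name will win over these defaults.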

Role Dependencies

New in version 1.3.

Role dependencies allow you to automatically pull in other roles when using a role. Role dependencies are stored in the meta/main.yml file contained within the role directory. This file should contain a list of roles and parameters to insert before the specified role, such as the following in an example roles/myapp/meta/main.yml:

---
dependencies:
  - { role: common, some_parameter: 3 }
  - { role: apache, port: 80 }
  - { role: postgres, dbname: blarg, other_parameter: 12 }

Role dependencies can also be specified as a full path, just like top level roles:

---
dependencies:
   - { role: '/path/to/common/roles/foo', x: 1 }

Role dependencies can also be installed from source control repos or tar files (via galaxy) using a comma-separated format of path, an optional version (tag, commit, branch, etc.), and an optional friendly role name (an attempt is made to derive a role name from the repo name or archive filename). This works both on the command line and via a requirements.yml passed to ansible-galaxy.

Role dependencies are always executed before the role that includes them, and are recursive. By default, roles can also only be added as a dependency once - if another role also lists it as a dependency it will not be run again. This behavior can be overridden by adding allow_duplicates: yes to the meta/main.yml file. For example, a role named ‘car’ could add a role named ‘wheel’ to its dependencies as follows:

---
dependencies:
- { role: wheel, n: 1 }
- { role: wheel, n: 2 }
- { role: wheel, n: 3 }
- { role: wheel, n: 4 }

And the meta/main.yml for wheel contained the following:

---
allow_duplicates: yes
dependencies:
- { role: tire }
- { role: brake }

The resulting order of execution would be as follows:

tire(n=1)
brake(n=1)
wheel(n=1)
tire(n=2)
brake(n=2)
wheel(n=2)
...
car

Note

Variable inheritance and scope are detailed in the Variables section.

Embedding Modules In Roles

This is an advanced topic that should not be relevant for most users.

If you write a custom module (see Developing Modules) you may wish to distribute it as part of a role. Generally speaking, Ansible as a project is very interested in taking high-quality modules into ansible core for inclusion, so this shouldn’t be the norm, but it’s quite easy to do.

A good example for this is if you worked at a company called AcmeWidgets, and wrote an internal module that helped configure your internal software, and you wanted other people in your organization to easily use this module – but you didn’t want to tell everyone how to configure their Ansible library path.

Alongside the ‘tasks’ and ‘handlers’ structure of a role, add a directory named ‘library’, and include the module directly inside it.

Assuming you had this:

roles/
   my_custom_modules/
       library/
          module1
          module2

The module will be usable in the role itself, as well as any roles that are called after this role, as follows:

- hosts: webservers
  roles:
    - my_custom_modules
    - some_other_role_using_my_custom_modules
    - yet_another_role_using_my_custom_modules

This can also be used, with some limitations, to modify modules in Ansible’s core distribution, such as to use development versions of modules before they are released in production releases. This is not always advisable as API signatures may change in core components, however, and is not always guaranteed to work. It can be a handy way of carrying a patch against a core module, however, should you have good reason for this. Naturally the project prefers that contributions be directed back to github whenever possible via a pull request.

Ansible Galaxy

Ansible Galaxy is a free site for finding, downloading, rating, and reviewing all kinds of community developed Ansible roles and can be a great way to get a jumpstart on your automation projects.

You can sign up with social auth, and the download client ‘ansible-galaxy’ is included in Ansible 1.4.2 and later.

Read the “About” page on the Galaxy site for more information.

See also

Ansible Galaxy
How to share roles on galaxy, role management
YAML Syntax
Learn about YAML syntax
Playbooks
Review the basic Playbook language features
Best Practices
Various tips about managing playbooks in the real world
Variables
All about variables in playbooks
Conditionals
Conditionals in playbooks
Loops
Loops in playbooks
About Modules
Learn about available modules
Developing Modules
Learn how to extend Ansible by writing your own modules
GitHub Ansible examples
Complete playbook files from the GitHub project source
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
Variables

While automation exists to make it easier to make things repeatable, all of your systems are likely not exactly alike.

On some systems you may want to set some behavior or configuration that is slightly different from others.

Also, some of the observed behavior or state of remote systems might need to influence how you configure those systems. (For example, you might need to find out the IP address of a system and use it as a configuration value on another system.)

You might have some templates for configuration files that are mostly the same, but slightly different based on those variables.

Variables in Ansible are how we deal with differences between systems.

To understand variables you’ll also want to dig into Conditionals and Loops. Useful things like the “group_by” module and the “when” conditional can also be used with variables to help manage differences between systems.

It’s highly recommended that you consult the ansible-examples github repository to see a lot of examples of variables put to use.

What Makes A Valid Variable Name

Before we start using variables it’s important to know what counts as a valid variable name.

Variable names should consist of letters, numbers, and underscores. Variables should always start with a letter.

“foo_port” is a great variable. “foo5” is fine too.

“foo-port”, “foo port”, “foo.port” and “12” are not valid variable names.

Easy enough, let’s move on.

Variables Defined in Inventory

We’ve actually already covered a lot about variables in another section, so this shouldn’t be terribly new; consider it a bit of a refresher.

Often you’ll want to set variables based on what groups a machine is in. For instance, maybe machines in Boston want to use ‘boston.ntp.example.com’ as an NTP server.

See the Inventory document for multiple ways on how to define variables in inventory.
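
As a sketch (hostnames and the variable name are placeholders), an INI-style inventory can attach a variable to every host in a group like so:

```ini
[boston]
host1.example.com
host2.example.com

[boston:vars]
ntp_server=boston.ntp.example.com
```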

Variables Defined in a Playbook

In a playbook, it’s possible to define variables directly inline like so:

- hosts: webservers
  vars:
    http_port: 80

This can be nice as it’s right there when you are reading the playbook.
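
Once defined, the variable is usable in any task of that play. A minimal sketch (the echo task exists only to show the substitution):

```yaml
---
- hosts: webservers
  vars:
    http_port: 80
  tasks:
    - name: show the substituted value
      shell: echo "serving on port {{ http_port }}"
```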

Variables defined from included files and roles

It turns out we’ve already talked about variables in another place too.

As described in Playbook Roles and Include Statements, variables can also be included in the playbook via include files, which may or may not be part of an “Ansible Role”. Usage of roles is preferred as it provides a nice organizational system.

Using Variables: About Jinja2

It’s nice enough to know about how to define variables, but how do you use them?

Ansible allows you to reference variables in your playbooks using the Jinja2 templating system. While you can do a lot of complex things in Jinja, only the basics are things you really need to learn at first.

For instance, in a simple template, you can do something like:

My amp goes to {{ max_amp_value }}

And that will provide the most basic form of variable substitution.

This is also valid directly in playbooks, and you’ll occasionally want to do things like:

template: src=foo.cfg.j2 dest={{ remote_install_path }}/foo.cfg

In the above example, we used a variable to help decide where to place a file.

Inside a template you automatically have access to all of the variables that are in scope for a host. Actually it’s more than that – you can also read variables about other hosts. We’ll show how to do that in a bit.

Note

ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can unlock possibilities.

Jinja2 Filters

Note

These are infrequently utilized features. Use them if they fit a use case you have, but this is optional knowledge.

Filters in Jinja2 are a way of transforming template expressions from one kind of data into another. Jinja2 ships with many of these. See builtin filters in the official Jinja2 template documentation.

In addition to those, Ansible supplies many more. See the Jinja2 filters document for a list of available filters and example usage guide.

Hey Wait, A YAML Gotcha

YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax page.

This won’t work:

- hosts: app_servers
  vars:
      app_path: {{ base_path }}/22

Do it like this and you’ll be fine:

- hosts: app_servers
  vars:
       app_path: "{{ base_path }}/22"

Information discovered from systems: Facts

There are other places where variables can come from, but these are a type of variable that are discovered, not set by the user.

Facts are information derived from speaking with your remote systems.

An example of this might be the ip address of the remote host, or what the operating system is.

To see what information is available, try the following:

ansible hostname -m setup

This will return a ginormous amount of variable data, which may look like this, as taken from Ansible 1.4 on an Ubuntu 12.04 system:

"ansible_all_ipv4_addresses": [
    "REDACTED IP ADDRESS"
],
"ansible_all_ipv6_addresses": [
    "REDACTED IPV6 ADDRESS"
],
"ansible_architecture": "x86_64",
"ansible_bios_date": "09/20/2012",
"ansible_bios_version": "6.00",
"ansible_cmdline": {
    "BOOT_IMAGE": "/boot/vmlinuz-3.5.0-23-generic",
    "quiet": true,
    "ro": true,
    "root": "UUID=4195bff4-e157-4e41-8701-e93f0aec9e22",
    "splash": true
},
"ansible_date_time": {
    "date": "2013-10-02",
    "day": "02",
    "epoch": "1380756810",
    "hour": "19",
    "iso8601": "2013-10-02T23:33:30Z",
    "iso8601_micro": "2013-10-02T23:33:30.036070Z",
    "minute": "33",
    "month": "10",
    "second": "30",
    "time": "19:33:30",
    "tz": "EDT",
    "year": "2013"
},
"ansible_default_ipv4": {
    "address": "REDACTED",
    "alias": "eth0",
    "gateway": "REDACTED",
    "interface": "eth0",
    "macaddress": "REDACTED",
    "mtu": 1500,
    "netmask": "255.255.255.0",
    "network": "REDACTED",
    "type": "ether"
},
"ansible_default_ipv6": {},
"ansible_devices": {
    "fd0": {
        "holders": [],
        "host": "",
        "model": null,
        "partitions": {},
        "removable": "1",
        "rotational": "1",
        "scheduler_mode": "deadline",
        "sectors": "0",
        "sectorsize": "512",
        "size": "0.00 Bytes",
        "support_discard": "0",
        "vendor": null
    },
    "sda": {
        "holders": [],
        "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)",
        "model": "VMware Virtual S",
        "partitions": {
            "sda1": {
                "sectors": "39843840",
                "sectorsize": 512,
                "size": "19.00 GB",
                "start": "2048"
            },
            "sda2": {
                "sectors": "2",
                "sectorsize": 512,
                "size": "1.00 KB",
                "start": "39847934"
            },
            "sda5": {
                "sectors": "2093056",
                "sectorsize": 512,
                "size": "1022.00 MB",
                "start": "39847936"
            }
        },
        "removable": "0",
        "rotational": "1",
        "scheduler_mode": "deadline",
        "sectors": "41943040",
        "sectorsize": "512",
        "size": "20.00 GB",
        "support_discard": "0",
        "vendor": "VMware,"
    },
    "sr0": {
        "holders": [],
        "host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)",
        "model": "VMware IDE CDR10",
        "partitions": {},
        "removable": "1",
        "rotational": "1",
        "scheduler_mode": "deadline",
        "sectors": "2097151",
        "sectorsize": "512",
        "size": "1024.00 MB",
        "support_discard": "0",
        "vendor": "NECVMWar"
    }
},
"ansible_distribution": "Ubuntu",
"ansible_distribution_release": "precise",
"ansible_distribution_version": "12.04",
"ansible_domain": "",
"ansible_env": {
    "COLORTERM": "gnome-terminal",
    "DISPLAY": ":0",
    "HOME": "/home/mdehaan",
    "LANG": "C",
    "LESSCLOSE": "/usr/bin/lesspipe %s %s",
    "LESSOPEN": "| /usr/bin/lesspipe %s",
    "LOGNAME": "root",
    "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:",
    "MAIL": "/var/mail/root",
    "OLDPWD": "/root/ansible/docsite",
    "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "PWD": "/root/ansible",
    "SHELL": "/bin/bash",
    "SHLVL": "1",
    "SUDO_COMMAND": "/bin/bash",
    "SUDO_GID": "1000",
    "SUDO_UID": "1000",
    "SUDO_USER": "mdehaan",
    "TERM": "xterm",
    "USER": "root",
    "USERNAME": "root",
    "XAUTHORITY": "/home/mdehaan/.Xauthority",
    "_": "/usr/local/bin/ansible"
},
"ansible_eth0": {
    "active": true,
    "device": "eth0",
    "ipv4": {
        "address": "REDACTED",
        "netmask": "255.255.255.0",
        "network": "REDACTED"
    },
    "ipv6": [
        {
            "address": "REDACTED",
            "prefix": "64",
            "scope": "link"
        }
    ],
    "macaddress": "REDACTED",
    "module": "e1000",
    "mtu": 1500,
    "type": "ether"
},
"ansible_form_factor": "Other",
"ansible_fqdn": "ubuntu2.example.com",
"ansible_hostname": "ubuntu2",
"ansible_interfaces": [
    "lo",
    "eth0"
],
"ansible_kernel": "3.5.0-23-generic",
"ansible_lo": {
    "active": true,
    "device": "lo",
    "ipv4": {
        "address": "127.0.0.1",
        "netmask": "255.0.0.0",
        "network": "127.0.0.0"
    },
    "ipv6": [
        {
            "address": "::1",
            "prefix": "128",
            "scope": "host"
        }
    ],
    "mtu": 16436,
    "type": "loopback"
},
"ansible_lsb": {
    "codename": "precise",
    "description": "Ubuntu 12.04.2 LTS",
    "id": "Ubuntu",
    "major_release": "12",
    "release": "12.04"
},
"ansible_machine": "x86_64",
"ansible_memfree_mb": 74,
"ansible_memtotal_mb": 991,
"ansible_mounts": [
    {
        "device": "/dev/sda1",
        "fstype": "ext4",
        "mount": "/",
        "options": "rw,errors=remount-ro",
        "size_available": 15032406016,
        "size_total": 20079898624
    }
],
"ansible_nodename": "ubuntu2.example.com",
"ansible_os_family": "Debian",
"ansible_pkg_mgr": "apt",
"ansible_processor": [
    "Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz"
],
"ansible_processor_cores": 1,
"ansible_processor_count": 1,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 1,
"ansible_product_name": "VMware Virtual Platform",
"ansible_product_serial": "REDACTED",
"ansible_product_uuid": "REDACTED",
"ansible_product_version": "None",
"ansible_python_version": "2.7.3",
"ansible_selinux": false,
"ansible_ssh_host_key_dsa_public": "REDACTED KEY VALUE",
"ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE",
"ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE",
"ansible_swapfree_mb": 665,
"ansible_swaptotal_mb": 1021,
"ansible_system": "Linux",
"ansible_system_vendor": "VMware, Inc.",
"ansible_user_id": "root",
"ansible_userspace_architecture": "x86_64",
"ansible_userspace_bits": "64",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "VMware"

In the above, the model of the first hard drive may be referenced in a template or playbook as:

{{ ansible_devices.sda.model }}

Similarly, the hostname as the system reports it is:

{{ ansible_nodename }}

and the unqualified hostname shows the string before the first period (.):

{{ ansible_hostname }}

Facts are frequently used in conditionals (see Conditionals) and also in templates.

Facts can also be used to create dynamic groups of hosts that match particular criteria, see the About Modules documentation on ‘group_by’ for details, as well as in generalized conditional statements as discussed in the Conditionals chapter.
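
As a small sketch of a fact driving a conditional (apache2 being the usual Debian-family package name):

```yaml
- name: install apache only on Debian-family hosts
  apt: name=apache2 state=present
  when: ansible_os_family == "Debian"
```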

Turning Off Facts

If you know you don’t need any fact data about your hosts, and know everything about your systems centrally, you can turn off fact gathering. This is mainly useful when scaling Ansible in push mode to very large numbers of systems, or when using Ansible on experimental platforms. In any play, just do this:

- hosts: whatever
  gather_facts: no

Local Facts (Facts.d)

New in version 1.3.

As discussed in the playbooks chapter, Ansible facts are a way of getting data about remote systems for use in playbook variables.

Usually these are discovered automatically by the ‘setup’ module in Ansible. Users can also write custom facts modules, as described in the API guide. However, what if you want to have a simple way to provide system or user provided data for use in Ansible variables, without writing a fact module?

For instance, what if you want users to be able to control some aspect about how their systems are managed? “Facts.d” is one such mechanism.

Note

Perhaps “local facts” is a bit of a misnomer; it means “locally supplied user values” as opposed to “centrally supplied user values”, or what facts usually are – “locally dynamically determined values”.

If a remotely managed system has an “/etc/ansible/facts.d” directory, any files in this directory ending in “.fact” can be JSON, INI, or executable files returning JSON, and these can supply local facts in Ansible.

For instance assume a /etc/ansible/facts.d/preferences.fact:

[general]
asdf=1
bar=2

This will produce a hash variable fact named “general” with ‘asdf’ and ‘bar’ as members. To validate this, run the following:

ansible <hostname> -m setup -a "filter=ansible_local"

And you will see the following fact added:

"ansible_local": {
        "preferences": {
            "general": {
                "asdf" : "1",
                "bar"  : "2"
            }
        }
 }

And this data can be accessed in a template/playbook as:

{{ ansible_local.preferences.general.asdf }}

The local namespace prevents any user supplied fact from overriding system facts or variables defined elsewhere in the playbook.
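The INI-to-fact translation above can be sketched with the standard library’s configparser. This is only an illustration of the resulting data shape, not how the setup module is actually implemented:

```python
import configparser

# The same INI content a /etc/ansible/facts.d/preferences.fact file would hold.
fact_ini = """
[general]
asdf=1
bar=2
"""

parser = configparser.ConfigParser()
parser.read_string(fact_ini)

# The setup module exposes each section as a hash under
# ansible_local.<file name without .fact>.
ansible_local = {"preferences": {s: dict(parser[s]) for s in parser.sections()}}

print(ansible_local["preferences"]["general"]["asdf"])  # "1" (values stay strings)
```

Note that INI-sourced fact values always arrive as strings, as the output above shows.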

If you have a playbook that is copying over a custom fact and then running it, making an explicit call to re-run the setup module can allow that fact to be used during that particular play. Otherwise, it will be available in the next play that gathers fact information. Here is an example of what that might look like:

- hosts: webservers
  tasks:
    - name: create directory for ansible custom facts
      file: state=directory recurse=yes path=/etc/ansible/facts.d
    - name: install custom ipmi fact
      copy: src=ipmi.fact dest=/etc/ansible/facts.d
    - name: re-read facts after adding custom fact
      setup: filter=ansible_local

In this pattern, however, you could also write a fact module instead, and may wish to consider that as an option.

Fact Caching

New in version 1.8.

As shown elsewhere in the docs, it is possible for one server to reference variables about another, like so:

{{ hostvars['asdf.example.com']['ansible_os_family'] }}

With “Fact Caching” disabled, Ansible must have already talked to ‘asdf.example.com’ in the current play, or in another play earlier in the playbook, in order to do this. This is Ansible’s default configuration.

To work around this, Ansible 1.8 adds the ability to save facts between playbook runs, but this feature must be enabled manually. Why might this be useful?

Imagine, for instance, a very large infrastructure with thousands of hosts. Fact caching could be configured to run nightly, but configuration of a small set of servers could run ad-hoc or periodically throughout the day. With fact-caching enabled, it would not be necessary to “hit” all servers to reference variables and information about them.

With fact caching enabled, it is possible for a machine in one group to reference variables about machines in another group, even though they have not been communicated with in the current execution of /usr/bin/ansible-playbook.

To benefit from cached facts, you will want to change the ‘gathering’ setting to ‘smart’ or ‘explicit’ or set ‘gather_facts’ to False in most plays.

Currently, Ansible ships with two persistent cache plugins: redis and jsonfile.

To configure fact caching using redis, enable it in ansible.cfg as follows:

[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
# seconds

To get redis up and running, perform the equivalent OS commands:

yum install redis
service redis start
pip install redis

Note that the Python redis library should be installed from pip; the version packaged in EPEL is too old for use by Ansible.

This feature is currently in a beta-level state, and the Redis plugin does not support port or password configuration; this is expected to change in the near future.

To configure fact caching using jsonfile, enable it in ansible.cfg as follows:

[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /path/to/cachedir
fact_caching_timeout = 86400
# seconds

fact_caching_connection is a local filesystem path to a writable directory (Ansible will attempt to create the directory if it does not exist).

Registered Variables

Another major use of variables is running a command and saving its result into a variable. Results vary from module to module. Using -v when executing playbooks will show possible values for the results.

The value of a task being executed in ansible can be saved in a variable and used later. See some examples of this in the Conditionals chapter.

While it’s mentioned elsewhere in that document too, here’s a quick syntax example:

- hosts: web_servers

  tasks:

     - shell: /usr/bin/foo
       register: foo_result
       ignore_errors: True

     - shell: /usr/bin/bar
       when: foo_result.rc == 5

Registered variables are valid on the host for the remainder of the playbook run, which is the same as the lifetime of “facts” in Ansible. Effectively, registered variables are just like facts.

Accessing Complex Variable Data

We already talked about facts a little higher up in the documentation.

Some provided facts, like networking information, are made available as nested data structures. To access them a simple {{ foo }} is not sufficient, but it is still easy to do. Here’s how we get an IP address:

{{ ansible_eth0["ipv4"]["address"] }}

OR alternatively:

{{ ansible_eth0.ipv4.address }}

Similarly, this is how we access the first element of an array:

{{ foo[0] }}
Magic Variables, and How To Access Information About Other Hosts

Even if you didn’t define them yourself, Ansible provides a few variables for you automatically. The most important of these are ‘hostvars’, ‘group_names’, and ‘groups’. Users should not use these names themselves as they are reserved. ‘environment’ is also reserved.

Hostvars lets you ask about the variables of another host, including facts that have been gathered about that host. If, at this point, you haven’t talked to that host yet in any play in the playbook or set of playbooks, you can get at the variables, but you will not be able to see the facts.

If your database server wants to use the value of a ‘fact’ from another node, or an inventory variable assigned to another node, it’s easy to do so within a template or even an action line:

{{ hostvars['test.example.com']['ansible_distribution'] }}

Additionally, group_names is a list (array) of all the groups the current host is in. This can be used in templates using Jinja2 syntax to make template source files that vary based on the group membership (or role) of the host:

{% if 'webserver' in group_names %}
   # some part of a configuration file that only applies to webservers
{% endif %}

groups is a list of all the groups (and hosts) in the inventory. This can be used to enumerate all hosts within a group. For example:

{% for host in groups['app_servers'] %}
   # something that applies to all app servers.
{% endfor %}

A frequently used idiom is walking a group to find all IP addresses in that group:

{% for host in groups['app_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

An example of this could include pointing a frontend proxy server to all of the app servers, setting up the correct firewall rules between servers, and so on. You need to make sure that the facts of those hosts have been populated beforehand, though, for example by running a play against them if the facts have not been cached recently (fact caching was added in Ansible 1.8).

Additionally, inventory_hostname is the hostname as configured in Ansible’s inventory host file. This can be useful when you don’t want to rely on the discovered hostname ansible_hostname, or for other mysterious reasons. If you have a long FQDN, inventory_hostname_short contains the part up to the first period, without the rest of the domain.

play_hosts is available as a list of hostnames that are in scope for the current play. This may be useful for filling out templates with multiple hostnames or for injecting the list into the rules for a load balancer.

delegate_to is the inventory hostname of the host that the current task has been delegated to using ‘delegate_to’.

Don’t worry about any of this unless you think you need it. You’ll know when you do.

Also available: inventory_dir is the pathname of the directory holding Ansible’s inventory host file, and inventory_file is the path and filename of that inventory file itself.

And finally, role_path will return the current role’s pathname (since 1.8). This will only work inside a role.

Variable File Separation

It’s a great idea to keep your playbooks under source control, but you may wish to make the playbook source public while keeping certain important variables private. Similarly, sometimes you may just want to keep certain information in different files, away from the main playbook.

You can do this by using an external variables file, or files, just like this:

---

- hosts: all
  remote_user: root
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml

  tasks:

  - name: this is just a placeholder
    command: /bin/echo foo

This removes the risk of sharing sensitive data with others when sharing your playbook source with them.

The contents of each variables file is a simple YAML dictionary, like this:

---
# in the above example, this would be vars/external_vars.yml
somevar: somevalue
password: magic

Note

It’s also possible to keep per-host and per-group variables in very similar files; this is covered in Splitting Out Host and Group Specific Data.

Passing Variables On The Command Line

In addition to vars_prompt and vars_files, it is possible to send variables over the Ansible command line. This is particularly useful when writing a generic release playbook where you may want to pass in the version of the application to deploy:

ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"

This is useful, for, among other things, setting the hosts group or the user for the playbook.

Example:

---

- hosts: '{{ hosts }}'
  remote_user: '{{ user }}'

  tasks:
     - ...

ansible-playbook release.yml --extra-vars "hosts=vipers user=starbuck"

As of Ansible 1.2, you can also pass in extra vars as quoted JSON, like so:

--extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}'

The key=value form is obviously simpler, but it’s there if you need it!

As of Ansible 1.3, extra vars can be loaded from a JSON file with the “@” syntax:

--extra-vars "@some_file.json"

Also as of Ansible 1.3, extra vars can be formatted as YAML, either on the command line or in a file as above.

Variable Precedence: Where Should I Put A Variable?

A lot of folks ask how variables override one another. Ultimately it’s Ansible’s philosophy that it’s better to know where to put a variable; then you have to think about it a lot less.

Avoid defining the variable “x” in 47 places and then asking the question “which x gets used?” Why? Because that’s not Ansible’s Zen philosophy of doing things.

There is only one Empire State Building. One Mona Lisa, etc. Figure out where to define a variable, and don’t make it complicated.

However, let’s go ahead and get precedence out of the way! It exists. It’s a real thing, and you might have a use for it.

If multiple variables of the same name are defined in different places, they win in a certain order, which is:

* extra vars (-e in the command line) always win
* then comes connection variables defined in inventory (ansible_ssh_user, etc)
* then comes "most everything else" (command line switches, vars in play, included vars, role vars, etc)
* then comes the rest of the variables defined in inventory
* then comes facts discovered about a system
* then "role defaults", which are the most "defaulty" and lose in priority to everything.

Note

In versions prior to 1.5.4, facts discovered about a system were in the “most everything else” category above.

That seems a little theoretical. Let’s show some examples and where you would choose to put what based on the kind of control you might want over values.

First off, group variables are super powerful.

Site wide defaults should be defined as a ‘group_vars/all’ setting. Group variables are generally placed alongside your inventory file. They can also be returned by a dynamic inventory script (see Dynamic Inventory) or defined in things like Ansible Tower from the UI or API:

---
# file: /etc/ansible/group_vars/all
# this is the site wide default
ntp_server: default-time.example.com

Regional information might be defined in a ‘group_vars/region’ variable. If this group is a child of the ‘all’ group (which it is, because all groups are), it will override the group that is higher up and more general:

---
# file: /etc/ansible/group_vars/boston
ntp_server: boston-time.example.com

If for some crazy reason we wanted to tell just a specific host to use a specific NTP server, it would then override the group variable!:

---
# file: /etc/ansible/host_vars/xyz.boston.example.com
ntp_server: override.example.com

So that covers inventory and what you would normally set there. It’s a great place for things that deal with geography or behavior. Since groups are frequently the entity that maps roles onto hosts, it is sometimes a shortcut to set variables on the group instead of defining them on a role. You could go either way.

Remember: Child groups override parent groups, and hosts always override their groups.

Next up: learning about role variable precedence.

We’ll pretty much assume you are using roles at this point. You should be using roles for sure. Roles are great. You are using roles aren’t you? Hint hint.

Ok, so if you are writing a redistributable role with reasonable defaults, put those in the ‘roles/x/defaults/main.yml’ file. This means the role will bring along a default value but ANYTHING in Ansible will override it. It’s just a default. That’s why it says “defaults” :) See Playbook Roles and Include Statements for more info about this:

---
# file: roles/x/defaults/main.yml
# if not overridden in inventory or as a parameter, this is the value that will be used
http_port: 80

If you are writing a role and want to ensure the value in the role is absolutely used in that role and is not going to be overridden by inventory, put it in roles/x/vars/main.yml like so. Inventory values cannot override it; -e, however, still will:

---
# file: roles/x/vars/main.yml
# this will absolutely be used in this role
http_port: 80

So the above is a great way to plug in constants about the role that are always true. If you are not sharing your role with others, app-specific behaviors like ports are fine to put in here. But if you are sharing roles with others, putting variables in here might be bad: nobody will be able to override them with inventory, though they still can by passing a parameter to the role.

Parameterized roles are useful.

If you are using a role and want to override a default, pass it as a parameter to the role like so:

roles:
   - { role: apache, http_port: 8080 }

This makes it clear to the playbook reader that you’ve made a conscious choice to override some default in the role, or pass in some configuration that the role can’t assume by itself. It also allows you to pass something site-specific that isn’t really part of the role you are sharing with others.

This can often be used for things that might apply to some hosts multiple times, like so:

roles:
   - { role: app_user, name: Ian    }
   - { role: app_user, name: Terry  }
   - { role: app_user, name: Graham }
   - { role: app_user, name: John   }

That’s a bit arbitrary, but you can see how the same role was invoked multiple times. In that example it’s quite likely there was no default for ‘name’ supplied at all. Ansible can yell at you when variables aren’t defined – it’s the default behavior in fact.

So that’s a bit about roles.

There are a few bonus things that go on with roles.

Generally speaking, variables set in one role are available to others. This means if you have a “roles/common/vars/main.yml” you can set variables in there and make use of them in other roles and elsewhere in your playbook:

roles:
   - { role: common_settings }
   - { role: something, foo: 12 }
   - { role: something_else }

Note

There are some protections in place to avoid the need to namespace variables. In the above, variables defined in common_settings are most definitely available to the ‘something’ and ‘something_else’ tasks, but because ‘something’ was explicitly passed foo: 12, it is guaranteed to have foo set to 12 in that role, even if somewhere deep in common_settings foo were set to 20.

So, that’s precedence, explained in a more direct way. Don’t worry about precedence, just think about if your role is defining a variable that is a default, or a “live” variable you definitely want to use. Inventory lies in precedence right in the middle, and if you want to forcibly override something, use -e.

If you found that a little hard to understand, take a look at the ansible-examples repo on our github for a bit more about how all of these things can work together.

See also

Playbooks
An introduction to playbooks
Conditionals
Conditional statements in playbooks
Jinja2 filters
Jinja2 filters and their uses
Loops
Looping in playbooks
Playbook Roles and Include Statements
Playbook organization by roles
Best Practices
Best practices in playbooks
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Jinja2 filters

Filters in Jinja2 are a way of transforming template expressions from one kind of data into another. Jinja2 ships with many of these. See builtin filters in the official Jinja2 template documentation.

In addition to those, Ansible supplies many more.

Filters For Formatting Data

The following filters will take a data structure in a template and render it in a slightly different format. These are occasionally useful for debugging:

{{ some_variable | to_json }}
{{ some_variable | to_yaml }}

For human readable output, you can use:

{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}

Alternatively, you may be reading in some already formatted data:

{{ some_variable | from_json }}
{{ some_variable | from_yaml }}

for example:

tasks:
  - shell: cat /some/path/to/file.json
    register: result

  - set_fact: myvar="{{ result.stdout | from_json }}"
Filters Often Used With Conditionals

The following tasks are illustrative of how filters can be used with conditionals:

tasks:

  - shell: /usr/bin/foo
    register: result
    ignore_errors: True

  - debug: msg="it failed"
    when: result|failed

  # in most cases you'll want a handler, but if you want to do something right now, this is nice
  - debug: msg="it changed"
    when: result|changed

  - debug: msg="it succeeded"
    when: result|success

  - debug: msg="it was skipped"
    when: result|skipped
Forcing Variables To Be Defined

The default behavior of Ansible and ansible.cfg is to fail if variables are undefined, but you can turn this off.

With that setting turned off, this filter allows an explicit per-variable check:

{{ variable | mandatory }}

The variable value will be used as is, but the template evaluation will raise an error if it is undefined.

Defaulting Undefined Variables

Jinja2 provides a useful ‘default’ filter, that is often a better approach to failing if a variable is not defined:

{{ some_variable | default(5) }}

In the above example, if the variable ‘some_variable’ is not defined, the value used will be 5, rather than an error being raised.

Omitting Undefined Variables and Parameters

As of Ansible 1.8, it is possible to use the default filter to omit variables and module parameters using the special omit variable:

- name: touch files with an optional mode
  file: dest={{item.path}} state=touch mode={{item.mode|default(omit)}}
  with_items:
    - path: /tmp/foo
    - path: /tmp/bar
    - path: /tmp/baz
      mode: "0444"

For the first two files in the list, the default mode will be determined by the umask of the system, as the mode= parameter will not be sent to the file module; the final file will receive mode=0444.

Note

If you are “chaining” additional filters after the default(omit) filter, you should instead do something like this: “{{ foo | default(None) | some_filter or omit }}”. In this example, the default None (python null) value will cause the later filters to fail, which will trigger the or omit portion of the logic. Using omit in this manner is very specific to the later filters you’re chaining though, so be prepared for some trial and error if you do this.

List Filters

These filters all operate on list variables.

New in version 1.8.

To get the minimum value from list of numbers:

{{ list1 | min }}

To get the maximum value from a list of numbers:

{{ [3, 4, 2] | max }}
Set Theory Filters

All these functions return a unique set from sets or lists.

New in version 1.4.

To get a unique set from a list:

{{ list1 | unique }}

To get a union of two lists:

{{ list1 | union(list2) }}

To get the intersection of 2 lists (unique list of all items in both):

{{ list1 | intersect(list2) }}

To get the difference of 2 lists (items in 1 that don’t exist in 2):

{{ list1 | difference(list2) }}

To get the symmetric difference of 2 lists (items exclusive to each list):

{{ list1 | symmetric_difference(list2) }}
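The built-in Python set type is a reasonable mental model for these filters, and the operations can be checked directly. Note that the Ansible filters return lists with a stable ordering, while raw sets are unordered (sorted here for determinism):

```python
# Example lists; the duplicate 2 in list1 shows what unique removes.
list1 = [1, 2, 2, 3, 4]
list2 = [3, 4, 5]

unique     = sorted(set(list1))               # unique
union      = sorted(set(list1) | set(list2))  # union(list2)
intersect  = sorted(set(list1) & set(list2))  # intersect(list2)
difference = sorted(set(list1) - set(list2))  # difference(list2)
symmetric  = sorted(set(list1) ^ set(list2))  # symmetric_difference(list2)

print(unique, union, intersect, difference, symmetric)
```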
Version Comparison Filters

New in version 1.6.

To compare version numbers, such as checking whether ansible_distribution_version is greater than or equal to ‘12.04’, you can use the version_compare filter:

{{ ansible_distribution_version | version_compare('12.04', '>=') }}

If ansible_distribution_version is greater than or equal to ‘12.04’, this filter returns True; otherwise it returns False.

The version_compare filter accepts the following operators:

<, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne

This filter also accepts a 3rd parameter, strict, which defines whether strict version parsing should be used. The default is False; setting it to True enables stricter version parsing:

{{ sample_version_var | version_compare('1.0', operator='lt', strict=True) }}
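The comparison semantics can be sketched as a tuple comparison on dotted-numeric versions. This is a simplification: Ansible’s filter delegates to Python’s version-parsing classes, which handle more formats than this naive parse:

```python
import operator

def version_tuple(v):
    # Naive dotted-numeric parse: '14.04' -> (14, 4).
    return tuple(int(part) for part in v.split("."))

# The operator aliases accepted by the filter map onto operator functions.
OPS = {"<": operator.lt, "lt": operator.lt, "<=": operator.le, "le": operator.le,
       ">": operator.gt, "gt": operator.gt, ">=": operator.ge, "ge": operator.ge,
       "==": operator.eq, "eq": operator.eq, "!=": operator.ne, "ne": operator.ne}

def version_compare(left, right, op=">="):
    return OPS[op](version_tuple(left), version_tuple(right))

print(version_compare("14.04", "12.04", ">="))  # True
print(version_compare("9.10", "12.04", ">="))   # False: (9, 10) < (12, 4)
```

Tuple comparison is what makes ‘9.10’ correctly compare as less than ‘12.04’, where a plain string comparison would get it wrong.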
Random Number Filter

New in version 1.6.

This filter can be used similar to the default jinja2 random filter (returning a random item from a sequence of items), but can also generate a random number based on a range.

To get a random item from a list:

{{ ['a','b','c']|random }} => 'c'

To get a random number from 0 to a supplied end, for example to randomize the minute in a cron entry:

{{ 59 | random }} * * * * root /script/from/cron

Get a random number from 0 to 100 but in steps of 10:

{{ 100 |random(step=10) }}  => 70

Get a random number from 1 to 100 but in steps of 10:

{{ 100 |random(1, 10) }}    => 31
{{ 100 |random(start=1, step=10) }}    => 51
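The behavior can be approximated with Python’s random module; the exact boundary handling in Ansible’s filter may differ slightly, so treat the range arguments below as an approximation:

```python
import random

random.seed(0)  # seeded only to make the example repeatable

item    = random.choice(['a', 'b', 'c'])  # like ['a','b','c'] | random
minute  = random.randrange(0, 60)         # like 59 | random, for a cron minute
stepped = random.randrange(0, 101, 10)    # like 100 | random(step=10)
offset  = random.randrange(1, 101, 10)    # like 100 | random(1, 10)

print(item, minute, stepped, offset)
```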
Shuffle Filter

New in version 1.8.

This filter will randomize an existing list, giving a different order every invocation.

To get a random list from an existing list:

{{ ['a','b','c']|shuffle }} => ['c','a','b']
{{ ['a','b','c']|shuffle }} => ['b','c','a']

Note that when used with a non-‘listable’ item it is a no-op; otherwise it always returns a list.

Math

New in version 1.9.

To see if something is actually a number:

{{ myvar | isnan }}

Get the logarithm (default is e):

{{ myvar | log }}

Get the base 10 logarithm:

{{ myvar | log(10) }}

Give me the power of 2! (or 5):

{{ myvar | pow(2) }}
{{ myvar | pow(5) }}

Square root, or the 5th:

{{ myvar | root }}
{{ myvar | root(5) }}

Note that jinja2 already provides some like abs() and round().

IP address filter

New in version 1.9.

To test if a string is a valid IP address:

{{ myvar | ipaddr }}

You can also require a specific IP protocol version:

{{ myvar | ipv4 }}
{{ myvar | ipv6 }}

IP address filter can also be used to extract specific information from an IP address. For example, to get the IP address itself from a CIDR, you can use:

{{ '192.0.2.1/24' | ipaddr('address') }}

More information about ipaddr filter and complete usage guide can be found in Jinja2 ‘ipaddr()’ filter.

Hashing filters

New in version 1.9.

To get the sha1 hash of a string:

{{ 'test1'|hash('sha1') }}

To get the md5 hash of a string:

{{ 'test1'|hash('md5') }}

Get a string checksum:

{{ 'test2'|checksum }}

Other hashes (platform dependent):

{{ 'test2'|hash('blowfish') }}

To get a sha512 password hash (random salt):

{{ 'passwordsaresecret'|password_hash('sha512') }}

To get a sha256 password hash with a specific salt:

{{ 'secretpassword'|password_hash('sha256', 'mysecretsalt') }}

Which hash types are available depends on the master system running Ansible: ‘hash’ depends on hashlib, and password_hash depends on crypt.
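Because ‘hash’ maps onto hashlib on the control machine, the digests above can be reproduced in plain Python (checksum is shown here as a sha1 hex digest, matching Ansible’s implementation):

```python
import hashlib

# hash('sha1') and hash('md5') map onto hashlib digests.
sha1_digest = hashlib.sha1(b'test1').hexdigest()
md5_digest  = hashlib.md5(b'test1').hexdigest()

# checksum produces Ansible's fixed content checksum (a sha1 hex digest).
checksum = hashlib.sha1(b'test2').hexdigest()

print(sha1_digest)  # 40 hex characters
print(md5_digest)   # 32 hex characters
```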

Other Useful Filters

To add quotes for shell usage:

- shell: echo {{ string_value | quote }}

To use one value on true and another on false (new in version 1.9):

{{ (name == "John") | ternary('Mr','Ms') }}

To concatenate a list into a string:

{{ list | join(" ") }}

To get the last name of a file path, like ‘foo.txt’ out of ‘/etc/asdf/foo.txt’:

{{ path | basename }}

To get the directory from a path:

{{ path | dirname }}

To expand a path containing a tilde (~) character (new in version 1.5):

{{ path | expanduser }}

To get the real path of a link (new in version 1.8):

{{ path | realpath }}

To get the relative path of a link, from a start point (new in version 1.7):

{{ path | relpath('/etc') }}

To get the root and extension of a path or filename (new in version 2.0):

# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
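Most of these path filters are thin wrappers over Python’s os.path functions, so their behavior can be checked directly:

```python
import os.path

# basename, dirname, relpath and splitext correspond to os.path calls.
path = '/etc/asdf/foo.txt'

print(os.path.basename(path))          # foo.txt
print(os.path.dirname(path))           # /etc/asdf
print(os.path.relpath(path, '/etc'))   # asdf/foo.txt
print(os.path.splitext('nginx.conf'))  # ('nginx', '.conf')
```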

To work with Base64 encoded strings:

{{ encoded | b64decode }}
{{ decoded | b64encode }}

To create a UUID from a string (new in version 1.9):

{{ hostname | to_uuid }}

To cast values as certain types, such as when you input a string as “True” from a vars_prompt and the system doesn’t know it is a boolean value:

- debug: msg=test
  when: some_string_value | bool

To match strings against a regex, use the “match” or “search” filter:

vars:
  url: "http://example.com/users/foo/resources/bar"

tasks:
    - debug: "msg='matched pattern 1'"
      when: url | match("http://example.com/users/.*/resources/.*")

    - debug: "msg='matched pattern 2'"
      when: url | search("/users/.*/resources/.*")

‘match’ requires the pattern to match from the beginning of the string, while ‘search’ looks for a match anywhere inside the string.
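In Python terms, ‘match’ behaves like re.match (anchored at the start of the string) and ‘search’ like re.search (matching anywhere). A sketch with the same example URL:

```python
import re

url = "http://example.com/users/foo/resources/bar"

# 'match' is anchored at the start of the string (like re.match);
# 'search' matches anywhere in the string (like re.search).
matched       = bool(re.match(r"http://example.com/users/.*/resources/.*", url))
searched      = bool(re.search(r"/users/.*/resources/.*", url))
anchored_fail = bool(re.match(r"/users/.*/resources/.*", url))  # url starts with 'http'

print(matched, searched, anchored_fail)  # True True False
```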

New in version 1.6.

To replace text in a string with regex, use the “regex_replace” filter:

# convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}

# convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
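regex_replace corresponds to re.sub-style substitution, so the two conversions above can be checked directly in Python:

```python
import re

# "convert 'ansible' to 'able'": .* greedily consumes 'ns', \1 captures 'ble'
print(re.sub(r'^a.*i(.*)$', r'a\1', 'ansible'))  # able

# "convert 'foobar' to 'bar'": \1 captures 'bar', and the prefix is dropped
print(re.sub(r'^f.*o(.*)$', r'\1', 'foobar'))    # bar
```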

Note

If “regex_replace” filter is used with variables inside YAML arguments (as opposed to simpler ‘key=value’ arguments), then you need to escape backreferences (e.g. \\1) with 4 backslashes (\\\\) instead of 2 (\\).

To escape special characters within a regex, use the “regex_escape” filter:

# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}

A few useful filters are typically added with each new Ansible release. The development documentation shows how to extend Ansible filters by writing your own as plugins, though in general, we encourage new ones to be added to core so everyone can make use of them.

See also

Playbooks
An introduction to playbooks
Conditionals
Conditional statements in playbooks
Variables
All about variables
Loops
Looping in playbooks
Playbook Roles and Include Statements
Playbook organization by roles
Best Practices
Best practices in playbooks
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Jinja2 ‘ipaddr()’ filter

New in version 1.9.

ipaddr() is a Jinja2 filter designed to provide an interface to the netaddr Python package from within Ansible. It can operate on strings or lists of items, test various data to check whether they are valid IP addresses, and manipulate the input data to extract requested information. ipaddr() works with both IPv4 and IPv6 addresses in various forms; there are also additional functions available to manipulate IP subnets and MAC addresses.

To use this filter in Ansible, you need to install the netaddr Python library on the machine from which you run Ansible (it is not required on remote hosts). It can usually be installed either via your system package manager or using pip:

pip install netaddr
Basic tests

ipaddr() is designed to return the input value if a query is True, and False if the query is False. This way it can easily be used in chained filters. To use the filter, pass a string to it:

{{ '192.0.2.0' | ipaddr }}

You can also pass the values as variables:

{{ myvar | ipaddr }}

Here are some example tests of various input strings:

# These values are valid IP addresses or network ranges
'192.168.0.1'       -> 192.168.0.1
'192.168.32.0/24'   -> 192.168.32.0/24
'fe80::100/10'      -> fe80::100/10
45443646733         -> ::a:94a7:50d
'523454/24'         -> 0.7.252.190/24

# Values that are not valid IP addresses or network ranges:
'localhost'         -> False
True                -> False
'space bar'         -> False
False               -> False
''                  -> False
':'                 -> False
'fe80:/10'          -> False

Sometimes you need either IPv4 or IPv6 addresses. To filter for only one particular type, the ipaddr() filter has two “aliases”, ipv4() and ipv6().

Example use of an IPv4 filter:

{{ myvar | ipv4 }}

And a similar example of an IPv6 filter:

{{ myvar | ipv6 }}

Here’s an example test to look for IPv4 addresses:

'192.168.0.1'       -> 192.168.0.1
'192.168.32.0/24'   -> 192.168.32.0/24
'fe80::100/10'      -> False
45443646733         -> False
'523454/24'         -> 0.7.252.190/24

And the same data filtered for IPv6 addresses:

'192.168.0.1'       -> False
'192.168.32.0/24'   -> False
'fe80::100/10'      -> fe80::100/10
45443646733         -> ::a:94a7:50d
'523454/24'         -> False
Filtering lists

You can filter entire lists - ipaddr() will return a list with values valid for a particular query:

# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']

# {{ test_list | ipaddr }}
['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10', '2001:db8:32c:faad::/64']

# {{ test_list | ipv4 }}
['192.24.2.1', '192.168.32.0/24']

# {{ test_list | ipv6 }}
['::1', 'fe80::100/10', '2001:db8:32c:faad::/64']
Wrapping IPv6 addresses in [ ] brackets

Some configuration files require IPv6 addresses to be “wrapped” in square brackets ([ ]). To accomplish that, you can use ipwrap() filter. It will wrap all IPv6 addresses and leave any other strings intact:

# {{ test_list | ipwrap }}
['192.24.2.1', 'host.fqdn', '[::1]', '192.168.32.0/24', '[fe80::100]/10', True, '', '[2001:db8:32c:faad::]/64']

As you can see, ipwrap() did not filter out non-IP address values, which is usually what you want when for example you are mixing IP addresses with hostnames. If you still want to filter out all non-IP address values, you can chain both filters together:

# {{ test_list | ipaddr | ipwrap }}
['192.24.2.1', '[::1]', '192.168.32.0/24', '[fe80::100]/10', '[2001:db8:32c:faad::]/64']
Basic queries

You can provide a single argument to each ipaddr() filter. The filter will then treat it as a query and return values modified by that query. Lists will contain only the values you are querying for.

Types of queries include:

  • query by name: ipaddr('address'), ipv4('network');
  • query by CIDR range: ipaddr('192.168.0.0/24'), ipv6('2001:db8::/32');
  • query by index number: ipaddr('1'), ipaddr('-1');

If a query type is not recognized, Ansible will raise an error.

Getting information about hosts and networks

Here’s our test list again:

# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']

Let’s take the above list and get only those elements that are host IP addresses, not network ranges:

# {{ test_list | ipaddr('address') }}
['192.24.2.1', '::1', 'fe80::100']

As you can see, even though some values had a host address with a CIDR prefix, the prefix was dropped by the filter. If you want host IP addresses with their correct CIDR prefixes (as is common with IPv6 addressing), you can use the ipaddr('host') filter:

# {{ test_list | ipaddr('host') }}
['192.24.2.1/32', '::1/128', 'fe80::100/10']

Filtering by IP address types also works:

# {{ test_list | ipv4('address') }}
['192.24.2.1']

# {{ test_list | ipv6('address') }}
['::1', 'fe80::100']

You can check whether IP addresses or network ranges are accessible on the public Internet, or whether they are in private networks:

# {{ test_list | ipaddr('public') }}
['192.24.2.1', '2001:db8:32c:faad::/64']

# {{ test_list | ipaddr('private') }}
['192.168.32.0/24', 'fe80::100/10']
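A similar public/private classification is available in the stdlib ipaddress module via the is_global and is_private properties; this is a sketch of the concept, not the filter itself:

```python
import ipaddress

results = {}
for value in ["192.24.2.1", "192.168.32.1", "fe80::100"]:
    ip = ipaddress.ip_address(value)
    # RFC 1918 ranges and link-local addresses are not globally routable
    results[value] = "public" if ip.is_global else "private"

print(results)
```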

You can check which values are specifically network ranges:

# {{ test_list | ipaddr('net') }}
['192.168.32.0/24', '2001:db8:32c:faad::/64']

You can also check how many IP addresses a certain range can hold:

# {{ test_list | ipaddr('net') | ipaddr('size') }}
[256, 18446744073709551616L]
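The 'size' query corresponds to counting the addresses in a network; in the stdlib ipaddress module the equivalent is num_addresses (shown here as a sketch; Python 3 prints big integers without the trailing L seen above):

```python
import ipaddress

# total number of addresses in each network, including the
# network and broadcast addresses
sizes = [ipaddress.ip_network(n).num_addresses
         for n in ["192.168.32.0/24", "2001:db8:32c:faad::/64"]]
print(sizes)  # [256, 18446744073709551616]
```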

By specifying a network range as a query, you can check if a given value is in that range:

# {{ test_list | ipaddr('192.0.0.0/8') }}
['192.24.2.1', '192.168.32.0/24']
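Membership testing like this can be approximated in stdlib Python with subnet_of(); the helper below is a hypothetical sketch of the query, with non-IP values filtered out the same way the real filter drops them:

```python
import ipaddress

def in_range(value, cidr):
    """Return True if value (an address or a subnet) lies inside cidr."""
    net = ipaddress.ip_network(cidr)
    try:
        iface = ipaddress.ip_interface(str(value))
    except ValueError:
        return False  # not an IP-like value at all
    return iface.version == net.version and iface.network.subnet_of(net)

test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10']
print([v for v in test_list if in_range(v, '192.0.0.0/8')])
```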

If you specify a positive or negative integer as a query, ipaddr() will treat it as an index and will return the specific IP address from a network range, in ‘host/prefix’ format:

# First IP address (network address)
# {{ test_list | ipaddr('net') | ipaddr('0') }}
['192.168.32.0/24', '2001:db8:32c:faad::/64']

# Second IP address (usually gateway host)
# {{ test_list | ipaddr('net') | ipaddr('1') }}
['192.168.32.1/24', '2001:db8:32c:faad::1/64']

# Last IP address (broadcast in IPv4 networks)
# {{ test_list | ipaddr('net') | ipaddr('-1') }}
['192.168.32.255/24', '2001:db8:32c:faad:ffff:ffff:ffff:ffff/64']
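Indexing into a range works the same way with a stdlib ipaddress network object, which supports positive and negative indices directly (a sketch of the idea, shown for the IPv4 network only):

```python
import ipaddress

net = ipaddress.ip_network("192.168.32.0/24")

# index 0 is the network address, 1 the first host (often the
# gateway), and -1 the last address (the IPv4 broadcast address)
picks = [f"{net[i]}/{net.prefixlen}" for i in (0, 1, -1)]
print(picks)  # ['192.168.32.0/24', '192.168.32.1/24', '192.168.32.255/24']
```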

You can also select IP addresses from a range by their index, from the start or end of the range:

# {{ test_list | ipaddr('net') | ipaddr('200') }}
['192.168.32.200/24', '2001:db8:32c:faad::c8/64']

# {{ test_list | ipaddr('net') | ipaddr('-200') }}
['192.168.32.56/24', '2001:db8:32c:faad:ffff:ffff:ffff:ff38/64']

# {{ test_list | ipaddr('net') | ipaddr('400') }}
['2001:db8:32c:faad::190/64']
Getting information from host/prefix values

Very frequently you use a combination of IP address and subnet prefix (“CIDR”); this is even more common with IPv6. The ipaddr() filter can extract useful data from these prefixes.

Here’s an example set of two host prefixes (with some “control” values):

host_prefix = ['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24', '127.0.0.1', '192.168.0.0/16']

First, let’s make sure that we only work with correct host/prefix values, not just subnets or single IP addresses:

# {{ host_prefix | ipaddr('host/prefix') }}
['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24']

In Debian-based systems, the network configuration stored in the /etc/network/interfaces file uses a combination of IP address, network address, netmask, and broadcast address to configure an IPv4 network interface. We can get these values from a single ‘host/prefix’ combination:

# Jinja2 template
{% set ipv4_host = host_prefix | unique | ipv4('host/prefix') | first %}
iface eth0 inet static
    address   {{ ipv4_host | ipaddr('address') }}
    network   {{ ipv4_host | ipaddr('network') }}
    netmask   {{ ipv4_host | ipaddr('netmask') }}
    broadcast {{ ipv4_host | ipaddr('broadcast') }}

# Generated configuration file
iface eth0 inet static
    address   192.0.2.48
    network   192.0.2.0
    netmask   255.255.255.0
    broadcast 192.0.2.255
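All four of the values in the generated file above can be derived from the one 'host/prefix' string; the stdlib ipaddress module exposes the same information (a sketch of the derivation, not the template machinery):

```python
import ipaddress

iface = ipaddress.ip_interface("192.0.2.48/24")
config = {
    "address":   str(iface.ip),                         # 192.0.2.48
    "network":   str(iface.network.network_address),    # 192.0.2.0
    "netmask":   str(iface.netmask),                    # 255.255.255.0
    "broadcast": str(iface.network.broadcast_address),  # 192.0.2.255
}
print(config)
```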

In the above example, we needed to handle the fact that values were stored in a list, which is unusual in IPv4 networks, where only a single IP address can be set on an interface. However, IPv6 networks can have multiple IP addresses set on an interface:

# Jinja2 template
iface eth0 inet6 static
  {% set ipv6_list = host_prefix | unique | ipv6('host/prefix') %}
  address {{ ipv6_list[0] }}
  {% if ipv6_list | length > 1 %}
  {% for subnet in ipv6_list[1:] %}
  up   /sbin/ip address add {{ subnet }} dev eth0
  down /sbin/ip address del {{ subnet }} dev eth0
  {% endfor %}
  {% endif %}

# Generated configuration file
iface eth0 inet6 static
  address 2001:db8:deaf:be11::ef3/64

If needed, you can extract subnet and prefix information from a ‘host/prefix’ value:

# {{ host_prefix | ipaddr('host/prefix') | ipaddr('subnet') }}
['2001:db8:deaf:be11::/64', '192.0.2.0/24']

# {{ host_prefix | ipaddr('host/prefix') | ipaddr('prefix') }}
[64, 24]
IP address conversion

Here’s our test list again:

# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']

You can convert IPv4 addresses into IPv6 addresses:

# {{ test_list | ipv4('ipv6') }}
['::ffff:192.24.2.1/128', '::ffff:192.168.32.0/120']

Converting from IPv6 to IPv4 works very rarely:

# {{ test_list | ipv6('ipv4') }}
['0.0.0.1/32']

But we can do a double conversion if needed:

# {{ test_list | ipaddr('ipv6') | ipaddr('ipv4') }}
['192.24.2.1/32', '0.0.0.1/32', '192.168.32.0/24']

You can convert IP addresses to integers, the same way that you can convert integers into IP addresses:

# {{ test_list | ipaddr('int') }}
[3222798849, 1, '3232243712/24', '338288524927261089654018896841347694848/10', '42540766412265424405338506004571095040/64']

You can convert IP addresses to PTR records:

# {% for address in test_list | ipaddr %}
# {{ address | ipaddr('revdns') }}
# {% endfor %}
1.2.24.192.in-addr.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.
0.32.168.192.in-addr.arpa.
0.0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa.
0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.d.a.a.f.c.2.3.0.8.b.d.0.1.0.0.2.ip6.arpa.
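The 'revdns' conversion maps an address to its PTR record name; stdlib Python exposes the same mapping through the reverse_pointer property (shown here as a sketch; reverse_pointer omits the trailing dot, so it is appended manually):

```python
import ipaddress

# build the reverse-DNS (PTR) names for one IPv4 and one IPv6 address
ptrs = [ipaddress.ip_address(a).reverse_pointer + "."
        for a in ["192.24.2.1", "2001:db8:32c:faad::"]]
for ptr in ptrs:
    print(ptr)
```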
Converting IPv4 address to 6to4 address

A 6to4 tunnel is a way to access the IPv6 Internet from an IPv4-only network. If you have a public IPv4 address, you can automatically configure its IPv6 equivalent in the 2002::/16 network range - after conversion you will gain access to a 2002:xxxx:xxxx::/48 subnet, which could be split into 65536 /64 subnets if needed.

To convert your IPv4 address, just send it through the '6to4' filter. It will be automatically converted to a router address (with a ::1/48 host address):

# {{ '193.0.2.0' | ipaddr('6to4') }}
2002:c100:0200::1/48
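The 6to4 mapping simply embeds the four IPv4 octets as the next 32 bits after the 2002: prefix. A rough stdlib sketch of the conversion (the helper name to_6to4 is hypothetical; note the stdlib prints the compressed form, without the leading zero in "0200"):

```python
import ipaddress

def to_6to4(ipv4):
    """Embed a public IPv4 address in 2002::/16 and return the
    conventional ::1 router address with a /48 prefix."""
    h = ipaddress.IPv4Address(ipv4).packed.hex()  # e.g. 'c1000200'
    return str(ipaddress.ip_interface(f"2002:{h[:4]}:{h[4:]}::1/48"))

print(to_6to4("193.0.2.0"))  # 2002:c100:200::1/48
```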
Subnet manipulation

The ipsubnet() filter can be used to manipulate network subnets in several ways.

Here are an example IP address and subnet:

address = '192.168.144.5'
subnet  = '192.168.0.0/16'

To check if a given string is a subnet, pass it through the filter without any arguments. If the given string is an IP address, it will be converted into a subnet:

# {{ address | ipsubnet }}
192.168.144.5/32

# {{ subnet | ipsubnet }}
192.168.0.0/16

If you specify a subnet size as the first parameter of the ipsubnet() filter, and the subnet size is smaller than the current one, you will get the number of subnets a given subnet can be split into:

# {{ subnet | ipsubnet(20) }}
16

The second argument of the ipsubnet() filter is an index number; by specifying it you can get a new subnet of the specified size:

# First subnet
# {{ subnet | ipsubnet(20, 0) }}
192.168.0.0/20

# Last subnet
# {{ subnet | ipsubnet(20, -1) }}
192.168.240.0/20

# Fifth subnet
# {{ subnet | ipsubnet(20, 5) }}
192.168.80.0/20

# Fifth to last subnet
# {{ subnet | ipsubnet(20, -5) }}
192.168.176.0/20
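Splitting and indexing like this maps directly onto the stdlib ipaddress module's subnets() iterator (a sketch of the same arithmetic, not the filter's netaddr-based implementation):

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.0.0/16")

# all /20 subnets of the /16; positive and negative indices match
# the ipsubnet(20, index) examples above
pieces = list(subnet.subnets(new_prefix=20))

print(len(pieces))  # 16
print(pieces[0])    # 192.168.0.0/20
print(pieces[-1])   # 192.168.240.0/20
print(pieces[5])    # 192.168.80.0/20
print(pieces[-5])   # 192.168.176.0/20
```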

If you specify an IP address instead of a subnet, and give a subnet size as the first argument, the ipsubnet() filter will instead return the subnet of the specified size that contains the given IP address:

# {{ address | ipsubnet(20) }}
192.168.144.0/20

By specifying an index number as a second argument, you can select smaller and smaller subnets:

# First subnet
# {{ address | ipsubnet(18, 0) }}
192.168.128.0/18

# Last subnet
# {{ address | ipsubnet(18, -1) }}
192.168.144.4/31

# Fifth subnet
# {{ address | ipsubnet(18, 5) }}
192.168.144.0/23

# Fifth to last subnet
# {{ address | ipsubnet(18, -5) }}
192.168.144.0/27

You can use the ipsubnet() filter together with the ipaddr() filter to, for example, split a given /48 prefix into smaller /64 subnets:

# {{ '193.0.2.0' | ipaddr('6to4') | ipsubnet(64, 58820) | ipaddr('1') }}
2002:c100:200:e5c4::1/64

Because of the size of IPv6 subnets, iterating over all of them to find the correct one may take some time on slower computers, depending on the size difference between the subnets.

MAC address filter

You can use the hwaddr() filter to check if a given string is a MAC address, or to convert it between various formats. Examples:

# Example MAC address
macaddress = '1a:2b:3c:4d:5e:6f'

# Check if given string is a MAC address
# {{ macaddress | hwaddr }}
1a:2b:3c:4d:5e:6f

# Convert MAC address to PostgreSQL format
# {{ macaddress | hwaddr('pgsql') }}
1a2b3c:4d5e6f

# Convert MAC address to Cisco format
# {{ macaddress | hwaddr('cisco') }}
1a2b.3c4d.5e6f

See also

Playbooks
An introduction to playbooks
Jinja2 filters
Introduction to Jinja2 filters and their uses
Conditionals
Conditional statements in playbooks
Variables
All about variables
Loops
Looping in playbooks
Playbook Roles and Include Statements
Playbook organization by roles
Best Practices
Best practices in playbooks
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Conditionals

Often the result of a play may depend on the value of a variable, fact (something learned about the remote system), or previous task result. In some cases, the values of variables may depend on other variables. Further, additional groups can be created to manage hosts based on whether the hosts match other criteria. There are many options to control execution flow in Ansible.

Let’s dig into what they are.

The When Statement

Sometimes you will want to skip a particular step on a particular host. This could be something as simple as not installing a certain package if the operating system is a particular version, or it could be something like performing some cleanup steps if a filesystem is getting full.

This is easy to do in Ansible, with the when clause, which contains a Jinja2 expression (see Variables). It’s actually pretty simple:

tasks:
  - name: "shutdown Debian flavored systems"
    command: /sbin/shutdown -t now
    when: ansible_os_family == "Debian"

You can also use parentheses to group conditions:

tasks:
  - name: "shutdown CentOS 6 and 7 systems"
    command: /sbin/shutdown -t now
    when: ansible_distribution == "CentOS" and
          (ansible_distribution_major_version == "6" or ansible_distribution_major_version == "7")

A number of Jinja2 “filters” can also be used in when statements, some of which are unique and provided by Ansible. Suppose we want to ignore the error of one statement and then decide to do something conditionally based on success or failure:

tasks:
  - command: /bin/false
    register: result
    ignore_errors: True
  - command: /bin/something
    when: result|failed
  - command: /bin/something_else
    when: result|success
  - command: /bin/still/something_else
    when: result|skipped

Note that was a little bit of foreshadowing on the ‘register’ statement. We’ll get to it a bit later in this chapter.

As a reminder, to see what facts are available on a particular system, you can do:

ansible hostname.example.com -m setup

Tip: Sometimes you’ll get back a variable that’s a string and you’ll want to do a math operation comparison on it. You can do this like so:

tasks:
  - shell: echo "only on Red Hat 6, derivatives, and later"
    when: ansible_os_family == "RedHat" and ansible_lsb.major_release|int >= 6

Note

the above example requires the lsb_release package on the target host in order to return the ansible_lsb.major_release fact.

Variables defined in the playbooks or inventory can also be used. An example may be the execution of a task based on a variable’s boolean value:

vars:
  epic: true

Then a conditional execution might look like:

tasks:
    - shell: echo "This certainly is epic!"
      when: epic

or:

tasks:
    - shell: echo "This certainly isn't epic!"
      when: not epic

If a required variable has not been set, you can skip or fail using Jinja2’s defined test. For example:

tasks:
    - shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
      when: foo is defined

    - fail: msg="Bailing out. This play requires 'bar'"
      when: bar is undefined

This is especially useful in combination with the conditional import of vars files (see below).

When combining when with with_items (see Loops), be aware that the when statement is processed separately for each item. This is by design:

tasks:
    - command: echo {{ item }}
      with_items: [ 0, 2, 4, 6, 8, 10 ]
      when: item > 5
Loading in Custom Facts

It’s also easy to provide your own facts if you want, which is covered in Developing Modules. To run them, just make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks:

tasks:
    - name: gather site specific fact data
      action: site_facts
    - command: /usr/bin/thingy
      when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
Applying ‘when’ to roles and includes

Note that if you have several tasks that all share the same conditional statement, you can affix the conditional to a task include statement as below. All the tasks get evaluated, but the conditional is applied to each and every task:

- include: tasks/sometasks.yml
  when: "'reticulating splines' in output"

Note

In versions prior to 2.0 this worked with task includes but not playbook includes. 2.0 allows it to work with both.

Or with a role:

- hosts: webservers
  roles:
     - { role: debian_stock_config, when: ansible_os_family == 'Debian' }

You will note a lot of ‘skipped’ output by default in Ansible when using this approach on systems that don’t match the criteria. Read up on the ‘group_by’ module in the About Modules docs for a more streamlined way to accomplish the same thing.

Conditional Imports

Note

This is an advanced topic that is infrequently used. You can probably skip this section.

Sometimes you will want to do certain things differently in a playbook based on certain criteria. Having one playbook that works on multiple platforms and OS versions is a good example.

As an example, the name of the Apache package may be different between CentOS and Debian, but it is easily handled with a minimum of syntax in an Ansible Playbook:

---
- hosts: all
  remote_user: root
  vars_files:
    - "vars/common.yml"
    - [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ]
  tasks:
  - name: make sure apache is running
    service: name={{ apache }} state=running

Note

The variable ‘ansible_os_family’ is being interpolated into the list of filenames being defined for vars_files.

As a reminder, the various YAML files contain just keys and values:

---
# for vars/CentOS.yml
apache: httpd
somethingelse: 42

How does this work? If the operating system is ‘CentOS’, the first file Ansible will try to import is ‘vars/CentOS.yml’, followed by ‘vars/os_defaults.yml’ if that file does not exist. If no files in the list are found, an error is raised. On Debian, it would first look for ‘vars/Debian.yml’ instead of ‘vars/CentOS.yml’, before falling back on ‘vars/os_defaults.yml’. Pretty simple.

To use this conditional import feature, you’ll need facter or ohai installed prior to running the playbook, but you can of course push this out with Ansible if you like:

# for facter
ansible -m yum -a "pkg=facter state=present"
ansible -m yum -a "pkg=ruby-json state=present"

# for ohai
ansible -m yum -a "pkg=ohai state=present"

Ansible’s approach to configuration - separating variables from tasks - keeps your playbooks from turning into arbitrary code with ugly nested ifs and conditionals, and results in more streamlined and auditable configuration rules, especially because there are a minimum of decision points to track.

Selecting Files And Templates Based On Variables

Note

This is an advanced topic that is infrequently used. You can probably skip this section.

Sometimes a configuration file you want to copy, or a template you will use may depend on a variable. The following construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than putting a lot of if conditionals in a template.

The following example shows how to template out a configuration file that was very different between, say, CentOS and Debian:

- name: template a file
  template: src={{ item }} dest=/etc/myapp/foo.conf
  with_first_found:
    - files:
       - "{{ ansible_distribution }}.conf"
       - default.conf
      paths:
       - search_location_one/somedir/
       - /opt/other_location/somedir/
Register Variables

Often in a playbook it may be useful to store the result of a given command in a variable and access it later. Use of the command module in this way can in many ways eliminate the need to write site specific facts, for instance, you could test for the existence of a particular program.

The ‘register’ keyword decides what variable to save a result in. The resulting variables can be used in templates, action lines, or when statements. It looks like this (in an obviously trivial example):

- name: test play
  hosts: all

  tasks:

      - shell: cat /etc/motd
        register: motd_contents

      - shell: echo "motd contains the word hi"
        when: motd_contents.stdout.find('hi') != -1

As shown previously, the registered variable’s string contents are accessible with the ‘stdout’ value. The registered result can be used in the “with_items” of a task if it is converted into a list (or already is a list), as shown below. “stdout_lines” is already available on the object, though you could also call “home_dirs.stdout.split()” if you wanted, and could split on other fields:

- name: registered variable usage as a with_items list
  hosts: all

  tasks:

      - name: retrieve the list of home directories
        command: ls /home
        register: home_dirs

      - name: add home dirs to the backup spooler
        file: path=/mnt/bkspool/{{ item }} src=/home/{{ item }} state=link
        with_items: home_dirs.stdout_lines
        # same as with_items: home_dirs.stdout.split()

See also

Playbooks
An introduction to playbooks
Playbook Roles and Include Statements
Playbook organization by roles
Best Practices
Best practices in playbooks
Conditionals
Conditional statements in playbooks
Variables
All about variables
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Loops

Often you’ll want to do many things in one task, such as create a lot of users, install a lot of packages, or repeat a polling step until a certain result is reached.

This chapter is all about how to use loops in playbooks.

Standard Loops

To save some typing, repeated tasks can be written in short-hand like so:

- name: add several users
  user: name={{ item }} state=present groups=wheel
  with_items:
     - testuser1
     - testuser2

If you have defined a YAML list in a variables file, or the ‘vars’ section, you can also do:

with_items: "{{somelist}}"

The above would be the equivalent of:

- name: add user testuser1
  user: name=testuser1 state=present groups=wheel
- name: add user testuser2
  user: name=testuser2 state=present groups=wheel

The yum and apt modules use with_items to execute fewer package manager transactions.

Note that the types of items you iterate over with ‘with_items’ do not have to be simple lists of strings. If you have a list of hashes, you can reference subkeys using things like:

- name: add several users
  user: name={{ item.name }} state=present groups={{ item.groups }}
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }

Also be aware that when combining when with with_items (or any other loop statement), the when statement is processed separately for each item. See The When Statement for an example.

Nested Loops

Loops can be nested as well:

- name: give users access to multiple databases
  mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo
  with_nested:
    - [ 'alice', 'bob' ]
    - [ 'clientdb', 'employeedb', 'providerdb' ]
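with_nested iterates over the cartesian product of the given lists, so item[0] and item[1] take every combination of user and database. The expansion can be sketched in Python with itertools.product:

```python
from itertools import product

users = ["alice", "bob"]
databases = ["clientdb", "employeedb", "providerdb"]

# each pair corresponds to one task iteration: (item[0], item[1])
pairs = list(product(users, databases))
print(len(pairs))  # 6 iterations: 2 users x 3 databases
print(pairs[0])    # ('alice', 'clientdb')
```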

As in the case of ‘with_items’ above, you can use previously defined variables:

- name: here, 'users' contains the above list of employees
  mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo
  with_nested:
    - "{{users}}"
    - [ 'clientdb', 'employeedb', 'providerdb' ]
Looping over Hashes

New in version 1.5.

Suppose you have the following variable:

---
users:
  alice:
    name: Alice Appleworth
    telephone: 123-456-7890
  bob:
    name: Bob Bananarama
    telephone: 987-654-3210

And you want to print every user’s name and phone number. You can loop through the elements of a hash using with_dict like this:

tasks:
  - name: Print phone records
    debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
    with_dict: "{{users}}"
Looping over Fileglobs

with_fileglob matches all files in a single directory, non-recursively, that match a pattern. It can be used like this:

---
- hosts: all

  tasks:

    # first ensure our target directory exists
    - file: dest=/etc/fooapp state=directory

    # copy each file over that matches the given pattern
    - copy: src={{ item }} dest=/etc/fooapp/ owner=root mode=600
      with_fileglob:
        - /playbooks/files/fooapp/*

Note

When using a relative path with with_fileglob in a role, Ansible resolves the path relative to the roles/<rolename>/files directory.

Looping over Parallel Sets of Data

Note

This is an uncommon thing to want to do, but we’re documenting it for completeness. You probably won’t be reaching for this one often.

Suppose the following variable data was loaded in from somewhere:

---
alpha: [ 'a', 'b', 'c', 'd' ]
numbers:  [ 1, 2, 3, 4 ]

And you want the set of ‘(a, 1)’ and ‘(b, 2)’ and so on. Use ‘with_together’ to get this:

tasks:
    - debug: msg="{{ item.0 }} and {{ item.1 }}"
      with_together:
        - "{{alpha}}"
        - "{{numbers}}"
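with_together pairs the lists positionally, the way itertools.zip_longest does (if one list is shorter, the missing positions are padded with None). A sketch of the expansion:

```python
from itertools import zip_longest

alpha = ["a", "b", "c", "d"]
numbers = [1, 2, 3, 4]

# each tuple corresponds to one task iteration: (item.0, item.1)
together = list(zip_longest(alpha, numbers))
print(together)  # [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
```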
Looping over Subelements

Suppose you want to do something like loop over a list of users, creating them, and allowing them to login by a certain set of SSH keys.

How might that be accomplished? Let’s assume you had the following defined and loaded in via “vars_files” or maybe a “group_vars/all” file:

---
users:
  - name: alice
    authorized:
      - /tmp/alice/onekey.pub
      - /tmp/alice/twokey.pub
    mysql:
        password: mysql-password
        hosts:
          - "%"
          - "127.0.0.1"
          - "::1"
          - "localhost"
        privs:
          - "*.*:SELECT"
          - "DB1.*:ALL"
  - name: bob
    authorized:
      - /tmp/bob/id_rsa.pub
    mysql:
        password: other-mysql-password
        hosts:
          - "db1"
        privs:
          - "*.*:SELECT"
          - "DB2.*:ALL"

It might happen like so:

- user: name={{ item.name }} state=present generate_ssh_key=yes
  with_items: "{{users}}"

- authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'"
  with_subelements:
     - users
     - authorized

Given the mysql hosts and privs subkey lists, you can also iterate over a list in a nested subkey:

- name: Setup MySQL users
  mysql_user: name={{ item.0.name }} password={{ item.0.mysql.password }} host={{ item.1 }} priv={{ item.0.mysql.privs | join('/') }}
  with_subelements:
    - users
    - mysql.hosts

Subelements walks a list of hashes (aka dictionaries) and then traverses a list with a given (nested sub-)key inside of those records.

Optionally, you can add a third element to the subelements list that holds a dictionary of flags. Currently you can add the ‘skip_missing’ flag. If set to True, the lookup plugin will skip list items that do not contain the given subkey. Without this flag, or if the flag is set to False, the plugin will yield an error and complain about the missing subkey.
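The traversal that subelements performs can be sketched as a small Python generator (the subelements() helper below is a hypothetical sketch of the lookup's behavior, including the dotted subkey and the skip_missing flag, not the plugin's actual code):

```python
def subelements(items, subkey, skip_missing=False):
    """Yield (item, subelement) pairs; subkey may be dotted,
    e.g. 'mysql.hosts'."""
    for item in items:
        value = item
        try:
            for part in subkey.split("."):
                value = value[part]
        except KeyError:
            if skip_missing:
                continue  # silently drop items lacking the subkey
            raise ValueError(f"could not find {subkey!r} in {item!r}")
        for sub in value:
            yield item, sub

users = [
    {"name": "alice", "mysql": {"hosts": ["%", "127.0.0.1"]}},
    {"name": "bob", "mysql": {"hosts": ["db1"]}},
]
flat = [(item["name"], host) for item, host in subelements(users, "mysql.hosts")]
print(flat)  # [('alice', '%'), ('alice', '127.0.0.1'), ('bob', 'db1')]
```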

The authorized_key pattern is exactly where it comes up most.

Looping over Integer Sequences

with_sequence generates a sequence of items in ascending numerical order. You can specify a start, end, and an optional step value.

Arguments should be specified as key=value pairs. If supplied, ‘format’ is a printf-style string.

Numerical values can be specified in decimal, hexadecimal (0x3f8) or octal (0600). Negative numbers are not supported. This works as follows:

---
- hosts: all

  tasks:

    # create groups
    - group: name=evens state=present
    - group: name=odds state=present

    # create some test users
    - user: name={{ item }} state=present groups=evens
      with_sequence: start=0 end=32 format=testuser%02x

    # create a series of directories with even numbers for some reason
    - file: dest=/var/stuff/{{ item }} state=directory
      with_sequence: start=4 end=16 stride=2

    # a simpler way to use the sequence plugin
    # create 4 groups
    - group: name=group{{ item }} state=present
      with_sequence: count=4
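The names and numbers that with_sequence produces in the playbook above can be sketched with plain Python ranges and printf-style formatting (note the end value is inclusive, and %02x formats in hexadecimal, so 32 becomes "20"):

```python
# start=0 end=32 format=testuser%02x expands to hex-formatted names
usernames = ["testuser%02x" % i for i in range(0, 33)]
print(usernames[0], usernames[-1])  # testuser00 testuser20

# start=4 end=16 stride=2 yields only even numbers
evens = list(range(4, 17, 2))
print(evens)  # [4, 6, 8, 10, 12, 14, 16]
```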
Random Choices

The ‘random_choice’ feature can be used to pick something at random. While it’s not a load balancer (there are modules for those), it can somewhat be used as a poor man’s load balancer in a MacGyver-like situation:

- debug: msg={{ item }}
  with_random_choice:
     - "go through the door"
     - "drink from the goblet"
     - "press the red button"
     - "do nothing"

One of the provided strings will be selected at random.

At a more basic level, they can be used to add chaos and excitement to otherwise predictable automation environments.

Do-Until Loops

Sometimes you will want to retry a task until a certain condition is met. Here’s an example:

- action: shell /usr/bin/foo
  register: result
  until: result.stdout.find("all systems go") != -1
  retries: 5
  delay: 10

The above example runs the shell module repeatedly until the module’s result has “all systems go” in its stdout, or the task has been retried 5 times with a delay of 10 seconds. The default value for “retries” is 3 and for “delay” is 5.

The task returns the results of the last task run. The results of individual retries can be viewed with the -vv option. The registered variable will also have a new key, “attempts”, which holds the number of retries for the task.

Finding First Matched Files

Note

This is an uncommon thing to want to do, but we’re documenting it for completeness. You probably won’t be reaching for this one often.

This isn’t exactly a loop, but it’s close. What if you want to use a reference to a file based on the first file found that matches given criteria, where some of the filenames are determined by variables? Yes, you can do that as follows:

- name: INTERFACES | Create Ansible header for /etc/network/interfaces
  template: src={{ item }} dest=/etc/foo.conf
  with_first_found:
    - "{{ansible_virtualization_type}}_foo.conf"
    - "default_foo.conf"

This tool also has a long form version that allows for configurable search paths. Here’s an example:

- name: some configuration template
  template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root
  with_first_found:
    - files:
       - "{{inventory_hostname}}/etc/file.cfg"
      paths:
       - ../../../templates.overwrites
       - ../../../templates
    - files:
        - etc/file.cfg
      paths:
        - templates
Iterating Over The Results of a Program Execution

Note

This is an uncommon thing to want to do, but we’re documenting it for completeness. You probably won’t be reaching for this one often.

Sometimes you might want to execute a program, and based on the output of that program, loop over the results of that line by line. Ansible provides a neat way to do that, though you should remember, this is always executed on the control machine, not the remote machine:

- name: Example of looping over a command result
  shell: /usr/bin/frobnicate {{ item }}
  with_lines: /usr/bin/frobnications_per_host --param {{ inventory_hostname }}

Ok, that was a bit arbitrary. In fact, if you’re doing something that is inventory related you might just want to write a dynamic inventory source instead (see Dynamic Inventory), but this can be occasionally useful in quick-and-dirty implementations.

Should you ever need to execute a command remotely, you would not use the above method. Instead do this:

- name: Example of looping over a REMOTE command result
  shell: /usr/bin/something
  register: command_result

- name: Do something with each result
  shell: /usr/bin/something_else --param {{ item }}
  with_items: "{{command_result.stdout_lines}}"
Looping Over A List With An Index

Note

This is an uncommon thing to want to do, but we’re documenting it for completeness. You probably won’t be reaching for this one often.

If you want to loop over an array and also get the numeric index of where you are in the array as you go, you can also do that. It’s uncommonly used:

- name: indexed loop demo
  debug: msg="at array position {{ item.0 }} there is a value {{ item.1 }}"
  with_indexed_items: "{{some_list}}"
Using ini file with a loop

The ini plugin can use a regexp to retrieve a set of keys. As a consequence, we can loop over this set. Here is the ini file we’ll use:

[section1]
value1=section1/value1
value2=section1/value2

[section2]
value1=section2/value1
value2=section2/value2

Here is an example of using with_ini:

- debug: msg="{{item}}"
  with_ini: value[1-2] section=section1 file=lookup.ini re=true

And here is the returned value:

{
      "changed": false,
      "msg": "All items completed",
      "results": [
          {
              "invocation": {
                  "module_args": "msg=\"section1/value1\"",
                  "module_name": "debug"
              },
              "item": "section1/value1",
              "msg": "section1/value1",
              "verbose_always": true
          },
          {
              "invocation": {
                  "module_args": "msg=\"section1/value2\"",
                  "module_name": "debug"
              },
              "item": "section1/value2",
              "msg": "section1/value2",
              "verbose_always": true
          }
      ]
  }
Flattening A List

Note

This is an uncommon thing to want to do, but we’re documenting it for completeness. You probably won’t be reaching for this one often.

In rare instances you might have several lists of lists, and you just want to iterate over every item in all of those lists. Assume a really crazy hypothetical data structure:

---
# file: roles/foo/vars/main.yml
packages_base:
  - [ 'foo-package', 'bar-package' ]
packages_apps:
  - [ ['one-package', 'two-package' ]]
  - [ ['red-package'], ['blue-package']]

As you can see, the formatting of packages in these lists is all over the place. How can we install all of the packages in both lists?:

- name: flattened loop demo
  yum: name={{ item }} state=installed
  with_flattened:
     - "{{packages_base}}"
     - "{{packages_apps}}"

That’s how!
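The flattening that with_flattened performs can be sketched as a small recursive Python function (a hypothetical sketch of the lookup's behavior, applied to the data structure above):

```python
def flatten(value):
    """Recursively flatten arbitrarily nested lists into one flat list."""
    if not isinstance(value, list):
        return [value]  # a scalar is its own one-element list
    result = []
    for element in value:
        result.extend(flatten(element))
    return result

packages_base = [['foo-package', 'bar-package']]
packages_apps = [[['one-package', 'two-package']],
                 [['red-package'], ['blue-package']]]

print(flatten(packages_base + packages_apps))
```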

Using register with a loop

When using register with a loop, the data structure placed in the variable will contain a results attribute that is a list of all responses from the module.

Here is an example of using register with with_items:

- shell: echo "{{ item }}"
  with_items:
    - one
    - two
  register: echo

This differs from the data structure returned when using register without a loop:

{
    "changed": true,
    "msg": "All items completed",
    "results": [
        {
            "changed": true,
            "cmd": "echo \"one\" ",
            "delta": "0:00:00.003110",
            "end": "2013-12-19 12:00:05.187153",
            "invocation": {
                "module_args": "echo \"one\"",
                "module_name": "shell"
            },
            "item": "one",
            "rc": 0,
            "start": "2013-12-19 12:00:05.184043",
            "stderr": "",
            "stdout": "one"
        },
        {
            "changed": true,
            "cmd": "echo \"two\" ",
            "delta": "0:00:00.002920",
            "end": "2013-12-19 12:00:05.245502",
            "invocation": {
                "module_args": "echo \"two\"",
                "module_name": "shell"
            },
            "item": "two",
            "rc": 0,
            "start": "2013-12-19 12:00:05.242582",
            "stderr": "",
            "stdout": "two"
        }
    ]
}

Subsequent loops over the registered variable to inspect the results may look like:

- name: Fail if return code is not 0
  fail:
    msg: "The command ({{ item.cmd }}) did not have a 0 return code"
  when: item.rc != 0
  with_items: "{{echo.results}}"
Writing Your Own Iterators

While you ordinarily shouldn’t have to, should you wish to write your own ways to loop over arbitrary datastructures, you can read Developing Plugins for some starter information. Each of the above features is implemented as a plugin in Ansible, so there are many implementations to reference.

See also

Playbooks
An introduction to playbooks
Playbook Roles and Include Statements
Playbook organization by roles
Best Practices
Best practices in playbooks
Conditionals
Conditional statements in playbooks
Variables
All about variables
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Blocks

In 2.0 we added a block feature to allow for logical grouping of tasks and for in-play error handling. Most of what you can apply to a single task can be applied at the block level, which also makes it much easier to set data or directives common to the tasks.

Block example
   tasks:
     - block:
         - yum: name={{ item }} state=installed
           with_items:
             - httpd
             - memcached

         - template: src=templates/src.j2 dest=/etc/foo.conf

         - service: name=bar state=started enabled=True

       when: ansible_distribution == 'CentOS'
       become: true
       become_user: root

In the example above, the three tasks will be executed only when the block’s when condition is met, and privilege escalation is enabled for all of the enclosed tasks.

Error Handling

Blocks also introduce the ability to handle errors in a way similar to exceptions in most programming languages.

Block error handling example
 tasks:
  - block:
      - debug: msg='i execute normally'
      - command: /bin/false
      - debug: msg='i never execute, cause ERROR!'
    rescue:
      - debug: msg='I caught an error'
      - command: /bin/false
      - debug: msg='I also never execute :-('
    always:
      - debug: msg="this always executes"

The tasks in the block execute normally. If any task returns an error, the rescue section runs with whatever you need to do to recover from it. The always section runs no matter what errors did or did not occur in the block and rescue sections.

See also

Playbooks
An introduction to playbooks
Playbook Roles and Include Statements
Playbook organization by roles
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Strategies

In 2.0 we added a new way to control play execution: strategy. By default, plays still run as they used to, with what we call the linear strategy: all hosts run each task before any host starts the next task, using the number of forks (default 5) to parallelize.

The serial directive can ‘batch’ this behaviour to a subset of the hosts, which then run to completion of the play before the next ‘batch’ starts.
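
For example, a minimal sketch (a hypothetical play; the host group and task are illustrative) that updates a webserver farm two hosts at a time:

```yaml
# With six hosts in 'webservers', Ansible processes them in batches of
# two; each batch runs the play to completion before the next one starts.
- hosts: webservers
  serial: 2
  tasks:
    - name: restart httpd
      service: name=httpd state=restarted
```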

A second strategy, free, also ships with Ansible; it allows each host to run until the end of the play as fast as it can:

- hosts: all
  strategy: free
  tasks:
  ...
Strategy Plugins

The strategies are implemented via a new type of plugin; this means that in the future, new execution types can be added, either locally by users or to Ansible itself by a code contribution.

See also

Playbooks
An introduction to playbooks
Playbook Roles and Include Statements
Playbook organization by roles
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Best Practices

Here are some tips for making the most of Ansible and Ansible playbooks.

You can find some example playbooks illustrating these best practices in our ansible-examples repository. (NOTE: These may not use all of the features in the latest release, but are still an excellent reference!).

Content Organization

The following section shows one of many possible ways to organize playbook content.

Your usage of Ansible should fit your needs, however, not ours, so feel free to modify this approach and organize as you see fit.

One thing you will definitely want to do though, is use the “roles” organization feature, which is documented as part of the main playbooks page. See Playbook Roles and Include Statements. You absolutely should be using roles. Roles are great. Use roles. Roles! Did we say that enough? Roles are great.

Directory Layout

The top level of the directory would contain files and directories like so:

production                # inventory file for production servers
staging                   # inventory file for staging environment

group_vars/
   group1                 # here we assign variables to particular groups
   group2                 # ""
host_vars/
   hostname1              # if systems need specific variables, put them here
   hostname2              # ""

library/                  # if any custom modules, put them here (optional)
filter_plugins/           # if any custom filter plugins, put them here (optional)

site.yml                  # master playbook
webservers.yml            # playbook for webserver tier
dbservers.yml             # playbook for dbserver tier

roles/
    common/               # this hierarchy represents a "role"
        tasks/            #
            main.yml      #  <-- tasks file can include smaller files if warranted
        handlers/         #
            main.yml      #  <-- handlers file
        templates/        #  <-- files for use with the template resource
            ntp.conf.j2   #  <------- templates end in .j2
        files/            #
            bar.txt       #  <-- files for use with the copy resource
            foo.sh        #  <-- script files for use with the script resource
        vars/             #
            main.yml      #  <-- variables associated with this role
        defaults/         #
            main.yml      #  <-- default lower priority variables for this role
        meta/             #
            main.yml      #  <-- role dependencies

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""
Use Dynamic Inventory With Clouds

If you are using a cloud provider, you should not be managing your inventory in a static file. See Dynamic Inventory.

This does not just apply to clouds – If you have another system maintaining a canonical list of systems in your infrastructure, usage of dynamic inventory is a great idea in general.

How to Differentiate Staging vs Production

When managing static inventory, a frequently asked question is how to differentiate different types of environments. The following example shows a good way to do this. Similar methods of grouping could be adapted to dynamic inventory (for instance, consider applying the AWS tag “environment:production”, and you’ll get an automatically discovered group of systems named “ec2_tag_environment_production”).

Let’s show a static inventory example though. Below, the production file contains the inventory of all of your production hosts.

It is suggested that you define groups based on purpose of the host (roles) and also geography or datacenter location (if applicable):

# file: production

[atlanta-webservers]
www-atl-1.example.com
www-atl-2.example.com

[boston-webservers]
www-bos-1.example.com
www-bos-2.example.com

[atlanta-dbservers]
db-atl-1.example.com
db-atl-2.example.com

[boston-dbservers]
db-bos-1.example.com

# webservers in all geos
[webservers:children]
atlanta-webservers
boston-webservers

# dbservers in all geos
[dbservers:children]
atlanta-dbservers
boston-dbservers

# everything in the atlanta geo
[atlanta:children]
atlanta-webservers
atlanta-dbservers

# everything in the boston geo
[boston:children]
boston-webservers
boston-dbservers
Group And Host Variables

This section extends on the previous example.

Groups are nice for organization, but that’s not all groups are good for. You can also assign variables to them! For instance, atlanta has its own NTP servers, so when setting up ntp.conf, we should use them. Let’s set those now:

---
# file: group_vars/atlanta
ntp: ntp-atlanta.example.com
backup: backup-atlanta.example.com

Variables aren’t just for geographic information either! Maybe the webservers have some configuration that doesn’t make sense for the database servers:

---
# file: group_vars/webservers
apacheMaxRequestsPerChild: 3000
apacheMaxClients: 900

If we had any default values, or values that were universally true, we would put them in a file called group_vars/all:

---
# file: group_vars/all
ntp: ntp-boston.example.com
backup: backup-boston.example.com

We can define specific hardware variance in systems in a host_vars file, but avoid doing this unless you need to:

---
# file: host_vars/db-bos-1.example.com
foo_agent_port: 86
bar_agent_port: 99

Again, if we are using dynamic inventory sources, many dynamic groups are automatically created. So a tag like “class:webserver” would load in variables from the file “group_vars/ec2_tag_class_webserver” automatically.

Top Level Playbooks Are Separated By Role

In site.yml, we include a playbook that defines our entire infrastructure. Note this is SUPER short, because it’s just including some other playbooks. Remember, playbooks are nothing more than lists of plays:

---
# file: site.yml
- include: webservers.yml
- include: dbservers.yml

In a file like webservers.yml (also at the top level), we simply map the configuration of the webservers group to the roles performed by the webservers group. Also notice this is incredibly short. For example:

---
# file: webservers.yml
- hosts: webservers
  roles:
    - common
    - webtier

The idea here is that we can choose to configure our whole infrastructure by “running” site.yml or we could just choose to run a subset by running webservers.yml. This is analogous to the “–limit” parameter to ansible but a little more explicit:

ansible-playbook site.yml --limit webservers
ansible-playbook webservers.yml
Task And Handler Organization For A Role

Below is an example tasks file that explains how a role works. Our common role here just sets up NTP, but it could do more if we wanted:

---
# file: roles/common/tasks/main.yml

- name: be sure ntp is installed
  yum: pkg=ntp state=installed
  tags: ntp

- name: be sure ntp is configured
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  notify:
    - restart ntpd
  tags: ntp

- name: be sure ntpd is running and enabled
  service: name=ntpd state=running enabled=yes
  tags: ntp

Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at the end of each play:

---
# file: roles/common/handlers/main.yml
- name: restart ntpd
  service: name=ntpd state=restarted

See Playbook Roles and Include Statements for more information.

What This Organization Enables (Examples)

Above we’ve shared our basic organizational structure.

Now what sort of use cases does this layout enable? Lots! If I want to reconfigure my whole infrastructure, it’s just:

ansible-playbook -i production site.yml

What about just reconfiguring NTP on everything? Easy:

ansible-playbook -i production site.yml --tags ntp

What about just reconfiguring my webservers?:

ansible-playbook -i production webservers.yml

What about just my webservers in Boston?:

ansible-playbook -i production webservers.yml --limit boston

What about just the first 10, and then the next 10?:

ansible-playbook -i production webservers.yml --limit boston[0-10]
ansible-playbook -i production webservers.yml --limit boston[10-20]

And of course just basic ad-hoc stuff is also possible:

ansible boston -i production -m ping
ansible boston -i production -m command -a '/sbin/reboot'

And there are some useful commands to know (at least in 1.1 and higher):

# confirm what task names would be run if I ran this command and said "just ntp tasks"
ansible-playbook -i production webservers.yml --tags ntp --list-tasks

# confirm what hostnames might be communicated with if I said "limit to boston"
ansible-playbook -i production webservers.yml --limit boston --list-hosts
Deployment vs Configuration Organization

The above setup models a typical configuration topology. When doing multi-tier deployments, there are going to be some additional playbooks that hop between tiers to roll out an application. In this case, ‘site.yml’ may be augmented by playbooks like ‘deploy_exampledotcom.yml’ but the general concepts can still apply.

Consider “playbooks” as a sports metaphor – you don’t have to just have one set of plays to use against your infrastructure all the time – you can have situational plays that you use at different times and for different purposes.

Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and just keep the OS configuration in separate playbooks from the app deployment.
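
Sketching what the augmented site.yml might look like (deploy_exampledotcom.yml is the hypothetical deployment playbook named above):

```yaml
---
# file: site.yml (sketch)

# OS configuration plays
- include: webservers.yml
- include: dbservers.yml

# situational application deployment play
- include: deploy_exampledotcom.yml
```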

Staging vs Production

As also mentioned above, a good way to keep your staging (or testing) and production environments separate is to use a separate inventory file for staging and production. This way you pick with -i what you are targeting. Keeping them all in one file can lead to surprises!

Testing things in a staging environment before trying in production is always a great idea. Your environments need not be the same size and you can use group variables to control the differences between those environments.
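
As a sketch of using group variables to control environment differences, you can keep a group_vars/ directory next to each inventory file and vary only the values (the directory layout and variable name here are illustrative, reusing the webservers example above):

```yaml
---
# file: staging/group_vars/webservers (smaller environment)
apacheMaxClients: 100

---
# file: production/group_vars/webservers
apacheMaxClients: 900
```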

Rolling Updates

Understand the ‘serial’ keyword. If updating a webserver farm you really want to use it to control how many machines you are updating at once in the batch.

See Delegation, Rolling Updates, and Local Actions.

Always Mention The State

The ‘state’ parameter is optional to a lot of modules. Whether ‘state=present’ or ‘state=absent’, it’s always best to leave that parameter in your playbooks to make it clear, especially as some modules support additional states.
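
For example, stating the intent explicitly in both directions (package names are illustrative):

```yaml
- name: be sure ntp is installed
  yum: name=ntp state=present

- name: be sure telnet is removed
  yum: name=telnet state=absent
```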

Group By Roles

We’re somewhat repeating ourselves with this tip, but it’s worth repeating. A system can be in multiple groups. See Inventory and Patterns. Having groups named after things like webservers and dbservers is repeated in the examples because it’s a very powerful concept.

This allows playbooks to target machines based on role, as well as to assign role specific variables using the group variable system.
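
For example, in a static inventory sketch, the same host can belong to both a role group and a geography group (host and group names reused from the earlier inventory example):

```ini
# inventory sketch: db-atl-1 is a member of both groups
[dbservers]
db-atl-1.example.com

[atlanta]
db-atl-1.example.com
```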

See Playbook Roles and Include Statements.

Operating System and Distribution Variance

When dealing with a parameter that is different between two different operating systems, a great way to handle this is by using the group_by module.

This makes a dynamic group of hosts matching certain criteria, even if that group is not defined in the inventory file:

---

# talk to all hosts just so we can learn about them
- hosts: all
  tasks:
     - group_by: key=os_{{ ansible_distribution }}

# now just on the CentOS hosts...

- hosts: os_CentOS
  gather_facts: False
  tasks:
     - # tasks that only happen on CentOS go here

This will throw all systems into a dynamic group based on the operating system name.

If group-specific settings are needed, this can also be done. For example:

---
# file: group_vars/all
asdf: 10

---
# file: group_vars/os_CentOS
asdf: 42

In the above example, CentOS machines get the value of ‘42’ for asdf, but other machines get ‘10’. This can be used not only to set variables, but also to apply certain roles to only certain systems.

Alternatively, if only variables are needed:

- hosts: all
  tasks:
    - include_vars: "os_{{ ansible_distribution }}.yml"
    - debug: var=asdf

This will pull in variables based on the OS name.

Bundling Ansible Modules With Playbooks

If a playbook has a ‘./library’ directory relative to its YAML file, this directory can be used to add Ansible modules that will automatically be in the Ansible module path. This is a great way to keep modules that go with a playbook together. This is shown in the directory structure example at the start of this section.

Whitespace and Comments

Generous use of whitespace to break things up, and use of comments (which start with ‘#’), is encouraged.

Always Name Tasks

It is possible to leave off the ‘name’ for a given task, but it is recommended to provide one that describes why something is being done. This name is shown when the playbook is run.
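
A quick sketch of the difference in practice:

```yaml
# without a name, playbook output shows only the module and its arguments
- service: name=ntpd state=restarted

# with a name, the output explains the intent
- name: restart ntpd so it picks up the new ntp.conf
  service: name=ntpd state=restarted
```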

Keep It Simple

When you can do something simply, do something simply. Do not reach to use every feature of Ansible together, all at once. Use what works for you. For example, you will probably not need vars, vars_files, vars_prompt and --extra-vars all at once, while also using an external inventory file.

If something feels complicated, it probably is, and may be a good opportunity to simplify things.

Version Control

Use version control. Keep your playbooks and inventory file in git (or another version control system), and commit when you make changes to them. This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure.

See also

YAML Syntax
Learn about YAML syntax
Playbooks
Review the basic playbook features
About Modules
Learn about available modules
Developing Modules
Learn how to extend Ansible by writing your own modules
Patterns
Learn about how to select hosts
GitHub examples directory
Complete playbook files from the github project source
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups

Playbooks: Special Topics

Here are some playbook features that not everyone may need to learn, but can be quite useful for particular applications. Browsing these topics is recommended as you may find some useful tips here, but feel free to learn the basics of Ansible first and adopt these only if they seem relevant or useful to your environment.

Ansible Privilege Escalation

Ansible can use existing privilege escalation systems to allow a user to execute tasks as another.

Become

Before 1.9, Ansible mostly allowed the use of sudo, and a limited use of su, to let a login/remote user become a different user and execute tasks or create resources with the second user’s permissions. As of 1.9, ‘become’ supersedes the old sudo/su while remaining backwards compatible. The new system also makes it easier to add other privilege escalation tools such as pbrun (PowerBroker), pfexec, and others.

New directives
become
equivalent to adding ‘sudo:’ or ‘su:’ to a play or task, set to ‘true’/’yes’ to activate privilege escalation
become_user
equivalent to adding ‘sudo_user:’ or ‘su_user:’ to a play or task, set to user with desired privileges
become_method
at play or task level overrides the default method set in ansible.cfg, set to ‘sudo’/’su’/’pbrun’/’pfexec’
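
Putting the three directives together, here is a sketch task (mirroring the mysql example from the playbooks introduction):

```yaml
- name: restart mysql as the mysql user
  service: name=mysqld state=restarted
  become: true
  become_user: mysql
  become_method: su
```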
New ansible_ variables

Each allows you to set an option per group and/or host

ansible_become
equivalent to ansible_sudo or ansible_su, allows you to force privilege escalation
ansible_become_method
allows you to set the privilege escalation method
ansible_become_user
equivalent to ansible_sudo_user or ansible_su_user, allows you to set the user you become through privilege escalation
ansible_become_pass
equivalent to ansible_sudo_pass or ansible_su_pass, allows you to set the privilege escalation password
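
For example, these variables can be set for a whole group in a static inventory file (a sketch; group name reused from earlier examples):

```ini
# inventory sketch
[webservers:vars]
ansible_become=true
ansible_become_method=sudo
ansible_become_user=root
```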
New command line options
--ask-become-pass
 ask for privilege escalation password
--become, -b
run operations with become (no password implied)
--become-method=BECOME_METHOD
 privilege escalation method to use (default=sudo), valid choices: [ sudo | su | pbrun | pfexec ]
--become-user=BECOME_USER
 run operations as this user (default=root)
sudo and su still work!

Old playbooks will not need to be changed: even though they are deprecated, the sudo and su directives will continue to work. It is recommended to move to become, however, as the old directives may be retired at some point. You cannot mix the old and new directives on the same object; Ansible will complain if you try to.

Become will default to using the old sudo/su configs and variables if they exist, but will override them if you specify any of the new ones.

Note

Privilege escalation methods must also be supported by the connection plugin used. Most plugins will warn if they do not support it; some will just ignore it, as they always run as root (jail, chroot, etc.).

Note

Methods cannot be chained: you cannot use ‘sudo /bin/su -’ to become a user. You need to have privileges to run the command as that user in sudo, or be able to su directly to it (the same goes for pbrun, pfexec, or other supported methods).

Note

Privilege escalation permissions have to be general. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file whose name changes every time. So if you have ‘/sbin/service’ or ‘/bin/chmod’ as the allowed commands, this will fail with Ansible.

See also

Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Accelerated Mode

New in version 1.3.

You Might Not Need This!

Are you running Ansible 1.5 or later? If so, you may not need accelerated mode due to a new feature called “SSH pipelining” and should read the pipelining section of the documentation.

For users on 1.5 and later, accelerated mode only makes sense if you (A) are managing from an Enterprise Linux 6 or earlier host and are still on paramiko, or (B) can’t enable TTYs with sudo as described in the pipelining docs.

If you can use pipelining, Ansible will reduce the number of files transferred over the wire, making everything much more efficient, and performance will be on par with accelerated mode in nearly all cases, possibly excluding very large file transfers. Because fewer moving parts are involved, pipelining is better than accelerated mode for nearly all use cases.

Accelerated mode remains available in support of EL6 control machines and other constrained environments.

Accelerated Mode Details

While OpenSSH using the ControlPersist feature is quite fast and scalable, there is a certain small amount of overhead involved in using SSH connections. While many people will not encounter a need, if you are running on a platform that doesn’t have ControlPersist support (such as an EL6 control machine), you’ll probably be even more interested in tuning options.

Accelerated mode is there to help connections work faster, but still uses SSH for initial secure key exchange. There is no additional public key infrastructure to manage, and this does not require things like NTP or even DNS.

Accelerated mode can be anywhere from 2-6x faster than SSH with ControlPersist enabled, and 10x faster than paramiko.

Accelerated mode works by launching a temporary daemon over SSH. Once the daemon is running, Ansible will connect directly to it via a socket connection. Ansible secures this communication by using a temporary AES key that is exchanged during the SSH connection (this key is different for every host, and is also regenerated periodically).

By default, Ansible will use port 5099 for the accelerated connection, though this is configurable. Once running, the daemon will accept connections for 30 minutes, after which time it will terminate itself and need to be restarted over SSH.

Accelerated mode offers several improvements over the (deprecated) original fireball mode on which it was based:

  • No bootstrapping is required, only a single line needs to be added to each play you wish to run in accelerated mode.
  • Support for sudo commands (see below for more details and caveats) is available.
  • There are fewer requirements: ZeroMQ is no longer required, nor are there any special packages beyond python-keyczar.
  • Python 2.5 or higher is required.

In order to use accelerated mode, simply add accelerate: true to your play:

---

- hosts: all
  accelerate: true

  tasks:

  - name: some task
    command: echo {{ item }}
    with_items:
    - foo
    - bar
    - baz

If you wish to change the port Ansible will use for the accelerated connection, just add the accelerate_port option:

---

- hosts: all
  accelerate: true
  # default port is 5099
  accelerate_port: 10000

The accelerate_port option can also be specified in the environment variable ACCELERATE_PORT, or in your ansible.cfg configuration:

[accelerate]
accelerate_port = 5099

As noted above, accelerated mode also supports running tasks via sudo, however there are two important caveats:

  • You must remove requiretty from your sudoers options.
  • Prompting for the sudo password is not yet supported, so the NOPASSWD option is required for sudo’ed commands.

As of Ansible version 1.6, you can also allow the use of multiple keys for connections from multiple Ansible management nodes. To do so, add the following option to your ansible.cfg configuration:

accelerate_multi_key = yes

When enabled, the daemon will open a UNIX socket file (by default $ANSIBLE_REMOTE_TEMP/.ansible-accelerate/.local.socket). New connections over SSH can use this socket file to upload new keys to the daemon.

Asynchronous Actions and Polling

By default tasks in playbooks block, meaning the connections stay open until the task is done on each node. This may not always be desirable, or you may be running operations that take longer than the SSH timeout.

The easiest way to avoid blocking on long-running operations is to kick them all off at once and then poll until they are done.

You will also want to use asynchronous mode on very long running operations that might be subject to timeout.

To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status. The default poll value is 10 seconds if you do not specify a value for poll:

---

- hosts: all
  remote_user: root

  tasks:

  - name: simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
    command: /bin/sleep 15
    async: 45
    poll: 5

Note

There is no default for the async time limit. If you leave off the ‘async’ keyword, the task runs synchronously, which is Ansible’s default.

Alternatively, if you do not need to wait on the task to complete, you may “fire and forget” by specifying a poll value of 0:

---

- hosts: all
  remote_user: root

  tasks:

  - name: simulate long running op, allow to run for 45 sec, fire and forget
    command: /bin/sleep 15
    async: 45
    poll: 0

Note

You shouldn’t “fire and forget” with operations that require exclusive locks, such as yum transactions, if you expect to run other commands later in the playbook against those same resources.

Note

Using a higher value for --forks will result in kicking off asynchronous tasks even faster. This also increases the efficiency of polling.
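
The number of forks can be set in the [defaults] section of ansible.cfg (or per run with the -f/--forks command line option); for example:

```ini
# ansible.cfg
[defaults]
forks = 50
```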

If you would like to perform a variation of the “fire and forget” where you “fire and forget, check on it later” you can perform a task similar to the following:

---
# Requires ansible 1.8+
- name: 'YUM - fire and forget task'
  yum: name=docker-io state=installed
  async: 1000
  poll: 0
  register: yum_sleeper

- name: 'YUM - check on fire and forget task'
  async_status: jid={{ yum_sleeper.ansible_job_id }}
  register: job_result
  until: job_result.finished
  retries: 30

Note

If the value of async: is not high enough, this will cause the “check on it later” task to fail because the temporary status file that async_status: is looking for will not have been written, or will no longer exist.

See also

Playbooks
An introduction to playbooks
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Check Mode (“Dry Run”)

New in version 1.1.

When ansible-playbook is executed with --check it will not make any changes on remote systems. Instead, any module instrumented to support ‘check mode’ (which covers most of the primary core modules, though it is not required that all modules support it) will report what changes it would have made rather than making them. Modules that do not support check mode will also take no action, but will not report what changes they might have made.

Check mode is just a simulation, and if you have steps that use conditionals that depend on the results of prior commands, it may be less useful for you. However it is great for one-node-at-time basic configuration management use cases.

Example:

ansible-playbook foo.yml --check
Running a task in check mode

New in version 1.3.

Sometimes you may want a task to be executed even in check mode. To achieve this, use the always_run clause on the task. Its value is a Jinja2 expression, just like the when clause. In simple cases, a boolean YAML value would be sufficient as a value.

Example:

tasks:

  - name: this task is run even in check mode
    command: /something/to/run --even-in-check-mode
    always_run: yes

As a reminder, a task with a when clause that evaluates to false will still be skipped, even if it has an always_run clause that evaluates to true.
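
To illustrate that reminder with a minimal sketch:

```yaml
- name: never runs, in or out of check mode, because when is false
  command: /bin/true
  always_run: yes
  when: false
```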

Showing Differences with --diff

New in version 1.1.

The --diff option to ansible-playbook works great with --check (detailed above) but can also be used by itself. When this flag is supplied and any templated files on the remote system are changed, the ansible-playbook CLI will report back the textual changes made to the file (or, if used with --check, the changes that would have been made). Since the diff feature produces a large amount of output, it is best used when checking a single host at a time, like so:

ansible-playbook foo.yml --check --diff --limit foo.example.com
Delegation, Rolling Updates, and Local Actions

Being designed for multi-tier deployments since the beginning, Ansible is great at doing things on one host on behalf of another, or doing local steps with reference to some remote hosts.

This in particular is very applicable when setting up continuous deployment infrastructure or zero downtime rolling updates, where you might be talking with load balancers or monitoring systems.

Additional features allow for tuning the orders in which things complete, and assigning a batch window size for how many machines to process at once during a rolling update.

This section covers all of these features. For examples of these items in use, please see the ansible-examples repository. There are quite a few examples of zero-downtime update procedures for different kinds of applications.

You should also consult the About Modules section, various modules like ‘ec2_elb’, ‘nagios’, and ‘bigip_pool’, and ‘netscaler’ dovetail neatly with the concepts mentioned here.

You’ll also want to read up on Playbook Roles and Include Statements, as the ‘pre_task’ and ‘post_task’ concepts are the places where you would typically call these modules.

Rolling Update Batch Size

New in version 0.7.

By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the ‘serial’ keyword:

- name: test play
  hosts: webservers
  serial: 3

In the above example, if we had 100 hosts, 3 hosts in the group ‘webservers’ would complete the play completely before moving on to the next 3 hosts.

The ‘serial’ keyword can also be specified as a percentage in Ansible 1.8 and later, which will be applied to the total number of hosts in a play, in order to determine the number of hosts per pass:

- name: test play
  hosts: webservers
  serial: "30%"

If the number of hosts does not divide equally into the number of passes, the final pass will contain the remainder.

Note

No matter how small the percentage, the number of hosts per pass will always be 1 or greater.

Maximum Failure Percentage

New in version 1.3.

By default, Ansible will continue executing actions as long as there are hosts in the group that have not yet failed. In some situations, such as with the rolling updates described above, it may be desirable to abort the play when a certain threshold of failures has been reached. To achieve this, as of version 1.3 you can set a maximum failure percentage on a play as follows:

- hosts: webservers
  max_fail_percentage: 30
  serial: 10

In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.

Note

The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort when 2 of the systems failed, the percentage should be set at 49 rather than 50.

Delegation

New in version 0.7.

This isn’t actually specific to rolling updates, but comes up frequently in those cases.

If you want to perform a task on one host with reference to other hosts, use the ‘delegate_to’ keyword on a task. This is ideal for placing nodes in a load balanced pool, or removing them. It is also very useful for controlling outage windows. Using this with the ‘serial’ keyword to control the number of hosts executing at one time is also a good idea:

---

- hosts: webservers
  serial: 5

  tasks:

  - name: take out of load balancer pool
    command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
    delegate_to: 127.0.0.1

  - name: actual steps would go here
    yum: name=acme-web-stack state=latest

  - name: add back to load balancer pool
    command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
    delegate_to: 127.0.0.1

These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: ‘local_action’. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1:

---

# ...

  tasks:

  - name: take out of load balancer pool
    local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}

# ...

  - name: add back to load balancer pool
    local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}

A common pattern is to use a local action to call ‘rsync’ to recursively copy files to the managed servers. Here is an example:

---
# ...
  tasks:

  - name: recursively copy files from management server to target
    local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/

Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync will need to ask for a passphrase.

Run Once

New in version 1.7.

In some cases there may be a need to only run a task one time and only on one host. This can be achieved by configuring “run_once” on a task:

---
# ...

  tasks:

    # ...

    - command: /opt/application/upgrade_db.py
      run_once: true

    # ...

This can be optionally paired with “delegate_to” to specify an individual host to execute on:

- command: /opt/application/upgrade_db.py
  run_once: true
  delegate_to: web01.example.org

When “run_once” is not used with “delegate_to”, the task will execute on the first host, as defined by inventory, in the group(s) of hosts targeted by the play (e.g. webservers[0] if the play targeted “hosts: webservers”).

This approach is similar to, but more concise than, applying a conditional to a task such as:

- command: /opt/application/upgrade_db.py
  when: inventory_hostname == groups['webservers'][0]
Local Playbooks

It may be useful to run a playbook locally, rather than by connecting over SSH. This can be useful for ensuring the configuration of a system by putting a playbook in a crontab. This may also be used to run a playbook inside an OS installer, such as an Anaconda kickstart.

To run an entire playbook locally, just set the “hosts:” line to “hosts: 127.0.0.1” and then run the playbook like so:

ansible-playbook playbook.yml --connection=local

Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook use the default remote connection type:

- hosts: 127.0.0.1
  connection: local
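As a sketch, a local play can be combined with remote plays in the same playbook (the group name and task contents below are illustrative):

```yaml
---
# This play runs on the control machine itself
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: record the deployment start time on the control machine
      command: date

# Later plays in the same playbook still use the default SSH connection
- hosts: webservers
  tasks:
    - name: check uptime on the remote hosts
      command: uptime
```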
Interrupt execution on any error

With the ‘any_errors_fatal’ option, any failure on any host in a multi-host play is treated as fatal and Ansible exits immediately without waiting for the other hosts.

Sometimes ‘serial’ execution is unsuitable: the number of hosts is unpredictable (because of dynamic inventory) and speed is crucial (simultaneous execution is required), yet all tasks must be 100% successful for playbook execution to continue.

For example, consider a service located in several datacenters, with load balancers passing user traffic to the service, and a deploy playbook that upgrades the service’s deb packages. The playbook stages are:

  • disable traffic on load balancers (must be turned off simultaneously)
  • gracefully stop service
  • upgrade software (this step includes tests and starting service)
  • enable traffic on load balancers (must be turned on simultaneously)

The service can’t be stopped while load balancers are still “alive”; all of them must be disabled first. So the second stage can’t run if any host failed in stage 1.

For datacenter “A” playbook can be written this way:

---
- hosts: load_balancers_dc_a
  any_errors_fatal: True
  tasks:
  - name: 'shutting down datacenter [ A ]'
    command: /usr/bin/disable-dc

- hosts: frontends_dc_a
  tasks:
  - name: 'stopping service'
    command: /usr/bin/stop-software
  - name: 'updating software'
    command: /usr/bin/upgrade-software

- hosts: load_balancers_dc_a
  tasks:
  - name: 'Starting datacenter [ A ]'
    command: /usr/bin/enable-dc

In this example Ansible will start software upgrade on frontends only if all load balancers are successfully disabled.

See also

Playbooks
An introduction to playbooks
Ansible Examples on GitHub
Many examples of full-stack deployments
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Setting the Environment (and Working With Proxies)

New in version 1.1.

It is quite possible that you may need to get package updates through a proxy, or even get some package updates through a proxy and access other packages directly. Or a script you wish to call may need certain environment variables set to run properly.

Ansible makes it easy for you to configure your environment by using the ‘environment’ keyword. Here is an example:

- hosts: all
  remote_user: root

  tasks:

    - apt: name=cobbler state=installed
      environment:
        http_proxy: http://proxy.example.com:8080

The environment can also be stored in a variable, and accessed like so:

- hosts: all
  remote_user: root

  # here we make a variable named "proxy_env" that is a dictionary
  vars:
    proxy_env:
      http_proxy: http://proxy.example.com:8080

  tasks:

    - apt: name=cobbler state=installed
      environment: "{{ proxy_env }}"

While just proxy settings were shown above, any number of settings can be supplied. The most logical place to define an environment hash might be a group_vars file, like so:

---
# file: group_vars/boston

ntp_server: ntp.bos.example.com
backup: bak.bos.example.com
proxy_env:
  http_proxy: http://proxy.bos.example.com:8080
  https_proxy: http://proxy.bos.example.com:8080
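A play targeting those hosts can then reference the hash; as a sketch (the group and package names are illustrative, and the templated form is used so the variable is expanded):

```yaml
- hosts: boston
  remote_user: root

  tasks:
    # proxy_env is loaded automatically from group_vars/boston
    - apt: name=cobbler state=installed
      environment: "{{ proxy_env }}"
```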

See also

Playbooks
An introduction to playbooks
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Error Handling In Playbooks

Ansible normally has defaults that make sure to check the return codes of commands and modules, and it fails fast – forcing an error to be dealt with unless you decide otherwise.

Sometimes a command that returns a non-zero code isn’t an error. Sometimes a command might not always need to report that it ‘changed’ the remote system. This section describes how to change the default behavior of Ansible for certain tasks so output and error handling behavior is as desired.

Ignoring Failed Commands

New in version 0.6.

Generally playbooks will stop executing any more steps on a host that has a failure. Sometimes, though, you want to continue on. To do so, write a task that looks like this:

- name: this will not be counted as a failure
  command: /bin/false
  ignore_errors: yes

Note that the above system only governs the failure of the particular task, so if you use an undefined variable elsewhere, it will still raise an error that needs to be addressed.

Handlers and Failure

New in version 1.9.1.

When a task fails on a host, handlers which were previously notified will not be run on that host. This can lead to cases where an unrelated failure can leave a host in an unexpected state. For example, a task could update a configuration file and notify a handler to restart some service. If a task later on in the same play fails, the service will not be restarted despite the configuration change.

You can change this behavior with the --force-handlers command-line option, or by including force_handlers: True in a play, or force_handlers = True in ansible.cfg. When handlers are forced, they will run when notified even if a task fails on that host. (Note that certain errors could still prevent the handler from running, such as a host becoming unreachable.)
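As a minimal sketch of the play-level form (the service and file names are illustrative):

```yaml
- hosts: webservers
  force_handlers: True

  tasks:
    - template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify: restart nginx

    # even though this task fails, the notified handler still runs
    - command: /bin/false

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```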

Controlling What Defines Failure

New in version 1.4.

Suppose the error code of a command is meaningless, and that what really matters for detecting failure is the command’s output, for instance whether the string “FAILED” appears in it.

Ansible in 1.4 and later provides a way to specify this behavior as follows:

- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"

In previous versions of Ansible, this can still be accomplished as follows:

- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  ignore_errors: True

- name: fail the play if the previous command did not succeed
  fail: msg="the command failed"
  when: "'FAILED' in command_result.stderr"
Overriding The Changed Result

New in version 1.3.

When a shell/command or other module runs it will typically report “changed” status based on whether it thinks it affected machine state.

Sometimes you will know, based on the return code or output that it did not make any changes, and wish to override the “changed” result such that it does not appear in report output or does not cause handlers to fire:

tasks:

  - shell: /usr/bin/billybass --mode="take me to the river"
    register: bass_result
    changed_when: "bass_result.rc != 2"

  # this will never report 'changed' status
  - shell: wall 'beep'
    changed_when: False

See also

Playbooks
An introduction to playbooks
Best Practices
Best practices in playbooks
Conditionals
Conditional statements in playbooks
Variables
All about variables
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Using Lookups

Lookup plugins allow access of data in Ansible from outside sources. These plugins are evaluated on the Ansible control machine, and can include reading the filesystem but also contacting external datastores and services. These values are then made available using the standard templating system in Ansible, and are typically used to load variables or templates with information from those systems.

Note

This is considered an advanced feature, and many users will probably not rely on these features.

Note

Lookups occur on the local computer, not on the remote computer.

Note

Since 1.9 you can pass wantlist=True to lookups to use them in Jinja2 template “for” loops.

Intro to Lookups: Getting File Contents

The file lookup is the most basic lookup type.

Contents can be read off the filesystem as follows:

- hosts: all
  vars:
     contents: "{{ lookup('file', '/etc/foo.txt') }}"

  tasks:

     - debug: msg="the value of foo.txt is {{ contents }}"
The Password Lookup

Note

A great alternative to the password lookup plugin, if you don’t need to generate random passwords on a per-host basis, would be to use Vault. Read the documentation there and consider using it first, it will be more desirable for most applications.

password generates a random plaintext password and stores it in a file at a given filepath.

(Docs about crypted save modes are pending)

If the file exists already, its contents will be retrieved, behaving just like with_file. Variables like “{{ inventory_hostname }}” can be used in the filepath to set up random passwords per host (which simplifies password management in ‘host_vars’ variables).
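For example, a sketch of a per-host password lookup (the directory layout under ‘credentials/’ is illustrative):

```yaml
---
- hosts: all

  tasks:
    # each host reads (or generates) its own file credentials/<hostname>/mysqlpassword
    - debug:
        msg: "{{ lookup('password', 'credentials/' + inventory_hostname + '/mysqlpassword') }}"
```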

Generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9 and punctuation (”. , : - _”). The default length of a generated password is 20 characters. This length can be changed by passing an extra parameter:

---
- hosts: all

  tasks:

    # create a mysql user with a random password:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword length=15') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

    (...)

Note

If the file already exists, no data will be written to it. If the file has contents, those contents will be read in as the password. Empty files cause the password to return as an empty string.

Starting in version 1.4, password accepts a “chars” parameter to allow defining a custom character set for generated passwords. It accepts a comma-separated list of names that are either string module attributes (ascii_letters, digits, etc.) or are used literally:

---
- hosts: all

  tasks:

    # create a mysql user with a random password using only ascii letters:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile chars=ascii_letters') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

    # create a mysql user with a random password using only digits:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile chars=digits') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

    # create a mysql user with a random password using many different char sets:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile chars=ascii_letters,digits,hexdigits,punctuation') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

    (...)

To include a comma in the character set, use two commas ‘,,’ somewhere, preferably at the end. Quotes and double quotes are not supported.

The CSV File Lookup

New in version 1.5.

The csvfile lookup reads the contents of a file in CSV (comma-separated value) format. The lookup looks for the row where the first column matches keyname, and returns the value in the second column, unless a different column is specified.

The example below shows the contents of a CSV file named elements.csv with information about the periodic table of elements:

Symbol,Atomic Number,Atomic Mass
H,1,1.008
He,2,4.0026
Li,3,6.94
Be,4,9.012
B,5,10.81

We can use the csvfile plugin to look up the atomic number or atomic mass of Lithium by its symbol:

- debug: msg="The atomic number of Lithium is {{ lookup('csvfile', 'Li file=elements.csv delimiter=,') }}"
- debug: msg="The atomic mass of Lithium is {{ lookup('csvfile', 'Li file=elements.csv delimiter=, col=2') }}"

The csvfile lookup supports several arguments. The format for passing arguments is:

lookup('csvfile', 'key arg1=val1 arg2=val2 ...')

The first value in the argument is the key, which must be an entry that appears exactly once in column 0 (the first column, 0-indexed) of the table. All other arguments are optional.

Field      Default        Description
file       ansible.csv    Name of the file to load
delimiter  TAB            Delimiter used by the CSV file. As a special case, tab can be specified as either TAB or t.
col        1              The column to output, indexed by 0
default    empty string   Return value if the key is not in the CSV file

Note

The default delimiter is TAB, not comma.
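The ‘default’ argument from the table above can be exercised with a key that is not present in the file; ‘Og’ is a hypothetical missing symbol here:

```yaml
- debug: msg="Atomic mass of Og is {{ lookup('csvfile', 'Og file=elements.csv delimiter=, col=2 default=unknown') }}"
```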

The INI File Lookup

New in version 2.0.

The ini lookup reads the contents of a file in INI format (key1=value1). This plugin retrieves the value on the right side of the equals sign (‘=’) for a given key in a given section ([section]). You can also read a properties file, which in that case does not contain sections.

Here’s a simple example of an INI file with user/password configuration:

[production]
# My production information
user=robert
pass=somerandompassword

[integration]
# My integration information
user=gertrude
pass=anotherpassword

We can use the ini plugin to lookup user configuration:

- debug: msg="User in integration is {{ lookup('ini', 'user section=integration file=users.ini') }}"
- debug: msg="User in production  is {{ lookup('ini', 'user section=production  file=users.ini') }}"

This plugin is also useful for looking up values in Java properties files. Here’s a simple properties file we’ll use as an example:

user.name=robert
user.pass=somerandompassword

You can retrieve the user.name field with the following lookup:

- debug: msg="user.name is {{ lookup('ini', 'user.name type=property file=user.properties') }}"

The ini lookup supports several arguments like the csv plugin. The format for passing arguments is:

lookup('ini', 'key [type=<properties|ini>] [section=section] [file=file.ini] [re=true] [default=<defaultvalue>]')

The first value in the argument is the key, which must appear exactly once among the file’s keys. All other arguments are optional.

Field    Default       Description
type     ini           Type of the file. Can be ini or properties (for Java properties).
file     ansible.ini   Name of the file to load
section  global        Default section in which to look up the key.
re       False         The key is treated as a regular expression.
default  empty string  Return value if the key is not in the INI file

Note

In java properties files, there’s no need to specify a section.
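Similarly, the ‘default’ argument can be used for keys that may be absent; ‘timeout’ is a hypothetical key here:

```yaml
- debug: msg="Timeout is {{ lookup('ini', 'timeout section=production file=users.ini default=30') }}"
```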

The Credstash Lookup

Credstash is a small utility for managing secrets using AWS’s KMS and DynamoDB: https://github.com/LuminalOSS/credstash

First, you need to store your secrets with credstash:

$ credstash put my-github-password secure123

my-github-password has been stored

Example usage:

---
- name: "Test credstash lookup plugin -- get my github password"
  debug: msg="Credstash lookup! {{ lookup('credstash', 'my-github-password') }}"

You can specify regions or tables to fetch secrets from:

---
- name: "Test credstash lookup plugin -- get my other password from us-west-1"
  debug: msg="Credstash lookup! {{ lookup('credstash', 'my-other-password', region='us-west-1') }}"


- name: "Test credstash lookup plugin -- get the company's github password"
  debug: msg="Credstash lookup! {{ lookup('credstash', 'company-github-password', table='company-passwords') }}"
More Lookups

Various lookup plugins allow additional ways to iterate over data. In Loops you will learn how to use them to walk over collections of numerous types. However, they can also be used to pull in data from remote sources, such as shell commands or even key value stores. This section will cover lookup plugins in this capacity.

Here are some examples:

---
- hosts: all

  tasks:

     - debug: msg="{{ lookup('env','HOME') }} is an environment variable"

     - debug: msg="{{ item }} is a line from the result of this command"
       with_lines:
         - cat /etc/motd

     - debug: msg="{{ lookup('pipe','date') }} is the raw result of running this command"

     # redis_kv lookup requires the Python redis package
     - debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,somekey') }} is value in Redis for somekey"

     # dnstxt lookup requires the Python dnspython package
     - debug: msg="{{ lookup('dnstxt', 'example.com') }} is a DNS TXT record for example.com"

     - debug: msg="{{ lookup('template', './some_template.j2') }} is a value from evaluation of this template"

     - debug: msg="{{ lookup('etcd', 'foo') }} is a value from a locally running etcd"

     # The following lookups were added in 1.9
     - debug: msg="{{item}}"
       with_url:
            - 'https://github.com/gremlin.keys'

     # outputs the cartesian product of the supplied lists
     - debug: msg="{{item}}"
       with_cartesian:
            - list1
            - list2
            - list3

As an alternative, you can also assign lookup plugins to variables or use them elsewhere. These lookups are evaluated each time they are used in a task (or template):

vars:
  motd_value: "{{ lookup('file', '/etc/motd') }}"

tasks:

  - debug: msg="motd value is {{ motd_value }}"

See also

Playbooks
An introduction to playbooks
Conditionals
Conditional statements in playbooks
Variables
All about variables
Loops
Looping in playbooks
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Prompts

When running a playbook, you may wish to prompt the user for certain input, and can do so with the ‘vars_prompt’ section.

A common use for this might be for asking for sensitive data that you do not want to record.

This has uses beyond security, for instance, you may use the same playbook for all software releases and would prompt for a particular release version in a push-script.

Here is the most basic example:

---
- hosts: all
  remote_user: root

  vars:
    from: "camelot"

  vars_prompt:
    - name: "name"
      prompt: "what is your name?"
    - name: "quest"
      prompt: "what is your quest?"
    - name: "favcolor"
      prompt: "what is your favorite color?"

If you have a variable that changes infrequently, it might make sense to provide a default value that can be overridden. This can be accomplished using the default argument:

vars_prompt:

  - name: "release_version"
    prompt: "Product release version"
    default: "1.0"

An alternative form of vars_prompt allows for hiding input from the user, and may later support some other options, but otherwise works equivalently:

vars_prompt:

  - name: "some_password"
    prompt: "Enter password"
    private: yes

  - name: "release_version"
    prompt: "Product release version"
    private: no

If Passlib is installed, vars_prompt can also crypt the entered value so you can use it, for instance, with the user module to define a password:

vars_prompt:

  - name: "my_password2"
    prompt: "Enter password2"
    private: yes
    encrypt: "sha512_crypt"
    confirm: yes
    salt_size: 7

You can use any crypt scheme supported by ‘Passlib’:

  • des_crypt - DES Crypt
  • bsdi_crypt - BSDi Crypt
  • bigcrypt - BigCrypt
  • crypt16 - Crypt16
  • md5_crypt - MD5 Crypt
  • bcrypt - BCrypt
  • sha1_crypt - SHA-1 Crypt
  • sun_md5_crypt - Sun MD5 Crypt
  • sha256_crypt - SHA-256 Crypt
  • sha512_crypt - SHA-512 Crypt
  • apr_md5_crypt - Apache’s MD5-Crypt variant
  • phpass - PHPass’ Portable Hash
  • pbkdf2_digest - Generic PBKDF2 Hashes
  • cta_pbkdf2_sha1 - Cryptacular’s PBKDF2 hash
  • dlitz_pbkdf2_sha1 - Dwayne Litzenberger’s PBKDF2 hash
  • scram - SCRAM Hash
  • bsd_nthash - FreeBSD’s MCF-compatible nthash encoding

However, the only parameters accepted are ‘salt’ or ‘salt_size’. You can use your own salt using ‘salt’, or have one generated automatically using ‘salt_size’. If nothing is specified, a salt of size 8 will be generated.
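Tying this together, a sketch of feeding the crypted prompt value to the user module (the account name ‘deployer’ is illustrative):

```yaml
---
- hosts: all
  remote_user: root

  vars_prompt:
    - name: "my_password2"
      prompt: "Enter password2"
      private: yes
      encrypt: "sha512_crypt"
      confirm: yes
      salt_size: 7

  tasks:
    # the user module expects an already-crypted password,
    # which is exactly what the encrypted prompt provides
    - user: name=deployer password={{ my_password2 }}
```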

See also

Playbooks
An introduction to playbooks
Conditionals
Conditional statements in playbooks
Variables
All about variables
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Tags

If you have a large playbook it may become useful to be able to run a specific part of the configuration without running the whole playbook.

Both plays and tasks support a “tags:” attribute for this reason.

Example:

tasks:

    - yum: name={{ item }} state=installed
      with_items:
         - httpd
         - memcached
      tags:
         - packages

    - template: src=templates/src.j2 dest=/etc/foo.conf
      tags:
         - configuration

If you wanted to just run the “configuration” and “packages” part of a very long playbook, you could do this:

ansible-playbook example.yml --tags "configuration,packages"

On the other hand, if you want to run a playbook without certain tasks, you could do this:

ansible-playbook example.yml --skip-tags "notification"

You may also apply tags to roles:

roles:
  - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] }

And you may also tag basic include statements:

- include: foo.yml tags=web,foo

Both of these apply the specified tags to every task inside the included file or role, so that these tasks can be selectively run when the playbook is invoked with the corresponding tags.

Special Tags

There is a special ‘always’ tag that will always run a task, unless specifically skipped (--skip-tags always).

Example:

tasks:

    - debug: msg="Always runs"
      tags:
        - always

    - debug: msg="runs when you use tag1"
      tags:
        - tag1

There are three more special tag keywords: ‘tagged’, ‘untagged’ and ‘all’, which run only tagged tasks, only untagged tasks, and all tasks respectively.

By default ansible runs as if ‘--tags all’ had been specified.

See also

Playbooks
An introduction to playbooks
Playbook Roles and Include Statements
Playbook organization by roles
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Vault

New in Ansible 1.5, “Vault” is a feature of ansible that allows keeping sensitive data such as passwords or keys in encrypted files, rather than as plaintext in your playbooks or roles. These vault files can then be distributed or placed in source control.

To enable this feature, the command line tool ansible-vault is used to edit files, and the command line flag --ask-vault-pass or --vault-password-file is used. Alternately, you may specify the location of a password file, or configure Ansible to always prompt for the password, in your ansible.cfg file. These options require no command line flag usage.

What Can Be Encrypted With Vault

The vault feature can encrypt any structured data file used by Ansible. This can include “group_vars/” or “host_vars/” inventory variables, variables loaded by “include_vars” or “vars_files”, or variable files passed on the ansible-playbook command line with “-e @file.yml” or “-e @file.json”. Role variables and defaults are also included!

Ansible tasks, handlers, and so on are also data so these can be encrypted with vault as well. To hide the names of variables that you’re using, you can encrypt the task files in their entirety. However, that might be a little too much and could annoy your coworkers :)

Creating Encrypted Files

To create a new encrypted data file, run the following command:

ansible-vault create foo.yml

First you will be prompted for a password. The password used with vault currently must be the same for all files you wish to use together at the same time.

After providing a password, the tool will launch whatever editor you have defined with $EDITOR, and defaults to vim. Once you are done with the editor session, the file will be saved as encrypted data.

The default cipher is AES (which is shared-secret based).

Editing Encrypted Files

To edit an encrypted file in place, use the ansible-vault edit command. This command will decrypt the file to a temporary file and allow you to edit the file, saving it back when done and removing the temporary file:

ansible-vault edit foo.yml
Rekeying Encrypted Files

Should you wish to change your password on a vault-encrypted file or files, you can do so with the rekey command:

ansible-vault rekey foo.yml bar.yml baz.yml

This command can rekey multiple data files at once and will ask for the original password and also the new password.

Encrypting Unencrypted Files

If you have existing files that you wish to encrypt, use the ansible-vault encrypt command. This command can operate on multiple files at once:

ansible-vault encrypt foo.yml bar.yml baz.yml
Decrypting Encrypted Files

If you have existing files that you no longer want to keep encrypted, you can permanently decrypt them by running the ansible-vault decrypt command. This command will save them unencrypted to the disk, so be sure you do not want ansible-vault edit instead:

ansible-vault decrypt foo.yml bar.yml baz.yml
Viewing Encrypted Files

Available since Ansible 1.8

If you want to view the contents of an encrypted file without editing it, you can use the ansible-vault view command:

ansible-vault view foo.yml bar.yml baz.yml
Running a Playbook With Vault

To run a playbook that contains vault-encrypted data files, you must pass one of two flags. To specify the vault password interactively:

ansible-playbook site.yml --ask-vault-pass

This prompt will then be used to decrypt (in memory only) any vault encrypted files that are accessed. Currently this requires that all files be encrypted with the same password.

Alternatively, passwords can be specified with a file or a script (the script version requires Ansible 1.7 or later). When using this flag, ensure permissions on the file are such that no one else can access your key, and do not add your key to source control:

ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt

ansible-playbook site.yml --vault-password-file ~/.vault_pass.py

The password should be a string stored as a single line in the file.

If you are using a script instead of a flat file, ensure that it is marked as executable, and that the password is printed to standard output. If your script needs to prompt for data, prompts can be sent to standard error.

This is something you may wish to do if using Ansible from a continuous integration system like Jenkins.

(The --vault-password-file option can also be used with the ansible-pull command if you wish, though this would require distributing the keys to your nodes, so understand the implications – vault is more intended for push mode.)

Start and Step

This shows a few alternative ways to run playbooks. These modes are very useful for testing new plays or debugging.

Start-at-task

If you want to start executing your playbook at a particular task, you can do so with the --start-at-task option:

ansible-playbook playbook.yml --start-at-task="install packages"

The above will start executing your playbook at a task named “install packages”.

Step

Playbooks can also be executed interactively with --step:

ansible-playbook playbook.yml --step

This will cause ansible to stop on each task, and ask if it should execute that task. Say you had a task called “configure ssh”, the playbook run will stop and ask:

Perform task: configure ssh (y/n/c):

Answering “y” will execute the task, answering “n” will skip the task, and answering “c” will continue executing all the remaining tasks without asking.

About Modules

Introduction

Modules (also referred to as “task plugins” or “library plugins”) are what do the actual work in Ansible; they are what gets executed in each playbook task. You can also run a single module using the ‘ansible’ command.

Let’s review how we execute three different modules from the command line:

ansible webservers -m service -a "name=httpd state=started"
ansible webservers -m ping
ansible webservers -m command -a "/sbin/reboot -t now"

Each module supports taking arguments. Nearly all modules take key=value arguments, space delimited. Some modules take no arguments, and the command/shell modules simply take the string of the command you want to run.

From playbooks, Ansible modules are executed in a very similar way:

- name: reboot the servers
  action: command /sbin/reboot -t now

Which can be abbreviated to:

- name: reboot the servers
  command: /sbin/reboot -t now

Another way to pass arguments to a module is using YAML syntax, also called ‘complex args’:

- name: restart webserver
  service:
    name: httpd
    state: restarted

All modules technically return JSON format data, though if you are using the command line or playbooks, you don’t really need to know much about that. If you’re writing your own module, this matters, and it means you do not have to write modules in any particular language: you get to choose.

Modules strive to be idempotent, meaning they will seek to avoid changes to the system unless a change needs to be made. When using Ansible playbooks, these modules can trigger ‘change events’ in the form of notifying ‘handlers’ to run additional tasks.
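As a sketch of that behavior (the file and service names are assumptions), the copy task below only reports a change, and therefore only notifies the handler, when the deployed file content actually differs:

```yaml
---
- hosts: web
  tasks:
    - name: deploy nginx config       # idempotent: no change means no notify
      copy: src=nginx.conf dest=/etc/nginx/nginx.conf
      notify:
        - restart nginx

  handlers:
    - name: restart nginx             # runs only if a task reported a change
      service: name=nginx state=restarted
```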

Documentation for each module can be accessed from the command line with the ansible-doc tool:

ansible-doc yum

A list of all installed modules is also available:

ansible-doc -l

See also

Introduction To Ad-Hoc Commands
Examples of using modules in /usr/bin/ansible
Playbooks
Examples of using modules with /usr/bin/ansible-playbook
Developing Modules
How to write your own modules
Python API
Examples of using modules with the Python API
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel
Core Modules

These are modules that the core ansible team maintains and will always ship with ansible itself. They will also receive slightly higher priority for all requests than those in the “extras” repos.

The source of these modules is hosted on GitHub in the ansible-modules-core repo.

If you believe you have found a bug in a core module and are already running the latest stable or development version of Ansible, first look in the issue tracker at github.com/ansible/ansible-modules-core to see if a bug has already been filed. If not, we would be grateful if you would file one.

Should you have a question rather than a bug report, inquiries are welcome on the ansible-project google group or on Ansible’s “#ansible” channel, located on irc.freenode.net. Development-oriented topics should instead use the similar ansible-devel google group.

Documentation updates for these modules can also be made directly in the module itself by submitting a pull request to the module source code; just look for the “DOCUMENTATION” block in the source tree.

Extras Modules

These modules are currently shipped with Ansible, but might be shipped separately in the future. They are also mostly maintained by the community. Non-core modules are still fully usable, but may receive slightly lower response rates for issues and pull requests.

Popular “extras” modules may be promoted to core modules over time.

The source for these modules is hosted on GitHub in the ansible-modules-extras repo.

If you believe you have found a bug in an extras module and are already running the latest stable or development version of Ansible, first look in the issue tracker at github.com/ansible/ansible-modules-extras to see if a bug has already been filed. If not, we would be grateful if you would file one.

Should you have a question rather than a bug report, inquiries are welcome on the ansible-project google group or on Ansible’s “#ansible” channel, located on irc.freenode.net. Development-oriented topics should instead use the similar ansible-devel google group.

Documentation updates for these modules can also be made directly in the module itself by submitting a pull request to the module source code; just look for the “DOCUMENTATION” block in the source tree.

For help in developing on modules, should you be so inclined, please read Community Information & Contributing, Helping Testing PRs and Developing Modules.

Common Return Values

Ansible modules normally return a data structure that can be registered into a variable, or seen directly as output when using the ansible program. Here we document the values common to all modules; each module can optionally document its own unique return values. If these docs exist they will be visible through ansible-doc and https://docs.ansible.com.

Facts

Some modules return ‘facts’ to Ansible (e.g. setup). This is done through an ‘ansible_facts’ key, and anything inside it will automatically be available for the current host directly as a variable; there is no need to register this data.
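For instance, once fact gathering has run the setup module implicitly, its facts can be referenced directly without any register step (ansible_hostname and ansible_os_family are standard facts reported by setup):

```yaml
---
- hosts: all
  gather_facts: true                  # runs the setup module implicitly
  tasks:
    - name: show some gathered facts  # available directly, no register needed
      debug: msg="{{ ansible_hostname }} runs {{ ansible_os_family }}"
```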

Status

Every module must return a status indicating whether it was successful and whether anything changed. Ansible itself will return a status if it skips the module due to a user condition (when:) or when running in check mode against a module that does not support it.

Other common returns

It is common on failure or success to return a ‘msg’ that either explains the failure or makes a note about the execution. Some modules, specifically those that execute shell commands directly, will return stdout and stderr; if Ansible sees stdout in the results it will append a stdout_lines key, which is just a list of the lines in stdout.
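A small sketch of this: register the result of a command task, and the stdout, stderr, and derived stdout_lines keys become available on the registered variable:

```yaml
---
- hosts: all
  tasks:
    - name: list kernel release
      command: uname -r
      register: uname_out

    - name: print each output line    # stdout_lines is stdout split into a list
      debug: msg="{{ item }}"
      with_items: uname_out.stdout_lines
```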

See also

About Modules
Learn about available modules
GitHub Core modules directory
Browse source of core modules
Github Extras modules directory
Browse source of extras modules.
Mailing List
Development mailing list
irc.freenode.net
#ansible IRC chat channel

Ansible ships with a number of modules (called the ‘module library’) that can be executed directly on remote hosts or through Playbooks.

Users can also write their own modules. These modules can control system resources, like services, packages, or files (anything really), or handle executing system commands.

See also

Introduction To Ad-Hoc Commands
Examples of using modules in /usr/bin/ansible
Playbooks
Examples of using modules with /usr/bin/ansible-playbook
Developing Modules
How to write your own modules
Python API
Examples of using modules with the Python API
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel

Detailed Guides

This section is new and evolving. The idea here is to explore particular use cases in greater depth and provide a more “top down” explanation of some basic features.

Amazon Web Services Guide
Introduction

Ansible contains a number of modules for controlling Amazon Web Services (AWS). The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in an AWS context.

Requirements for the AWS modules are minimal.

All of the modules require and are tested against recent versions of boto. You’ll need this Python module installed on your control machine. Boto can be installed from your OS distribution or via pip (“pip install boto”).

Whereas classically ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.

In your playbooks, we’ll typically use the following pattern for provisioning steps:

- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - ...
Authentication

Authentication with the AWS-related modules is handled by either specifying your access and secret key as ENV variables or module arguments.

For environment variables:

export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'

For storing these in a vars_file, ideally encrypted with ansible-vault:

---
ec2_access_key: "--REMOVED--"
ec2_secret_key: "--REMOVED--"
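
Variables loaded from such a vars_file can then be passed to the EC2 modules as arguments. A hypothetical sketch (the vars_file path is an assumption; aws_access_key and aws_secret_key are the module parameter names):

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars_files:
    - vars/ec2_creds.yml              # hypothetical path, encrypted with ansible-vault
  tasks:
    - name: Provision an instance using vaulted credentials
      ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        image: "{{ ami_id }}"
        instance_type: t2.micro
        key_name: my_key
```
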
Provisioning

The ec2 module provisions and de-provisions instances within EC2.

An example of making sure there are only 5 instances tagged ‘Demo’ in EC2 follows.

In the example below, the “exact_count” of instances is set to 5. This means if there are 0 instances already existing, then 5 new instances would be created. If there were 2 instances, only 3 would be created, and if there were 8 instances, 3 instances would be terminated.

What is being counted is specified by the “count_tag” parameter. The parameter “instance_tags” is used to apply tags to the newly created instances:

# demo_setup.yml

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:

    - name: Provision a set of instances
      ec2:
         key_name: my_key
         group: test
         instance_type: t2.micro
         image: "{{ ami_id }}"
         wait: true
         exact_count: 5
         count_tag:
            Name: Demo
         instance_tags:
            Name: Demo
      register: ec2

The data about what instances are created is being saved by the “register” keyword in the variable named “ec2”.

From this, we’ll use the add_host module to dynamically create a host group consisting of these new instances. This facilitates performing configuration actions on the hosts immediately in a subsequent task:

# demo_setup.yml

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:

    - name: Provision a set of instances
      ec2:
         key_name: my_key
         group: test
         instance_type: t2.micro
         image: "{{ ami_id }}"
         wait: true
         exact_count: 5
         count_tag:
            Name: Demo
         instance_tags:
            Name: Demo
      register: ec2

    - name: Add all instance public IPs to host group
      add_host: hostname={{ item.public_ip }} groups=ec2hosts
      with_items: ec2.instances

With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps:

# demo_setup.yml

- name: Provision a set of instances
  hosts: localhost
  # ... AS ABOVE ...

- hosts: ec2hosts
  name: configuration play
  user: ec2-user
  gather_facts: true

  tasks:

     - name: Check NTP service
       service: name=ntpd state=started
Host Inventory

Once your nodes are spun up, you’ll probably want to talk to them again. With a cloud setup, it’s best to not maintain a static list of cloud hostnames in text files. Rather, the best way to handle this is to use the ec2 dynamic inventory script.

This will also dynamically select nodes that were even created outside of Ansible, and allow Ansible to manage them.

See the documentation for the ec2 inventory script for how to use this, then flip back over to this chapter.

Tags And Groups And Variables

When using the ec2 inventory script, hosts automatically appear in groups based on how they are tagged in EC2.

For instance, if a host is given the “class” tag with the value of “webserver”, it will be automatically discoverable via a dynamic group like so:

- hosts: tag_class_webserver
  tasks:
    - ping:

Using this philosophy can be a great way to keep systems separated by the function they perform.

In this example, if we wanted to define variables that are automatically applied to each machine tagged with the ‘class’ of ‘webserver’, ‘group_vars’ in ansible can be used. See Splitting Out Host and Group Specific Data.

Similar groups are available for regions and other classifications, and can be similarly assigned variables using the same mechanism.
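A minimal sketch, assuming the tag group name from above: a group_vars/tag_class_webserver file applies its variables to every host the inventory script places in that dynamic group (the variable names are illustrative):

```yaml
# group_vars/tag_class_webserver (hypothetical file)
---
http_port: 80
max_clients: 200
```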

Autoscaling with Ansible Pull

Amazon Autoscaling features automatically increase or decrease capacity based on load. There are also Ansible modules shown in the cloud documentation that can configure autoscaling policy.

When nodes come online, it may not be sufficient to wait for the next cycle of an ansible command to come along and configure that node.

To do this, pre-bake machine images which contain the necessary ansible-pull invocation. Ansible-pull is a command line tool that fetches a playbook from a git server and runs it locally.

One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context. For this reason, the autoscaling solution provided below in the next section can be a better approach.

Read Ansible-Pull for more information on pull-mode playbooks.
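One hedged way to bake this in while building the image: a playbook that uses the cron module to schedule a recurring ansible-pull run (the repository URL, playbook name, and schedule are assumptions):

```yaml
---
- hosts: image_build
  tasks:
    - name: schedule ansible-pull every 15 minutes
      cron:
        name: "ansible-pull"
        minute: "*/15"
        job: "ansible-pull -U https://git.example.com/ops/playbooks.git local.yml >> /var/log/ansible-pull.log 2>&1"
```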

Autoscaling with Ansible Tower

Ansible Tower also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See the Tower install and product documentation for more details.

A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared with remote hosts.

Ansible With (And Versus) CloudFormation

CloudFormation is an Amazon technology for defining a cloud stack as a JSON document.

Ansible modules provide an easier to use interface than CloudFormation in many cases, without defining a complex JSON document. This is recommended for most users.

However, for users that have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template to Amazon.

When using Ansible with CloudFormation, typically Ansible will be used with a tool like Packer to build images, and CloudFormation will launch those images, or ansible will be invoked through user data once the image comes online, or a combination of the two.

Please see the examples in the Ansible CloudFormation module for more details.

AWS Image Building With Ansible

Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation. To do this, one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for usage with the ec2 module or other Ansible AWS modules such as ec2_asg or the cloudformation module. Possible tools include Packer, aminator, and Ansible’s ec2_ami module.

Generally speaking, we find most users using Packer.

Documentation for the Ansible Packer provisioner can be found here.

If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.

Next Steps: Explore Modules

Ansible ships with lots of modules for configuring a wide array of EC2 services. Browse the “Cloud” category of the module documentation for a full list with examples.

See also

About Modules
All the documentation for Ansible modules
Playbooks
An introduction to playbooks
Delegation, Rolling Updates, and Local Actions
Delegation, useful for working with load balancers, clouds, and locally executed steps.
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel
Rackspace Cloud Guide
Introduction

Note

This section of the documentation is under construction. We are in the process of adding more examples about the Rackspace modules and how they work together. Once complete, there will also be examples for Rackspace Cloud in ansible-examples.

Ansible contains a number of core modules for interacting with Rackspace Cloud.

The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in a Rackspace Cloud context.

Prerequisites for using the rax modules are minimal. In addition to ansible itself, all of the modules require and are tested against pyrax 1.5 or higher. You’ll need this Python module installed on the execution host.

pyrax is not currently available in many operating system package repositories, so you will likely need to install it via pip:

$ pip install pyrax

The following steps will often execute from the control machine against the Rackspace Cloud API, so it makes sense to add localhost to the inventory file. (Ansible may not require this manual step in the future):

[localhost]
localhost ansible_connection=local

In playbook steps, we’ll typically be using the following pattern:

- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
Credentials File

The rax.py inventory script and all rax modules support a standard pyrax credentials file that looks like:

[rackspace_cloud]
username = myraxusername
api_key = d41d8cd98f00b204e9800998ecf8427e

Setting the environment variable RAX_CREDS_FILE to the path of this file tells Ansible where to load this information from.

More information about this credentials file can be found at https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating

Running from a Python Virtual Environment (Optional)

Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.

There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules; however, when instructed by setting the inventory variable ‘ansible_python_interpreter’, Ansible will use this specified path instead to find Python.

This can be a cause of confusion, as one may assume that modules running on ‘localhost’, or perhaps running via ‘local_action’, are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax.

If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:

[localhost]
localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python

Note

pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.

Provisioning

Now for the fun parts.

The ‘rax’ module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:

  • Avoiding installing the pyrax library on remote nodes
  • No need to encrypt and distribute credentials to remote nodes
  • Speed and simplicity

Note

Authentication with the Rackspace-related modules is handled by either specifying your username and API key as environment variables or passing them as module arguments, or by specifying the location of a credentials file.

Here is a basic example of provisioning an instance in ad-hoc mode:

$ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes" -c local

Here’s what it would look like in a playbook, assuming the parameters were defined in variables:

tasks:
  - name: Provision a set of instances
    local_action:
        module: rax
        name: "{{ rax_name }}"
        flavor: "{{ rax_flavor }}"
        image: "{{ rax_image }}"
        count: "{{ rax_count }}"
        group: "{{ group }}"
        wait: yes
    register: rax

The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called “raxhosts”, with each node’s hostname, IP address, and root password being added to the inventory.

- name: Add the instances we created (by public IP) to the group 'raxhosts'
  local_action:
      module: add_host
      hostname: "{{ item.name }}"
      ansible_ssh_host: "{{ item.rax_accessipv4 }}"
      ansible_ssh_pass: "{{ item.rax_adminpass }}"
      groups: raxhosts
  with_items: rax.success
  when: rax.action == 'create'

With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.

- name: Configuration play
  hosts: raxhosts
  user: root
  roles:
    - ntp
    - webserver

The method above ties the configuration of a host with the provisioning step. This isn’t always what you want, and leads us to the next section.

Host Inventory

Once your nodes are spun up, you’ll probably want to talk to them again. The best way to handle this is to use the “rax” inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up Ansible via other tools, including the Rackspace Cloud user interface.

The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in “rax” and can provide an easy way to sort between host groups and roles. If you don’t want to use the rax.py dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.

In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.

rax.py

To use the rackspace dynamic inventory script, copy rax.py into your inventory directory and make it executable. You can specify a credentials file for rax.py utilizing the RAX_CREDS_FILE environment variable.

Note

Dynamic inventory scripts (like rax.py) are saved in /usr/share/ansible/inventory if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to $VIRTUALENV/share/inventory.

Note

Users of Ansible Tower will note that dynamic inventory is natively supported by Tower; all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps.

$ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup

rax.py also accepts a RAX_REGION environment variable, which can contain an individual region, or a comma separated list of regions.

When using rax.py, you will not have a ‘localhost’ defined in the inventory.

As mentioned previously, you will often be running most of these modules outside of the host loop, and will need ‘localhost’ defined. The recommended way to do this is to create an inventory directory and place both the rax.py script and a file containing localhost in it.

Executing ansible or ansible-playbook and specifying the inventory directory instead of an individual file will cause Ansible to evaluate each file in that directory for inventory.

Let’s test our inventory script to see if it can talk to Rackspace Cloud.

$ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup

Assuming things are properly configured, the rax.py inventory script will output information similar to the following, which will be utilized for inventory and variables.

{
    "ORD": [
        "test"
    ],
    "_meta": {
        "hostvars": {
            "test": {
                "ansible_ssh_host": "1.1.1.1",
                "rax_accessipv4": "1.1.1.1",
                "rax_accessipv6": "2607:f0d0:1002:51::4",
                "rax_addresses": {
                    "private": [
                        {
                            "addr": "2.2.2.2",
                            "version": 4
                        }
                    ],
                    "public": [
                        {
                            "addr": "1.1.1.1",
                            "version": 4
                        },
                        {
                            "addr": "2607:f0d0:1002:51::4",
                            "version": 6
                        }
                    ]
                },
                "rax_config_drive": "",
                "rax_created": "2013-11-14T20:48:22Z",
                "rax_flavor": {
                    "id": "performance1-1",
                    "links": [
                        {
                            "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
                            "rel": "bookmark"
                        }
                    ]
                },
                "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
                "rax_human_id": "test",
                "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
                "rax_image": {
                    "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
                    "links": [
                        {
                            "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
                            "rel": "bookmark"
                        }
                    ]
                },
                "rax_key_name": null,
                "rax_links": [
                    {
                        "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                        "rel": "self"
                    },
                    {
                        "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                        "rel": "bookmark"
                    }
                ],
                "rax_metadata": {
                    "foo": "bar"
                },
                "rax_name": "test",
                "rax_name_attr": "name",
                "rax_networks": {
                    "private": [
                        "2.2.2.2"
                    ],
                    "public": [
                        "1.1.1.1",
                        "2607:f0d0:1002:51::4"
                    ]
                },
                "rax_os-dcf_diskconfig": "AUTO",
                "rax_os-ext-sts_power_state": 1,
                "rax_os-ext-sts_task_state": null,
                "rax_os-ext-sts_vm_state": "active",
                "rax_progress": 100,
                "rax_status": "ACTIVE",
                "rax_tenant_id": "111111",
                "rax_updated": "2013-11-14T20:49:27Z",
                "rax_user_id": "22222"
            }
        }
    }
}
Standard Inventory

When utilizing a standard INI-formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.

This can be achieved with the rax_facts module and an inventory file similar to the following:

[test_servers]
hostname1 rax_region=ORD
hostname2 rax_region=ORD

And a playbook like this:

- name: Gather info about servers
  hosts: test_servers
  gather_facts: False
  tasks:
    - name: Get facts about servers
      local_action:
        module: rax_facts
        credentials: ~/.raxpub
        name: "{{ inventory_hostname }}"
        region: "{{ rax_region }}"
    - name: Map some facts
      set_fact:
        ansible_ssh_host: "{{ rax_accessipv4 }}"

While you don’t need to know how it works, it may be interesting to know what kind of variables are returned.

The rax_facts module provides facts like the following, which match the rax.py inventory script:

{
    "ansible_facts": {
        "rax_accessipv4": "1.1.1.1",
        "rax_accessipv6": "2607:f0d0:1002:51::4",
        "rax_addresses": {
            "private": [
                {
                    "addr": "2.2.2.2",
                    "version": 4
                }
            ],
            "public": [
                {
                    "addr": "1.1.1.1",
                    "version": 4
                },
                {
                    "addr": "2607:f0d0:1002:51::4",
                    "version": 6
                }
            ]
        },
        "rax_config_drive": "",
        "rax_created": "2013-11-14T20:48:22Z",
        "rax_flavor": {
            "id": "performance1-1",
            "links": [
                {
                    "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
                    "rel": "bookmark"
                }
            ]
        },
        "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
        "rax_human_id": "test",
        "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
        "rax_image": {
            "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
            "links": [
                {
                    "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
                    "rel": "bookmark"
                }
            ]
        },
        "rax_key_name": null,
        "rax_links": [
            {
                "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                "rel": "self"
            },
            {
                "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                "rel": "bookmark"
            }
        ],
        "rax_metadata": {
            "foo": "bar"
        },
        "rax_name": "test",
        "rax_name_attr": "name",
        "rax_networks": {
            "private": [
                "2.2.2.2"
            ],
            "public": [
                "1.1.1.1",
                "2607:f0d0:1002:51::4"
            ]
        },
        "rax_os-dcf_diskconfig": "AUTO",
        "rax_os-ext-sts_power_state": 1,
        "rax_os-ext-sts_task_state": null,
        "rax_os-ext-sts_vm_state": "active",
        "rax_progress": 100,
        "rax_status": "ACTIVE",
        "rax_tenant_id": "111111",
        "rax_updated": "2013-11-14T20:49:27Z",
        "rax_user_id": "22222"
    },
    "changed": false
}
Use Cases

This section covers some additional usage examples built around a specific use case.

Network and Server

Create an isolated cloud network and build a server

- name: Build Servers on an Isolated Network
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Network create request
      local_action:
        module: rax_network
        credentials: ~/.raxpub
        label: my-net
        cidr: 192.168.3.0/24
        region: IAD
        state: present

    - name: Server create request
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: web%04d.example.org
        flavor: 2
        image: ubuntu-1204-lts-precise-pangolin
        disk_config: manual
        networks:
          - public
          - my-net
        region: IAD
        state: present
        count: 5
        exact_count: yes
        group: web
        wait: yes
        wait_timeout: 360
      register: rax
Complete Environment

Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html

---
- name: Build environment
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Load Balancer create request
      local_action:
        module: rax_clb
        credentials: ~/.raxpub
        name: my-lb
        port: 80
        protocol: HTTP
        algorithm: ROUND_ROBIN
        type: PUBLIC
        timeout: 30
        region: IAD
        wait: yes
        state: present
        meta:
          app: my-cool-app
      register: clb

    - name: Network create request
      local_action:
        module: rax_network
        credentials: ~/.raxpub
        label: my-net
        cidr: 192.168.3.0/24
        state: present
        region: IAD
      register: network

    - name: Server create request
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: web%04d.example.org
        flavor: performance1-1
        image: ubuntu-1204-lts-precise-pangolin
        disk_config: manual
        networks:
          - public
          - private
          - my-net
        region: IAD
        state: present
        count: 5
        exact_count: yes
        group: web
        wait: yes
      register: rax

    - name: Add servers to web host group
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_pass: "{{ item.rax_adminpass }}"
        ansible_ssh_user: root
        groups: web
      with_items: rax.success
      when: rax.action == 'create'

    - name: Add servers to Load balancer
      local_action:
        module: rax_clb_nodes
        credentials: ~/.raxpub
        load_balancer_id: "{{ clb.balancer.id }}"
        address: "{{ item.rax_networks.private|first }}"
        port: 80
        condition: enabled
        type: primary
        wait: yes
        region: IAD
      with_items: rax.success
      when: rax.action == 'create'

- name: Configure servers
  hosts: web
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted

  tasks:
    - name: Install nginx
      apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
      notify:
        - restart nginx

    - name: Ensure nginx starts on boot
      service: name=nginx state=started enabled=yes

    - name: Create custom index.html
      copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
            owner=root group=root mode=0644
RackConnect and Managed Cloud

When using RackConnect version 2 or Rackspace Managed Cloud, Rackspace automation tasks are executed on the servers you create after they are successfully built. If your automation runs before the RackConnect or Managed Cloud automation completes, it can cause failures and leave servers unusable.

These examples show how to create servers and ensure that the Rackspace automation has completed before Ansible continues onward.

For simplicity, these examples are combined; both parts are only needed when using RackConnect. When using only Managed Cloud, the RackConnect portions can be ignored.

The RackConnect portions only apply to RackConnect version 2.

Using a Control Machine
- name: Create an exact count of servers
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Server build requests
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: web%03d.example.org
        flavor: performance1-1
        image: ubuntu-1204-lts-precise-pangolin
        disk_config: manual
        region: DFW
        state: present
        count: 1
        exact_count: yes
        group: web
        wait: yes
      register: rax

    - name: Add servers to in memory groups
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_pass: "{{ item.rax_adminpass }}"
        ansible_ssh_user: root
        rax_id: "{{ item.rax_id }}"
        groups: web,new_web
      with_items: rax.success
      when: rax.action == 'create'

- name: Wait for rackconnect and managed cloud automation to complete
  hosts: new_web
  gather_facts: false
  tasks:
    - name: Wait for rackconnect automation to complete
      local_action:
        module: rax_facts
        credentials: ~/.raxpub
        id: "{{ rax_id }}"
        region: DFW
      register: rax_facts
      until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
      retries: 30
      delay: 10

    - name: Wait for managed cloud automation to complete
      local_action:
        module: rax_facts
        credentials: ~/.raxpub
        id: "{{ rax_id }}"
        region: DFW
      register: rax_facts
      until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
      retries: 30
      delay: 10

- name: Base Configure Servers
  hosts: web
  roles:
    - role: users

    - role: openssh
      opensshd_PermitRootLogin: "no"

    - role: ntp
Using Ansible Pull
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
  hosts: all
  connection: local
  tasks:
    - name: Check for completed bootstrap
      stat:
        path: /etc/bootstrap_complete
      register: bootstrap

    - name: Get region
      command: xenstore-read vm-data/provider_data/region
      register: rax_region
      when: bootstrap.stat.exists != True

    - name: Wait for rackconnect automation to complete
      uri:
        url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
        return_content: yes
      register: automation_status
      when: bootstrap.stat.exists != True
      until: automation_status.json.automation_status|default('') == 'DEPLOYED'
      retries: 30
      delay: 10

    - name: Wait for managed cloud automation to complete
      wait_for:
        path: /tmp/rs_managed_cloud_automation_complete
        delay: 10
      when: bootstrap.stat.exists != True

    - name: Set bootstrap completed
      file:
        path: /etc/bootstrap_complete
        state: touch
        owner: root
        group: root
        mode: 0400

- name: Base Configure Servers
  hosts: all
  connection: local
  roles:
    - role: users

    - role: openssh
      opensshd_PermitRootLogin: "no"

    - role: ntp
Using Ansible Pull with XenStore
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
  hosts: all
  connection: local
  tasks:
    - name: Check for completed bootstrap
      stat:
        path: /etc/bootstrap_complete
      register: bootstrap

    - name: Wait for rackconnect_automation_status xenstore key to exist
      command: xenstore-exists vm-data/user-metadata/rackconnect_automation_status
      register: rcas_exists
      when: bootstrap.stat.exists != True
      failed_when: rcas_exists.rc|int > 1
      until: rcas_exists.rc|int == 0
      retries: 30
      delay: 10

    - name: Wait for rackconnect automation to complete
      command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
      register: rcas
      when: bootstrap.stat.exists != True
      until: rcas.stdout|replace('"', '') == 'DEPLOYED'
      retries: 30
      delay: 10

    - name: Wait for rax_service_level_automation xenstore key to exist
      command: xenstore-exists vm-data/user-metadata/rax_service_level_automation
      register: rsla_exists
      when: bootstrap.stat.exists != True
      failed_when: rsla_exists.rc|int > 1
      until: rsla_exists.rc|int == 0
      retries: 30
      delay: 10

    - name: Wait for managed cloud automation to complete
      command: xenstore-read vm-data/user-metadata/rax_service_level_automation
      register: rsla
      when: bootstrap.stat.exists != True
      until: rsla.stdout|replace('"', '') == 'Complete'
      retries: 30
      delay: 10

    - name: Set bootstrap completed
      file:
        path: /etc/bootstrap_complete
        state: touch
        owner: root
        group: root
        mode: 0400

- name: Base Configure Servers
  hosts: all
  connection: local
  roles:
    - role: users

    - role: openssh
      opensshd_PermitRootLogin: "no"

    - role: ntp
Advanced Usage
Autoscaling with Tower

Ansible Tower also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See the Tower documentation for more details.

A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared with remote hosts.

Orchestration in the Rackspace Cloud

Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:

  • Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
  • Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
  • A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
  • Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively
Google Cloud Platform Guide
Introduction

Note

This section of the documentation is under construction. We are in the process of adding more examples about all of the GCE modules and how they work together. Improvements via GitHub pull requests are welcome!

Ansible contains modules for managing Google Compute Engine resources, including creating instances, controlling network access, working with persistent disks, and managing load balancers. Additionally, there is an inventory plugin that can automatically suck down all of your GCE instances into Ansible dynamic inventory, and create groups by tag and other properties.

The GCE modules all require the apache-libcloud module, which you can install with pip:

$ pip install apache-libcloud

Note

If you’re using Ansible on Mac OS X, libcloud also needs access to a CA certificate chain. You’ll need to download one (you can get one here).

Credentials

To work with the GCE modules, you’ll first need to get some credentials. You can create a new set from the console by going to the “APIs and Auth” section and choosing to create a new client ID for a service account. Once you’ve created the client ID, download the generated private key (you must click Generate new P12 Key; the key is in PKCS12 format), then convert it by running the following command:

$ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem

There are two different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning and configuration actions:

  • by passing credentials to the modules directly
  • by populating a secrets.py file
Calling Modules By Passing Credentials

For the GCE modules you can specify the credentials as arguments:

  • service_account_email: email associated with the project
  • pem_file: path to the pem file
  • project_id: id of the project

For example, to create a new instance using the cloud module, you can use the following configuration:

- name: Create instance(s)
  hosts: localhost
  connection: local
  gather_facts: no

  vars:
    service_account_email: unique-id@developer.gserviceaccount.com
    pem_file: /path/to/project.pem
    project_id: project-id
    machine_type: n1-standard-1
    image: debian-7

  tasks:

   - name: Launch instances
     gce:
         instance_names: dev
         machine_type: "{{ machine_type }}"
         image: "{{ image }}"
         service_account_email: "{{ service_account_email }}"
         pem_file: "{{ pem_file }}"
         project_id: "{{ project_id }}"
Calling Modules with secrets.py

Create a file secrets.py that looks like the following, and put it in a folder that is on your $PYTHONPATH:

GCE_PARAMS = ('i...@project.googleusercontent.com', '/path/to/project.pem')
GCE_KEYWORD_PARAMS = {'project': 'project_id'}

Be sure to enter the email address of the service account you created, not the one from your main account.

Now the modules can be used as above, but the account information can be omitted.
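For instance, the earlier launch task can then drop the credential arguments entirely (a sketch; the instance name, machine type, and image values are illustrative):

```yaml
# secrets.py on $PYTHONPATH supplies the credentials, so no
# service_account_email, pem_file, or project_id arguments are needed.
- name: Launch instances
  gce:
    instance_names: dev
    machine_type: n1-standard-1
    image: debian-7
```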

GCE Dynamic Inventory

The best way to interact with your hosts is to use the gce inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.

Note that when using the inventory script gce.py, you also need to populate the gce.ini file that you can find in the contrib/inventory directory of the ansible checkout.

To use the GCE dynamic inventory script, copy gce.py from contrib/inventory into your inventory directory and make it executable. You can specify credentials for gce.py using the GCE_INI_PATH environment variable – the default is to look for gce.ini in the same directory as the inventory script.

Let’s see if inventory is working:

$ ./gce.py --list

You should see output describing the hosts you have, if any, running in Google Compute Engine.

Now let’s see if we can use the inventory script to talk to Google.

$ GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup
hostname | success >> {
  "ansible_facts": {
    "ansible_all_ipv4_addresses": [
      "x.x.x.x"
    ],

As with all dynamic inventory scripts in Ansible, you can configure the inventory path in ansible.cfg. The recommended way to use the inventory is to create an inventory directory, and place both the gce.py script and a file containing localhost in it. This can allow for cloud inventory to be used alongside local inventory (such as a physical datacenter) or machines running in different providers.

Executing ansible or ansible-playbook and specifying the inventory directory instead of an individual file will cause ansible to evaluate each file in that directory for inventory.
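A minimal sketch of that inventory-directory layout, assuming you copy gce.py in by hand (the checkout path varies, so that step is left as a comment):

```shell
# Build an inventory directory that mixes dynamic and static sources.
mkdir -p inventory
# Copy contrib/inventory/gce.py from your Ansible checkout into inventory/
# and make it executable with: chmod +x inventory/gce.py
printf 'localhost ansible_connection=local\n' > inventory/localhost
```

Pointing -i at inventory/ then makes both the GCE hosts and localhost available in the same run.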

Let’s once again use our inventory script to see if it can talk to Google Cloud:

$ ansible all -i inventory/ -m setup
hostname | success >> {
  "ansible_facts": {
    "ansible_all_ipv4_addresses": [
        "x.x.x.x"
    ],

The output should be similar to the previous command. If you want less output and just want to check for SSH connectivity, use -m ping instead.

Use Cases

For the following use case, let’s use this small shell script as a wrapper.

#!/usr/bin/env bash
PLAYBOOK="$1"

if [[ -z $PLAYBOOK ]]; then
  echo "You need to pass a playbook as argument to this script."
  exit 1
fi

export SSL_CERT_FILE=$(pwd)/cacert.pem
export ANSIBLE_HOST_KEY_CHECKING=False

if [[ ! -f "$SSL_CERT_FILE" ]]; then
  curl -O http://curl.haxx.se/ca/cacert.pem
fi

ansible-playbook -v -i inventory/ "$PLAYBOOK"
Create an instance

The GCE module provides the ability to provision instances within Google Compute Engine. The provisioning task is typically performed from your Ansible control server against Google Cloud’s API.

A playbook would look like this:

- name: Create instance(s)
  hosts: localhost
  gather_facts: no
  connection: local

  vars:
    machine_type: n1-standard-1 # default
    image: debian-7
    service_account_email: unique-id@developer.gserviceaccount.com
    pem_file: /path/to/project.pem
    project_id: project-id

  tasks:
    - name: Launch instances
      gce:
          instance_names: dev
          machine_type: "{{ machine_type }}"
          image: "{{ image }}"
          service_account_email: "{{ service_account_email }}"
          pem_file: "{{ pem_file }}"
          project_id: "{{ project_id }}"
          tags: webserver
      register: gce

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=60
      with_items: gce.instance_data

    - name: Add host to groupname
      add_host: hostname={{ item.public_ip }} groupname=new_instances
      with_items: gce.instance_data

- name: Manage new instances
  hosts: new_instances
  connection: ssh
  sudo: True
  roles:
    - base_configuration
    - production_server

Note that use of the “add_host” module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines in the ‘new_instances’ group, if so desired. Any sort of arbitrary configuration is possible at this point.

Configuring instances in a group

All of the created instances in GCE are grouped by tag. Since this is a cloud, it’s probably best to ignore hostnames and just focus on group management.

Normally we’d also use roles here, but the following example is a simple one. Here we will also use the “gce_net” module to open up access to port 80 on these nodes.

The variables in the ‘vars’ section could also be kept in a ‘vars_files’ file or something encrypted with Ansible-vault, if you so choose. This is just a basic example of what is possible:

- name: Setup web servers
  hosts: tag_webserver
  gather_facts: no

  vars:
    machine_type: n1-standard-1 # default
    image: debian-7
    service_account_email: unique-id@developer.gserviceaccount.com
    pem_file: /path/to/project.pem
    project_id: project-id

  tasks:

    - name: Install lighttpd
      apt: pkg=lighttpd state=installed
      sudo: True

    - name: Allow HTTP
      local_action: gce_net
      args:
        fwname: "all-http"
        name: "default"
        allowed: "tcp:80"
        state: "present"
        service_account_email: "{{ service_account_email }}"
        pem_file: "{{ pem_file }}"
        project_id: "{{ project_id }}"

By pointing your browser to the IP of the server, you should see a page welcoming you.

Updates to this documentation are welcome; use the GitHub link at the top right of this page if you would like to make additions!

Using Vagrant and Ansible
Introduction

Vagrant is a tool to manage virtual machine environments, and allows you to configure and use reproducible work environments on top of various virtualization and cloud platforms. It also has integration with Ansible as a provisioner for these virtual machines, and the two tools work together well.

This guide will describe how to use Vagrant and Ansible together.

If you’re not familiar with Vagrant, you should visit the documentation.

This guide assumes that you already have Ansible installed and working. Running from a Git checkout is fine. Follow the Installation guide for more information.

Vagrant Setup

The first step once you’ve installed Vagrant is to create a Vagrantfile and customize it to suit your needs. This is covered in detail in the Vagrant documentation, but here is a quick example:

$ mkdir vagrant-test
$ cd vagrant-test
$ vagrant init precise32 http://files.vagrantup.com/precise32.box

This will create a file called Vagrantfile that you can edit to suit your needs. The default Vagrantfile has a lot of comments. Here is a simplified example that includes a section to use the Ansible provisioner:

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
    config.vm.box = "precise32"
    config.vm.box_url = "http://files.vagrantup.com/precise32.box"

    config.vm.network :public_network

    config.vm.provision "ansible" do |ansible|
        ansible.playbook = "playbook.yml"
    end
end

The Vagrantfile has a lot of options, but these are the most important ones. Notice the config.vm.provision section that refers to an Ansible playbook called playbook.yml in the same directory as the Vagrantfile. Vagrant runs the provisioner once the virtual machine has booted and is ready for SSH access.

$ vagrant up

This will start the VM and run the provisioning playbook.

There are a lot of Ansible options you can configure in your Vagrantfile. Some particularly useful options are ansible.extra_vars, ansible.sudo, ansible.sudo_user, and ansible.host_key_checking, which you can disable to avoid SSH connection problems with new virtual machines.

Visit the Ansible Provisioner documentation for more information.

To re-run a playbook on an existing VM, just run:

$ vagrant provision

This will re-run the playbook.

Running Ansible Manually

Sometimes you may want to run Ansible manually against the machines. This is pretty easy to do.

Vagrant automatically creates an inventory file for each Vagrant machine, located at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. It configures the inventory file according to the SSH tunnel that Vagrant automatically creates, and executes ansible-playbook with the correct username and SSH key options to allow access. A typical automatically-created inventory file may look something like this:

# Generated by Vagrant

machine ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

If you want to run Ansible manually, you will want to make sure to pass ansible or ansible-playbook commands the correct arguments for the username (usually vagrant) and the SSH key (since Vagrant 1.7.0, this will be something like .vagrant/machines/[machine name]/[provider]/private_key), and the autogenerated inventory file.

Here is an example:

$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant playbook.yml

Note: Vagrant versions prior to 1.7.0 will use the private key located at ~/.vagrant.d/insecure_private_key.

See also

Vagrant Home
The Vagrant homepage with downloads
Vagrant Documentation
Vagrant Documentation
Ansible Provisioner
The Vagrant documentation for the Ansible provisioner
Playbooks
An introduction to playbooks
Continuous Delivery and Rolling Upgrades
Introduction

Continuous Delivery is the concept of frequently delivering updates to your software application.

The idea is that by updating more often, you do not have to wait for a specific timed period, and your organization gets better at the process of responding to change.

Some Ansible users are deploying updates to their end users on an hourly or even more frequent basis – sometimes every time there is an approved code change. To achieve this, you need tools to be able to quickly apply those updates in a zero-downtime way.

This document describes in detail how to achieve this goal, using one of Ansible’s most complete example playbooks as a template: lamp_haproxy. This example uses a lot of Ansible features: roles, templates, and group variables, and it also comes with an orchestration playbook that can do zero-downtime rolling upgrades of the web application stack.

The playbooks deploy Apache, PHP, MySQL, Nagios, and HAProxy to a CentOS-based set of servers.

We’re not going to cover how to run these playbooks here. Read the included README in the github project along with the example for that information. Instead, we’re going to take a close look at every part of the playbook and describe what it does.

Site Deployment

Let’s start with site.yml. This is our site-wide deployment playbook. It can be used to initially deploy the site, as well as push updates to all of the servers:

---
# This playbook deploys the whole application stack in this site.

# Apply common configuration to all hosts
- hosts: all

  roles:
  - common

# Configure and deploy database servers.
- hosts: dbservers

  roles:
  - db

# Configure and deploy the web servers. Note that we include two roles
# here, the 'base-apache' role which simply sets up Apache, and 'web'
# which includes our example web application.

- hosts: webservers

  roles:
  - base-apache
  - web

# Configure and deploy the load balancer(s).
- hosts: lbservers

  roles:
  - haproxy

# Configure and deploy the Nagios monitoring node(s).
- hosts: monitoring

  roles:
  - base-apache
  - nagios

Note

If you’re not familiar with terms like playbooks and plays, you should review Playbooks.

In this playbook we have five plays. The first one targets all hosts and applies the common role to all of them. This is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply to all of the servers.

The next four plays run against specific host groups and apply specific roles to those servers. Along with the roles for Nagios monitoring, the database, and the web application, we’ve implemented a base-apache role that installs and configures a basic Apache setup. This is used by both the sample web application and the Nagios hosts.

Reusable Content: Roles

By now you should have a bit of understanding about roles and how they work in Ansible. Roles are a way to organize content (tasks, handlers, templates, and files) into reusable components.

This example has six roles: common, base-apache, db, haproxy, nagios, and web. How you organize your roles is up to you and your application, but most sites will have one or more common roles that are applied to all systems, and then a series of application-specific roles that install and configure particular parts of the site.

Roles can have variables and dependencies, and you can pass in parameters to roles to modify their behavior. You can read more about roles in the Playbook Roles and Include Statements section.
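As a quick sketch of role parameters (the role and variable names here are hypothetical, not part of lamp_haproxy):

```yaml
- hosts: webservers
  roles:
    - common
    # The same role applied twice with different parameters
    - { role: app, app_port: 8080 }
    - { role: app, app_port: 8081 }
```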

Configuration: Group Variables

Group variables are variables that are applied to groups of servers. They can be used in templates and in playbooks to customize behavior and to provide easily-changed settings and parameters. They are stored in a directory called group_vars in the same location as your inventory. Here is lamp_haproxy’s group_vars/all file. As you might expect, these variables are applied to all of the machines in your inventory:

---
httpd_port: 80
ntpserver: 192.168.1.2

This is a YAML file, and you can create lists and dictionaries for more complex variable structures. In this case, we are just setting two variables, one for the port for the web server, and one for the NTP server that our machines should use for time synchronization.
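To illustrate, a group_vars file can also hold lists and dictionaries; the variable names below are illustrative, not taken from lamp_haproxy:

```yaml
---
# A list of NTP servers rather than a single address
ntpservers:
  - 192.168.1.2
  - 192.168.1.3
# A dictionary grouping related database settings
database:
  port: 3306
  name: foodb
```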

Here’s another group variables file. This is group_vars/dbservers which applies to the hosts in the dbservers group:

---
mysqlservice: mysqld
mysql_port: 3306
dbuser: root
dbname: foodb
upassword: usersecret

If you look in the example, there are group variables for the webservers group and the lbservers group, similarly.

These variables are used in a variety of places. You can use them in playbooks, like this, in roles/db/tasks/main.yml:

- name: Create Application Database
  mysql_db: name={{ dbname }} state=present

- name: Create Application DB User
  mysql_user: name={{ dbuser }} password={{ upassword }}
              priv=*.*:ALL host='%' state=present

You can also use these variables in templates, like this, in roles/common/templates/ntp.conf.j2:

driftfile /var/lib/ntp/drift

restrict 127.0.0.1
restrict -6 ::1

server {{ ntpserver }}

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

You can see that the variable substitution syntax of {{ and }} is the same for both templates and variables. The syntax inside the curly braces is Jinja2, and you can do all sorts of operations and apply different filters to the data inside. In templates, you can also use for loops and if statements to handle more complex situations, like this, in roles/common/templates/iptables.j2:

{% if inventory_hostname in groups['dbservers'] %}
-A INPUT -p tcp  --dport 3306 -j  ACCEPT
{% endif %}

This is testing to see if the inventory name of the machine we’re currently operating on (inventory_hostname) exists in the inventory group dbservers. If so, that machine will get an iptables ACCEPT line for port 3306.

Here’s another example, from the same template:

{% for host in groups['monitoring'] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}

This loops over all of the hosts in the group called monitoring, and adds an ACCEPT line for each monitoring host’s default IPv4 address to the current machine’s iptables configuration, so that Nagios can monitor those hosts.

You can learn a lot more about Jinja2 and its capabilities here, and you can read more about Ansible variables in general in the Variables section.

The Rolling Upgrade

Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible’s orchestration features come into play. While some applications use the term ‘orchestration’ to mean basic ordering or command-blasting, Ansible refers to orchestration as ‘conducting machines like an orchestra’, and has a pretty sophisticated engine for it.

Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called rolling_upgrade.yml.

Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this:

- hosts: monitoring
  tasks: []

What’s going on here, and why are there no tasks? You might know that Ansible gathers “facts” from the servers before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution versions, etc. In our case, we need to know something about all of the monitoring servers in our environment before we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this pattern sometimes, and it’s a useful trick to know.

The next part is the update play. The first part looks like this:

- hosts: webservers
  user: root
  serial: 1

This is just a normal play definition, operating on the webservers group. The serial keyword tells Ansible how many servers to operate on at once. If it’s not specified, Ansible will parallelize these operations up to the default “forks” limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you have just a handful of webservers, you might set serial to 1, updating one host at a time. If you have 100, you might set serial to 10, for ten at a time.
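As a sketch, moving to larger batches only means changing the serial value; the rest of the play stays the same:

```yaml
- hosts: webservers
  user: root
  serial: 10   # update ten hosts per batch instead of one
```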

Here is the next part of the update play:

pre_tasks:
- name: disable nagios alerts for this host webserver service
  nagios: action=disable_alerts host={{ inventory_hostname }} services=webserver
  delegate_to: "{{ item }}"
  with_items: groups.monitoring

- name: disable the server in haproxy
  shell: echo "disable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
  delegate_to: "{{ item }}"
  with_items: groups.lbservers

The pre_tasks keyword just lets you list tasks to run before the roles are called. This will make more sense in a minute. If you look at the names of these tasks, you can see that we are disabling Nagios alerts and then removing the webserver that we are currently updating from the HAProxy load balancing pool.

The delegate_to and with_items arguments, used together, cause Ansible to loop over each monitoring server and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, “on behalf” of the webserver. In programming terms, the outer loop is the list of web servers, and the inner loop is the list of monitoring servers.

Note that the HAProxy step looks a little complicated. We’re using HAProxy in this example because it’s freely available, though if you have (for instance) an F5 or Netscaler in your infrastructure (or maybe you have an AWS Elastic IP setup?), you can use modules included in core Ansible to communicate with them instead. You might also wish to use other monitoring modules instead of nagios, but this just shows the main goal of the ‘pre tasks’ section – take the server out of monitoring, and take it out of rotation.

The next step simply re-applies the proper roles to the web servers. This will cause any configuration management declarations in the web and base-apache roles to be applied to the web servers, including an update of the web application code itself. We don’t have to do it this way; we could instead just update the web application, but this is a good example of how roles can be used to reuse tasks:

roles:
- common
- base-apache
- web

Finally, in the post_tasks section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool:

post_tasks:
- name: Enable the server in haproxy
  shell: echo "enable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
  delegate_to: "{{ item }}"
  with_items: groups.lbservers

- name: re-enable nagios alerts
  nagios: action=enable_alerts host={{ inventory_hostname }} services=webserver
  delegate_to: "{{ item }}"
  with_items: groups.monitoring

Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate modules instead.

Managing Other Load Balancers

In this example, we use the simple HAProxy load balancer to front-end the web servers. It’s easy to configure and easy to manage. As we have mentioned, Ansible has built-in support for a variety of other load balancers like Citrix NetScaler, F5 BigIP, Amazon Elastic Load Balancers, and more. See the About Modules documentation for more information.

For other load balancers, you may need to send shell commands to them (like we do for HAProxy above), or call an API, if your load balancer exposes one. For the load balancers for which Ansible has modules, you may want to run them as a local_action if they contact an API. You can read more about local actions in the Delegation, Rolling Updates, and Local Actions section. Should you develop anything interesting for some hardware where there is not a core module, it might make for a good module for core inclusion!

Continuous Delivery End-To-End

Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of organizations use a continuous integration tool like Jenkins or Atlassian Bamboo to tie the development, test, release, and deploy steps together. You may also want to use a tool like Gerrit to add a code review step to commits to either the application code itself, or to your Ansible playbooks, or both.

Depending on your environment, you might be deploying continuously to a test environment, running an integration test battery against that environment, and then deploying automatically into production. Or you could keep it simple and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you.

For integration with Continuous Integration systems, you can easily trigger playbook runs using the ansible-playbook command line tool, or, if you’re using Ansible Tower, the tower-cli or the built-in REST API. (The tower-cli command ‘joblaunch’ will spawn a remote job over the REST API and is pretty slick).

This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers, for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to easily manage complicated environments and automate common operations.

See also

lamp_haproxy example
The lamp_haproxy example discussed here.
Playbooks
An introduction to playbooks
Playbook Roles and Include Statements
An introduction to playbook roles
Variables
An introduction to Ansible variables
Ansible.com: Continuous Delivery
An introduction to Continuous Delivery with Ansible

Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/DigitalOcean, Continuous Deployment, and more.

Developer Information

Learn how to build modules of your own in any language, and also how to extend Ansible through several kinds of plugins. Explore Ansible’s Python API and write Python plugins to integrate with other solutions in your environment.

Python API

There are several interesting ways to use Ansible from an API perspective. You can use the Ansible python API to control nodes, you can extend Ansible to respond to various python events, you can write various plugins, and you can plug in inventory data from external data sources. This document covers the Runner and Playbook API at a basic level.

If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously, or have access control and logging demands, take a look at Ansible Tower as it has a very nice REST API that provides all of these things at a higher level.

Ansible itself is built on top of this API, so you have a considerable amount of power across the board. This chapter discusses the Python API.

Python API

The Python API is very powerful, and is how the ansible CLI and ansible-playbook are implemented.

It’s pretty simple:

import ansible.runner

runner = ansible.runner.Runner(
   module_name='ping',
   module_args='',
   pattern='web*',
   forks=10
)
datastructure = runner.run()

The run method returns results per host, grouped by whether they could be contacted or not. Return types are module specific, as expressed in the About Modules documentation:

{
    "dark" : {
       "web1.example.com" : "failure message"
    },
    "contacted" : {
       "web2.example.com" : 1
    }
}

A module can return any type of JSON data it wants, so Ansible can be used as a framework to rapidly build powerful applications and scripts.

Detailed API Example

The following script prints out the uptime information for all hosts:

#!/usr/bin/python

import ansible.runner
import sys

# construct the ansible runner and execute on all hosts
results = ansible.runner.Runner(
    pattern='*', forks=10,
    module_name='command', module_args='/usr/bin/uptime',
).run()

if results is None:
    print("No hosts found")
    sys.exit(1)

print("UP ***********")
for (hostname, result) in results['contacted'].items():
    if 'failed' not in result:
        print("%s >>> %s" % (hostname, result['stdout']))

print("FAILED *******")
for (hostname, result) in results['contacted'].items():
    if 'failed' in result:
        print("%s >>> %s" % (hostname, result['msg']))

print("DOWN *********")
for (hostname, result) in results['dark'].items():
    print("%s >>> %s" % (hostname, result))

Advanced programmers may also wish to read the source to ansible itself, for it uses the Runner() API (with all available options) to implement the command line tools ansible and ansible-playbook.

See also

Developing Dynamic Inventory Sources
Developing dynamic inventory integrations
Developing Modules
How to develop modules
Developing Plugins
How to develop plugins
Development Mailing List
Mailing list for development topics
irc.freenode.net
#ansible IRC chat channel
Developing Dynamic Inventory Sources

As described in Dynamic Inventory, ansible can pull inventory information from dynamic sources, including cloud sources.

How do we write a new one?

Simple! We just create a script or program that can return JSON in the right format when fed the proper arguments. You can do this in any language.

Script Conventions

When the external node script is called with the single argument --list, the script must return a JSON hash/dictionary of all the groups to be managed. Each group’s value should be either a hash/dictionary containing a list of each host/IP, potential child groups, and potential group variables, or simply a list of host/IP addresses, like so:

{
    "databases"   : {
        "hosts"   : [ "host1.example.com", "host2.example.com" ],
        "vars"    : {
            "a"   : true
        }
    },
    "webservers"  : [ "host2.example.com", "host3.example.com" ],
    "atlanta"     : {
        "hosts"   : [ "host1.example.com", "host4.example.com", "host5.example.com" ],
        "vars"    : {
            "b"   : false
        },
        "children": [ "marietta", "5points" ]
    },
    "marietta"    : [ "host6.example.com" ],
    "5points"     : [ "host7.example.com" ]
}

New in version 1.0.

Before version 1.0, each group could only have a list of hostnames/IP addresses, like the webservers, marietta, and 5points groups above.

When called with the arguments --host <hostname> (where <hostname> is a host from above), the script must return either an empty JSON hash/dictionary, or a hash/dictionary of variables to make available to templates and playbooks. Returning variables is optional; if the script does not wish to do this, returning an empty hash/dictionary is the way to go:

{
    "favcolor"   : "red",
    "ntpserver"  : "wolf.example.com",
    "monitoring" : "pack.example.com"
}
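Putting the two conventions together, a minimal inventory script in Python might look like the sketch below. The group and host data here are hypothetical, hard-coded for illustration; a real script would query an external source:

```python
#!/usr/bin/python
# Minimal dynamic inventory sketch following the --list / --host
# conventions described above.
import json
import sys

GROUPS = {
    "webservers": ["host2.example.com", "host3.example.com"],
    "databases": {
        "hosts": ["host1.example.com"],
        "vars": {"a": True},
    },
}

HOSTVARS = {
    "host1.example.com": {"ntpserver": "wolf.example.com"},
}

def inventory(argv):
    if len(argv) == 2 and argv[1] == "--list":
        return json.dumps(GROUPS)
    if len(argv) == 3 and argv[1] == "--host":
        # an empty hash is fine when the host has no variables
        return json.dumps(HOSTVARS.get(argv[2], {}))
    raise SystemExit("Usage: %s --list | --host <hostname>" % argv[0])

if __name__ == "__main__" and len(sys.argv) > 1:
    print(inventory(sys.argv))
```

Saved as an executable file and passed to ansible with -i, this behaves like a static inventory containing the same groups.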

Tuning the External Inventory Script

New in version 1.3.

The stock inventory script system detailed above works for all versions of Ansible, but calling --host for every host can be rather expensive, especially if it involves expensive API calls to a remote subsystem. In Ansible 1.3 or later, if the inventory script returns a top level element called “_meta”, it is possible to return all of the host variables in one inventory script call. When this meta element contains a value for “hostvars”, the inventory script will not be invoked with --host for each host. This results in a significant performance increase for large numbers of hosts, and also makes client side caching easier to implement for the inventory script.

The data to be added to the top level JSON dictionary looks like this:

{

    # results of inventory script as above go here
    # ...

    "_meta" : {
       "hostvars" : {
          "moocow.example.com"     : { "asdf" : 1234 },
          "llama.example.com"      : { "asdf" : 5678 }
       }
    }

}
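A small sketch of how a script might assemble such a single-call response (the host data is hypothetical):

```python
import json

def build_inventory(groups, hostvars):
    # Build one --list response that embeds all host variables under
    # _meta, so Ansible 1.3+ never has to call the script with --host.
    inventory = dict(groups)
    inventory["_meta"] = {"hostvars": hostvars}
    return json.dumps(inventory)

output = build_inventory(
    {"appservers": ["moocow.example.com", "llama.example.com"]},
    {"moocow.example.com": {"asdf": 1234},
     "llama.example.com": {"asdf": 5678}},
)
```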

See also

Python API
Python API to Playbooks and Ad Hoc Task Execution
Developing Modules
How to develop modules
Developing Plugins
How to develop plugins
Ansible Tower
REST API endpoint and GUI for Ansible, syncs with dynamic inventory
Development Mailing List
Mailing list for development topics
irc.freenode.net
#ansible IRC chat channel
Developing Modules

Ansible modules are reusable units of magic that can be used by the Ansible API, or by the ansible or ansible-playbook programs.

See About Modules for a list of various ones developed in core.

Modules can be written in any language and are found in the path specified by ANSIBLE_LIBRARY or the --module-path command line option.

By default, everything that ships with ansible is pulled from its source tree, but additional paths can be added.

The directory "./library", alongside your top level playbooks, is also automatically added as a search directory.

Should you develop an interesting Ansible module, consider sending a pull request to the modules-extras project. There’s also a core repo for more established and widely used modules. “Extras” modules may be promoted to core periodically, but there’s no fundamental difference in the end - both ship with ansible, all in one package, regardless of how you acquire ansible.

Tutorial

Let’s build a very-basic module to get and set the system time. For starters, let’s build a module that just outputs the current time.

We are going to use Python here but any language is possible. Only File I/O and outputting to standard out are required. So, bash, C++, clojure, Python, Ruby, whatever you want is fine.

Now Python Ansible modules contain some extremely powerful shortcuts (that all the core modules use) but first we are going to build a module the very hard way. The reason we do this is because modules written in any language OTHER than Python are going to have to do exactly this. We’ll show the easy way later.

So, here’s an example. You would never really need to build a module to set the system time; the ‘command’ module could already be used to do this, but we’re going to make one anyway.

Reading the modules that come with ansible (linked above) is a great way to learn how to write modules. Keep in mind, though, that some modules in ansible’s source tree are internalisms, so look at service or yum, and don’t stare too close into things like async_wrapper or you’ll turn to stone. Nobody ever executes async_wrapper directly.

Ok, let’s get going with an example. We’ll use Python. For starters, save this as a file named timetest.py:

#!/usr/bin/python

import datetime
import json

date = str(datetime.datetime.now())
print(json.dumps({
    "time" : date
}))

Testing Modules

There’s a useful test script in the source checkout for ansible:

git clone git@github.com:ansible/ansible.git --recursive
source ansible/hacking/env-setup
chmod +x ansible/hacking/test-module

Let’s run the script you just wrote with that:

ansible/hacking/test-module -m ./timetest.py

You should see output that looks something like this:

{u'time': u'2012-03-14 22:13:48.539183'}

If you did not, you might have a typo in your module, so recheck it and try again.

Reading Input

Let’s modify the module to allow setting the current time. We’ll do this by seeing if a key value pair in the form time=<string> is passed in to the module.

Ansible internally saves arguments to an arguments file. So we must read the file and parse it. The arguments file is just a string, so any form of arguments are legal. Here we’ll do some basic parsing to treat the input as key=value.

The example usage we are trying to achieve to set the time is:

time time="March 14 22:10"

If no time parameter is set, we’ll just leave the time as is and return the current time.

Note

This is obviously an unrealistic idea for a module. You’d most likely just use the shell module. However, it probably makes a decent tutorial.

Let’s look at the code. Read the comments as we’ll explain as we go. Note that this is highly verbose because it’s intended as an educational example. You can write modules a lot shorter than this:

#!/usr/bin/python

# import some python modules that we'll use.  These are all
# available in Python's core

import datetime
import sys
import json
import os
import shlex

# read the argument string from the arguments file
args_file = sys.argv[1]
args_data = open(args_file).read()

# for this module, we're going to do key=value style arguments
# this is up to each module to decide what it wants, but all
# core modules besides 'command' and 'shell' take key=value
# so this is highly recommended

arguments = shlex.split(args_data)
for arg in arguments:

    # ignore any arguments without an equals in it
    if "=" in arg:

        # split on the first '=' only, so values may themselves contain '='
        (key, value) = arg.split("=", 1)

        # if setting the time, the key 'time'
        # will contain the value we want to set the time to

        if key == "time":

            # now we'll affect the change.  Many modules
            # will strive to be 'idempotent', meaning they
            # will only make changes when the desired state
            # expressed to the module does not match
            # the current state.  Look at 'service'
            # or 'yum' in the main git tree for an example
            # of how that might look.

            rc = os.system("date -s \"%s\"" % value)

            # always handle all possible errors
            #
            # when returning a failure, include 'failed'
            # in the return data, and explain the failure
            # in 'msg'.  Both of these conventions are
            # required however additional keys and values
            # can be added.

            if rc != 0:
                print(json.dumps({
                    "failed" : True,
                    "msg"    : "failed setting the time"
                }))
                sys.exit(1)

            # when things do not fail, we do not
            # have any restrictions on what kinds of
            # data are returned, but it's always a
            # good idea to include whether or not
            # a change was made, as that will allow
            # notifiers to be used in playbooks.

            date = str(datetime.datetime.now())
            print(json.dumps({
                "time" : date,
                "changed" : True
            }))
            sys.exit(0)

# if no parameters are sent, the module may or
# may not error out, this one will just
# return the time

date = str(datetime.datetime.now())
print(json.dumps({
    "time" : date
}))

Let’s test that module:

ansible/hacking/test-module -m ./timetest.py -a time=\"March 14 12:23\"

This should return something like:

{"changed": true, "time": "2012-03-14 12:23:00.000307"}

Module Provided ‘Facts’

The ‘setup’ module that ships with Ansible provides many variables about a system that can be used in playbooks and templates. However, it’s possible to also add your own facts without modifying the system module. To do this, just have the module return an ansible_facts key, like so, along with other return data:

{
    "changed" : true,
    "rc" : 5,
    "ansible_facts" : {
        "leptons" : 5000,
        "colors" : {
            "red"   : "FF0000",
            "white" : "FFFFFF"
        }
    }
}

These ‘facts’ will be available to all statements called after that module (but not before) in the playbook. A good idea might be to make a module called ‘site_facts’ and always call it at the top of each playbook, though we’re always open to improving the selection of core facts in Ansible as well.
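A minimal ‘site_facts’-style module along those lines could be as simple as the sketch below; the fact names are made up for illustration:

```python
#!/usr/bin/python
# Sketch of a facts-only module: anything under the ansible_facts key
# becomes a variable available to later tasks in the playbook.
import json
import platform

def gather_site_facts():
    return {
        "changed": False,
        "ansible_facts": {
            "site_python_implementation": platform.python_implementation(),
            "site_datacenter": "dc1",  # hypothetical static fact
        },
    }

if __name__ == "__main__":
    # a module prints exactly one JSON document on stdout
    print(json.dumps(gather_site_facts()))
```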

Common Module Boilerplate

As mentioned, if you are writing a module in Python, there are some very powerful shortcuts you can use. Modules are still transferred as one file, but an arguments file is no longer needed, so these are not only shorter in terms of code, they are actually FASTER in terms of execution time.

Rather than mention these here, the best way to learn is to read some of the source of the modules that come with Ansible.

The ‘group’ and ‘user’ modules are reasonably non-trivial and showcase what this looks like.

Key parts include always ending the module file with:

from ansible.module_utils.basic import *
if __name__ == '__main__':
    main()

And instantiating the module class like:

module = AnsibleModule(
    argument_spec = dict(
        state     = dict(default='present', choices=['present', 'absent']),
        name      = dict(required=True),
        enabled   = dict(required=True, choices=BOOLEANS),
        something = dict(aliases=['whatever'])
    )
)

The AnsibleModule provides lots of common code for handling returns, parses your arguments for you, and allows you to check inputs.

Successful returns are made like this:

module.exit_json(changed=True, something_else=12345)

And failures are just as simple (where ‘msg’ is a required parameter to explain the error):

module.fail_json(msg="Something fatal happened")
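Conceptually, both calls just emit a single JSON document and exit. A rough plain-Python sketch of that behavior (an illustration, not Ansible’s actual implementation) would be:

```python
import json
import sys

def exit_json(**kwargs):
    # successful return: emit one JSON dict on stdout and exit 0;
    # 'changed' defaults to False if the module did not set it
    kwargs.setdefault("changed", False)
    print(json.dumps(kwargs))
    sys.exit(0)

def fail_json(**kwargs):
    # failed return: 'msg' is required and 'failed' is added for you
    assert "msg" in kwargs, "fail_json requires a msg parameter"
    kwargs["failed"] = True
    print(json.dumps(kwargs))
    sys.exit(1)
```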

There are also other useful functions in the module class, such as module.sha1(path). See lib/ansible/module_common.py in the source checkout for implementation details.

Again, modules developed this way are best tested with the hacking/test-module script in the git source checkout. Because of the magic involved, this is really the only way the scripts can function outside of Ansible.

If submitting a module to ansible’s core code, which we encourage, use of the AnsibleModule class is required.

Check Mode

New in version 1.1.

Modules may optionally support check mode. If the user runs Ansible in check mode, the module should try to predict whether changes will occur.

For your module to support check mode, you must pass supports_check_mode=True when instantiating the AnsibleModule object. The AnsibleModule.check_mode attribute will evaluate to True when check mode is enabled. For example:

module = AnsibleModule(
    argument_spec = dict(...),
    supports_check_mode=True
)

if module.check_mode:
    # Check if any changes would be made but don't actually make those changes
    module.exit_json(changed=check_if_system_state_would_be_changed())

Remember that, as module developer, you are responsible for ensuring that no system state is altered when the user enables check mode.

If your module does not support check mode, when the user runs Ansible in check mode, your module will simply be skipped.

Common Pitfalls

You should never do this in a module, because the output is supposed to be valid JSON:

print("some status message")

Modules must not output anything on standard error, because the system will merge standard out with standard error and prevent the JSON from parsing. Capturing standard error and returning it as a variable in the JSON on standard out is fine, and is, in fact, how the command module is implemented.

If a module returns stderr or otherwise fails to produce valid JSON, the actual output will still be shown in Ansible, but the command will not succeed.
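A sketch of that stderr-capturing pattern, using only the standard library (the function name is made up):

```python
import json
import subprocess
import sys

def run_and_report(cmd):
    # Run a command while keeping its stderr off the module's own
    # output stream, returning it as a field in the JSON instead;
    # this is the same idea the command module uses.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return json.dumps({
        "rc": proc.returncode,
        "stdout": out.decode(),
        "stderr": err.decode(),
        "failed": proc.returncode != 0,
    })

# a module would print exactly one JSON document on stdout:
result = run_and_report([sys.executable, "-c", "print('hello')"])
```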

Always use the hacking/test-module script when developing modules and it will warn you about these kind of things.

Conventions/Recommendations

As a reminder from the example code above, here are some basic conventions and guidelines:

  • If the module is addressing an object, the parameter for that object should be called ‘name’ whenever possible, or accept ‘name’ as an alias.
  • If you have a company module that returns facts specific to your installations, a good name for this module is site_facts.
  • Modules accepting boolean status should generally accept ‘yes’, ‘no’, ‘true’, ‘false’, or anything else a user may likely throw at them. The AnsibleModule common code supports this with “choices=BOOLEANS” and a module.boolean(value) casting function.
  • Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the module file, and have the module raise JSON error messages when the import fails.
  • Modules must be self-contained in one file to be auto-transferred by ansible.
  • If packaging modules in an RPM, they only need to be installed on the control machine and should be dropped into /usr/share/ansible. This is entirely optional and up to you.
  • Modules must output valid JSON only. The toplevel return type must be a hash (dictionary) although they can be nested. Lists or simple scalar values are not supported, though they can be trivially contained inside a dictionary.
  • In the event of failure, a key of ‘failed’ should be included, along with a string explanation in ‘msg’. Modules that raise tracebacks (stacktraces) are generally considered ‘poor’ modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the ‘failed’ element will be included for you automatically when you call ‘fail_json’.
  • Return codes from modules are not actually significant, but continue on with 0=success and non-zero=failure for reasons of future proofing.
  • As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form.
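The boolean convention above can be sketched as a plain casting function. This illustrates the behavior only; it is not Ansible’s actual code, and the accepted spellings may differ slightly:

```python
# Spellings accepted as booleans (an illustrative list).
BOOLEANS_TRUE = ["yes", "on", "1", "true", 1, True]
BOOLEANS_FALSE = ["no", "off", "0", "false", 0, False]

def boolean(value):
    # normalize the many spellings users throw at boolean options
    if isinstance(value, str):
        value = value.lower()
    if value in BOOLEANS_TRUE:
        return True
    if value in BOOLEANS_FALSE:
        return False
    raise ValueError("%r is not a valid boolean" % (value,))
```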
Documenting Your Module

All modules included in the CORE distribution must have a DOCUMENTATION string. This string MUST be a valid YAML document which conforms to the schema defined below. You may find it easier to start writing your DOCUMENTATION string in an editor with YAML syntax highlighting before you include it in your Python file.

Example

See an example documentation string in the checkout under examples/DOCUMENTATION.yml.

Include it in your module file like this:

#!/usr/bin/python
# Copyright header....

DOCUMENTATION = '''
---
module: modulename
short_description: This is a sentence describing the module
# ... snip ...
'''

The description and notes fields support formatting with some special macros.

These formatting functions are U(), M(), I(), and C() for URL, module, italic, and constant-width respectively. It is suggested to use C() for file and option names, and I() when referencing parameters; module names should be specified as M(module).

Examples (which typically contain colons, quotes, etc.) are difficult to format with YAML, so these must be written in plain text in an EXAMPLES string within the module like this:

EXAMPLES = '''
- action: modulename opt1=arg1 opt2=arg2
'''

The EXAMPLES section, just like the documentation section, is required in all module pull requests for new modules.

Building & Testing

Put your completed module file into the ‘library’ directory and then run the command: make webdocs. The new ‘modules.html’ file will be built and appear in the ‘docsite/’ directory.

Tip

If you’re having a problem with the syntax of your YAML you can validate it on the YAML Lint website.

Tip

You can set the environment variable ANSIBLE_KEEP_REMOTE_FILES=1 on the controlling host to prevent ansible from deleting the remote files so you can debug your module.

Module Paths

If you are having trouble getting your module “found” by ansible, be sure it is in a directory listed in ANSIBLE_LIBRARY (or passed via --module-path).

If you have a fork of one of the ansible module projects, do something like this:

ANSIBLE_LIBRARY=~/ansible-modules-core:~/ansible-modules-extras

This will make the items in your fork load ahead of what ships with Ansible. Just be sure you’re not reporting bugs on versions from your fork!

To be safe, if you’re working on a variant on something in Ansible’s normal distribution, it’s not a bad idea to give it a new name while you are working on it, to be sure you know you’re pulling your version.

Getting Your Module Into Ansible

High-quality modules with minimal dependencies can be included in Ansible, but modules (just due to the programming preferences of the developers) will need to be implemented in Python and use the AnsibleModule common code, and should generally use consistent arguments with the rest of the program. Stop by the mailing list to inquire about requirements if you like, and submit a github pull request to the extras project. Included modules will ship with ansible, and also have a chance to be promoted to ‘core’ status, which gives them slightly higher development priority (though they’ll work in exactly the same way).

Module checklist
  • The shebang should always be #!/usr/bin/python, this allows ansible_python_interpreter to work

  • Documentation: Make sure it exists
    • required should always be present, be it true or false
    • If required is false you need to document default, even if it’s ‘null’
    • default is not needed for required: true
    • Remove unnecessary doc like aliases: [] or choices: []
    • The version should be a string, not a float, and should reflect the current development version
    • Verify that arguments in the doc and the module spec dict are identical
    • For password / secret arguments no_log=True should be set
    • Requirements should be documented, using the requirements=[] field
    • Author should be set, name and github id at least
    • Made use of U() for urls, C() for files and options, I() for params, M() for modules?
    • GPL 3 License header
    • Does module use check_mode? Could it be modified to use it? Document it
    • Examples: make sure they are reproducible
    • Return: document the return structure of the module
  • Exceptions: The module must handle them. (exceptions are bugs)
    • Give out useful messages about what you were doing; you can add the exception message to that.
    • Avoid catchall exceptions, they are not very useful unless the underlying API gives very good error messages pertaining to the attempted action.
  • The module must not use sys.exit() –> use fail_json() from the module object

  • Import custom packages in try/except and handled with fail_json() in main() e.g.:

    try:
        import foo
        HAS_LIB=True
    except ImportError:
        HAS_LIB=False
    
  • The return structure should be consistent, even if NA/None are used for keys normally returned under other options.

  • Are module actions idempotent? If not document in the descriptions or the notes

  • Import module snippets (from ansible.module_utils.basic import *) at the bottom; this conserves line numbers for debugging.

  • Call your main() from a conditional so that it is possible to import and test it in the future, for example:

    if __name__ == '__main__':
        main()
    
  • Try to normalize parameters with other modules; you can have aliases for when the user is more familiar with the underlying API’s name for the option

  • Being pep8 compliant is nice, but not a requirement. Specifically, the 80 column limit now hinders readability more than it improves it

  • Avoid ‘action’/‘command’ style modules; they are imperative and not declarative, and there are other ways to express the same thing

  • Sometimes you want to split the module, especially if you are adding a list/info state; you may want a _facts version

  • If you are asking ‘how can I have a module execute other modules’ ... you want to write a role

  • Return values must be able to be serialized as JSON via the Python stdlib json library. Basic Python types (strings, ints, dicts, lists, etc.) are serializable. A common pitfall is to try returning an object via exit_json(). Instead, convert the fields you need from the object into the fields of a dictionary and return the dictionary.

  • Do not use urllib2 to handle urls. urllib2 does not natively verify TLS certificates and so is insecure for https. Instead, use either fetch_url or open_url from ansible.module_utils.urls.
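The serialization point above can be sketched like this; the Server class stands in for a hypothetical API object:

```python
import json

class Server(object):
    """Hypothetical object returned by some cloud API."""
    def __init__(self, name, state):
        self.name = name
        self.state = state
        self._connection = object()  # not JSON-serializable

def to_result(server):
    # copy only the JSON-friendly fields into a plain dict,
    # suitable for passing to exit_json()
    return {"name": server.name, "state": server.state}

result = to_result(Server("web1", "running"))
serialized = json.dumps(result)  # a plain dict serializes cleanly
```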

Windows modules checklist
  • Favour native PowerShell and .NET ways of doing things over calls to COM libraries or calls to native executables which may or may not be present in all versions of Windows

  • modules are written in PowerShell (.ps1 files) but the docs reside in a Python file (.py) of the same name

  • look at ansible/lib/ansible/module_utils/powershell.ps1 for common code, avoid duplication

  • start with:

    #!powershell
    
    then:

    <GPL header>

    then:

    # WANT_JSON
    # POWERSHELL_COMMON
  • Arguments:
    • Try and use state present and state absent like other modules

    • You need to check that all your mandatory args are present:

      If ($params.state) {
          $state = $params.state.ToString().ToLower()
          If (($state -ne 'started') -and ($state -ne 'stopped') -and ($state -ne 'restarted')) {
              Fail-Json $result "state is '$state'; must be 'started', 'stopped', or 'restarted'"
          }
      }
      
    • Look at existing modules for more examples of argument checking.

  • Results
    • The result object should always contain an attribute called changed, set to either $true or $false

    • Create your result object like this:

      $result = New-Object psobject @{
          changed = $false
          other_result_attribute = $some_value
      };

      If all is well, exit with:

      Exit-Json $result
      
    • Ensure anything you return, including errors, can be converted to JSON.

    • Be aware that exception messages could contain almost anything; ConvertTo-Json will fail if it encounters a trailing backslash in a string.

    • If all is not well, use Fail-Json to exit.

  • Have you tested for powershell 3.0 and 4.0 compliance?

Deprecating and making module aliases

Starting in 1.8 you can deprecate modules by renaming them with a preceding _, i.e. old_cloud.py to _old_cloud.py. This will keep the module available but hide it from the primary docs and listing.

You can also rename modules and keep an alias to the old name by using a symlink that starts with _. This example allows the stat module to be called with fileinfo, making the following examples equivalent:

ln -s stat.py _fileinfo.py
ansible -m stat -a "path=/tmp" localhost
ansible -m fileinfo -a "path=/tmp" localhost

See also

About Modules
Learn about available modules
Developing Plugins
Learn about developing plugins
Python API
Learn about the Python API for playbook and task execution
GitHub Core modules directory
Browse source of core modules
Github Extras modules directory
Browse source of extras modules.
Mailing List
Development mailing list
irc.freenode.net
#ansible IRC chat channel
Developing Plugins

Ansible is pluggable in a lot of other ways separate from inventory scripts and callbacks. Many of these features are there to cover fringe use cases and are infrequently needed, and others are pluggable simply because they are there to implement core features in ansible and were most convenient to be made pluggable.

This section will explore these features, though they are generally not common in terms of things people would look to extend quite as often.

Connection Type Plugins

By default, ansible ships with a ‘paramiko’ SSH, native ssh (just called ‘ssh’), and ‘local’ connection type, along with some minor players like ‘chroot’ and ‘jail’. All of these can be used in playbooks and with /usr/bin/ansible to decide how you want to talk to remote machines. The basics of these connection types are covered in the Getting Started section. Should you want to extend Ansible to support other transports (SNMP? Message bus? Carrier Pigeon?) it’s as simple as copying the format of one of the existing modules and dropping it into the connection plugins directory. The value of ‘smart’ for a connection allows selection of paramiko or openssh based on system capabilities, choosing ‘ssh’ if OpenSSH supports ControlPersist, in Ansible 1.2.1 and later. Previous versions did not support ‘smart’.

More documentation on writing connection plugins is pending, though you can jump into lib/ansible/plugins/connections and figure things out pretty easily.

Lookup Plugins

Language constructs like “with_fileglob” and “with_items” are implemented via lookup plugins. Just like other plugin types, you can write your own.

More documentation on writing lookup plugins is pending, though you can jump into lib/ansible/plugins/lookup and figure things out pretty easily.
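As a rough sketch (the plugin name and its behavior here are illustrative, not taken from the Ansible source), a lookup plugin is a class named LookupModule whose run() method returns a list of results:

```python
# Hypothetical lookup plugin sketch; for example, saved as
# lookup_plugins/upper.py next to a playbook.

class LookupModule(object):

    def __init__(self, *args, **kwargs):
        pass

    def run(self, terms, variables=None, **kwargs):
        # Lookup plugins must return a list of results. Here each term is
        # upper-cased, so "with_upper: [a, b]" would iterate over A and B.
        return [str(term).upper() for term in terms]
```

Check an existing plugin under lib/ansible/plugins/lookup for the authoritative interface; the exact run() signature has varied between Ansible versions.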

Vars Plugins

Playbook constructs like ‘host_vars’ and ‘group_vars’ work via ‘vars’ plugins. They inject additional variable data into ansible runs that did not come from an inventory, playbook, or command line. Note that variables can also be returned from inventory, so in most cases, you won’t need to write or understand vars_plugins.

More documentation on writing vars plugins is pending, though you can jump into lib/ansible/inventory/vars_plugins and figure things out pretty easily.

If you find yourself wanting to write a vars_plugin, it’s more likely you should write an inventory script instead.

Filter Plugins

If you want more Jinja2 filters available in your templates (filters like to_yaml and to_json are provided by default), you can add them by writing a filter plugin. Most of the time, when someone comes up with an idea for a new filter they would like to make available in a playbook, we'll just include it in 'core.py' instead.

Jump into lib/ansible/plugins/filter for details.
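A minimal sketch of a filter plugin, with illustrative names, might look like this when dropped into a filter_plugins/ directory next to a playbook:

```python
# Hypothetical filter plugin sketch; the filter and function names are
# illustrative. A FilterModule class maps filter names to callables,
# which then become usable in templates, e.g. {{ "abc" | reverse }}.

def reverse_string(value):
    return value[::-1]

class FilterModule(object):

    def filters(self):
        # Map the Jinja2 filter name to the implementing function.
        return {'reverse': reverse_string}
```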

Callbacks

Callbacks are one of the more interesting plugin types. Adding additional callback plugins to Ansible allows for adding new behaviors when responding to events.

Examples

Example callbacks are shown in lib/ansible/plugins/callback.

The log_plays callback is an example of how to record playbook events to a log file, and the mail callback sends email when playbooks complete.

The osx_say callback provided is particularly entertaining – it will respond with computer synthesized speech on OS X in relation to playbook events, and is guaranteed to entertain and/or annoy coworkers.

Configuring

To activate a callback, drop it in a callback directory as configured in ansible.cfg.

Development

More information will come later, though see the source of any of the existing callbacks and you should be able to get started quickly. They should be reasonably self-explanatory.
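As a rough sketch (the event method names follow the pre-2.0 callback convention; treat the details as assumptions and check an existing callback for the authoritative interface), a callback is a class whose methods are invoked as events occur, and any events it does not implement are simply ignored:

```python
# Hypothetical callback plugin sketch; it only tracks successful task runs
# per host and reports them when the playbook finishes.

class CallbackModule(object):

    def __init__(self):
        self.ok_hosts = []

    def runner_on_ok(self, host, res):
        # Called after a task succeeds on a host.
        self.ok_hosts.append(host)

    def playbook_on_stats(self, stats):
        # Called once at the end of the playbook run.
        print("Tasks succeeded on: %s" % ", ".join(sorted(set(self.ok_hosts))))
```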

Distributing Plugins

Plugins are loaded from both Python’s site_packages (those that ship with ansible) and a configured plugins directory, which defaults to /usr/share/ansible/plugins, in a subfolder for each plugin type:

* action_plugins
* lookup_plugins
* callback_plugins
* connection_plugins
* filter_plugins
* vars_plugins

To change this path, edit the ansible configuration file.
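For example, a few of the per-type path settings in ansible.cfg look like this (the paths shown are illustrative):

```ini
[defaults]
lookup_plugins   = /usr/share/ansible/plugins/lookup_plugins
filter_plugins   = /usr/share/ansible/plugins/filter_plugins
callback_plugins = /usr/share/ansible/plugins/callback_plugins
```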

In addition, plugins can be shipped in a subdirectory relative to a top-level playbook, in folders named the same as indicated above.

See also

About Modules
List of built-in modules
Python API
Learn about the Python API for task execution
Developing Dynamic Inventory Sources
Learn about how to develop dynamic inventory sources
Developing Modules
Learn about how to write Ansible modules
Mailing List
The development mailing list
irc.freenode.net
#ansible IRC chat channel
Helping Testing PRs

If you’re a developer, one of the most valuable things you can do is look at the github issues list and help fix bugs. We almost always prioritize bug fixing over feature development, so clearing bugs out of the way is one of the best things you can do.

Even if you’re not a developer, helping test pull requests for bug fixes and features is still immensely valuable.

This goes for testing new features as well as testing bugfixes.

In many cases, code should come with tests that prove it works, but that's not always possible, and tests are not always comprehensive, especially when a contributor doesn't have access to a wide variety of platforms or is working against an API or web service.

In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time too.

Thankfully helping test ansible is pretty straightforward, assuming you are already used to how ansible works.

Get Started with A Source Checkout

You can do this by checking out ansible, making a test branch off the main one, merging in the pull request's branch, testing, and then commenting on that particular issue on GitHub. Here's how:

Note

Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code sent may have mistakes or malicious code that could have a negative impact on your system. We recommend doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or other flavors, since some features (apt vs. yum, for example) are specific to those OS versions.

First, you will need to configure your testing environment with the necessary tools required to run our test suites. You will need at least:

git
python-nosetests (sometimes named python-nose)
python-passlib
python-mock

If you want to run the full integration test suite you’ll also need the following packages installed:

svn
hg
python-pip
gem

Second, if you haven’t already, clone the Ansible source code from GitHub:

git clone https://github.com/ansible/ansible.git --recursive
cd ansible/

Note

If you have previously forked the repository on GitHub, you could also clone it from there.

Note

If updating your repo for testing something module related, use “git rebase origin/devel” and then “git submodule update” to fetch the latest development versions of modules. Skipping the “git submodule update” step will result in versions that will be stale.

Activating The Source Checkout

The Ansible source includes a script, frequently used by Ansible developers, that lets you use Ansible directly from a source checkout without requiring a full installation.

Simply source it (to use the Linux/Unix terminology) to begin using it immediately:

source ./hacking/env-setup

This script modifies the PYTHONPATH environment variable (along with a few other things); the changes remain in effect only as long as your shell session is open.

If you’d like your testing environment to always use the latest source, you could call the command from startup scripts (for example, .bash_profile).

Finding A Pull Request and Checking It Out On A Branch

Next, find the pull request you’d like to test and make note of the line at the top which describes the source and destination repositories. It will look something like this:

Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name

Note

It is important that the pull request target be ansible:devel, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.

The username and branch at the end are the important parts, which will be turned into git commands as follows:

git checkout -b testing_PRXXXX devel
git pull https://github.com/someuser/ansible.git feature_branch_name

The first command creates and switches to a new branch named testing_PRXXXX, where XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the devel branch. The second command pulls the new code from the user's feature branch into the newly created branch.

Note

If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.

Note

Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of devel. If the source looks like someuser:devel, make sure there is only one commit listed on the pull request.

For Those About To Test, We Salute You

At this point, you should be ready to begin testing!

If the PR is a bug-fix pull request, the first things to do are to run the suite of unit and integration tests, to ensure the pull request does not break current functionality:

# Unit Tests
make tests

# Integration Tests
cd test/integration
make

Note

Ansible does provide integration tests for cloud-based modules as well; however, we do not recommend them for all users due to the associated costs from the cloud providers. As such, it's typically better to run specific parts of the integration battery and skip those tests.

Integration tests aren't the be-all and end-all: in many cases the thing being fixed might not have a test, so determining whether it works means checking the functionality of the system and making sure it does what it said it would do.

Pull requests for bug-fixes should reference the bug issue number they are fixing.

We encourage users to provide playbook examples for bugs that show how to reproduce the error, and, when available, these playbooks should be used to verify that the bugfix resolves the issue. You may also wish to do your own review to poke at the corners of the change.

Since some reproducers can be quite involved, you might wish to create a testing directory with the issue number as a subdirectory to keep things organized:

mkdir -p testing/XXXX # where XXXX is again the issue # for the original issue or PR
cd testing/XXXX
<create files or git clone example playbook repo>

While it should go without saying, be sure to read any playbooks before you run them. VMs help greatly when running untrusted content, though a playbook could still do something to your computing resources that you'd rather it didn't.

Once the files are in place, you can run the provided playbook (if there is one) to test the functionality:

ansible-playbook -vvv playbook_name.yml

If there isn't a playbook, you may have to copy and paste playbook snippets or run an ad-hoc command that was pasted in.

Our issue template also includes sections for "Expected Output" and "Actual Output", which should be used to gauge the output from the provided examples.

If the pull request resolves the issue, please leave a comment on the pull request, showing the following information:

  • “Works for me!”
  • The output from ansible --version.

In some cases, you may wish to share playbook output from the test run as well.

Example:

Works for me!  Tested on `Ansible 1.7.1`.  I verified this on CentOS 6.5 and also Ubuntu 14.04.

If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:

This doesn't work for me.

When I ran this my toaster started making loud noises!

Output from the toaster looked like this:

   ```
   BLARG
   StrackTrace
   RRRARRGGG
   ```

When you are done testing a feature branch, you can remove it with the following command:

git branch -D someuser-feature_branch_name

We understand some users may be inexperienced with git or other aspects of the above procedure, so feel free to stop by the ansible-devel list with questions; we'd be happy to help answer them.

Developers will also likely be interested in the fully-discoverable REST API in Ansible Tower. It's great for embedding Ansible in all manner of applications.

Ansible Tower

Ansible Tower (formerly 'AWX') is a web-based solution that makes Ansible even easier to use for IT teams of all kinds. It's designed to be the hub for all of your automation tasks.

Tower allows you to control who can access what, even allowing sharing of SSH credentials without someone being able to transfer those credentials. Inventory can be graphically managed or synced with a wide variety of cloud sources. It logs all of your jobs, integrates well with LDAP, and has an amazing browsable REST API. Command line tools are available for easy integration with Jenkins as well. Provisioning callbacks provide great support for autoscaling topologies.

Find out more about Tower features and how to download it on the Ansible Tower webpage. Tower is free to use for up to 10 nodes, and comes bundled with amazing support from Ansible, Inc. As you would expect, Tower is installed using Ansible playbooks!

Community Information & Contributing

Ansible is an open source project designed to bring together administrators and developers of all kinds to collaborate on building IT automation solutions that work well for them.

Should you wish to get more involved – whether in terms of just asking a question, helping other users, introducing new people to Ansible, or helping with the software or documentation, we welcome your contributions to the project.

Ansible Users
I’ve Got A Question

We’re happy to help!

Ansible questions are best asked on the Ansible Google Group Mailing List.

This is a very large high-traffic list for answering questions and sharing tips and tricks. Anyone can join, and email delivery is optional if you just want to read the group online. To cut down on spam, your first post is moderated, though posts are approved quickly.

Please be sure to share any relevant commands you ran, their output, and other details, and indicate the version of Ansible you are using when asking a question.

Where needed, link to gists or github repos to show examples, rather than sending attachments to the list.

We recommend using Google search to see if a topic has been answered recently, but comments found in older threads may no longer apply, depending on the topic.

Before you post, be sure you are running the latest stable version of Ansible. You can check this by comparing the output of 'ansible --version' with the version indicated on PyPI <https://pypi.python.org/pypi/ansible>.

Alternatively, you can also join our IRC channel: #ansible on irc.freenode.net. It's a very high-traffic channel as well; if you don't get an answer there, please stop by our mailing list, which is more likely to get the attention of core developers since it's asynchronous.

I’d Like To Keep Up With Release Announcements

Release announcements are posted to ansible-project, though if you don't want to keep up with the very active list, you can join the Ansible Announce Mailing List.

This is a low-traffic read-only list, where we’ll share release announcements and occasionally links to major Ansible Events around the world.

I’d Like To Help Share and Promote Ansible

You can help share Ansible with others by telling friends and colleagues, writing a blog post, or presenting at user groups (like DevOps groups or the local LUG).

You are also welcome to share slides on speakerdeck; sign up for a free account and tag them "Ansible". On Twitter, you can share things with #ansible and may wish to follow us.

I’d Like To Help Ansible Move Faster

If you’re a developer, one of the most valuable things you can do is look at the github issues list and help fix bugs. We almost always prioritize bug fixing over feature development, so clearing bugs out of the way is one of the best things you can do.

If you’re not a developer, helping test pull requests for bug fixes and features is still immensely valuable. You can do this by checking out ansible, making a test branch off the main one, merging a GitHub issue, testing, and then commenting on that particular issue on GitHub.

I’d Like To Report A Bug

Ansible practices responsible disclosure - if this is a security related bug, email security@ansible.com instead of filing a ticket or posting to the Google Group and you will receive a prompt response.

Bugs related to the core language should be reported to github.com/ansible/ansible after signing up for a free github account. Before reporting a bug, please use the bug/issue search to see if the issue has already been reported.

Module-related bugs, however, should go to ansible-modules-core or ansible-modules-extras, based on the classification of the module. This is listed at the bottom of the docs page for any module.

When filing a bug, please use the issue template to provide all relevant information, regardless of what repo you are filing a ticket against.

Knowing your ansible version and the exact commands you are running, and what you expect, saves time and helps us help everyone with their issues more quickly.

Do not use the issue tracker for “how do I do this” type questions. These are great candidates for IRC or the mailing list instead where things are likely to be more of a discussion.

To be respectful of reviewers' time and to allow us to help everyone efficiently, please provide minimal, well-reduced, and well-commented examples rather than sharing your entire production playbook. Include playbook snippets and output where possible.

When sharing YAML in playbooks, formatting can be preserved by using code blocks.

For multiple-file content, we encourage use of gist.github.com. Online pastebin content can expire, so it’s nice to have things around for a longer term if they are referenced in a ticket.

If you are not sure if something is a bug yet, you are welcome to ask about something on the mailing list or IRC first.

As we are a very high volume project, if you determine that you do have a bug, please be sure to open the issue yourself to ensure we have a record of it. Don’t rely on someone else in the community to file the bug report for you.

It may take some time to get to your report, see our information about priority flags below.

I’d Like To Help With Documentation

Ansible documentation is a community project too!

If you would like to help with the documentation, whether correcting a typo, improving a section, or even documenting a new feature, submit a GitHub pull request against the code that lives in the "docsite/rst" subdirectory of the project; most of those pages also have an "Edit on GitHub" link.

Module documentation is generated from a DOCUMENTATION structure embedded in the source code of each module, which lives in either the ansible-modules-core or ansible-modules-extras repo on GitHub, depending on the module. Information about this is always listed at the bottom of the web documentation for each module.

Aside from modules, the main docs are in reStructuredText format.

If you aren't comfortable with reStructuredText, you can also open a ticket on GitHub about any errors you spot or sections you would like to see added. For more information on creating pull requests, please refer to the GitHub help guide.

For Current and Prospective Developers
I’d Like To Learn How To Develop on Ansible

If you’re new to Ansible and would like to figure out how to work on things, stop by the ansible-devel mailing list and say hi, and we can hook you up.

A great way to get started would be reading over some of the development documentation on the module site, and then finding a bug to fix or small feature to add.

Modules are some of the easiest places to get started.

Contributing Code (Features or Bugfixes)

The Ansible project keeps its source on GitHub at github.com/ansible/ansible for the core application, and in two sub-repos, github.com/ansible/ansible-modules-core and github.com/ansible/ansible-modules-extras, for module-related items. If you need to know whether a module is in 'core' or 'extras', consult the web documentation page for that module.

The project takes contributions through github pull requests.

It is usually a good idea to join the ansible-devel list to discuss any large features prior to submission, and this especially helps in avoiding duplicate work or efforts where we decide, upon seeing a pull request for the first time, that revisions are needed. (This is not usually needed for module development, but can be nice for large changes).

Note that we do keep Ansible to a particular aesthetic, so if you are unclear about whether a feature is a good fit or not, having the discussion on the development list is often a lot easier than having to modify a pull request later.

When submitting patches, be sure to run the unit tests first with "make tests"; these are the same basic tests that will automatically run on Travis when the PR is created. There are more in-depth tests in the test/integration directory, classified as destructive and non_destructive; run these if they pertain to your modification. They are set up with tags so you can run subsets, and some of the tests require cloud credentials and will only run if those are provided. When adding new features or fixing bugs, it would be nice to add new tests to avoid regressions.

Use "git rebase" rather than "git merge" (aliasing git pull to git pull --rebase is a great idea) to avoid merge commits in your submissions. There are also integration tests that can be run in the "test/integration" directory.

In order to keep the history clean and better audit incoming code, we will require resubmission of pull requests that contain merge commits. Use "git pull --rebase" rather than "git pull", and "git rebase" rather than "git merge". Also be sure to use topic branches to keep your additions on different branches, such that they won't pick up stray commits later.

If you make a mistake you do not need to close your PR; create a clean branch locally and then push to GitHub with --force to overwrite the existing branch (permissible in this case, as no one else should be using that branch as a reference). Code comments won't be lost; they just won't be attached to the existing branch.

We’ll then review your contributions and engage with you about questions and so on.

As we have a very large and active community, it may take a while to get your contributions in! See the notes about priorities in a later section to understand our work queue. Be patient; your request might not get merged right away. We also try to keep the devel branch more or less usable, so we like to examine pull requests carefully, which takes time.

Patches should always be made against the ‘devel’ branch.

Keep in mind that small and focused requests are easier to examine and accept; having example cases also helps us understand the utility of a bug fix or a new feature.

Contributions can be for new features like modules, or to fix bugs you or others have found. If you are interested in writing new modules to be included in the core Ansible distribution, please refer to the module development documentation.

Ansible's aesthetic encourages simple, readable code and consistent, conservatively extending, backwards-compatible improvements. Code developed for Ansible needs to support Python 2.6+, while code in modules must run under Python 2.4 or higher. Please also use a 4-space indent and no tabs; we do not enforce 80-column lines and are fine with 120-140. We do not take 'style only' requests unless the code is nearly unreadable; we are "PEP8ish", but not strictly compliant.

You can also contribute by testing and revising other requests, especially if it is one you are interested in using. Please keep your comments clear, to the point, courteous, and constructive; tickets are not a good place to start discussions (ansible-devel and IRC exist for this).

Tip: To easily run from a checkout, source "./hacking/env-setup" and that's it; no install required. You're now live!

Other Topics
Ansible Staff

Ansible, Inc. is a company that supports Ansible and builds additional solutions based on it. We also provide services and support for those who are interested, and offer an enterprise web front end to Ansible (see Tower below).

Our most important task, however, is enabling all the great things that happen in the Ansible community, including organizing software releases of Ansible. For more information about any of these things, contact info@ansible.com.

On IRC, you can find us as jimi_c, abadger1999, Tybstar, bcoca, and others. On the mailing list, we post with an @ansible.com address.

Mailing List Information

Ansible has several mailing lists. Your first post to the mailing list will be moderated (to reduce spam), so please allow a day or less for your first post.

Ansible Project List is for sharing Ansible Tips, answering questions, and general user discussion.

Ansible Development List is for learning how to develop on Ansible, asking about prospective feature design, or discussions about extending ansible or features in progress.

Ansible Announce list is a read-only list that shares information about new releases of Ansible, as well as occasional event information, such as announcements about an upcoming AnsibleFest, our official conference series.

To subscribe to a group from a non-google account, you can email the subscription address, for example ansible-devel+subscribe@googlegroups.com.

Release Numbering

Releases ending in ”.0” are major releases and this is where all new features land. Releases ending in another integer, like “0.X.1” and “0.X.2” are dot releases, and these are only going to contain bugfixes.

Typically we don’t do dot releases for minor bugfixes (reserving these for larger items), but may occasionally decide to cut dot releases containing a large number of smaller fixes if it’s still a fairly long time before the next release comes out.

Releases are also given code names based on Van Halen songs, though no one really uses them.

Tower Support Questions

Ansible Tower is a UI, Server, and REST endpoint for Ansible, produced by Ansible, Inc.

If you have a question about Tower, email support@ansible.com rather than using the IRC channel or the general project mailing list.

IRC Channel

Ansible has an IRC channel #ansible on irc.freenode.net.

Notes on Priority Flags

Ansible was one of the top 5 projects with the most OSS contributors on GitHub in 2013, and has over 800 contributors to the project to date, not to mention a very large user community that has downloaded the application well over a million times.

As a result, we have a LOT of incoming activity to process.

In the interest of transparency, we’re telling you how we sort incoming requests.

In our bug tracker you’ll notice some labels - P1, P2, P3, P4, and P5. These are our internal priority orders that we use to sort tickets.

With some exceptions for easy merges (like documentation typos, for instance), we're going to spend most of our time working on P1 and P2 items first, including pull requests. These usually relate to important bugs or features affecting large segments of the userbase. So if you see something categorized P3 or P4 and it doesn't appear to be getting a lot of immediate attention, this is why.

These labels don't really have strict definitions; they are a simple ordering. However, something affecting a major module (yum, apt, etc.) is likely to be prioritized higher than a change to a module that affects a smaller number of users.

Since we place a strong emphasis on testing and code review, it may take a few months for a minor feature to get merged.

Don’t worry though – we’ll also take periodic sweeps through the lower priority queues and give them some attention as well, particularly in the area of new module changes. So it doesn’t necessarily mean that we’ll be exhausting all of the higher-priority queues before getting to your ticket.

Every bit of effort helps - if you’re wishing to expedite the inclusion of a P3 feature pull request for instance, the best thing you can do is help close P2 bug reports.

Community Code of Conduct

Ansible’s community welcomes users of all types, backgrounds, and skill levels. Please treat others as you expect to be treated, keep discussions positive, and avoid discrimination of all kinds, profanity, allegations of Cthulhu worship, or engaging in controversial debates (except vi vs emacs is cool).

The same expectations apply to community events as they do to online interactions.

Posts to mailing lists should remain focused around Ansible and IT automation. Abuse of these community guidelines will not be tolerated and may result in banning from community resources.

Contributors License Agreement

By contributing you agree that these contributions are your own (or approved by your employer) and you grant a full, complete, irrevocable copyright license to all users and developers of the project, present and future, pursuant to the license of the project.

Ansible Galaxy

“Ansible Galaxy” can either refer to a website for sharing and downloading Ansible roles, or a command line tool that helps work with roles.

The Website

The website, Ansible Galaxy, is a free site for finding, downloading, rating, and reviewing all kinds of community-developed Ansible roles, and can be a great way to get a jumpstart on your automation projects.

You can sign up with social auth and use the download client 'ansible-galaxy', which is included in Ansible 1.4.2 and later.

Read the “About” page on the Galaxy site for more information.

The ansible-galaxy command line tool

The ansible-galaxy command line tool has many different subcommands.

Installing Roles

The most obvious is downloading roles from the Ansible Galaxy website:

ansible-galaxy install username.rolename

Building out Role Scaffolding

It can also be used to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires:

ansible-galaxy init rolename

Installing Multiple Roles From A File

To install multiple roles, the ansible-galaxy CLI can be fed a requirements file. All versions of ansible allow the following syntax for installing roles from the Ansible Galaxy website:

ansible-galaxy install -r requirements.txt

Where the requirements.txt looks like:

username1.foo_role
username2.bar_role

To request specific versions (tags) of a role, use this syntax in the roles file:

username1.foo_role,version
username2.bar_role,version

Available versions will be listed on the Ansible Galaxy webpage for that role.

Advanced Control over Role Requirements Files

For more advanced control over where to download roles from, including support for remote repositories, Ansible 1.8 and later support a new YAML format for the role requirements file, which must end in a ‘yml’ extension. It works like this:

ansible-galaxy install -r requirements.yml

The extension is important. If the .yml extension is left off, the ansible-galaxy CLI will assume the file is in the “basic” format and will be confused.

And here’s an example showing some specific version downloads from multiple sources. In one of the examples we also override the name of the role and download it as something different:

# from galaxy
- src: yatesr.timezone

# from github
- src: https://github.com/bennojoy/nginx

# from github installing to a relative path
- src: https://github.com/bennojoy/nginx
  path: vagrant/roles/

# from github, overriding the name and specifying a specific tag
- src: https://github.com/bennojoy/nginx
  version: master
  name: nginx_role

# from a webserver, where the role is packaged in a tar.gz
- src: https://some.webserver.example.com/files/master.tar.gz
  name: http-role

# from bitbucket, if bitbucket happens to be operational right now :)
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy
  version: v1.4

# from bitbucket, alternative syntax and caveats
- src: http://bitbucket.org/willthames/hg-ansible-galaxy
  scm: hg

As you can see in the above, there are many controls available to customize where roles are pulled from and what they are saved as.

Roles pulled from Galaxy work the same as the SCM-sourced roles above. To download a role with dependencies, and automatically install those dependencies, the role must be uploaded to the Ansible Galaxy website.

See also

Playbook Roles and Include Statements
All about ansible roles
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel

Testing Strategies

Integrating Testing With Ansible Playbooks

Many times, people ask, "how can I best integrate testing with Ansible playbooks?" There are many options. Ansible is designed to be a "fail-fast", ordered system, which makes it easy to embed testing directly in Ansible playbooks. In this chapter, we'll go into some patterns for integrating tests of infrastructure and discuss the right level of testing that may be appropriate.

Note

This is a chapter about testing the application you are deploying, not the chapter on how to test Ansible modules during development. For that content, please hop over to the Development section.

By incorporating a degree of testing into your deployment workflow, there will be fewer surprises when code hits production and, in many cases, tests can be leveraged in production to prevent failed updates from migrating across an entire installation. Since it’s push-based, it’s also very easy to run the steps on the localhost or testing servers. Ansible lets you insert as many checks and balances into your upgrade workflow as you would like to have.

The Right Level of Testing

Ansible resources are models of desired-state. As such, it should not be necessary to test that services are started, packages are installed, or other such things. Ansible is the system that will ensure these things are declaratively true. Instead, assert these things in your playbooks.

tasks:
  - service: name=foo state=started enabled=yes

If you think the service may not be started, the best thing to do is request it to be started. If the service fails to start, Ansible will yell appropriately. (This should not be confused with whether the service is doing something functional, which we’ll show more about how to do later).

Check Mode As A Drift Test

In the above setup, --check mode in Ansible can be used as a layer of testing as well. If running a deployment playbook against an existing system, using the --check flag to the ansible command will report whether Ansible thinks it would have made any changes to bring the system into a desired state.

This can let you know up front if there is any need to deploy onto the given system. Ordinarily scripts and commands don’t run in check mode, so if you want certain steps to always execute in check mode, such as calls to the script module, add the ‘always_run’ flag:

roles:
  - webserver

tasks:
  - script: verify.sh
    always_run: True

Modules That Are Useful for Testing

Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open:

tasks:

  - wait_for: host={{ inventory_hostname }} port=22
    delegate_to: localhost

Here’s an example of using the URI module to make sure a web service returns:

tasks:

  - action: uri url=http://www.example.com return_content=yes
    register: webpage

  - fail: msg='service is not happy'
    when: "'AWESOME' not in webpage.content"

It’s easy to push an arbitrary script (in any language) on a remote host and the script will automatically fail if it has a non-zero return code:

tasks:

  - script: test_script1
  - script: test_script2 --parameter value --parameter2 value

If using roles (you should be, roles are great!), scripts pushed by the script module can live in the ‘files/’ directory of a role.

And the assert module makes it very easy to validate various kinds of truth:

tasks:

   - shell: /usr/bin/some-command --parameter value
     register: cmd_result

   - assert:
       that:
         - "'not ready' not in cmd_result.stderr"
         - "'gizmo enabled' in cmd_result.stdout"

Should you feel the need to test for existence of files that are not declaratively set by your Ansible configuration, the ‘stat’ module is a great choice:

tasks:

   - stat: path=/path/to/something
     register: p

   - assert:
       that:
         - p.stat.exists and p.stat.isdir

As mentioned above, there’s no need to check things like the return codes of commands. Ansible is checking them automatically. Rather than checking for a user to exist, consider using the user module to make it exist.

Ansible is a fail-fast system, so when there is an error creating that user, it will stop the playbook run. You do not have to check up behind it.

Testing Lifecycle

If you write some degree of basic validation of your application into your playbooks, those checks will run every time you deploy.

As such, deploying into a local development VM and a staging environment will both validate that things are according to plan ahead of your production deploy.

Your workflow may be something like this:

- Use the same playbook all the time with embedded tests in development
- Use the playbook to deploy to a staging environment (with the same playbooks) that simulates production
- Run an integration test battery written by your QA team against staging
- Deploy to production, with the same integrated tests.

Something like an integration test battery should be written by your QA team if you are a production webservice. This would include things like Selenium tests or automated API tests and would usually not be something embedded into your Ansible playbooks.

However, it does make sense to include some basic health checks into your playbooks, and in some cases it may be possible to run a subset of the QA battery against remote nodes. This is what the next section covers.

Integrating Testing With Rolling Updates

If you have read into Delegation, Rolling Updates, and Local Actions it may quickly become apparent that the rolling update pattern can be extended, and you can use the success or failure of the playbook run to decide whether to add a machine into a load balancer or not.

This is the great culmination of embedded tests:

---

- hosts: webservers
  serial: 5

  pre_tasks:

    - name: take out of load balancer pool
      command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1

  roles:

     - common
     - webserver
     - apply_testing_checks

  post_tasks:

    - name: add back to load balancer pool
      command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1

Of course in the above, the “take out of the pool” and “add back” steps would be replaced with a call to an Ansible load balancer module or an appropriate shell command. You might also have steps that use a monitoring module to start and end an outage window for the machine.

However, what you can see from the above is that tests are used as a gate – if the “apply_testing_checks” step is not performed, the machine will not go back into the pool.

Read the delegation chapter about “max_fail_percentage” and you can also control how many failing tests will stop a rolling update from proceeding.
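
As a sketch, max_fail_percentage sits on the play alongside serial; the numbers below are illustrative:

```yaml
- hosts: webservers
  serial: 10
  max_fail_percentage: 30   # abort the play if more than 30% of a batch fails
  roles:
    - webserver
    - apply_testing_checks
```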

The above approach can also be modified to run a step from a testing machine remotely against a machine:

---

- hosts: webservers
  serial: 5

  pre_tasks:

    - name: take out of load balancer pool
      command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1

  roles:

     - common
     - webserver

  tasks:
     - script: /srv/qa_team/app_testing_script.sh --server {{ inventory_hostname }}
       delegate_to: testing_server

  post_tasks:

    - name: add back to load balancer pool
      command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1

In the above example, a script is run from the testing server against a remote node prior to bringing it back into the pool.

In the event of a problem, fix the few servers that fail using Ansible’s automatically generated retry file to repeat the deploy on just those servers.
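
When a run partially fails, Ansible writes out a .retry file listing only the failed hosts; feeding it back in with --limit reruns just those servers. The retry file path below is an assumption based on default settings:

```shell
ansible-playbook site.yml --limit @site.retry
```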

Achieving Continuous Deployment

If desired, the above techniques may be extended to enable continuous deployment practices.

The workflow may look like this:

- Write and use automation to deploy local development VMs
- Have a CI system like Jenkins deploy to a staging environment on every code change
- The deploy job calls testing scripts to pass/fail a build on every deploy
- If the deploy job succeeds, it runs the same deploy playbook against production inventory

Some Ansible users use the above approach to deploy a half-dozen or dozen times an hour without taking all of their infrastructure offline. A culture of automated QA is vital if you wish to get to this level.

If you are still doing a large amount of manual QA, you should still make the decision on whether to deploy manually as well, but it can help to work in the rolling update patterns of the previous section and incorporate some basic health checks using modules like ‘script’, ‘stat’, ‘uri’, and ‘assert’.

Conclusion

Ansible believes you should not need another framework to validate that basic things about your infrastructure are true. This is the case because Ansible is an order-based system that will fail immediately on unhandled errors for a host, and prevent further configuration of that host. This forces errors to the top and shows them in a summary at the end of the Ansible run.

However, as Ansible is designed as a multi-tier orchestration system, it makes it very easy to incorporate tests into the end of a playbook run, either using loose tasks or roles. When used with rolling updates, testing steps can decide whether to put a machine back into a load balanced pool or not.

Finally, because Ansible errors propagate all the way up to the return code of the Ansible program itself, and Ansible by default runs in an easy push-based mode, Ansible is a great step to put into a build environment if you wish to use it to roll out systems as part of a Continuous Integration/Continuous Delivery pipeline, as is covered in sections above.

The focus should not be on infrastructure testing, but on application testing, so we strongly encourage getting together with your QA team and asking what sort of tests would make sense to run every time you deploy development VMs, and which sort of tests they would like to run against the staging environment on every deploy. Obviously at the development stage, unit tests are great too. But don’t unit test your playbook. Ansible describes states of resources declaratively, so you don’t have to. If there are cases where you want to be sure of something though, that’s great, and things like stat/assert are great go-to modules for that purpose.

In all, testing is a very organizational and site-specific thing. Everybody should be doing it, but what makes the most sense for your environment will vary with what you are deploying and who is using it – but everyone benefits from a more robust and reliable deployment system.

See also

About Modules
All the documentation for Ansible modules
Playbooks
An introduction to playbooks
Delegation, Rolling Updates, and Local Actions
Delegation, useful for working with load balancers, clouds, and locally executed steps.
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel

Frequently Asked Questions

Here are some commonly-asked questions and their answers.

How can I set the PATH or any other environment variable for a task or entire playbook?

Setting environment variables can be done with the environment keyword. It can be used at task or playbook level:

environment:
  PATH: "{{ ansible_env.PATH }}:/thingy/bin"
  SOME: value
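
The same keyword can also be attached to a single task; a minimal sketch reusing the values above (the ‘thingy’ command is hypothetical):

```yaml
- hosts: all
  tasks:
    - name: run a command with the augmented PATH
      shell: thingy --version
      environment:
        PATH: "{{ ansible_env.PATH }}:/thingy/bin"
```
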

How do I handle different machines needing different user accounts or ports to log in with?

Setting inventory variables in the inventory file is the easiest way.

For instance, suppose these hosts have different usernames and ports:

[webservers]
asdf.example.com  ansible_ssh_port=5000   ansible_ssh_user=alice
jkl.example.com   ansible_ssh_port=5001   ansible_ssh_user=bob

You can also dictate the connection type to be used, if you want:

[testcluster]
localhost           ansible_connection=local
/path/to/chroot1    ansible_connection=chroot
foo.example.com
bar.example.com

You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file. See the rest of the documentation for more information about how to organize variables.
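
For example, a variable shared by every host in the webservers group could move into group_vars/webservers; the values here are illustrative:

```yaml
# group_vars/webservers
ansible_ssh_user: alice
ntp_server: ntp.example.com
```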

How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?

Switch your default connection type in the configuration file to ‘ssh’, or use ‘-c ssh’ to use Native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ‘ssh’ will be used by default if OpenSSH is new enough to support ControlPersist as an option.

Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.

We keep paramiko as the default as if you are first installing Ansible on an EL box, it offers a better experience for new users.
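
Switching the default connection type is a one-line change in the configuration file; a minimal sketch:

```ini
# ansible.cfg
[defaults]
transport = ssh
```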

How do I speed up management inside EC2?

Don’t try to manage a fleet of EC2 machines from your laptop. Connect to a management node inside EC2 first and run Ansible from there.

How do I handle python pathing not having a Python 2.X in /usr/bin/python on a remote machine?

While you can write ansible modules in any language, most ansible modules are written in Python, and some of these are important core ones.

By default Ansible assumes it can find a /usr/bin/python on your remote system that is a 2.X version of Python, specifically 2.4 or higher.

Setting of an inventory variable ‘ansible_python_interpreter’ on any host will allow Ansible to auto-replace the interpreter used when executing python modules. Thus, you can point to any python you want on the system if /usr/bin/python on your system does not point to a Python 2.X interpreter.
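
A minimal inventory sketch; the host and interpreter path are examples, so point it at wherever a Python 2 interpreter actually lives on that host:

```ini
[targets]
freebsd.example.com ansible_python_interpreter=/usr/local/bin/python2.7
```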

Some Linux operating systems, such as Arch, may only have Python 3 installed by default. This is not sufficient and you will get syntax errors trying to run modules with Python 3. Python 3 is essentially not the same language as Python 2. Ansible modules currently need to support older Pythons for users that still have Enterprise Linux 5 deployed, so they are not yet ported to run under Python 3.0. This is not a problem though as you can just install Python 2 also on a managed host.

Python 3.0 support will likely be addressed at a later point in time when usage becomes more mainstream.

Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.

What is the best way to make content reusable/redistributable?

If you have not done so already, read all about “Roles” in the playbooks documentation. This helps you make playbook content self-contained, and works well with things like git submodules for sharing content with others.

If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.

Where does the configuration file live and what can I configure in it?

See Configuration file.

How do I disable cowsay?

If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide that you would like to work in a professional cow-free environment, you can either uninstall cowsay, or set an environment variable:

export ANSIBLE_NOCOWS=1

How do I see a list of all of the ansible_ variables?

Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:

ansible -m setup hostname

This will print out a dictionary of all of the facts that are available for that particular host.

How do I loop over a list of hosts in a group, inside of a template?

A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the “groups” dictionary in your template, like this:

{% for host in groups['db_servers'] %}
    {{ host }}
{% endfor %}

If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:

- hosts:  db_servers
  tasks:
    - # doesn't matter what you do, just that they were talked to previously.

Then you can use the facts inside your template, like this:

{% for host in groups['db_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

How do I access a variable name programmatically?

An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied via a role parameter or other input. Variable names can be built by adding strings together, like so:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

The trick about going through hostvars is necessary because it’s a dictionary of the entire namespace of variables. ‘inventory_hostname’ is a magic variable that indicates the current host you are looping over in the host loop.

How do I access a variable of the first host in a group?

What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note that if we are using dynamic inventory, which host is the ‘first’ may not be consistent, so you wouldn’t want to do this unless your inventory was static and predictable. (If you are using Ansible Tower, it will use database order, so this isn’t a problem even if you are using cloud based inventory scripts).

Anyway, here’s the trick:

{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}

Notice how we’re pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you could use the Jinja2 ‘{% set %}’ statement to simplify this, or in a playbook, you could also use set_fact:

  - set_fact: headnode={{ groups['webservers'][0] }}
  - debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}

Notice how we interchanged the bracket syntax for dots – that can be done anywhere.

How do I copy files recursively onto a target host?

The “copy” module has a recursive parameter, though if you want to do something more efficient for a large number of files, take a look at the “synchronize” module instead, which wraps rsync. See the module index for info on both of these modules.

How do I access shell environment variables?

If you just need to access existing variables, use the ‘env’ lookup plugin. For example, to access the value of the HOME environment variable on the management machine:

---
# ...
  vars:
     local_home: "{{ lookup('env','HOME') }}"

If you need to set environment variables, see the Advanced Playbooks section about environments.

Ansible 1.4 will also make remote environment variables available via facts in the ‘ansible_env’ variable:

{{ ansible_env.SOME_VARIABLE }}

How do I generate crypted passwords for the user module?

The mkpasswd utility that is available on most Linux systems is a great option:

mkpasswd --method=SHA-512

If this utility is not installed on your system (e.g. you are using OS X) then you can still easily generate these passwords using Python. First, ensure that the Passlib password hashing library is installed.

pip install passlib

Once the library is ready, SHA512 password values can then be generated as follows:

python -c "from passlib.hash import sha512_crypt; import getpass; print sha512_crypt.encrypt(getpass.getpass())"

Can I get training on Ansible or find commercial support?

Yes! See our Guru offering for online support, and support is also included with Ansible Tower. You can also read our service page and email info@ansible.com for further details.

Is there a web interface / REST API / etc?

Yes! Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See Ansible Tower.

How do I submit a change to the documentation?

Great question! Documentation for Ansible is kept in the main project git repository, and complete instructions for contributing can be found in the docs README viewable on GitHub. Thanks!

How do I keep secret data in my playbook?

If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see Vault.

In Ansible 1.8 and later, if you have a task that you don’t want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:

- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True

This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.

The no_log attribute can also apply to an entire play:

- hosts: all
  no_log: True

Note that this will make the play somewhat difficult to debug, so it’s recommended that no_log be applied to single tasks only, once a playbook is completed.

I don’t see my question here

Please see the section below for links to IRC and the Google Group, where you can ask your question.

See also

Ansible V2 Documentation
The documentation index
Playbooks
An introduction to playbooks
Best Practices
Best practices advice
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel

Glossary

The following is a list (and re-explanation) of term definitions used elsewhere in the Ansible documentation.

Consult the documentation home page for the full documentation and to see the terms in context, but this should be a good resource to check your knowledge of Ansible’s components and understand how they fit together. It’s something you might wish to read for review or when a term comes up on the mailing list.

Action

An action is a part of a task that specifies which of the modules to run and the arguments to pass to that module. Each task can have only one action, but it may also have other parameters.

Ad Hoc

Refers to running Ansible to perform some quick command, using /usr/bin/ansible, rather than the orchestration language, which is /usr/bin/ansible-playbook. An example of an ad-hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad-hoc can be accomplished by writing a playbook, and playbooks can also glue lots of other operations together.

Async

Refers to a task that is configured to run in the background rather than waiting for completion. If you have a long process that would run longer than the SSH timeout, it would make sense to launch that task in async mode. Async modes can poll for completion every so many seconds, or can be configured to “fire and forget” in which case Ansible will not even check on the task again, it will just kick it off and proceed to future steps. Async modes work with both /usr/bin/ansible and /usr/bin/ansible-playbook.
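
A sketch of an async task; the command and the timing values are illustrative:

```yaml
- name: run a long operation in the background
  command: /usr/bin/long_running_operation --do-stuff
  async: 3600   # allow the task up to an hour to complete
  poll: 10      # check in every 10 seconds; poll: 0 means fire-and-forget
```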

Callback Plugin

Refers to some user-written code that can intercept results from Ansible and do something with them. Some supplied examples in the GitHub project perform custom logging, send email, or even play sound effects.

Check Mode

Refers to running Ansible with the --check option, which does not make any changes on the remote systems, but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called “dry run” modes in other systems, though the user should be warned that this does not take into account unexpected command failures or cascade effects (which is true of similar modes in other systems). Use this to get an idea of what might happen, but it is not a substitute for a good staging environment.

Connection Type, Connection Plugin

By default, Ansible talks to remote machines through pluggable libraries. Ansible supports native OpenSSH (‘ssh’), or a Python implementation called ‘paramiko’. OpenSSH is preferred if you are using a recent version, and also enables some features like Kerberos and jump hosts. This is covered in the getting started section. There are also other connection types like ‘accelerate’ mode, which must be bootstrapped over one of the SSH-based connection types but is very fast, and local mode, which acts on the local system. Users can also write their own connection plugins.

Conditionals

A conditional is an expression that evaluates to true or false that decides whether a given task will be executed on a given machine or not. Ansible’s conditionals are powered by the ‘when’ statement, and are discussed in the playbook documentation.

Diff Mode

A --diff flag can be passed to Ansible to show how template files change when they are overwritten, or how they might change when used with --check mode. These diffs come out in unified diff format.

Facts

Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered by Ansible when running plays by executing the internal ‘setup’ module on the remote nodes. You never have to call the setup module explicitly, it just runs, but it can be disabled to save time if it is not needed. For the convenience of users who are switching from other configuration management systems, the fact module will also pull in facts from the ‘ohai’ and ‘facter’ tools if they are installed, which are fact libraries from Chef and Puppet, respectively.

Filter Plugin

A filter plugin is something that most users will never need to understand. These allow for the creation of new Jinja2 filters, which are more or less only of use to people who know what Jinja2 filters are. If you need them, you can learn how to write them in the API docs section.

Forks

Ansible talks to remote nodes in parallel and the level of parallelism can be set either by passing --forks, or editing the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can easily set this to a value like 50 for increased parallelism.

Gather Facts (Boolean)

Facts are mentioned above. Sometimes when running a multi-play playbook, it is desirable to have some plays that don’t bother with fact computation if they aren’t going to need to utilize any of these values. Setting gather_facts: False on a playbook allows this implicit fact gathering to be skipped.
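
A minimal sketch of a play that skips the implicit fact gathering:

```yaml
- hosts: webservers
  gather_facts: False
  tasks:
    # tasks in this play cannot rely on ansible_* facts
    - command: /bin/true
```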

Globbing

Globbing is a way to select lots of hosts based on wildcards, rather than the name of the host specifically, or the name of the group they are in. For instance, it is possible to select “www*” to match all hosts starting with “www”. This concept is pulled directly from Func, one of Michael’s earlier projects. In addition to basic globbing, various set operations are also possible, such as ‘hosts in this group and not in another group’, and so on.

Group

A group consists of several hosts assigned to a pool that can be conveniently targeted together, and also given variables that they share in common.

Group Vars

The “group_vars/” files live in a directory alongside an inventory file and are named after each group. This is a convenient place to put variables that will be provided to a given group, especially complex data structures, so that these variables do not have to be embedded in the inventory file or playbook.

Handlers

Handlers are just like regular tasks in an Ansible playbook (see Tasks), but are only run if the Task contains a “notify” directive and also indicates that it changed something. For example, if a config file is changed then the task referencing the config file templating operation may notify a service restart handler. This means services can be bounced only if they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most common usage.
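
A minimal sketch of the config-file example above (the file names are illustrative):

```yaml
tasks:
  - name: template the apache config file
    template: src=httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
    notify:
      - restart httpd

handlers:
  - name: restart httpd
    service: name=httpd state=restarted
```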

Host

A host is simply a remote machine that Ansible manages. They can have individual variables assigned to them, and can also be organized in groups. All hosts have a name they can be reached at (which is either an IP address or a domain name) and optionally a port number if they are not to be accessed on the default SSH port.

Host Specifier

Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems.

This “hosts:” directive in each play is often called the hosts specifier.

It may select one system, many systems, one or more groups, or even some hosts that are in one group and explicitly not in another.

Host Vars

Just like “Group Vars”, a directory alongside the inventory file named “host_vars/” can contain a file named after each hostname in the inventory file, in YAML format. This provides a convenient place to assign variables to the host without having to embed them in the inventory file. The Host Vars file can also be used to define complex data structures that can’t be represented in the inventory file.

Lazy Evaluation

In general, Ansible evaluates any variables in playbook content at the last possible second, which means that if you define a data structure, that data structure itself can define variable values within it, and everything “just works” as you would expect. This also means variable strings can include other variables inside of those strings.

Lookup Plugin

A lookup plugin is a way to get data into Ansible from the outside world. These are how such things as “with_items”, a basic looping plugin, are implemented, but there are also lookup plugins like “with_file” which loads data from a file, and even ones for querying environment variables, DNS text records, or key value stores. Lookup plugins can also be accessed in templates, e.g., {{ lookup('file','/path/to/file') }}.

Multi-Tier

The concept that IT systems are not managed one system at a time, but by interactions between multiple systems, and groups of systems, in well defined orders. For instance, a web server may need to be updated before a database server, and pieces on the web server may need to be updated after THAT database server, and various load balancers and monitoring servers may need to be contacted. Ansible models entire IT topologies and workflows rather than looking at configuration from a “one system at a time” perspective.

Idempotency

The concept that change commands should only be applied when they need to be applied, and that it is better to describe the desired state of a system than the process of how to get to that state. As an analogy, the path from North Carolina in the United States to California involves driving a very long way West, but if I were instead in Anchorage, Alaska, driving a long way west is no longer the right way to get to California. Ansible’s resources let you say “put me in California” and figure out how to get there. If you were already in California, nothing needs to happen, and Ansible will let you know it didn’t need to change anything.

Includes

The idea that playbook files (which are nothing more than lists of plays) can include other lists of plays, and task lists can externalize lists of tasks in other files, and similarly with handlers. Includes can be parameterized, which means that the loaded file can pass variables. For instance, an included play for setting up a WordPress blog may take a parameter called “user” and that play could be included more than once to create a blog for both “alice” and “bob”.
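
A sketch of the WordPress example using the include syntax of this era (the included file name is illustrative):

```yaml
tasks:
  - include: wordpress.yml user=alice
  - include: wordpress.yml user=bob
```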

Inventory

A file (by default, Ansible uses a simple INI format) that describes Hosts and Groups in Ansible. Inventory can also be provided via an “Inventory Script” (sometimes called an “External Inventory Script”).

Inventory Script

A very simple program (or a complicated one) that looks up hosts, group membership for hosts, and variable information from an external resource – whether that be a SQL database, a CMDB solution, or something like LDAP. This concept was adapted from Puppet (where it is called an “External Nodes Classifier”) and works more or less exactly the same way.

Jinja2

Jinja2 is the preferred templating language of Ansible’s template module. It is a very simple Python template language that is generally readable and easy to write.

JSON

Ansible uses JSON for return data from remote modules. This allows modules to be written in any language, not just Python.

Library

A collection of modules made available to /usr/bin/ansible or an Ansible playbook.

Limit Groups

By passing --limit somegroup to ansible or ansible-playbook, the commands can be limited to a subset of hosts. For instance, this can be used to run a playbook that normally targets an entire set of servers to one particular server.

Local Connection

By using “connection: local” in a playbook, or passing “-c local” to /usr/bin/ansible, this indicates that we are managing the local host and not a remote machine.

Local Action

A local_action directive in a playbook targeting remote machines means that the given step will actually occur on the local machine, but that the variable ‘{{ ansible_hostname }}’ can be passed in to reference the remote hostname being referred to in that step. This can be used to trigger, for example, an rsync operation.
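A sketch of a local_action step triggering rsync, as the entry suggests (the log and backup paths are hypothetical):

```yaml
# Sketch: run rsync on the control machine, once per targeted host,
# using {{ ansible_hostname }} to refer to the current remote host
- hosts: webservers
  tasks:
    - name: pull application logs from each host
      local_action: command rsync -a {{ ansible_hostname }}:/var/log/app/ /backups/{{ ansible_hostname }}/
```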

Loops

Generally, Ansible is not a programming language. It prefers to be more declarative, though various constructs like “with_items” allow a particular task to be repeated for multiple items in a list. Certain modules, like yum and apt, are actually optimized for this, and can install all packages given in those lists within a single transaction, dramatically speeding up total time to configuration.
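For instance, a sketch of a with_items loop that yum can collapse into a single transaction:

```yaml
# Sketch: one task, repeated for each item in the list
- name: install web packages
  yum: name={{ item }} state=installed
  with_items:
    - httpd
    - httpd-devel
```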

Modules

Modules are the units of work that Ansible ships out to remote machines. Modules are kicked off by either /usr/bin/ansible or /usr/bin/ansible-playbook (where multiple tasks use lots of different modules in conjunction). Modules can be implemented in any language, including Perl, Bash, or Ruby – but can leverage some useful communal library code if written in Python. Modules just have to return JSON or simple key=value pairs. Once modules are executed on remote machines, they are removed, so no long running daemons are used. Ansible refers to the collection of available modules as a ‘library’.
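Since modules only need to print JSON or simple key=value pairs, a non-Python module can be trivial. A minimal sketch in shell (the changed/msg keys are a common return shape; this just fakes a module’s stdout):

```shell
# Sketch: fake the stdout of a trivial non-Python Ansible module.
# Ansible parses either JSON or simple key=value pairs as the result.
module_output=$(printf 'changed=false msg=hello')
echo "$module_output"
```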

Notify

The act of a task registering a change event and informing a handler task that another action needs to be run at the end of the play. If a handler is notified by multiple tasks, it will still be run only once. Handlers are run in the order they are listed, not in the order that they are notified.
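A sketch of two tasks notifying one handler (the template file names are hypothetical):

```yaml
# Sketch: both tasks notify "restart httpd"; the handler still runs
# only once, after all tasks in the play have finished
- hosts: webservers
  tasks:
    - name: deploy main config
      template: src=httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
      notify:
        - restart httpd

    - name: deploy vhost config
      template: src=vhost.conf.j2 dest=/etc/httpd/conf.d/vhost.conf
      notify:
        - restart httpd

  handlers:
    - name: restart httpd
      service: name=httpd state=restarted
```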

Orchestration

Many software automation systems use this word to mean different things. Ansible uses it as a conductor would conduct an orchestra. A datacenter or cloud architecture is full of many systems, playing many parts – web servers, database servers, maybe load balancers, monitoring systems, continuous integration systems, etc. In performing any process, it is necessary to touch systems in particular orders, often to simulate rolling updates or to deploy software correctly. Some system may perform some steps, then others, then previous systems already processed may need to perform more steps. Along the way, emails may need to be sent or web services contacted. Ansible orchestration is all about modeling that kind of process.

paramiko

By default, Ansible manages machines over SSH. The library that Ansible uses by default to do this is a Python-powered library called paramiko. The paramiko library is generally fast and easy to manage, though users desiring Kerberos or Jump Host support may wish to switch to a native SSH binary such as OpenSSH by specifying the connection type in their playbook, or using the “-c ssh” flag.

Playbooks

Playbooks are the language by which Ansible orchestrates, configures, administers, or deploys systems. They are called playbooks partially because it’s a sports analogy, and it’s supposed to be fun using them. They aren’t workbooks :)

Plays

A playbook is a list of plays. A play is minimally a mapping between a set of hosts selected by a host specifier (usually chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that those systems will perform. There can be one or many plays in a playbook.

Pull Mode

By default, Ansible runs in push mode, which allows it very fine-grained control over when it talks to each system. Pull mode is provided for when you would rather have nodes check in every N minutes on a particular schedule. It uses a program called ansible-pull and can also be set up (or reconfigured) using a push-mode playbook. Most Ansible users use push mode, but pull mode is included for variety and the sake of having choices.

ansible-pull works by checking configuration orders out of git on a crontab and then managing the machine locally, using the local connection plugin.

Push Mode

Push mode is the default mode of Ansible. In fact, it’s not really a mode at all – it’s just how Ansible works when you aren’t thinking about it. Push mode allows Ansible to be fine-grained and conduct nodes through complex orchestration processes without waiting for them to check in.

Register Variable

The result of running any task in Ansible can be stored in a variable for use in a template or a conditional statement. The keyword used to define the variable is called ‘register’, taking its name from the idea of registers in assembly programming (though Ansible will never feel like assembly programming). There are an infinite number of variable names you can use for registration.
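A sketch of registering a result and using it in a conditional (motd_contents is an arbitrary variable name):

```yaml
# Sketch: capture a command's output, then gate a later task on it
- hosts: webservers
  tasks:
    - name: read the message of the day
      command: cat /etc/motd
      register: motd_contents

    - name: react to the captured output
      shell: echo "motd contains the word hi"
      when: motd_contents.stdout.find('hi') != -1
```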

Resource Model

Ansible modules work in terms of resources. For instance, the file module will select a particular file and ensure that the attributes of that resource match a particular model. As an example, we might wish to change the owner of /etc/motd to ‘root’ if it is not already set to root, or set its mode to ‘0644’ if it is not already set to ‘0644’. The resource models are ‘idempotent’ meaning change commands are not run unless needed, and Ansible will bring the system back to a desired state regardless of the actual state – rather than you having to tell it how to get to the state.

Roles

Roles are units of organization in Ansible. Assigning a role to a group of hosts (or a set of groups, or host patterns, etc.) implies that they should implement a specific behavior. A role may include applying certain variable values, certain tasks, and certain handlers – or just one or more of these things. Because of the file structure associated with a role, roles become redistributable units that allow you to share behavior among playbooks – or even with other users.

Rolling Update

The act of addressing a number of nodes in a group N at a time to avoid updating them all at once and bringing the system offline. For instance, in a web topology of 500 nodes handling very large volume, it may be reasonable to update 10 or 20 machines at a time, moving on to the next 10 or 20 when done. The “serial:” keyword in an Ansible playbook controls the size of the rolling update pool. The default is to address the batch size all at once, so this is something that you must opt-in to. OS configuration (such as making sure config files are correct) does not typically have to use the rolling update model, but can do so if desired.
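A sketch of the serial keyword (myapp is a placeholder package name):

```yaml
# Sketch: address the webservers group 10 hosts at a time
- hosts: webservers
  serial: 10
  tasks:
    - name: upgrade the application
      yum: name=myapp state=latest
```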

Runner

A core software component of Ansible that is the power behind /usr/bin/ansible directly – and corresponds to the invocation of each task in a playbook. The Runner is something Ansible developers may talk about, but it’s not really user land vocabulary.

Serial

See “Rolling Update”.

Sudo

Ansible does not require root logins, and since it’s daemonless, definitely does not require root level daemons (which can be a security concern in sensitive environments). Ansible can log in and perform many operations wrapped in a sudo command, and can work with both password-less and password-based sudo. Some operations that don’t normally work with sudo (like scp file transfer) can be achieved with Ansible’s copy, template, and fetch modules while running in sudo mode.

SSH (Native)

Native OpenSSH as an Ansible transport is specified with “-c ssh” (or a config file, or a directive in the playbook) and can be useful if you want to log in via Kerberized SSH or use SSH jump hosts, etc. In 1.2.1, ‘ssh’ will be used by default if the OpenSSH binary on the control machine is sufficiently new. Previously, Ansible selected ‘paramiko’ as a default. Using a client that supports ControlMaster and ControlPersist is recommended for maximum performance – if you don’t have that and don’t need Kerberos, jump hosts, or other features, paramiko is a good choice. Ansible will warn you if it doesn’t detect ControlMaster/ControlPersist capability.

Tags

Ansible allows tagging resources in a playbook with arbitrary keywords, and then running only the parts of the playbook that correspond to those keywords. For instance, it is possible to have an entire OS configuration, and have certain steps labeled “ntp”, and then run just the “ntp” steps to reconfigure the time server information on a remote host.
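A sketch of the ntp example from the text:

```yaml
# Sketch: label the ntp steps so they can be run on their own
- hosts: all
  tasks:
    - name: install ntp
      yum: name=ntp state=installed
      tags:
        - ntp
```

Running only the tagged steps would then look like ansible-playbook site.yml --tags ntp (site.yml being a placeholder playbook name).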

Tasks

Playbooks exist to run tasks. Tasks combine an action (a module and its arguments) with a name and optionally some other keywords (like looping directives). Handlers are also tasks, but they are a special kind of task that do not run unless they are notified by name when a task reports an underlying change on a remote system.

Templates

Ansible can easily transfer files to remote systems, but often it is desirable to substitute variables in other files. Variables may come from the inventory file, Host Vars, Group Vars, or Facts. Templates use the Jinja2 template engine and can also include logical constructs like loops and if statements.

Transport

Ansible uses “Connection Plugins” to define types of available transports. These are simply how Ansible will reach out to managed systems. Transports included are paramiko, SSH (using OpenSSH), and local.

When

An optional conditional statement attached to a task that is used to determine if the task should run or not. If the expression following the “when:” keyword evaluates to false, the task will be ignored.
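A sketch of a when conditional, using the ansible_os_family fact:

```yaml
# Sketch: the task is skipped unless the remote host is Debian-based
- name: install apache2
  apt: name=apache2 state=installed
  when: ansible_os_family == "Debian"
```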

Van Halen

For no particular reason, other than the fact that Michael really likes them, all Ansible releases are codenamed after Van Halen songs. There is no preference given to David Lee Roth vs. Sammy Hagar-era songs, and instrumentals are also allowed. It is unlikely that there will ever be a Jump release, but a Van Halen III codename release is possible. You never know.

Vars (Variables)

As opposed to Facts, variables are names of values (they can be simple scalar values – integers, booleans, strings) or complex ones (dictionaries/hashes, lists) that can be used in templates and playbooks. They are declared things, not things that are inferred from the remote system’s current state or nature (which is what Facts are).

YAML

Ansible does not want to force people to write programming language code to automate infrastructure, so Ansible uses YAML to define playbook configuration languages and also variable files. YAML is nice because it has a minimum of syntax and is very clean and easy for people to skim. It is a good data format for configuration files and humans, but also machine readable. Ansible’s usage of YAML stemmed from Michael’s first use of it inside of Cobbler around 2006. YAML is fairly popular in the dynamic language community and the format has libraries available for serialization in many languages (Python, Perl, Ruby, etc.).

See also

Frequently Asked Questions
Frequently asked questions
Playbooks
An introduction to playbooks
Best Practices
Best practices advice
User Mailing List
Have a question? Stop by the google group!
irc.freenode.net
#ansible IRC chat channel

YAML Syntax

This page provides a basic overview of correct YAML syntax, which is how Ansible playbooks (our configuration management language) are expressed.

We use YAML because it is easier for humans to read and write than other common data formats like XML or JSON. Further, there are libraries available in most programming languages for working with YAML.

You may also wish to read Playbooks at the same time to see how this is used in practice.

YAML Basics

For Ansible, nearly every YAML file starts with a list. Each item in the list is a list of key/value pairs, commonly called a “hash” or a “dictionary”. So, we need to know how to write lists and dictionaries in YAML.

There’s another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) should begin with ---. This is part of the YAML format and indicates the start of a document.

All members of a list are lines beginning at the same indentation level starting with a "- " (a dash and a space):

---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango

A dictionary is represented in a simple key: value form (the colon must be followed by a space):

---
# An employee record
name: Example Developer
job: Developer
skill: Elite

Dictionaries can also be represented in an abbreviated form if you really want to:

---
# An employee record
{name: Example Developer, job: Developer, skill: Elite}

Ansible doesn’t really use these too much, but you can also specify a boolean value (true/false) in several forms:

---
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false

Let’s combine what we learned so far in an arbitrary YAML example. This really has nothing to do with Ansible, but will give you a feel for the format:

---
# An employee record
name: Example Developer
job: Developer
skill: Elite
employed: True
foods:
    - Apple
    - Orange
    - Strawberry
    - Mango
languages:
    ruby: Elite
    python: Elite
    dotnet: Lame

That’s all you really need to know about YAML to start writing Ansible playbooks.

Gotchas

While YAML is generally friendly, the following is going to result in a YAML syntax error:

foo: somebody said I should put a colon here: so I did

You will want to quote any hash values using colons, like so:

foo: "somebody said I should put a colon here: so I did"

And then the colon will be preserved.

Further, Ansible uses “{{ var }}” for variables. If a value after a colon starts with a “{”, YAML will think it is a dictionary, so you must quote it, like so:

foo: "{{ variable }}"

See also

Playbooks
Learn what playbooks can do and how to write/run them.
YAMLLint
YAML Lint (online) helps you debug YAML syntax if you are having problems
Github examples directory
Complete playbook files from the github project source
Mailing List
Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net
#ansible IRC chat channel

Deploy Logstash-Elasticsearch-Kibana using ansible

Ansible Roles

common-env

  • Disable requiretty in /etc/sudoers

    Enabling pipelining reduces the number of SSH operations required to execute a module on the remote server. When using “sudo:” with the pipelining option enabled, requiretty must be disabled in /etc/sudoers.

  • Install build essential packages
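The requiretty change could be sketched as a lineinfile task (the exact regexp and replacement line are assumptions, not taken from the role):

```yaml
# Sketch: disable "Defaults requiretty" in /etc/sudoers so that
# pipelining works together with sudo; validate before writing
- name: disable requiretty in sudoers
  lineinfile:
    dest: /etc/sudoers
    regexp: '^Defaults\s+requiretty'
    line: 'Defaults    !requiretty'
    validate: 'visudo -cf %s'
```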

Requirements

Ansible-core-module: lineinfile , yum or apt .

Role Variables

Default vars:

sudoers (string) - path to the sudoers file
packages (list) - build-essential package names
pkgmgr (dict)
    pkgmgr.pkg - package management tool [yum, apt]
    pkgmgr.name - module option [name, pkg]
    pkgmgr.state - module option [state]
Example Playbook
- name: common settings
  hosts: servers
  gather_facts: true
  roles:
    - { role: common-env }

sun-jdk

Install sun-jdk-8u45

Requirements

Ansible-core-module: file , unarchive , stat , lineinfile .

Role Variables

Default vars:

base_home: "{{ansible_env.HOME}}/lek"
java_home: "{{base_home}}/jdk1.8.0_45"
jdk_tarball: "jdk-8u45-linux-x64.tar.gz"
Example Playbook
- hosts: servers
  gather_facts: true
  roles:
    - { role: sun-jdk }

logstash

Set up logstash from a tarball and run it as a service.

Requirements

Ansible-core-modules: template , file , stat , unarchive , lineinfile

Role Variables

Default vars:

logstash_srvname: "logstash"
base_home: "{{ansible_env.HOME}}/lek"
logstash_tarball: "logstash-1.5.0.tar.gz"
logstash_home: "{{base_home}}/logstash"
env_profile: "{{ansible_env.HOME}}/.profile"
logstash_whoami: "shipper"
read_from_redis_addr: "127.0.0.1"
read_from_redis_port: 6379
write_to_redis_addr: "127.0.0.1"
write_to_redis_port: 6379
redis_key: "logstash-*"
els_addr: "127.0.0.1"
operation: "install"

OS-specified vars:

RedHat-based
  default_syslog: ["/var/log/messages"]
  default_authlog: ["/var/log/secure"]

Debian-based
  default_syslog: [ "/var/log/syslog", "/var/log/kern.log" ]
  default_authlog: [ "/var/log/authlog" ]
Example Playbook
- hosts: servers
  gather_facts: true
  roles:
     - { role: logstash, logstash_whoami: "shipper",
         write_to_redis_addr: "127.0.0.1", operation: "install" }

redis

Set up redis on remote servers from the redis-3.0.2 source code.

Requirements

Ansible-core-modules: unarchive , stat , shell , template , lineinfile , file .

Role Variables

Default vars:

base_home: "{{ansible_env.HOME}}/lek"
redis_home: "{{base_home}}/redis"
redis_tarball: "redis-3.0.2.tar.gz"
redis_config: "redis_{{redis_port}}.conf"
redis_init: "redis_{{redis_port}}_init"
redis_port: 6379
redis_srvname: "redis_{{redis_port}}"
Example Playbook
- hosts: servers
  gather_facts: true
  roles:
     - { role: redis, redis_port: 6379 }
     - { role: redis, redis_port: 6380 }

elasticsearch

Set up a single-node elasticsearch service on the remote server.

Requirements

Ansible-core-modules: template , file , unarchive , stat , lineinfile .

Role Variables

Default vars:

base_home: "{{ansible_env.HOME}}/lek"
es_home: "{{base_home}}/elasticsearch"
es_tarball: "elasticsearch-1.6.0.tar.gz"
es_config: "elasticsearch.yml"
es_logging_config: "logging.yml"
es_srvname: "elasticsearch"
env_profile: "{{ansible_env.HOME}}/.profile"
operation: "install"
Dependencies

JDK

Note

JDK 1.8 or later is required. Make sure an executable java binary is installed somewhere in /bin:/usr/bin:/sbin:/usr/sbin, or create a symbolic link in one of those directories.

Example Playbook
- hosts: servers
  gather_facts: true
  roles:
     - { role: elasticsearch, operation: "install" }

kibana

Set up Kibana on the remote server.

Requirements

Ansible-core-module: file , unarchive , lineinfile , template , stat .

Role Variables

Default vars:

base_home: "{{ansible_env.HOME}}/lek"
kibana_home: "{{base_home}}/kibana"
kibana_tarball: "kibana-4.1.0-linux-x64.tar.gz"
env_profile: "{{ansible_env.HOME}}/.profile"
kibana_config: "kibana.yml"
elasticsearch_url: "http://127.0.0.1:9200"
kibana_port: 5601
operation: "install"
Example Playbook
- hosts: servers
  gather_facts: true
  roles:
     - { role: kibana, elasticsearch_url: "http://10.13.181.85:9200",
         operation: "install" }

Nova-logging

Send logging information to syslog

Requirements

Ansible-core-modules: lineinfile

Role Variables

Default vars:

nova_conf: "/etc/nova/nova.conf"
nova_facility:
  LOG_LOCAL0: local0
host: "127.0.0.1"
port: "10514"
log_server: "{{host}}:{{port}}"
protocol: "tcp"
lines:
  - {regx: "^verbose=", line: "verbose=False" }
  - {regx: "^debug=", line: "debug=False" }
  - {regx: "^use_syslog=", line: "use_syslog=True" }
  - {regx: "^syslog_log_facility=", line: "syslog_log_facility=LOG_LOCAL0" }
Example Playbook
- hosts: servers
  roles:
     - { role: nova-logging, log_server: "10.32.105.153:10514"}
License

BSD

Cinder-logging

Sending logging information to syslog

Requirements

Ansible-core-modules: lineinfile

Role Variables

Default vars:

cinder_conf: "/etc/cinder/cinder.conf"
cinder_facility:
  LOG_LOCAL1: local1
port: "10514"
host: "127.0.0.1"
log_server: "{{host}}:{{port}}"
protocal: "tcp"
lines:
  - {regx: "^verbose=", line: "verbose=False" }
  - {regx: "^debug=", line: "debug=False" }
  - {regx: "^use_syslog=", line: "use_syslog=True" }
  - {regx: "^syslog_log_facility=", line: "syslog_log_facility=LOG_LOCAL1" }
Example Playbook
- hosts: servers
  roles:
     - { role: cinder-logging, port: "11514" }
License

BSD

Neutron-logging

Send logging information to syslog

Requirements

Ansible-core-module: lineinfile

Role Variables
Default vars:
  neutron_conf: "/etc/neutron/neutron.conf"
  neutron_facility:
    LOG_LOCAL2: local2
  port: "10514"
  host: "127.0.0.1"
  log_server: "{{host}}:{{port}}"
  protocal: "tcp"
  lines:
    - {regx: "^verbose=", line: "verbose=False" }
    - {regx: "^debug=", line: "debug=False" }
    - {regx: "^use_syslog=", line: "use_syslog=True" }
    - {regx: "^syslog_log_facility=", line: "syslog_log_facility=LOG_LOCAL2" }
Example Playbook
- hosts: servers
  roles:
     - { role: neutron-logging, host: "10.32.10.153" }
License

BSD

Playbooks

Linear Execution

  1. Common settings
  2. Setup services
  3. Setup applications

Privilege Problems

The controlling machine has permission to connect to remote nodes as normal users. [ elc, ats, susu ]

But some tasks require root privileges. If you connect to the nodes as root, the applications you install will run with root privileges. It is better to use sudo & sudo_user in each task that needs root privileges.

tasks:
  - name: root privilege required
    apt: pkg="redis-server" state=present
    sudo: yes
    sudo_user: root

ansible-playbook -i hosts site.yml --ask-sudo-pass -u elc

This command prompts for elc’s password on every host. When you connect to remote nodes with different users and passwords, group_vars and host_vars are a convenient way to manage each node’s username and password.

---
ansible_ssh_user: susu
ansible_sudo_pass: ********

ansible-vault allows keeping sensitive data such as passwords or keys in encrypted files.

ansible-vault create host_vars/10.32.131.107.yml

ansible-vault edit host_vars/10.32.131.107.yml --ask-vault-pass

ansible-vault view host_vars/10.32.131.107.yml --vault-password-file /some/safe/place/pass.txt

ansible-vault rekey host_vars/10.32.131.107.yml

Inventory

Inventory file:

[lek:children]
logstash
els

[logstash:children]
shipper
indexer

[els]
10.13.181.85

[shipper]
10.32.105.214
10.32.131.107

[indexer]
10.32.105.213

[redis]
10.32.105.213

Playbook site.yml

---
- name: common settings
  hosts: lek
  gather_facts: true
  roles:
    - {role: common-env}
    - {role: sun-jdk}

- name: setup redis
  hosts: redis
  gather_facts: true
  roles:
    - {role: redis}

- name: setup kibana and elasticsearch
  hosts: els
  gather_facts: true
  roles:
    - {role: elasticsearch}
    - {role: kibana}

# when redis and elasticsearch are ready
- name: setup logstash
  hosts: logstash
  gather_facts: true
  roles:
    - {role: logstash}

See also

  1. Grok Debugger
  2. Rsyslog RFC5424

Docker

Getting Started with Docker – Introduction

Build once, configure once and run anywhere.

Docker Features

  • Blazing speed and an elegant isolation framework
  • Excellent value for money
  • Low CPU/memory overhead
  • Fast startup and shutdown
  • Works across cloud computing infrastructures

Docker Components and Elements

Three components

  • Docker Client - the user interface
  • Docker Daemon - runs on the host and handles service requests
  • Docker Index - the central registry, supporting public and private access to backed-up Docker container images

Three basic elements

  • Docker Containers - responsible for running applications; contain the operating system, user-added files, and metadata
  • Docker Images - read-only templates used to run Docker containers
  • DockerFile - a set of instructions describing how to build a Docker image automatically

Docker uses the following operating system features to make containers efficient

  • NameSpace - the first level of isolation, ensuring that a process running in one container cannot see or affect processes outside it
  • Control Groups - a key component of LXC, providing resource accounting and limiting
  • UnionFS - the building block of containers; a layered file system with a user layer that keeps Docker lightweight and fast

Putting it all together

Running any application involves two basic steps:

  1. Build an image

    A Docker image is a read-only template for building containers. It holds all the information needed to start a container, including the program to run and its configuration data. Every image starts from a base image, and the template is then built up from the instructions in a Dockerfile; each instruction creates a new layer on the image.

  2. Run a container

    A container is started from the image built in step one. When the container starts, a read-write layer is added on top of the image. Once a suitable network and IP address are assigned, the application can run inside the container.

Getting Started with Docker – Commands

docker info

Run docker info to verify that Docker is installed correctly

# docker info
Containers: 16
Images: 3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 35
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 4
Total Memory: 3.729 GiB
Name: nn
ID: NZE2:4R7O:GGXU:WYP3:PC4W:3G23:CSM4:XIRH:RAIB:GRV2:HKS2:Y47U
WARNING: No swap limit support

docker pull

docker pull busybox pulls the busybox image from Docker Hub

docker run

docker run busybox /bin/echo Hello Docker runs a container

docker run -d busybox /bin/sh -c "while true; do echo Docker; sleep 1; done" runs the container as a background process

docker logs

# docker run -d busybox /bin/sh -c "while true; do echo Docker; sleep 1; done"
adc090ccbcf7feb6c9cee7228cc32f2739eddfa78ad2b10a192d8e12b2a2c594

Running a container in the background returns a container ID; you can then view its output with docker logs adc090ccbcf7feb6c9cee7228cc32f2739eddfa78ad2b10a192d8e12b2a2c594. The container ID is long; in practice, the first eight characters are enough.

docker start & stop & restart

docker stop docker_id stops a container, and docker restart docker_id restarts it. To remove a container completely, it must be stopped first:

docker stop acd090ccbc
docker rm acd090ccbc

docker commit

docker commit acd090ccbc sample01 saves the container's state as an image named sample01. Image names may only contain the characters [a-z] and digits [0-9]

docker images

docker images lists all images

docker history

docker history (image_name) shows an image's history

docker push

docker push (image_name) pushes an image to the registry

docker build

docker build [options] PATH | URL builds an image from a Dockerfile

--rm=true – remove all intermediate containers after a successful build

--no-cache=false – do not use the cache during the build

docker attach

docker attach container attaches to a running container

docker diff

docker diff container lists the files and directories that have changed inside a container

docker events

Print real-time system events from containers within the given time window

docker import

Import a remote file, local file, or directory:

docker import http://example.com/example.tar
cat image.tar | docker import - image_app

docker export

Export a container's filesystem as a tarball:

docker export container > images.tar

docker cp

Copy a file from a container to the given host path:

docker cp container:path hostpath

docker login

Log in to a Docker Registry server:

docker login [options] [server]
docker login localhost:8080

docker inspect

Collect low-level information about containers and images, including:

* the container instance's IP address

* the list of port bindings

* a search for a specific port mapping

* the configuration details

docker inspect [ --format= ] container/image

docker inspect --format='{{.State}}' container/image

docker kill

Send a SIGKILL signal to stop a container's main process:

docker kill [options] container

docker rmi

Remove one or more images:

docker rmi image

docker wait

Block other calls on the specified container until the container stops, then stop blocking.

docker load

Load an image from a tarball on STDIN:

docker load -i app_box.tar

docker save

Save an image as a tarball and send it to STDOUT:

docker save image > app_box.tar

Dockerfile

A Dockerfile contains all the instructions needed to create an image. From the instructions in a Dockerfile, the docker build command creates the image, simplifying deployment by streamlining how images and containers are created.

A Dockerfile supports the following syntax:

INSTRUCTION argument

Instructions are case-insensitive, but the naming convention is to write them in all uppercase.

FROM

Every Dockerfile must begin with the FROM instruction, which specifies the base image the new image is built from. Subsequent instructions build on that base image. FROM may be used more than once, in which case multiple images are created.

FROM <image>
FROM <image>:<tag>
FROM <image>@<digest>

For example, an instruction such as FROM ubuntu says the new image will be built on top of the Ubuntu image.

MAINTAINER

Set the author of the image:

MAINTAINER <author name>

RUN

Execute a command in a shell or exec environment. RUN adds a new layer on top of the current image, and the committed result is used by the next instruction in the Dockerfile.

RUN <command>   shell form
RUN ["executable", "param1", "param2"]  exec form

The exec form does no variable expansion: RUN ["echo", "$HOME"] will not expand $HOME; use RUN ["sh", "-c", "echo $HOME"] instead.

ADD

A copy instruction: copy a file from a URL or from the build context into a location inside the container:

ADD <src> <destination>

CMD

Provide the default command for a container. A Dockerfile may use CMD only once; if several CMD instructions appear, all but the last are cancelled and only the last one takes effect:

CMD ["executable", "param1", "param2"]

CMD ["param1", "param2"]

CMD command param1 param2

Note

CMD [“executable”, “param1”, “param2”] is the recommended format. The exec form is parsed as a JSON array, so you must use double quotes, not single quotes.

EXPOSE

Specify the ports the container listens on at run time

EXPOSE <port>

ENTRYPOINT

Configure an executable command for the container, which means a specific application can be set as the default program each time a container is created from the image. Like CMD, Docker allows only one ENTRYPOINT; multiple ENTRYPOINT instructions cancel all but the last.

ENTRYPOINT ["executable", "param1", "param2"]

ENTRYPOINT command param1 param2

WORKDIR

Set the working directory for the RUN, CMD, and ENTRYPOINT instructions.

WORKDIR /path/to/workdir

ENV

Set environment variables as key/value pairs, adding flexibility to the running program.

ENV <key> <value>

USER

Set the UID to use while the image is running.

USER <uid>

VOLUME

Grant the container access to a directory on the host.

VOLUME ["/data"]

example

# Nginx
#
# VERSION               0.0.1

FROM      ubuntu
MAINTAINER Victor Vieux <victor@docker.com>

LABEL Description="This image is used to start the foobar executable" Vendor="ACME Products" Version="1.0"
RUN apt-get update && apt-get install -y inotify-tools nginx apache2 openssh-server

# Firefox over VNC
#
# VERSION               0.3

FROM ubuntu

# Install vnc, xvfb in order to create a 'fake' display and firefox
RUN apt-get update && apt-get install -y x11vnc xvfb firefox
RUN mkdir ~/.vnc
# Setup a password
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way, but it does the trick)
RUN bash -c 'echo "firefox" >> /.bashrc'

EXPOSE 5900
CMD    ["x11vnc", "-forever", "-usepw", "-create"]

# Multiple images example
#
# VERSION               0.1

FROM ubuntu
RUN echo foo > bar
# Will output something like ===> 907ad6c2736f

FROM ubuntu
RUN echo moo > oink
# Will output something like ===> 695d7793cbe4

# You᾿ll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with
# /oink.

Dockerfile Best Practices

Use the cache

Each instruction in a Dockerfile commits its result as a new image, and the next instruction builds on the previous instruction's image. If an image with the same parent image and the same instruction already exists (ADD excepted), Docker uses that image instead of executing the instruction – this is the cache.

To use the cache effectively, keep your Dockerfile stable and make changes as close to the end as possible:

FROM ubuntu

MAINTAINER author <somebody@company.com>

RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe"

RUN apt-get update

RUN apt-get upgrade -y

Changing the MAINTAINER instruction forces Docker to execute the run instructions that update apt rather than use the cache.

To skip the cache, pass --no-cache=true to docker build.

Docker decides whether a cached image matches by the following rules:

  1. Starting from a base image already in the cache, compare all child images and check whether their build instructions match the current one exactly. If not, the cache does not match.
  2. In most cases it is enough to compare the Dockerfile instruction against one of the child images. Some instructions, however, need closer examination.
  3. For ADD and COPY instructions, the files to be added to the image are also examined, usually by comparing checksums.
  4. Cache matching does not inspect files inside the container. For example, files updated in the container by RUN apt-get -y update are not considered when matching the cache.

Use tags

Unless you are merely experimenting with Docker, you should docker build new images with the -t option so that the result is tagged. A simple, readable tag helps you manage every image you create.

docker build -t="tuxknight/luckypython"

Note

Always build images with the -t tag.

Exposing ports

Docker's core ideas are repeatability and portability: an image should run on any host, as many times as needed. A Dockerfile lets you map private and public ports, but you should never map a public port in a Dockerfile, since running several instances of the image would then cause port conflicts.

EXPOSE 80:8080  # maps 80 to the host's 8080 – discouraged

EXPOSE 80 # docker maps 80 to a random host port

CMD and ENTRYPOINT syntax

Both CMD and ENTRYPOINT support two forms:

CMD /bin/echo

CMD ["/bin/echo"]

In the first form, Docker prepends /bin/sh -c to the command, which can cause unexpected behavior. In the second form, CMD or ENTRYPOINT is an array and the command runs exactly as you expect.

Containers are ephemeral

The container model is a process, not a machine, so no boot-time initialization is needed. Containers run when needed, stop when not, can be deleted and rebuilt, and require minimal configuration to start.

.dockerignore 文件

Ignoring useless files and directories during docker build speeds up the build.

Do not upgrade packages in the build

Do not update packages inside the container; leave updates to the base image.

Run one process per container

Run a single process per container. Containers isolate applications and data; running different applications in different containers makes scaling a cluster and reusing containers much simpler.

Minimize the number of layers

You need to balance the readability of a Dockerfile against the number of filesystem layers it produces: keeping the layer count down tends to hurt readability, while a very readable Dockerfile often yields more layers.

Use small base images

The debian:jessie base image is small enough and does not include any unneeded packages.

Use specific tags

The FROM in a Dockerfile should always name the full repository and tag of the base image it depends on: use FROM debian:jessie rather than FROM debian

Order multi-line arguments

Combine apt-get update with apt-get install, using \ to install across multiple lines. Sort the packages alphabetically to avoid duplicates.

FROM debian:jessie

RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    make \
    python

RUN dpkg-reconfigure locales && \
    locale-gen C.UTF-8 && \
    /usr/sbin/update-locale LANG=C.UTF-8

ENV LC_ALL C.UTF-8

Best Practices for Dockerfile Instructions

FROM

Use an image from the official repositories as your base image. The Debian image is recommended: around 100 MB, yet still a complete distribution.

RUN

Split complex or long RUN statements across multiple lines ending in \ to improve readability and maintainability.

Run apt-get update and apt-get install together; otherwise apt-get install can fail.

Avoid running apt-get upgrade or dist-upgrade: in an unprivileged container, many essential packages cannot be upgraded properly. If the base image is out of date, contact its maintainer. The recommended pattern is apt-get update && apt-get install -y package-a package-b, which updates first and then installs the latest packages.

RUN apt-get update && apt-get install -y \
    aufs-tools \
    automake \
    btrfs-tools \
    build-essential \
    curl \
    dpkg-sig \
    git \
    iptables \
    libapparmor-dev \
    libcap-dev \
    libsqlite3-dev \
    lxc=1.0* \
    mercurial \
    parallel \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.0*

CMD

For CMD, the format CMD ["executable","param1","param2"] is recommended. If the image exists to run a service, use something like CMD ["apache2","-DFOREGROUND"]; this form suits any service-style image.

ADD and COPY

Although ADD and COPY are similar, COPY is recommended: it supports only basic file copying and is therefore more predictable. ADD has extra features, such as automatic tar extraction and URL support, and is typically needed only when extracting a tarball into the container.

If the Dockerfile uses several files, give each its own COPY instruction. That way, only the instruction whose file changed invalidates the cache.

To keep images small, avoid fetching URLs with the ADD instruction. Instead, use wget or curl inside a RUN instruction, and delete the file when it is no longer needed.

RUN mkdir -p /usr/src/things \
    && curl -SL http://example.com/big.tar.gz \
    | tar -xzC /usr/src/things \
    && make -C /usr/src/things all

VOLUME

The VOLUME instruction should expose database storage locations, configuration storage, and any files or directories created by the container. Because changes are not preserved after a container exits, persist all such data to the host via VOLUME.

USER

If a service can run without privileges, use the USER instruction to switch to a non-root user: RUN groupadd -r mysql && useradd -r -g mysql mysql, then switch with USER mysql

Avoid using sudo to elevate privileges, since it causes more problems than it solves. If you really need such a feature, consider gosu instead.

Finally, do not switch users back and forth; it adds unnecessary layers.

WORKDIR

Always use absolute paths with WORKDIR so the instruction is precise and reliable. Also, use WORKDIR instead of hard-to-maintain instructions like RUN cd ... && do-something.

Docker Case Study – Building a Workflow

Workflow

  1. Finish the application code on a local feature branch
  2. Open a pull request against the master branch on Github
  3. Run the automated tests in a Docker container
  4. If the tests pass, manually merge the pull request into master
  5. Once merged, run the automated tests again
  6. If the second test run also passes, build the application on Docker Hub
  7. Once the build completes, deploy automatically to production

Concepts

  • A Dockerfile contains a series of instructions that describe an image's behavior.
  • An image is a template that captures an environment's state and is used to create containers.
  • A container can be thought of as an instantiated image, inside which a set of processes runs.

Why?

Using Docker means you can faithfully reproduce the production environment on your development machine, with no more worrying about problems caused by differences in environment or configuration between the two. Beyond that, Docker also gives us:

  • good version control
  • the ability to publish or rebuild the entire development environment at any time
  • build once, run anywhere

Configuring Docker

  • Because the Windows NT and Darwin kernels lack some of the Linux kernel features required to run Docker containers, they need boot2docker, a lightweight Linux distribution built for running Docker.
  • Operating systems with a Linux kernel can run the docker daemon directly.
  • Verify the installation with docker version.

Compose UP

Docker Compose is the official container orchestration framework: a simple YAML configuration file is all it takes to build and run multiple container services.

Install it with pip install docker-compose.

Building a Flask + Redis application

Create a docker-compose.yml file in the project root:

web:
  build: web
  volumes:
    - ./web:/code
  ports:
    - "80:5000"
  links:
    - redis
  command: python app.py
redis:
  image: redis:2.8.19
  ports:
    - "6379:6379"

The file above configures the project's two services:

  • web: build the container from the web directory, mount that directory as a volume at /code inside the container, and start the Flask app with python app.py. Finally, expose the container's port 5000 and map it to port 80.
  • redis: use the official image from Docker Hub to provide the Redis service, exposing port 6379 and mapping it to the host.

Then create a Dockerfile in the web directory describing how to build the application image.
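For reference, a minimal Dockerfile for the web service might look like the sketch below; the Python base image tag and the pip packages are assumptions, not taken from the project:

```
FROM python:2.7
COPY . /code
WORKDIR /code
RUN pip install flask redis
CMD ["python", "app.py"]
```

The CMD here is a fallback; the command: entry in docker-compose.yml overrides it at run time.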

Build and run

docker-compose up builds the Flask application image from the Dockerfile, pulls the Redis image from the official registry, and brings everything up. Docker Compose starts all the containers in parallel, and each container is assigned its own name.

docker-compose ps shows the state of the application's processes. The two processes run in separate containers, and Docker Compose ties them together.

We have built a local environment: the Dockerfile describes exactly how to build the image, and containers are started from that image. Docker Compose pulls it all together, handling the build as well as the links and communication between containers (between the Flask and Redis processes).

Docker compose

docker-compose.yml reference

image

tag or partial image ID. Can be local or remote; Compose will attempt to pull the image if it does not exist locally:

image: ubuntu
image: a4c42
build

path to a directory containing a Dockerfile. When the value supplied is a relative path, it is interpreted as relative to the location of the yml file itself. This directory is also the build context that is sent to the Docker daemon.

Compose will build the image, tag it with a generated name, and use that image thereafter.

dockerfile

Alternate Dockerfile.

Compose will use this alternate file to build with:

dockerfile: Dockerfile-alternate
command

override the default command

command: bundle exec thin -p 3000
extra_hosts

add hostname mapping:

extra_hosts:
  - "host01:1.2.3.4"
  - "host02:2.2.3.4"
ports

expose ports. Either specify both ports (HOST:CONTAINER) or just the container port:

ports:
  - "3000"
  - "5601:5601"
  - "39200:9200"
  - "127.0.0.1:9200:9200"

Note

When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML will parse numbers in the format xx:yy as sexagesimal (base 60). For this reason, we recommend always explicitly specifying your port mappings as strings.
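The base-60 pitfall described in the note can be sketched in plain Python (a hypothetical helper mirroring the YAML 1.1 integer rule, not part of Compose):

```python
# YAML 1.1 resolves unquoted colon-separated digit groups as sexagesimal
# (base-60) integers, so an unquoted mapping like 80:25 is read as one number.
def sexagesimal(token):
    """Hypothetical helper mimicking the YAML 1.1 base-60 integer rule."""
    value = 0
    for part in token.split(":"):
        value = value * 60 + int(part)
    return value

# An unquoted port mapping such as 80:25 therefore becomes a single integer:
print(sexagesimal("80:25"))  # 4825, not a port mapping at all
```

Quoting the mapping as "80:25" forces the YAML parser to treat it as a string, which is why the strings recommendation exists.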

expose

Expose ports without publishing them to the host machine - they will only be accessible to linked services:

expose:
  - "6379"
  - "9200"
volumes

mount paths as volumes, optionally specifying a path on the host machine(HOST:CONTAINER) or an access mode(HOST:CONTAINER:ro):

volumes:
  - /var/lib/mysql
  - ~/configs:/etc/configs/:ro
volumes_from

mount all of the volumes from another service or container:

volumes_from:
  - service_name
  - container_name
environment

add environment variables. You can use either an array or a dictionary.

Environment variables with only a key are resolved to their values on the machine Compose is running on, which can be helpful for secret or host-specific values:

environment:
  RACK_ENV: development
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SESSION_SECRET
env_file

Add environment variables from a file. Can be a single value or a list.

If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to the directory that file is in.

extends

Extend another service, in the current file or another, optionally overriding configuration.

Compose CLI reference

build

builds or rebuilds services

kill

forces running containers to stop by sending a SIGKILL signal. Optionally the signal can be passed:

docker-compose kill -s SIGTERM
logs

displays log output from services.

port

prints the public port for a port binding

ps

lists containers

pull

pulls service images

restart

restart services

rm

removes stopped service containers

run

run a one-off command on a service

docker-compose run web python manage.py shell

In this example, Compose will start the web service and then run manage.py shell in python. Note that by default, linked services will also be started, unless they are already running.

scale

sets the number of containers to run for a service

docker-compose scale web=2 worker=3
start

starts existing containers for a service

stop

stops running containers without removing them. They can be started again with docker-compose start

up

builds, creates, starts, and attaches to containers for a service.

Linked services will be started, unless they are already running.

By default, docker-compose up will aggregate the output of each container and when it exits, all containers will be stopped.

Running docker-compose up -d will start the containers in the background and leave them running.

By default, docker-compose up will stop and recreate existing containers. If you do not want containers stopped and recreated, use docker-compose up --no-recreate . This will still start any stopped containers, if needed.

Options
--verbose                shows more output
-v, --version            prints version and exits
-f, --file FILE          specify what file to read configuration from. If not provided, Compose will look for docker-compose.yml
-p, --project-name NAME  specifies an alternate project name (default: current directory name)
Environment Variables

COMPOSE_PROJECT_NAME

Sets the project name, which is prepended to the name of every container started by Compose

COMPOSE_FILE

Specify what file to read configuration from. If not provided, Compose will look for docker-compose.yml in the current working directory, and then each parent directory successively, until found.

DOCKER_HOST

Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.

DOCKER_TLS_VERIFY

When set to anything other than an empty string, enables TLS communication with the daemon.

DOCKER_CERT_PATH

Configures the path to the ca.pem, cert.pem, and key.pem files used for TLS verification. Defaults to ~/.docker.

Swarm

Discovery

node discovery
docker run -d -p 2376:2375 swarm manage \
--discovery dockerhost01:2375,dockerhost02:2375,dockerhost03:2375 \
-H=0.0.0.0:2375
file discovery
docker run -d -p 2376:2375 -v /etc/swarm/cluster_config:/cluster \
swarm manage file:///cluster

# cat /etc/swarm/cluster_config
dockerhost01:2375
dockerhost02:2375
dockerhost03:2375
docker hub token
#docker run --rm swarm create
e9a6015cd02bfddb03ef95848b490450

docker run --rm swarm join --addr=10.13.181.85:2375 token://e9a6015cd02bfddb03ef95848b490450

docker run -d -p 2376:2375 swarm manage token://e9a6015cd02bfddb03ef95848b490450

Schedule

spread
docker run -d -p 2376:2375 -v /etc/swarm/cluster_config:/cluster \
swarm manage --strategy=spread file:///cluster
binpack
docker run -d -p 2376:2375 -v /etc/swarm/cluster_config:/cluster \
swarm manage --strategy=binpack file:///cluster
random
docker run -d -p 2376:2375 -v /etc/swarm/cluster_config:/cluster \
swarm manage --strategy=random file:///cluster

Filter

TLS support

Create TLS Certificates for Docker and Docker Swarm

Docker:

docker -d \
--tlsverify \
--tlscacert=/etc/pki/tls/certs/ca.pem \
--tlscert=/etc/pki/tls/certs/dockerhost01-cert.pem \
--tlskey=/etc/pki/tls/private/dockerhost01-key.pem \
-H tcp://0.0.0.0:2376

Swarm master:

swarm manage \
--tlsverify \
--tlscacert=/etc/pki/tls/certs/ca.pem \
--tlscert=/etc/pki/tls/certs/swarm-cert.pem \
--tlskey=/etc/pki/tls/private/swarm-key.pem \
--discovery file:///etc/swarm_config \
-H tcp://0.0.0.0:2376

Settings on client:

export DOCKER_HOST=tcp://dockerswarm01:2376
export DOCKER_CERT_PATH="`pwd`"
export DOCKER_TLS_VERIFY=1

Jenkins-CI

Global Settings

Note

HOME: /var/jenkins_home

Workspace Root Directory: ${JENKINS_HOME}/workspace/${ITEM_FULLNAME}

Build Record Root Directory: ${ITEM_ROOTDIR}/builds

  1. Docker Builder

    Docker URL: the Docker server REST API URL, e.g. http://172.17.42.1:2375 if Jenkins is running in a Docker container.

  2. Gerrit Trigger

    Hostname, Frontend URL, SSH Port, Username, SSH Keyfile(/var/jenkins_home/.ssh/id_rsa), SSH Keyfile Password

    Advanced:

    REST API -  HTTP Username && HTTP Password
    
    Enable code review
    
    Enable Verified
    

Integrate With Gerrit

Concepts:

refs/for/<branch name>: the ref that changes are pushed to for review
refs/changes/nn/<change-id>/m: the ref under which patch set m of a change is stored
the commit-msg git hook inserts a Change-Id line into each commit message

command line: ssh -p 29418 jenkins@127.0.0.1 gerrit
get git hook: scp -P 29418 jenkins@127.0.0.1:/hooks/commit-msg ./

Gerrit settings

  • User settings

    1. Add user jenkins on Gerrit Code review

      Email Address == git config user.email

    2. Add SSH Public Keys

    3. Generate HTTP Password for HTTP REST API

    4. Add jenkins to groups: Anonymous Users, Event Streaming Users, Non-Interactive Users

  • Project settings

    1. Create a new project. Rights inherit from All-Projects
    2. Set Require Change-Id in commit message to TRUE or FALSE as necessary
  • Project Access settings

    1. Global Capabilities

      Stream Events: ALLOW - Event Streaming Users

      Stream Events: ALLOW - Non-Interactive Users

    2. Reference: refs/*

      Read: ALLOW - Non-Interactive Users

      Label Verified: +1/-1 - Non-Interactive Users

    3. Reference: refs/heads/*

      Push: ALLOW - Non-Interactive Users

      Label Code-Review: +1/-1 - Non-Interactive Users

      Label Verified: +1/-1 - Non-Interactive Users

      • Add Label Verified to All-Project
      git init project
      git config user.name 'admin'
      git config user.email 'admin@example.com'
      git remote add origin ssh://admin@10.13.182.124:29418/All-Project
      git pull origin refs/meta/config
      cat << EOF >> project.config
      [label "Verified"]
      function = MaxWithBlock
      value = -1 Fails
      value =  0 No score
      value = +1 Verified
      EOF
      git add project.config
      git commit -a -m 'add Label Verified to All-Project'
      git push origin HEAD:refs/meta/config
      

Jenkins Job

Auto Verify
  1. Source Code Management

    git repository: ssh://jenkins@10.13.182.124:29418/esTookit

    Credentials: passwords/ssh key

    Name: gerrit

    Refspec: $GERRIT_REFSPEC

    Branches to build: $GERRIT_BRANCH

    Additional Behaviours

    • Strategy for choosing what to build - Gerrit Trigger
    • Clean before checkout
  2. Build Triggers - Gerrit event

    Choose a server Trigger on

    • Patchset Created
    • Draft Published

    Gerrit Project

    • Type - Plain

    • Pattern - esTookit

    • Branches

      • Type - Path
      • Pattern - **
  3. Build

    Use Execute shell to load external scripts.

    Perform build & test

  4. The project will automatically be marked with Label Verified if the build succeeds, so there are no Post-build actions to perform.

Auto Publish
  1. Gerrit Trigger

    Trigger on - Change Merged

  2. Publish to Cloud Foundry

    Post-build Actions

    Target, Credentials, Space, Allow self-signed certificate Reset app if already exists Read configuration from a manifest file / Enter configuration in Jenkins

Tips && Suggestions

  1. Parameterize Jenkins jobs
  2. Decompose complex Jenkins jobs

Kubernetes

Kubernetes from scratch

Binaries

Download the latest binary release.

Find ./kubernetes/server/kubernetes-server-linux-amd64.tar.gz and unpack it; ./kubernetes/server/bin contains all the necessary binaries.

Selecting Images

Docker, kubelet, and kube-proxy will run outside of a container.

etcd, kube-apiserver, kube-controller-manager, and kube-scheduler will run as containers, so images are needed.

  • Use images hosted on Google Container Registry

    • eg gcr.io/google_containers/kube-apiserver:$TAG , where $TAG is the latest release.
  • Build your own image

    • you can find tarballs in ./kubernetes/server/bin , so import these tarballs to docker:

      cat kube-apiserver.tar |docker import - kube-apiserver
      cat kube-controller-manager.tar |docker import - kube-controller-manager
      cat kube-scheduler.tar|docker import - kube-scheduler
      

For etcd

  • Use images hosted on GCR, such as gcr.io/google_containers/etcd:2.0.12
  • Use images hosted on hub.docker.com or quay.io
  • Use etcd binary included in your OS distro.
  • Build your own image cd kubernetes/cluster/images/etcd; make

Configuring and Installing Base Software on Nodes

docker

Note

The newest stable release is a good choice. If you previously had docker installed without setting Kubernetes-specific options, you may have a Docker-created bridge and iptables rules. You may want to remove these as follows before proceeding to configure Docker for Kubernetes:

iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0

The way you configure docker will depend on whether you have chosen the routable-vip or overlay-network approach for your network. Some suggested docker options:

  • create your own bridge for the per-node CIDR ranges, call it cbr0 , and set --bridge=cbr0 option on docker.
  • set --iptables=false so docker will not manipulate iptables for host-ports so that kube-proxy can manage iptables instead of docker.
  • --ip-masq=false
    • if you have setup PodIPs to be routable, then you want this false, otherwise, docker will rewrite the PodIP source-address to a NodeIP.
    • some environments still need you to masquerade out-bound traffic when it leaves the cloud environment. This is very environment specific.
  • --mtu=
    • may be required when using Flannel, because of the extra packet size due to UDP encapsulation
  • --insecure-registry $CLUSTER_SUBNET
    • to connect to a private registry, if you set one up, without using SSL.
  • You may want to increase the number of open files for docker
    • DOCKER_NOFILE=1000000 where this config goes depends on your node OS. For example, Debian-based distro uses /etc/default/docker
Kubelet

All nodes should run kubelet

Arguments to consider:

  • if following the HTTPS security approach:
    • --api-servers=https://$MASTER_IP
    • --kubeconfig=/var/lib/kubelet/kubeconfig
  • Otherwise, if taking the firewall-based security approach
    • --api-servers=http://$MASTER_IP
  • --config=/etc/kubernetes/manifests
  • --cluster-dns= to the address of the DNS server you will setup
  • --cluster-domain= to the dns domain prefix to use for cluster DNS addresses
  • --docker-root=
  • --root-dir=
  • --configure-cbr0=
  • --register-node
kube-proxy

All nodes should run kube-proxy .

Arguments to consider:

  • If following the HTTPS security approach:
    • --api-servers=https://$MASTER_IP
    • --kubeconfig=/var/lib/kube-proxy/kubeconfig
  • Otherwise, if taking the firewall-based security approach
    • --api-servers=http://$MASTER_IP
Networking

Each node needs to be allocated its own CIDR range for pod networking. Call this NODE_X_POD_CIDR

A bridge called cbr0 needs to be created on each node. The bridge itself needs an address from $NODE_X_POD_CIDR - by convention the first IP. Call this NODE_X_BRIDGE_ADDR.

  • Recommended, automatic approach
    1. Set --configure-cbr0=true option in kubelet init script and restart kubelet service. Kubelet will configure cbr0 automatically. It will wait to do this until the node controller has set Node.Spec.PodCIDR. Since you have not setup apiserver and node controller yet, the bridge will not be setup immediately.
  • Alternate, manual approach:
    1. Set --configure-cbr0=false on kubelet and restart.
    2. Create a bridge brctl addbr cbr0
    3. Set appropriate MTU ip link set dev cbr0 mtu 1460
    4. Add the cluster's network to the bridge; docker will go on the other side of the bridge. ip addr add $NODE_X_BRIDGE_ADDR dev cbr0
    5. Turn it on ip link set dev cbr0 up

Bootstrapping the Cluster

etcd
  • Recommended approach: run one etcd instance, with its log written to a directory backed by durable storage.
  • Alternative: run 3 or 5 etcd instances.
    • log can be written to non-durable storage because storage is replicated.
    • run a single apiserver which connects to one of the etcd nodes.

To run the etcd instance:

  1. copy cluster/saltbase/salt/etcd/etcd.manifest
  2. make any modifications needed
  3. start the pod by putting it into the kubelet manifest directory

K8S - Network

  1. Highly-coupled container-to-container communications.
  2. Pod-to-Pod communications.
  3. Pod-to-Service communications.
  4. External-to-Service communications.

Kubernetes model

Kubernetes imposes some fundamental requirements on any networking implementation:

* All containers can communicate with all other containers without NAT.

* All nodes can communicate with all containers without NAT.

* The IP that a container sees itself as is the same IP that others see it as.

What this means in practice is that you can not just take two computers running Docker and expect Kubernetes to work.

You must ensure that the fundamental requirements are met.

Kubernetes applies IP addresses at the pod scope: containers within a pod share their network namespace, including their IP address. This means that containers within a pod can all reach each other's ports on localhost. It does imply that containers within a pod must coordinate port usage, but this is no different than processes in a VM. This is what we call the IP-per-pod model.

Note

This is implemented in Docker as a "pod container" which holds the network namespace open while "app containers" join that namespace with Docker's --net=container:<id> function.

As with Docker, it is possible to request host ports, but this is reduced to a very niche operation. In this case a port will be allocated on the host Node and traffic will be forwarded to the Pod . The Pod itself is blind to the existence or non-existence of host ports.

How to achieve this

  1. Google Compute Engine (GCE)

  2. L2 networks and linux bridging

    Four ways to connect a docker container to a local network

  3. Flannel

    Flannel is a very simple overlay network that satisfies the Kubernetes requirements.

  4. OpenVSwitch

    OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network. Kubernetes OpenVSwitch GRE/VxLAN networking

  5. Weave

    Weave is yet another way to build an overlay network, primarily aiming at Docker integration.

  6. Calico

    Calico uses BGP to enable real container IPs.

Kubernetes OpenVSwitch GRE/VxLAN networking

This document describes how OpenVSwitch is used to setup networking between pods across nodes. The tunnel type could be GRE or VxLAN. VxLAN is preferable when large scale isolation needs to be performed within the network.

_images/ovs-networking.png

The vagrant setup in Kubernetes does the following:

The docker bridge is replaced with a brctl-generated Linux bridge (kbr0) with a 256-address subnet. Basically, each node gets a 10.244.x.0/24 subnet, and docker is configured to use that bridge instead of the default docker0 bridge.

Also, an OVS bridge (obr0) is created and added as a port to the kbr0 bridge. All OVS bridges across all nodes are linked with GRE tunnels, so each node has an outgoing GRE tunnel to every other node. It does not really need to be a complete mesh, but the meshier the better. STP (spanning tree) mode is enabled on the bridges to prevent loops.

Routing rules enable any 10.244.0.0/16 target to become reachable via the OVS bridge connected with the tunnels.

Jinja2

Jinja2 Intro

Installation

As a Python egg:

easy_install Jinja2
pip install Jinja2

From the tarball release:

wget https://pypi.python.org/packages/source/J/Jinja2/Jinja2-2.7.3.tar.gz
tar xzvf Jinja2-2.7.3.tar.gz
cd Jinja2-2.7.3 && python setup.py install
# root privilege required

Installing the development version:

git clone git://github.com/mitsuhiko/jinja2.git
cd jinja2
ln -s jinja2 /usr/lib/python2.X/site-packages
# requires git

As of version 2.7 Jinja2 depends on the MarkupSafe module.

Basic API Usage

The most basic way to create a template and render it is through Template. This however is not the recommended way to work with it if your templates are not loaded from strings but the file system or another data source:

>>> from jinja2 import Template
>>> template = Template('Hello {{ name }}!')
>>> template.render(name='World')
u'Hello World!'

Jinja2 Design

<!DOCTYPE html>
<html lang="en">
<head>
    <title>My Webpage</title>
</head>
<body>
    <ul id="navigation">
    {% for item in navigation %}
        <li><a href="{{ item.href }}">{{ item.caption }}</a></li>
    {% endfor %}
    </ul>

    <h1>My Webpage</h1>
    {{ a_variable }}

    {# a comment #}
</body>
</html>

A template contains variables and/or expressions, which get replaced with values when a template is rendered; and tags, which control the logic of the template.

The example shows the default configuration settings. An application developer can change the syntax configuration from {% foo %} to <% foo %>, or something similar.

The default Jinja delimiters are configured as follows:

* {%...%} for Statements
* {{...}} for Expressions to print to the template output
* {#...#} for Comments not included in the template output
* #...## for Line Statements

Variables

Template variables are defined by the context dictionary passed to the template.

You can use a dot (.) to access attributes of a variable in addition to the standard Python __getitem__ “subscript” syntax ([]).

The following lines do the same thing:

{{ foo.bar }}
{{ foo['bar'] }}

Filters

Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses. Multiple filters can be chained. The output of one filter is applied to the next.

See List of Builtin Filters below.
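As a quick illustration, two builtin filters chained with the pipe symbol (the template string and sample values are made up):

```python
from jinja2 import Template

# lower converts the value to lowercase; replace swaps a substring.
# The output of each filter feeds into the next one in the chain.
t = Template("{{ name|lower|replace('world', 'jinja') }}")
print(t.render(name="Hello WORLD"))  # hello jinja
```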

Tests

Tests can be used to test a variable against a common expression. To test a variable or expression, you add is plus the name of the test after the variable:

{% if loop.index is divisibleby 3 %}
{% if loop.index is divisibleby(3) %}
{% if name is defined %}

See List of Builtin Tests below.
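A small sketch using the divisibleby test inside an if block (sample values are made up):

```python
from jinja2 import Template

# `is divisibleby 3` applies the builtin test to the rendered value
t = Template("{% if n is divisibleby 3 %}fizz{% else %}{{ n }}{% endif %}")
print(t.render(n=9))   # fizz
print(t.render(n=10))  # 10
```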

Comments

To comment-out part of a line in a template, use the comment syntax which is by default set to {# ... #}. This is useful to comment out parts of the template for debugging or to add information for other template designers or yourself:

{# note: commented-out template because we no longer use this
  {% for user in users %}
    ...
  {% endfor %}
#}

Whitespace Control

In the default configuration:

* a single trailing newline is stripped if present
* other whitespace (spaces, tabs, newlines etc.) is returned unchanged

If an application configures Jinja to trim_blocks , the first newline after a template tag is removed automatically (like in PHP). The lstrip_blocks option can also be set to strip tabs and spaces from the beginning of a line to the start of a block. (Nothing will be stripped if there are other characters before the start of the block.)

With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents.

You can manually disable the lstrip_blocks behavior by putting a plus sign (+) at the start of a block:

<div>
     {%+ if name %}QWQ{% endif %}
</div>

You can also strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed:

{% for item in seq -%}
    {{ item }}
{%- endfor %}

This will yield all elements without whitespace between them. If seq was a list of numbers from 1 to 9, the output would be 123456789.
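The minus-sign stripping above can be checked directly from Python (a minimal sketch):

```python
from jinja2 import Template

# -%} strips the newline and indentation after the opening tag;
# {%- strips the whitespace before the closing tag.
t = Template("{% for item in seq -%}\n    {{ item }}\n{%- endfor %}")
print(t.render(seq=range(1, 10)))  # 123456789
```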

Escaping

The easiest way to output a literal variable delimiter ({{) is by using a variable expression:

{{ '{{' }}

For bigger sections, it makes sense to mark a block raw. For example, to include example Jinja syntax in a template, you can use this snippet:

{% raw %}
  <ul>
  {% for item in seq %}
    <li>{{ item }}</li>
  {% endfor %}
  </ul>
{% endraw %}

Line Statements

If line statements are enabled by the application, it’s possible to mark a line as a statement. For example, if the line statement prefix is configured to # , the following two examples are equivalent:

<ul>
# for item in seq
  <li>{{ item }}</li>
# endfor
</ul>

<ul>
{% for item in seq %}
  <li>{{ item }}</li>
{% endfor %}
</ul>

The line statement prefix can appear anywhere on the lines as long as no text precedes it.

Note

Line Statements can span multiple lines if there are open parentheses, braces or brackets:

<ul>
# for href, caption in [('index.html', 'Index'),
                        ('about.html', 'About')]:
    <li><a href="{{ href }}">{{ caption }}</a></li>
# endfor
</ul>

List of Control Structures

For

Loop over each item in a sequence:

<h1>Members</h1>
<ul>
{% for user in users %}
  <li>{{ user.username|e }}</li>
{% endfor %}
</ul>

As variables in templates retain their object properties, it is possible to iterate over containers like dict:

<dl>
{% for key, value in my_dict.iteritems() %}
  <dt>{{ key|e }}</dt>
  <dd>{{ value|e }}</dd>
{% endfor %}
</dl>

Python dicts are not ordered, so use a sorted list of tuples, a collections.OrderedDict, or the dictsort filter to iterate over a dict in a stable order.
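For example, the dictsort filter yields the pairs in key order (sample data):

```python
from jinja2 import Template

# dictsort turns the dict into a sorted list of (key, value) pairs
t = Template("{% for k, v in d|dictsort %}{{ k }}={{ v }};{% endfor %}")
print(t.render(d={"b": 2, "a": 1}))  # a=1;b=2;
```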

Inside of a for-loop block, you can access some special variables:

Variable        Description
loop.index      The current iteration of the loop (1-indexed).
loop.index0     The current iteration of the loop (0-indexed).
loop.revindex   The number of iterations from the end of the loop (1-indexed).
loop.revindex0  The number of iterations from the end of the loop (0-indexed).
loop.first      True if first iteration.
loop.last       True if last iteration.
loop.length     The number of items in the sequence.
loop.cycle      A helper function to cycle between a list of sequences.
loop.depth      Indicates how deep in a recursive loop the rendering currently is. Starts at level 1.
loop.depth0     Indicates how deep in a recursive loop the rendering currently is. Starts at level 0.
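A couple of these loop variables in action (sample data):

```python
from jinja2 import Template

# loop.index is 1-based; loop.last marks the final iteration
t = Template(
    "{% for x in xs %}{{ loop.index }}:{{ x }}"
    "{% if not loop.last %},{% endif %}{% endfor %}"
)
print(t.render(xs=["a", "b", "c"]))  # 1:a,2:b,3:c
```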


If

The if statement in Jinja is comparable with the Python if statement. You can use it to test if a variable is defined, not empty, or not false:

{% if users %}
<ul>
  {% for user in users %}
    <li> {{ user.username|e }} </li>
  {% endfor %}
</ul>
{% endif %}

For multiple branches, elif and else can be used like in Python.

Macros

Macros are comparable with functions in regular programming languages. They are useful to put often used idioms into reusable functions to not repeat yourself (“DRY”).

{% macro input(name, value='', type='text', size=20) -%}
  <input type="{{type}}" name="{{name}}" value="{{value|e}}" size="{{size}}" />
{%- endmacro %}

Then, the input macro can be called like a function in the namespace:

<p>{{ input('username') }}</p>
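Rendering the macro above from Python produces the expected <input> element (a sketch of the same template, inlined as one string):

```python
from jinja2 import Template

tmpl = (
    "{% macro input(name, value='', type='text', size=20) -%}"
    '<input type="{{ type }}" name="{{ name }}" '
    'value="{{ value|e }}" size="{{ size }}">'
    "{%- endmacro %}"
    "<p>{{ input('username') }}</p>"
)
print(Template(tmpl).render())
# <p><input type="text" name="username" value="" size="20"></p>
```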

List of Builtin Tests

callable(obj)
return whether the object is callable.
defined(value)
return true if variable is defined
divisibleby(value, num)
Check if a variable is divisible by a number.
equalto(value, other)
Check if an object has the same value as another object.
even(value)
return true if the variable is even
iterable(value)
check if it’s possible to iterate over an object.
lower(value)
return true if the variable is lowercased.
mapping(value)
return true if the object is a mapping (dict etc.)
none(value)
return true if the variable is none
number(value)
return true if the variable is a number
odd(value)
return true if the variable is odd
sequence(value)
return true if the variable is a sequence.
string(value)
return true if the object is a string.
undefined(value)
return true if variable is undefined
upper(value)
return true if the variable is uppercased

List of Builtin Filters

abs(number)
absolute value of the argument
attr(obj, name)
attribute of object. foo|attr('bar') works like foo.bar
capitalize(s)
capitalize a value
center(value, width=80)
centers the value in a field of a given width
default(value, default_value=u'', boolean=False)
If the value is undefined it will return the passed default value, otherwise the value of the variable.
dictsort(value, case_sensitive=False, by='key')
sort a dict and yield (key, value) pairs.
escape(s)
Convert the characters &, <, >, ', and " in string s to HTML-safe sequences.
filesizeformat(value, binary=False)
Format the value like a ‘human-readable’ file size (i.e. 13 kB, 4.1 MB, 102 Bytes, etc). Per default decimal prefixes are used (Mega, Giga, etc.), if the second parameter is set to True the binary prefixes are used (Mebi, Gibi).
first(seq)
first item of a sequence
float(value, default=0.0)
convert the value into a floating point number.
forceescape(value)
Enforce HTML escaping
format(value,*args,**kwargs)
apply python string formatting on an object
groupby(value, attribute)
Group a sequence of objects by a common attribute.
indent(s, width=4, indentfirst=False)
return a copy of the passed string with each line indented by 4 spaces
int(value, default=0)
convert the value into integer
join(value, d=u'', attribute=None)
Return a string which is the concatenation of the strings in the sequence. The separator between elements is an empty string per default, you can define it with the optional parameter.
last(seq)
last item of a sequence
length(object)
return the number of items of a sequence or mapping.
list(value)
convert the value into a list.
lower(s)
convert a value to lowercase.
map()
Applies a filter on a sequence of objects or looks up an attribute. Basic usage is mapping on an attribute: {{ users|map(attribute='username')|join(', ') }} Applying a filter on a sequence: {{ titles|map('lower')|join(', ') }}
pprint(value, verbose=False)
pretty print a variable, useful for debugging.
random(seq)
return a random item from the sequence.
replace(s, old, new, count=None)
return a copy of the value with all occurrences of a substring replaced with a new one.
reverse(value)
reverse the object or return an iterator the iterates over it the other way round.
round(value, precision=0, method='common')
round the number to a given precision; 'common' rounds either up or down, 'ceil' always rounds up, 'floor' always rounds down.
select()
Filters a sequence of objects by applying a test to the object and only selecting the ones with the test succeeding.
selectattr()
Filters a sequence of objects by applying a test to an attribute of an object and only selecting the ones with the test succeeding.
slice(value,slices,fill_with=None)
slice an iterator and return a list of lists containing those items.
sort(value, reverse=False, case_sensitive=False, attribute=None)
Sort an iterable. Per default it sorts ascending, if you pass it true as first argument it will reverse the sorting.
string(obj)
make a string unicode if it isn’t already
sum(iterable, attribute=None, start=0)
Returns the sum of a sequence of numbers plus the value of parameter ‘start’ (which defaults to 0). When the sequence is empty it returns start.
trim(value)
strip leading and trailing whitespace
truncate(s, length=255, killwords=False, end='...')
Return a truncated copy of the string. The length is specified with the first parameter, which defaults to 255. {{ "foo bar baz"|truncate(9, True) }} -> "foo ba..."
upper(s)
convert a value to uppercase
urlencode(value)
escape strings for use in URLs
wordcount(s)
count the words in that string
wordwrap(s, width=79, break_long_words=True, wrapstring=None)
Return a copy of the string passed to the filter wrapped after 79 characters.
xmlattr(d, autospace=True)
Create an SGML/XML attribute string based on the items in a dict.

Template inheritance

The most powerful part of Jinja is template inheritance. Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.

Base Template

base.html defines a simple HTML skeleton document that you might use for a simple two-column page. It’s the job of “child” templates to fill the empty blocks with content:

<!DOCTYPE html>
<html lang="en">
<head>
    {% block head %}
    <link rel="stylesheet" href="style.css" />
    <title>{% block title %}{% endblock %} - My Webpage</title>
    {% endblock %}
</head>
<body>
    <div id="content">{% block content %}{% endblock %}</div>
    <div id="footer">
        {% block footer %}
        &copy; Copyright 2008 by <a href="http://domain.invalid/">you</a>.
        {% endblock %}
    </div>
</body>
</html>

The {% block %} tags define four blocks that child templates can fill in. All the block tag does is tell the template engine that a child template may override those placeholders in the template.

Child Template

A child template might look like this:

{% extends "base.html" %}
{% block title %}Index{% endblock %}
{% block head %}
{{ super() }}
  <style type="text/css">
  .important { color: #336699; }
  </style>
{% endblock %}
{% block content %}
  <h1>Index</h1>
  <p class="important">
  Welcome here
  </p>
{% endblock %}

{% extends %} tells the template engine that this template “extends” another template. When the template system evaluates this template, it first locates the parent. The extends tag should be the first tag in the template; everything before it is printed out normally and may cause confusion.

You can’t define multiple {% block %} tags with the same name in the same template. This limitation exists because a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.

If you want to print a block multiple times, you can, however, use the special self variable and call the block with that name:

<title>{% block title %}{% endblock %}</title>
<h1>{{ self.title() }}</h1>
{% block body %}{% endblock %}

iptables

iptables basics

Introduction to iptables

Linux packet filtering, i.e. the Linux firewall, is made up of two components: netfilter and iptables.

The netfilter component, also called kernel space, is part of the kernel and consists of packet-filtering tables. These tables contain the rule sets the kernel uses to control packet filtering.

The iptables component, also called user space, is a tool that makes it easy to insert, modify, and remove rules in the packet-filtering tables.

Structure of iptables

iptables --> Tables --> Chains --> Rules

In short, tables are made up of chains, and chains are made up of rules. iptables has four tables by default, Filter, NAT, Mangle, and Raw, containing the following chains:

Raw table:    PREROUTING, OUTPUT
Mangle table: PREROUTING, POSTROUTING, INPUT, OUTPUT, FORWARD
Nat table:    PREROUTING, POSTROUTING, OUTPUT
Filter table: INPUT, FORWARD, OUTPUT

iptables workflow

_images/iptables.jpg

Figure: iptables workflow

The filter table in detail

  1. In iptables, the filter table does the packet filtering. It has the following three built-in chains:

    INPUT chain - handles packets addressed to the local host
    OUTPUT chain - handles packets sent out by the local host
    FORWARD chain - handles packets forwarded through the host to its other network interfaces
    
  2. Traffic scenarios:

    Traffic to the local host: filter on the INPUT chain
    Traffic from the local host outwards: filter on the OUTPUT chain
    Traffic passing through the host to other machines: filter on the FORWARD chain
    
  3. Basic iptables operations

    • Start iptables: service iptables start
    • Stop iptables: service iptables stop
    • Restart iptables: service iptables restart
    • Check iptables status: service iptables status
    • Save the iptables configuration: service iptables save
    • iptables service configuration file: /etc/sysconfig/iptables-config
    • iptables saved rules file: /etc/sysconfig/iptables
    • Enable IP forwarding: echo "1" > /proc/sys/net/ipv4/ip_forward
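
On systems that do not ship the `iptables` service script, the save/restore cycle from the operations above can be sketched with the `iptables-save` and `iptables-restore` tools instead. This is a hedged sketch; the rules file path is the RHEL-style convention mentioned above and may differ per distribution:

```shell
# Inspect the current rules numerically and verbosely
iptables -L -n -v

# Dump the running rule set to the conventional rules file
iptables-save > /etc/sysconfig/iptables

# Reload the saved rules later (e.g. at boot)
iptables-restore < /etc/sysconfig/iptables
```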

iptables command reference

Syntax:

iptables [ -t table ] command-option [ chain ] [ match-conditions ] [ -j target-or-jump ]
  • Table: Filter, Nat, Mangle, or Raw. The table that does packet filtering is Filter; it may be omitted, since Filter is the default.

  • Command options

    -A Append a new rule to the end of the specified chain
    -D Delete a rule from the specified chain
    -I Insert a new rule into the specified chain
    -R Replace a rule in the specified chain
    -L List all rules in the specified chain
    -F Flush all rules from the specified chain
    -N Create a new user-defined chain
    -X Delete a user-defined chain from the specified table
    -P Set the default policy of the specified chain
    -n Show output in numeric form
    -v Show verbose information when listing rules
    -V Show the iptables version
    -h Show help
    --line-numbers Show each rule's position in its chain when listing rules
  • Chain

    INPUT chain - handles packets addressed to the local host
    OUTPUT chain - handles packets sent out by the local host
    FORWARD chain - handles packets forwarded through the host to its other network interfaces
    
  • Match conditions

    Match conditions fall into basic matches and extended matches; extended matches are further divided into implicit and explicit extensions.

    • Basic matches

      Parameter  Description
      -p  protocol to match: tcp, udp, icmp, or all
      -s  source address of the packet (IP or hostname)
      -d  destination address
      -i  input interface
      -o  output interface
    • Implicit extension matches

      Implicit condition  Requires  Option  Description
      -m tcp   -p tcp   --sport  source port
                        --dport  destination port
                        --tcp-flags  SYN ACK RST FIN
                        --syn  first handshake (new TCP connection)
      -m udp   -p udp   --sport  source port
                        --dport  destination port
      -m icmp  -p icmp  --icmp-type  8: echo-request, 0: echo-reply
    • Explicit extension matches

    Explicit condition  Option  Description
    -m state      --state  match the connection state
    -m multiport  --source-ports  multiple source ports
                  --destination-ports  multiple destination ports
                  --ports  source and destination ports
    -m limit      --limit  rate (packets per interval)
                  --limit-burst  burst size
    -m connlimit  --connlimit-above n  match when there are more than n connections
    -m iprange    --src-range ip-ip  source IP range
                  --dst-range ip-ip  destination IP range
    -m mac        --mac-source  match the source MAC address
    -m string     --algo [bm|kmp]  matching algorithm
                  --string "pattern"  string to match
    -m recent     --name  name of the address list
                  --rsource  match the source address
                  --rdest  match the destination address
                  --set  add the packet's source address to the list
                  --update  update the list entry on every new connection
                  --rcheck  check whether the address is in the list
                  --seconds  time window; used with --rcheck or --update
                  --hitcount  hit count; used with --rcheck or --update
                  --remove  remove the address from the list
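
As a concrete illustration of the explicit extensions listed above, the following hedged sketch combines `-m multiport` with `-m state`, and shows `-m iprange`; the port numbers and address range are arbitrary examples:

```shell
# Accept new and established TCP traffic to ports 22, 80 and 443 in a single rule
iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -m state --state NEW,ESTABLISHED -j ACCEPT

# Accept packets from a contiguous source IP range
iptables -A INPUT -m iprange --src-range 192.168.1.10-192.168.1.50 -j ACCEPT
```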
  • Target

    Ways of handling a packet include:

    ACCEPT:  let the packet through
    DROP:    discard the packet silently, with no response
    REJECT:  refuse the packet, sending the sender a response when required
    LOG:     write a log entry, then pass the packet on to the next rule
    QUEUE:   hand the packet over to user space
    RETURN:  stop traversing the rules of the current chain and return to the calling chain
    

Common iptables commands

  1. Flush the existing rules

    iptables -F

  2. List the rules

    iptables -L (or iptables -L -v -n)

  3. Append a rule to the end of a chain

    iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

  4. Insert a rule at a given position

    iptables -I INPUT 2 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

  5. Delete a rule

    iptables -D INPUT 2

  6. Replace a rule

    iptables -R INPUT 3 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

  7. Set the default policy

    iptables -P INPUT DROP

  8. Allow ssh connections from remote hosts

    iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT

    iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

  9. Allow ssh connections initiated by the local host

    iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

    iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT

  10. Allow HTTP requests

    iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

  11. Rate-limit pings to host 192.168.146.3 to an average of 2 per second, bursting to at most 3 packets

    iptables -A INPUT -i eth0 -d 192.168.146.3 -p icmp --icmp-type 8 -m limit --limit 2/second --limit-burst 3 -j ACCEPT

  12. Rate-limit ssh connections (with a default policy of DROP)

    iptables -I INPUT 1 -p tcp --dport 22 -d 192.168.146.3 -m state --state ESTABLISHED -j ACCEPT

    iptables -I INPUT 2 -p tcp --dport 22 -d 192.168.146.3 -m limit --limit 2/minute --limit-burst 2 -m state --state NEW -j ACCEPT
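
When editing rules by position, as in the delete, insert, and replace commands above, it helps to first list the rules together with their positions. A quick check:

```shell
# Show the INPUT chain with rule positions, in numeric form
iptables -L INPUT -n --line-numbers
```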

How to configure iptables correctly

  1. Flush the existing rules

    iptables -F
    
  2. Set the default chain policies

    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT DROP
    
  3. Allow ssh connections from remote hosts

    iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
    
  4. Allow ssh connections initiated by the local host

    iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
    
  5. Allow HTTP requests

    iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
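
One caveat worth adding to the recipe above: with all three default policies set to DROP, traffic on the loopback interface is also blocked, which breaks many local services. A common addition (not part of the original recipe) is to allow loopback traffic first:

```shell
# Allow all traffic on the loopback interface
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
```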
    

Using iptables to defend against common attacks

  1. Mitigating SYN floods

    • Rate-limit SYN requests. Pick a reasonable rate, or legitimate requests will be affected as well:

      iptables -N syn-flood  # create a new chain
      iptables -A INPUT -p tcp --syn -j syn-flood  # send incoming SYN packets to the syn-flood chain
      iptables -A syn-flood -m limit --limit 1/s --limit-burst 4 -j RETURN
      iptables -A syn-flood -j DROP
      
    • Limit the number of concurrent SYN connections from a single IP:

      iptables -A INPUT -i eth0 -p tcp --syn -m connlimit --connlimit-above 15 -j DROP
      
  2. Mitigating DoS attacks

    Using the recent module to defend against DoS attacks:

    iptables -I INPUT -p tcp --dport 22 -m connlimit --connlimit-above 3 -j DROP  # at most 3 concurrent sessions per IP
    iptables -I INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH   # add every new connection attempt to the SSH list
    iptables -I INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 300 --hitcount 3 --name SSH -j DROP  # if an IP makes 3 attempts within 5 minutes, refuse it ssh service; access is restored 5 minutes later
    
  3. Limiting excessive connections from a single IP

    iptables -I INPUT -p tcp --dport 80 -m connlimit --connlimit-above 30 -j DROP
    
  4. Blocking trojan call-backs (reverse connections)

    iptables -A OUTPUT -m state --state NEW -j DROP
    
  5. Mitigating ping floods

    iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/m -j ACCEPT
    

Screen Usage

Do you often log in to remote Linux servers? Do long-running tasks give you headaches? Still using nohup? Then take a look at screen. It has a surprise in store for you!

Screen is a window manager that multiplexes a single physical terminal between several processes. Screen has the notion of a session: within one screen session you can create multiple screen windows, and working in each window feels just like working in a real telnet/SSH terminal.

1. Screen features

  • Session resume
  • Session sharing
  • Multiple windows

2. Screen options

  • screen -ls: list running screens
  • screen -S name: start a screen named name
  • -d: detach the specified screen session
  • screen -r name-or-pid: reattach a previously detached screen
  • screen -d -r name: forcibly take over an existing screen
  • screen -x name: attach to a screen that is still attached; one person can operate while another watches everything they do
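
A typical session built from the options above might look like this; the session name `build` is an arbitrary example:

```shell
screen -S build          # start a new session named "build"
# ... start a long-running job, then press C-a d to detach ...
screen -ls               # list sessions; "build" shows as Detached
screen -r build          # reattach to the session later
```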

3. Managing multiple windows

Inside a screen session, every command starts with Ctrl+a:
  • C-a c ==> create a new shell window in the current screen
  • C-a n ==> switch to the next window
  • C-a p ==> switch to the previous window
  • C-a 0...9 ==> switch to window 0...9
  • C-a [space] ==> cycle from window 0 through window 9
  • C-a C-a ==> toggle between the two most recently used windows
  • C-a x ==> lock the current window; a password is required to unlock it
  • C-a d ==> detach from the current session
  • C-a w ==> show the list of all windows
  • C-a t ==> show the current time and system load
  • C-a k ==> force-close the current window
  • C-a S ==> split the screen horizontally
  • C-a [TAB] ==> move to the next region; after splitting, create a window there with C-a c before use

Indices and tables
