Ansible

Ansible training provides a complete understanding of playbooks, configuration management, deployment, and orchestration. We'll learn how to install Ansible and cover some basic concepts, go over how to execute ad-hoc commands in parallel across your nodes using /usr/bin/ansible, and see what sort of modules are available in Ansible's core.


Content

Installation

Getting Started

Inventory

Playbooks

Task Control

YAML

Modules & Plugins

Developing Modules

Developing Plugins

Network Modules


Introduction

Before we dive into the really fun parts – playbooks, configuration management, deployment, and orchestration, we’ll learn how to get Ansible installed and some basic concepts. We’ll go over how to execute ad-hoc commands in parallel across your nodes using /usr/bin/ansible. We’ll also see what sort of modules are available in Ansible’s core.


Installation

GETTING ANSIBLE

You may also wish to follow the GitHub project if you have a GitHub account. This is also where we keep the issue tracker for sharing bugs and feature ideas.

Basics / What Will Be Installed

  • Ansible by default manages machines over the SSH protocol.
  • Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there’s no real question about how to upgrade Ansible when moving to a new version.


CONTROL MACHINE REQUIREMENTS

Currently Ansible can be run from any machine with Python 2.6 or 2.7 installed (Windows isn’t supported for the control machine).

This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.


Installing the Control Machine

To configure the PPA on your Ubuntu machine and install Ansible, run these commands:

  • $ sudo apt-get install software-properties-common
  • $ sudo apt-add-repository ppa:ansible/ansible
  • $ sudo apt-get update
  • $ sudo apt-get install ansible
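If you are not using the Ubuntu PPA, Ansible can also be installed with pip, assuming pip is already available on the control machine:

  • $ sudo pip install ansible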


FOREWORD

Now that you’ve read Installation and installed Ansible, it’s time to dig in and get started with some commands.

What we are showing first is not the powerful configuration/deployment/orchestration features of Ansible, which are handled by playbooks.

Playbooks are covered in a separate section.

This section is about how to get going initially. Once you have these concepts down, read Introduction To Ad-Hoc Commands for more detail.


REMOTE CONNECTION INFORMATION

Before we get started, it’s important to understand how Ansible is communicating with remote machines over SSH.

By default, Ansible 1.3 and later will try to use native OpenSSH for remote communication when possible. This enables ControlPersist (a performance feature), Kerberos, and options in ~/.ssh/config such as Jump Host setup.


YOUR FIRST COMMANDS

Now that you’ve installed Ansible, it’s time to get started with some basics.

Edit (or create) /etc/ansible/hosts and put one or more remote systems in it, for which you have your SSH key in authorized_keys:

192.168.1.50

aserver.example.org

bserver.example.org

This is an inventory file, which is also explained in greater depth here: Inventory.


We’ll assume you are using SSH keys for authentication. To set up SSH agent to avoid retyping passwords, you can do:

$ ssh-agent bash

$ ssh-add ~/.ssh/id_rsa

(Depending on your setup, you may wish to use Ansible’s --private-key option to specify a pem file instead)

Now ping all your nodes:

$ ansible all -m ping

Ansible will attempt to connect to the machines remotely using your current user name, just like SSH would. To override the remote user name, just use the ‘-u’ parameter.


If you would like to access sudo mode, there are also flags to do that:

# as bruce

$ ansible all -m ping -u bruce

# as bruce, sudoing to root

$ ansible all -m ping -u bruce --sudo

# as bruce, sudoing to batman

$ ansible all -m ping -u bruce --sudo --sudo-user batman

(The sudo implementation is changeable in Ansible’s configuration file if you happen to want to use a sudo replacement.

Flags passed to sudo (like -H) can also be set there.)


Now run a live command on all of your nodes:

$ ansible all -a "/bin/echo hello"

Congratulations. You’ve just contacted your nodes with Ansible. It’s soon going to be time to read some of the more real-world Introduction To Ad-Hoc Commands, and explore what you can do with different modules, as well as the Ansible Playbooks language. Ansible is not just about running commands, it also has powerful configuration management and deployment features. There’s more to explore, but you already have a fully working infrastructure!


HOST KEY CHECKING

Ansible 1.2.1 and later have host key checking enabled by default. If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected. If a host is not initially in ‘known_hosts’ this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this.

If you understand the implications and wish to disable this behavior, you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg:

[defaults]

host_key_checking = False

Alternatively this can be set by an environment variable:

$ export ANSIBLE_HOST_KEY_CHECKING=False


Also note that host key checking in paramiko mode is reasonably slow, therefore switching to ‘ssh’ is also recommended when using this feature.

Ansible will log some information about module arguments on the remote system in the remote syslog, unless a task or play is marked with a “no_log: True” attribute. This is explained later.

To enable basic logging on the control machine see Configuration file document and set the ‘log_path’ configuration file setting. Enterprise users may also be interested in Ansible Tower. Tower provides a very robust database logging feature where it is possible to drill down and see history based on hosts, projects, and particular inventories over time – explorable both graphically and through a REST API.
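For example, to turn on basic logging you might add the following to ansible.cfg on the control machine (the log file location shown here is just an illustration):

[defaults]
log_path = /var/log/ansible.log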


INVENTORY

Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of systems listed in Ansible’s inventory file, which defaults to being saved in the location /etc/ansible/hosts.

Not only is this inventory configurable, but you can also use multiple inventory files at the same time (explained below) and also pull inventory from dynamic or cloud sources, as described in Dynamic Inventory.


HOSTS AND GROUPS

The format for /etc/ansible/hosts is an INI-like format and looks like this:

mail.example.com

[webservers]

foo.example.com

bar.example.com

[dbservers]

one.example.com

two.example.com

three.example.com


The things in brackets are group names, which are used in classifying systems and deciding what systems you are controlling at what times and for what purpose.

It is ok to put systems in more than one group, for instance a server could be both a webserver and a dbserver. If you do, note that variables will come from all of the groups they are a member of, and variable precedence is detailed in a later chapter.
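For example, reusing the groups from the inventory above, a host that is both a webserver and a dbserver simply appears in both sections:

[webservers]
foo.example.com

[dbservers]
foo.example.com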

If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon.

Ports listed in your SSH config file won’t be used with the paramiko connection but will be used with the openssh connection.


To make things explicit, it is suggested that you set them if things are not running on the default port:

badwolf.example.com:5309 

Suppose you have just static IPs and want to set up some aliases that don’t live in your host file, or you are connecting through tunnels. You can do things like this:

jumper ansible_ssh_port=5555 ansible_ssh_host=192.168.1.50

In the above example, running Ansible against the host alias “jumper” (which may not even be a real hostname) will contact 192.168.1.50 on port 5555. Note that this is using a feature of the inventory file to define some special variables. Generally speaking this is not the best way to define variables that describe your system policy, but we’ll share suggestions on doing this later. We’re just getting started.


Adding a lot of hosts? If you have a lot of hosts following similar patterns you can do this rather than listing each hostname:

[webservers]

www[01:50].example.com

For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define alphabetic ranges:

[databases]

db-[a:f].example.com

You can also select the connection type and user on a per host basis:

[targets]

localhost ansible_connection=local

other1.example.com ansible_connection=ssh ansible_ssh_user=mpdehaan

other2.example.com ansible_connection=ssh ansible_ssh_user=mdehaan

As mentioned above, setting these in the inventory file is only a shorthand, and we’ll discuss how to store them in individual files in the ‘host_vars’ directory a bit later on.


HOST VARIABLES

As alluded to above, it is easy to assign variables to hosts that will be used later in playbooks:

[atlanta]

host1 http_port=80 maxRequestsPerChild=808

host2 http_port=303 maxRequestsPerChild=909


GROUP VARIABLES

Variables can also be applied to an entire group at once:

[atlanta]

host1

host2

[atlanta:vars]

ntp_server=ntp.atlanta.example.com

proxy=proxy.atlanta.example.com


GROUPS OF GROUPS, AND GROUP VARIABLES

It is also possible to make groups of groups and assign variables to groups. These variables can be used by /usr/bin/ansible-playbook, but not /usr/bin/ansible:

[atlanta]

host1

host2

[raleigh]

host2

host3

[southeast:children]

atlanta

raleigh

[southeast:vars]


some_server=foo.southeast.example.com

halon_system_timeout=30

self_destruct_countdown=60

escape_pods=2

[usa:children]

southeast

northeast

southwest

northwest

If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory file, see the next section.


SPLITTING OUT HOST and GROUP SPECIFIC DATA

The preferred practice in Ansible is actually not to store variables in the main inventory file.

In addition to storing variables directly in the INI file, host and group variables can be stored in individual files relative to the inventory file.

These variable files are in YAML format. See YAML Syntax if you are new to YAML.

Assuming the inventory file path is:

/etc/ansible/hosts


If the host is named ‘foosball’, and in groups ‘raleigh’ and ‘webservers’, variables in YAML files at the following locations will be made available to the host:

/etc/ansible/group_vars/raleigh

/etc/ansible/group_vars/webservers

/etc/ansible/host_vars/foosball

For instance, suppose you have hosts grouped by datacenter, and each datacenter uses some different servers. The data in the groupfile ‘/etc/ansible/group_vars/raleigh’ for the ‘raleigh’ group might look like:

---

ntp_server: acme.example.org

database_server: storage.example.org
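Because these files are plain YAML, they can also hold the lists and hashes mentioned earlier; a minimal sketch (the variable names here are illustrative):

---
ntp_servers:
  - ntp1.atlanta.example.com
  - ntp2.atlanta.example.com
database:
  host: storage.example.org
  port: 5432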


It is ok if these files do not exist, as this is an optional feature.

  • Tip: In Ansible 1.2 or later the group_vars/ and host_vars/ directories can exist in either the playbook directory OR the inventory directory. If both paths exist, variables in the playbook directory will be loaded second and will override variables set in the inventory directory.
  • Tip: Keeping your inventory file and variables in a git repo (or other version control) is an excellent way to track changes to your inventory and host variables.


List of Behavioral Inventory Parameters

As alluded to above, setting the following variables controls how ansible interacts with remote hosts. Some we have already mentioned:

  • ansible_ssh_host

The name of the host to connect to, if different from the alias you wish to give to it.

  • ansible_ssh_port

The ssh port number, if not 22

  • ansible_ssh_user

The default ssh user name to use.

  • ansible_ssh_pass

The ssh password to use (this is insecure, we strongly recommend using --ask-pass or SSH keys)

  • ansible_sudo_pass

The sudo password to use (this is insecure, we strongly recommend using --ask-sudo-pass)

  • ansible_connection


Connection type of the host. Candidates are local, ssh or paramiko. The default is paramiko before Ansible 1.2, and 'smart' afterwards, which detects whether usage of 'ssh' would be feasible based on whether ControlPersist is supported.

  • ansible_ssh_private_key_file

Private key file used by ssh. Useful if using multiple keys and you don’t want to use SSH agent.

  • ansible_shell_type

The shell type of the target system. Commands are formatted using 'sh'-style syntax by default. Setting this to 'csh' or 'fish' will cause commands executed on target systems to follow those shells' syntax instead.

  • ansible_python_interpreter


The target host python path. This is useful for systems with more than one Python or not located at "/usr/bin/python" such as *BSD, or where /usr/bin/python is not a 2.X series Python. We do not use the "/usr/bin/env" mechanism as that requires the remote user’s path to be set right and also assumes the "python" executable is named python, where the executable might be named something like "python26".

  • ansible_*_interpreter

Works for anything such as ruby or perl and works just like ansible_python_interpreter.

This replaces shebang of modules which will run on that host.

Examples from a host file:

  • some_host ansible_ssh_port=2222 ansible_ssh_user=manager
  • aws_host ansible_ssh_private_key_file=/home/example/.ssh/aws.pem
  • freebsd_host ansible_python_interpreter=/usr/local/bin/python
  • ruby_module_host ansible_ruby_interpreter=/usr/bin/ruby.1.9.3


DYNAMIC INVENTORY

  • Often a user of a configuration management system will want to keep inventory in a different software system. Ansible provides a basic text-based system as described in Inventory, but what if you want to use something else?
  • Frequent examples include pulling inventory from a cloud provider, LDAP, Cobbler, or a piece of expensive enterprisey CMDB software.
  • Ansible easily supports all of these options via an external inventory system. The plugins directory contains some of these already – including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack (see the example after this list). Ansible Tower also provides a database to store inventory results that is both web and REST accessible.
  • Tower syncs with all Ansible dynamic inventory sources you might be using, and also includes a graphical inventory editor. By having a database record of all of your hosts, it’s easy to correlate past event history and see which ones have had failures on their last playbook runs.
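For example, assuming you have saved one of those inventory scripts locally as ec2.py and made it executable, you can point Ansible at it with -i exactly as you would a static inventory file:

$ ansible -i ec2.py all -m ping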


PLAYBOOKS

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.

If Ansible modules are the tools in your workshop, playbooks are your design plans. At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts, interacting with monitoring servers and load balancers along the way.

While there’s a lot of information here, there’s no need to learn everything at once. You can start small and pick up more features over time as you need them.

Playbooks are designed to be human-readable and are developed in a basic text language. There are multiple ways to organize playbooks and the files they include, and we’ll offer up some suggestions on that and making the most out of Ansible.


About Playbooks

Playbooks are a completely different way to use Ansible than in ad-hoc task execution mode, and are particularly powerful.

Simply put, playbooks are the basis for a really simple configuration management and multi-machine deployment system, unlike any that already exist, and one that is very well suited to deploying complex applications.

Playbooks can declare configurations, but they can also orchestrate steps of any manual ordered process, even as different steps must bounce back and forth between sets of machines in particular orders. They can launch tasks synchronously or asynchronously.

While you might run the main /usr/bin/ansible program for ad-hoc tasks, playbooks are more likely to be kept in source control and used to push out your configuration or assure the configurations of your remote systems are in spec.


PLAYBOOK LANGUAGE EXAMPLE

Playbooks are expressed in YAML format (see YAML Syntax) and have a minimum of syntax, which intentionally tries to not be a programming language or script, but rather a model of a configuration or a process. The goal of a play is to map a group of hosts to some well defined roles, represented by things ansible calls tasks. At a basic level, a task is nothing more than a call to an ansible module.

By composing a playbook of multiple ‘plays’, it is possible to orchestrate multi-machine deployments, running certain steps on all machines in the webservers group, then certain steps on the database server group, then more commands back on the webservers group, etc.


For starters, here’s a playbook that contains just one play:

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
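A playbook can also contain more than one play, which is how multi-machine deployments are sequenced. A minimal two-play sketch (the dbservers group and its task are illustrative):

---
- hosts: webservers
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest

- hosts: dbservers
  remote_user: root
  tasks:
  - name: ensure postgresql is at the latest version
    yum: pkg=postgresql state=latest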


For each play in a playbook, you get to choose which machines in your infrastructure to target and what remote user to complete the steps (called tasks) as.

The hosts line is a list of one or more groups or host patterns, separated by colons, as described in the Patterns documentation. The remote_user is just the name of the user account:

---
- hosts: webservers
  remote_user: root

Note: The remote_user parameter was formerly called just user. It was renamed in Ansible 1.4 to make it more distinguishable from the user module (used to create users on remote systems).


Remote users can also be defined per task:

---
- hosts: webservers
  remote_user: root
  tasks:
  - name: test connection
    ping:
    remote_user: yourname

Note: The remote_user parameter for tasks was added in 1.4. Support for running things from sudo is also available:

---
- hosts: webservers
  remote_user: yourname
  sudo: yes

You can also use sudo on a particular task instead of the whole play: 

---
- hosts: webservers
  remote_user: yourname
  tasks:
    - service: name=nginx state=started
      sudo: yes


You can also log in as yourself and then sudo to users other than root:

---
- hosts: webservers
  remote_user: yourname
  sudo: yes
  sudo_user: postgres

If you need to specify a password to sudo, run ansible-playbook with --ask-sudo-pass (-K). If you run a sudo playbook and the playbook seems to hang, it’s probably stuck at the sudo prompt. Just Control-C to kill it and run it again with -K.
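For example (the playbook file name here is illustrative):

$ ansible-playbook playbook.yml --ask-sudo-pass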

Important: When using sudo_user to a user other than root, the module arguments are briefly written into a random tempfile in /tmp. These are deleted immediately after the command is executed. This only occurs when sudoing from a user like ‘bob’ to ‘timmy’, not when going from ‘bob’ to ‘root’, or logging in directly as ‘bob’ or ‘root’. If it concerns you that this data is briefly readable (not writable), avoid transferring unencrypted passwords with sudo_user set. In other cases, ‘/tmp’ is not used and this does not come into play. Ansible also takes care to not log password parameters.


Tasks list

Each play contains a list of tasks. Tasks are executed in order, one at a time, against all machines matched by the host pattern, before moving on to the next task. It is important to understand that, within a play, all hosts are going to get the same task directives. It is the purpose of a play to map a selection of hosts to tasks.

When running the playbook, which runs top to bottom, hosts with failed tasks are taken out of the rotation for the entire playbook. If things fail, simply correct the playbook file and rerun.

The goal of each task is to execute a module, with very specific arguments. Variables, as mentioned above, can be used in arguments to modules.


Modules are ‘idempotent’, meaning if you run them again, they will make only the changes they must in order to bring the system to the desired state. This makes it very safe to rerun the same playbook multiple times. They won’t change things unless they have to change things.

The command and shell modules will typically rerun the same command again, which is totally ok if the command is something like ‘chmod’ or ‘setsebool’, etc. There is, however, a ‘creates’ flag available which can be used to make these modules idempotent as well.

Every task should have a name, which is included in the output from running the playbook. This is output for humans, so it is nice to have reasonably good descriptions of each task step. If the name is not provided though, the string fed to ‘action’ will be used for output.


Tasks can be declared using the legacy “action: module options” format, but it is recommended that you use the more conventional “module: options” format. This recommended format is used throughout the documentation, but you may encounter the older format in some playbooks.

Here is what a basic task looks like. As with most modules, the service module takes key=value arguments:

tasks:
- name: make sure apache is running
  service: name=httpd state=running

The command and shell modules are the only modules that just take a list of arguments and don’t use the key=value form. This makes them work as simply as you would expect:

tasks:
- name: disable selinux
  command: /sbin/setenforce 0


The command and shell modules care about return codes, so if you have a command whose successful exit code is not zero, you may wish to do this:

tasks:
- name: run this command and ignore the result
  shell: /usr/bin/somecommand || /bin/true

Or this:

tasks:
- name: run this command and ignore the result
  shell: /usr/bin/somecommand
  ignore_errors: True


If the action line is getting too long for comfort you can break it on a space and indent any continuation lines:

tasks:
- name: Copy ansible inventory file to client
  copy: src=/etc/ansible/hosts dest=/etc/ansible/hosts
        owner=root group=root mode=0644

Variables can be used in action lines. Suppose you defined a variable called ‘vhost’ in the ‘vars’ section; you could do this:

tasks:
- name: create a virtual host file for {{ vhost }}
  template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}

Those same variables are usable in templates, which we’ll get to later.
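As a small illustration of how the same variables show up inside a template, the somefile.j2 referenced above might contain something like this (the Apache directives here are purely illustrative):

<VirtualHost *:80>
    ServerName {{ vhost }}
    DocumentRoot /var/www/{{ vhost }}
</VirtualHost>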


Action Shorthand

New in version 0.8. Ansible prefers listing modules like this in 0.8 and later:

template: src=templates/foo.j2 dest=/etc/foo.conf

You will notice in earlier versions, this was only available as:

action: template src=templates/foo.j2 dest=/etc/foo.conf

The old form continues to work in newer versions without any plan of deprecation.


Handlers: Running Operations On Change

As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.

These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.

For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config file, but apache will only be bounced once to avoid unnecessary restarts.


Here’s an example of restarting two services when the contents of a file change, but only if the file changes:

- name: template configuration file
  template: src=template.j2 dest=/etc/foo.conf
  notify:
  - restart memcached
  - restart apache

The things listed in the ‘notify’ section of a task are called handlers.

Handlers are lists of tasks, not really any different from regular tasks, that are referenced by name. Handlers are what notifiers notify. If nothing notifies a handler, it will not run. Regardless of how many things notify a handler, it will run only once, after all of the tasks complete in a particular play.


Here’s an example handlers section:

handlers:
- name: restart memcached
  service: name=memcached state=restarted
- name: restart apache
  service: name=apache state=restarted

Handlers are best used to restart services and trigger reboots. You probably won’t need them for much else.

Note: Notify handlers are always run in the order written.

Roles are described later on. It’s worthwhile to point out that handlers are automatically processed between ‘pre_tasks’, ‘roles’, ‘tasks’, and ‘post_tasks’ sections. If you ever want to flush all the handler commands immediately though, you can do this:

tasks:
- shell: some tasks go here
- meta: flush_handlers
- shell: some other tasks

In the above example any queued up handlers would be processed early when the ‘meta’ statement was reached. This is a bit of a niche case but can come in handy from time to time.


Executing A Playbook

Now that you’ve learned playbook syntax, how do you run a playbook? It’s simple. Let’s run a playbook using a parallelism level of 10:

ansible-playbook playbook.yml -f 10  
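Before running a playbook for real, you can also ask ansible-playbook to check the syntax or list the hosts it would touch:

ansible-playbook playbook.yml --syntax-check

ansible-playbook playbook.yml --list-hosts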


YAML SYNTAX

We use YAML because it is easier for humans to read and write than other common data formats like XML or JSON.

Further, there are libraries available in most programming languages for working with YAML.

You may also wish to read Playbooks at the same time to see how this is used in practice.


YAML Basics

For Ansible, nearly every YAML file starts with a list. Each item in the list is a list of key/value pairs, commonly called a “hash” or a “dictionary”. So, we need to know how to write lists and dictionaries in YAML.

There’s another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) should begin with ---. This is part of the YAML format and indicates the start of a document.

All members of a list are lines beginning at the same indentation level starting with a - (dash) character:


---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango

A dictionary is represented in a simple key: and value form:

---
# An employee record
name: Example Developer
job: Developer
skill: Elite

Dictionaries can also be represented in an abbreviated form if you really want to:


---
# An employee record
{name: Example Developer, job: Developer, skill: Elite}

Ansible doesn’t really use these too much, but you can also specify a boolean value (true/false) in several forms:

---
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false


Let’s combine what we learned so far in an arbitrary YAML example. This really has nothing to do with Ansible, but will give you a feel for the format:

---
# An employee record
name: Example Developer
job: Developer
skill: Elite
employed: True
foods:
    - Apple
    - Orange
    - Strawberry
    - Mango
languages:
    ruby: Elite
    python: Elite
    dotnet: Lame

That’s all you really need to know about YAML to start writing Ansible playbooks.


Developing Modules

Ansible modules are reusable, standalone scripts that can be used by the Ansible API, or by the ansible or ansible-playbook programs. They return information to ansible by printing a JSON string to stdout before exiting.


Building A Simple Module

Let’s build a very-basic module to get and set the system time. For starters, let’s build a module that just outputs the current time.

We are going to use Python here but any language is possible. Only File I/O and outputting to standard out are required. So, bash, C++, clojure, Python, Ruby, whatever you want is fine.

Now Python Ansible modules contain some extremely powerful shortcuts (that all the core modules use), but first we are going to build a module the very hard way. The reason we do this is because modules written in any language OTHER than Python are going to have to do exactly this.

So, here’s an example. You would never really need to build a module to set the system time; the ‘command’ module could already be used to do this.


Reading the modules that come with Ansible (linked above) is a great way to learn how to write modules. Keep in mind, though, that some modules in Ansible’s source tree are internalisms, so look at service - Manage services. or yum - Manages packages with the yum package manager, and don’t stare too close into things like async_wrapper or you’ll turn to stone. Nobody ever executes async_wrapper directly.

Ok, let’s get going with an example. We’ll use Python. For starters, save this as a file named timetest.py

#!/usr/bin/python

import datetime
import json

date = str(datetime.datetime.now())
print(json.dumps({
    "time": date
}))


Testing Your Module

There’s a useful test script in the source checkout for Ansible:

git clone git://github.com/ansible/ansible.git --recursive

source ansible/hacking/env-setup

Let’s run the script you just wrote with that:

ansible/hacking/test-module -m ./timetest.py

You should see output that looks something like this:

{"time": "2012-03-14 22:13:48.539183"}


Developing Plugins

Plugins are pieces of code that augment Ansible’s core functionality. Ansible ships with a number of handy plugins, and you can easily write your own.

The following types of plugins are available:

  • Action plugins are front ends to modules and can execute actions on the controller before calling the modules themselves.
  • Cache plugins are used to keep a cache of ‘facts’ to avoid costly fact-gathering operations.
  • Callback plugins enable you to hook into Ansible events for display or logging purposes.
  • Connection plugins define how to communicate with inventory hosts.
  • Filter plugins allow you to manipulate data inside Ansible plays and/or templates. This is a Jinja2 feature; Ansible ships extra filter plugins.
  • Lookup plugins are used to pull data from an external source. These are implemented using a custom Jinja2 function.


  • Strategy plugins control the flow of a play and execution logic.
  • Shell plugins deal with low-level commands and formatting for the different shells Ansible can encounter on remote hosts.
  • Test plugins allow you to validate data inside Ansible plays and/or templates. This is a Jinja2 feature; Ansible ships extra test plugins.
  • Vars plugins inject additional variable data into Ansible runs that did not come from an inventory, playbook, or the command line.


Types of Plugins

Callback Plugins

Callback plugins enable adding new behaviors to Ansible when responding to events. By default, callback plugins control most of the output you see when running the command line programs.

Example Callback Plugins

  • Ansible comes with a number of callback plugins that you can look at for examples. These can be found in lib/ansible/plugins/callback.
  • The log_plays callback is an example of how to intercept playbook events to a log file, and the mail callback sends email when playbooks complete.
  • The osx_say callback provided is particularly entertaining – it will respond with computer synthesized speech on OS X in relation to playbook events, and is guaranteed to entertain and/or annoy coworkers.


Configuring Callback Plugins

You can activate a custom callback by either dropping it into a callback_plugins directory adjacent to your play or inside a role, or by putting it in one of the callback directory sources configured in ansible.cfg. Plugins are loaded in alphanumeric order; for example, a plugin implemented in a file named 1_first.py would run before a plugin file named 2_second.py.

Most callbacks shipped with Ansible are disabled by default and need to be whitelisted in your ansible.cfg file in order to function. For example:

#callback_whitelist = timer, mail, mycallbackplugin


Managing stdout

You can only have one plugin be the main manager of your console output. If you want to replace the default, you should define CALLBACK_TYPE = stdout in the subclass and then configure the stdout plugin in ansible.cfg. For example:

#stdout_callback = mycallbackplugin


Developing Callback Plugins

Callback plugins are created by creating a new class with the CallbackBase class as the parent:

from ansible.plugins.callback import CallbackBase
from ansible import constants as C

class CallbackModule(CallbackBase):
    pass

From there, override the specific methods from the CallbackBase that you want to provide a callback for. For plugins intended for use with Ansible version 2.0 and later, you should only override methods that start with v2. For a complete list of methods that you can override, please see __init__.py in the lib/ansible/plugins/callback directory.


The following example shows how Ansible’s timer plugin is implemented:

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from datetime import datetime

from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    """
    This callback module tells you how long your plays ran for.
    """
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'timer'
    CALLBACK_NEEDS_WHITELIST = True


    def __init__(self):
        super(CallbackModule, self).__init__()
        self.start_time = datetime.now()

    def days_hours_minutes_seconds(self, runtime):
        minutes = (runtime.seconds // 60) % 60
        r_seconds = runtime.seconds - (minutes * 60)
        return runtime.days, runtime.seconds // 3600, minutes, r_seconds

    def playbook_on_stats(self, stats):
        self.v2_playbook_on_stats(stats)

    def v2_playbook_on_stats(self, stats):
        end_time = datetime.now()
        runtime = end_time - self.start_time
        self._display.display("Playbook run took %s days, %s hours, %s minutes, %s seconds" % (self.days_hours_minutes_seconds(runtime)))

Note that the CALLBACK_VERSION and CALLBACK_NAME definitions are required for properly functioning plugins for Ansible >=2.0.


Connection Plugins

By default, Ansible ships with a ‘paramiko’ SSH, native ssh (just called ‘ssh’), ‘local’ connection type, and there are also some minor players like ‘chroot’ and ‘jail’. All of these can be used in playbooks and with /usr/bin/ansible to decide how you want to talk to remote machines. Should you want to extend Ansible to support other transports (SNMP, Message bus, etc) it’s as simple as copying the format of one of the existing modules and dropping it into the connection plugins directory. The value of ‘smart’ for a connection allows selection of paramiko or openssh based on system capabilities, and chooses ‘ssh’ if OpenSSH supports ControlPersist, in Ansible 1.2.1 and later. Previous versions did not support ‘smart’. 

More documentation on writing connection plugins is pending, though you can jump into lib/ansible/plugins/connection and figure things out pretty easily.


Lookup Plugins

Lookup plugins are used to pull in data from external data stores. Lookup plugins can be used within playbooks for both looping - playbook language constructs like “with_fileglob” and “with_items” are implemented via lookup plugins - and to return values into a variable or parameter.

Here’s a simple lookup plugin implementation - this lookup returns the contents of a text file as a variable:

from ansible.errors import AnsibleError, AnsibleParserError
from ansible.plugins.lookup import LookupBase

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class LookupModule(LookupBase):

    def run(self, terms, variables=None, **kwargs):

        ret = []

        for term in terms:
            display.debug("File lookup term: %s" % term)

            # Find the file in the expected search path
            lookupfile = self.find_file_in_search_path(variables, 'files', term)
            display.vvvv(u"File lookup using %s as file" % lookupfile)
            try:
                if lookupfile:
                    contents, show_data = self._loader._get_file_contents(lookupfile)
                    ret.append(contents.rstrip())
                else:
                    raise AnsibleParserError()
            except AnsibleParserError:
                raise AnsibleError("could not locate file in lookup: %s" % term)

        return ret


An example of how this lookup is called:

---
- hosts: all
  vars:
     contents: "{{ lookup('file', '/etc/foo.txt') }}"
  tasks:
     - debug: msg="the value of foo.txt is {{ contents }} as seen today {{ lookup('pipe', 'date +\"%Y-%m-%d\"') }}"

Errors encountered during execution should be returned by raising AnsibleError() with a message describing the error. Any strings returned by your lookup plugin implementation that could ever contain non-ASCII characters must be converted into Python’s unicode type because the strings will be run through jinja2. To do this, you can use:

from ansible.module_utils._text import to_text

result_string = to_text(result_string)

For more example lookup plugins, check out the source code for the lookup plugins that are included with Ansible here: lib/ansible/plugins/lookup.

For usage examples of lookup plugins, see Using Lookups.


Vars Plugins

Playbook constructs like ‘host_vars’ and ‘group_vars’ work via ‘vars’ plugins. They inject additional variable data into ansible runs that did not come from an inventory, playbook, or command line. Note that variables can also be returned from inventory, so in most cases, you won’t need to write or understand vars_plugins.

More documentation on writing vars plugins is pending, though you can jump into lib/ansible/inventory/vars_plugins and figure things out pretty easily.

If you find yourself wanting to write a vars_plugin, it’s more likely you should write an inventory script instead.


Filter Plugins

Filter plugins are used for manipulating data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the template module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the filter plugins shipped with Ansible reside in a core.py.

See lib/ansible/plugins/filter for details.
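A minimal custom filter plugin looks like this; the file name and filter name below are purely illustrative. Dropped into a filter_plugins/ directory adjacent to your playbook, the filter becomes usable in plays and templates:

# filter_plugins/my_filters.py
def reverse_string(value):
    # Return the input string reversed
    return value[::-1]

class FilterModule(object):
    # Ansible looks for a FilterModule class exposing a filters() dict
    def filters(self):
        return {'reverse_string': reverse_string}

It could then be used in a template or play as {{ 'hello' | reverse_string }}.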


Test Plugins

Test plugins are for verifying data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the template module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the test plugins shipped with Ansible reside in a core.py. These are especially useful in conjunction with some filter plugins like map and select; they are also available for conditional directives like when:

See lib/ansible/plugins/test for details.


Distributing Plugins

Plugins are loaded from the library installed path and the configured plugins directory (check your ansible.cfg). The location can vary depending on how you installed Ansible (pip, rpm, deb, etc) or by the OS/Distribution/Packager. Plugins are automatically loaded when you have one of the following subfolders adjacent to your playbook or inside a role:

  • action_plugins
  • lookup_plugins
  • callback_plugins
  • connection_plugins
  • filter_plugins
  • strategy_plugins
  • cache_plugins
  • test_plugins
  • shell_plugins

When shipped as part of a role, the plugin will be available as soon as the role is called in the play.


Networking Support

Working with Networking Devices

Starting with Ansible version 2.1, you can now use the familiar Ansible models of playbook authoring and module development to manage heterogeneous networking devices. Ansible supports a growing number of network devices using both CLI over SSH and API (when available) transports.


Available Networking Modules

Most standard Ansible modules are designed to work with Linux/Unix or Windows machines and will not work with networking devices. Some modules (including “slurp”, “raw”, and “setup”) are platform-agnostic and will work with networking devices. Ansible also ships a number of network-related modules, including:

  • cloudflare_dns - manage Cloudflare DNS records
  • dnsimple - Interface with dnsimple.com (a DNS hosting service)
  • dnsmadeeasy - Interface with dnsmadeeasy.com (a DNS hosting service)
  • haproxy - Enable, disable, and set weights for HAProxy backend servers using socket commands
  • ipify_facts - Retrieve the public IP of your internet gateway
  • ipinfoio_facts - Retrieve IP geolocation facts of a host’s IP address
  • ldap_attr - Add or remove LDAP attribute values
  • ldap_entry - Add or remove LDAP entries
  • lldp - get details reported by lldp
  • nmcli - Manage Networking
  • nsupdate - Manage DNS records
  • omapi_host - Setup OMAPI hosts
  • snmp_facts - Retrieve facts for a device using SNMP


Connecting to Networking Devices

All core networking modules implement a provider argument, which is a collection of arguments used to define the characteristics of how to connect to the device. This section will assist in understanding how the provider argument is used.

Each core network module supports an underlying operating system and transport. The operating system is a one-to-one match with the module, and the transport maintains a one-to-many relationship to the operating system as appropriate. Some network operating systems only have a single transport option.


Each core network module supports some basic arguments for configuring the transport:

  • host - defines the hostname or IP address of the remote host
  • port - defines the port to connect to
  • username - defines the username to use to authenticate the connection
  • password - defines the password to use to authenticate the connection
  • transport - defines the type of connection transport to build
  • authorize - enables privilege escalation for devices that require it
  • auth_pass - defines the password, if needed, for privilege escalation

Individual modules can set defaults for these arguments to common values that match device default configuration settings. For instance, the default value for transport is universally ‘cli’. Some modules support other values such as EOS (eapi) and NXOS (nxapi), while some only support ‘cli’. All arguments are fully documented for each module.


By allowing individual tasks to set the transport arguments independently, modules that use different transport mechanisms and authentication credentials can be combined as necessary.

One downside to this approach is that every task needs to include the required arguments. This is where the provider argument comes into play. The provider argument accepts keyword arguments and passes them through to the task to assign connection and authentication parameters.


The following two configurations are essentially identical. They use nxos_config as an example, but the same pattern applies to all core networking modules:

---
nxos_config:
   src: config.j2
   host: "{{ inventory_hostname }}"
   username: "{{ ansible_ssh_user }}"
   password: "{{ ansible_ssh_pass }}"
   transport: cli

---
vars:
   cli:
      host: "{{ inventory_hostname }}"
      username: "{{ ansible_ssh_user }}"
      password: "{{ ansible_ssh_pass }}"
      transport: cli

nxos_config:
   src: config.j2
   provider: "{{ cli }}"


Given that the above two examples are equivalent, the arguments can also be used to establish precedence and defaults. Consider the following example:

---
vars:
   cli:
      host: "{{ inventory_hostname }}"
      username: operator
      password: secret
      transport: cli

tasks:
- nxos_config:
    src: config.j2
    provider: "{{ cli }}"
    username: admin
    password: admin

In this example, the values of admin for username and admin for password will override the values of operator in cli['username'] and secret in cli['password'].


This is true for all values in the provider, including transport. So you could have a single task that is now supported over CLI or NXAPI (assuming the configuration is valid):

---
vars:
   cli:
      host: "{{ inventory_hostname }}"
      username: operator
      password: secret
      transport: cli

tasks:
- nxos_config:
    src: config.j2
    provider: "{{ cli }}"
    transport: nxapi


If all values are provided via the provider argument, the rules for requirements are still honored for the module. For instance, take the following scenario:

---
vars:
   conn:
      password: cisco_pass
      transport: cli

tasks:
- nxos_config:
    src: config.j2
    provider: "{{ conn }}"

Running the above task will cause an error to be generated with a message that required parameters are missing.

"msg": "missing required arguments: username,host“

Overall, this provides a very granular level of control over how credentials are used with modules. It provides the playbook designer maximum control for changing context during a playbook run as needed.


Networking Environment Variables

The following environment variables are available to Ansible networking modules:

  • username - ANSIBLE_NET_USERNAME
  • password - ANSIBLE_NET_PASSWORD
  • ssh_keyfile - ANSIBLE_NET_SSH_KEYFILE
  • authorize - ANSIBLE_NET_AUTHORIZE
  • auth_pass - ANSIBLE_NET_AUTH_PASS

Variables are evaluated in the following order, listed from lowest to highest priority:

  • Default
  • Environment
  • Provider
  • Task arguments


Conditionals in Networking Modules

Ansible allows you to use conditionals to control the flow of your playbooks. Ansible networking command modules use the following unique conditional statements.

  • eq - Equal
  • neq - Not equal
  • gt - Greater than
  • ge - Greater than or equal
  • lt - Less than
  • le - Less than or equal
  • contains - Object contains specified item

Conditional statements evaluate the results from the commands that are executed remotely on the device. Once the task executes the command set, the wait_for argument can be used to evaluate the results before returning control to the Ansible playbook.


For example:

---
- name: wait for interface to be admin enabled
  eos_command:
      commands:
          - show interface Ethernet4 | json
      waitfor:
          - "result[0].interfaces.Ethernet4.interfaceStatus eq connected"

In the above example task, the command show interface Ethernet4 | json is executed on the remote device and the results are evaluated. If the path (result[0].interfaces.Ethernet4.interfaceStatus) is not equal to “connected”, then the command is retried. This process continues until either the condition is satisfied or the number of retries has expired (by default, this is 10 retries at 1 second intervals).
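The retry behaviour can be tuned on the task itself via the retries and interval arguments; a sketch based on the example above (the values chosen are illustrative):

---
- name: wait for interface to be admin enabled, polling every 2 seconds up to 20 times
  eos_command:
      commands:
          - show interface Ethernet4 | json
      waitfor:
          - "result[0].interfaces.Ethernet4.interfaceStatus eq connected"
      retries: 20
      interval: 2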


The commands module can also evaluate more than one set of command results in an interface. For instance:

---
- name: wait for interfaces to be admin enabled
  eos_command:
      commands:
          - show interface Ethernet4 | json
          - show interface Ethernet5 | json
      waitfor:
          - "result[0].interfaces.Ethernet4.interfaceStatus eq connected"
          - "result[1].interfaces.Ethernet5.interfaceStatus eq connected"

In the above example, two commands are executed on the remote device, and the results are evaluated. By specifying the result index value (0 or 1), the correct result output is checked against the conditional.

The wait_for argument must always start with result and then the command index in [], where 0 is the first command in the commands list, 1 is the second command, 2 is the third and so on.
