Ansible Ad Hoc

Not playing by the book

Smells like a hack

One of the first things we learn at the Ansible Best Practices University is that roles are a godsend, playbooks are great and ad hoc commands are meh. And that’s absolutely true… Except when it isn’t. I take configuration as code as an achievement comparable to subsidized anti-polio injections—once we accomplish such a feat as a society, we don’t let it be taken from us too easily. It’s a basic right. We, as hardworking ops people and dignified humans, have the right to keep our configurations in source control and we’re ready to fight for it if needed!

/img/2018/11/2018-11-06-ansible-ad-hoc/configuration-as-code-now-thumb.jpg

Basic rights. (Original by Jerry Kiesewetter on Unsplash.)

But I don’t want to write a playbook just to power off the machines in my LAN. Or get the status of Nginx on my web servers. I also don’t want to pass a cumbersome command to SSH for each of those IPs (in a shell loop!). For those things, I’d rather call an Ansible module from the command line. In this article, I’ll show you some of my favorite ad hoc commands. But first, let’s see a neat way of installing Ansible.

How I like to install Ansible on control machines

Control machines are the computers we run Ansible from. That could be a centralized server responsible for general ops tasks, a box running Jenkins, or your own workstation.

I like installing Ansible with the Python virtualenv package, which lets us create a local environment isolated from system-level directories. On my development machine (Ubuntu 16.04), I can install Python 3 and the pip package manager like this:

sudo apt update
sudo apt install python3 python3-pip

Once pip is installed, we can use it to install Python packages, like virtualenv:

pip3 install virtualenv
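If you’re curious where that package landed, pip3 show prints its metadata, including the install location (the exact path will depend on your Python version; this is roughly what it looks like on my machine):

$ pip3 show virtualenv | grep -i location
Location: /home/cwtf/.local/lib/python3.5/site-packages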

As that Location line suggests, the virtualenv package files end up under ~/.local/lib/python3.x/site-packages/ (x will vary according to the latest Python 3 version available in your package manager). We can then use the virtualenv package to create the isolated environment where Ansible will be installed:

mkdir ~/ansible
cd ~/ansible
virtualenv .venv

Among other things, the last line will create a bash script (~/ansible/.venv/bin/activate) that should be sourced in order to enable the Python virtual environment in the current terminal session:

source ~/ansible/.venv/bin/activate

If we install Ansible with pip now, regardless of the current directory, the package will be placed in the new virtual environment:

(.venv) cwtf@computers.wtf:~$ which pip3
/home/cwtf/ansible/.venv/bin/pip3

(.venv) cwtf@computers.wtf:~$ pip3 install ansible
Collecting ansible
# [...]

(.venv) cwtf@computers.wtf:~$ which ansible
/home/cwtf/ansible/.venv/bin/ansible

Pretty neat, huh? Later on, if we want to upgrade Ansible to the latest version, we can just run pip3 install --upgrade ansible (make sure you’ve activated the virtual environment first):

(.venv) cwtf@computers.wtf:~$ pip3 install --upgrade ansible
Requirement already up-to-date: ansible in ./ansible/.venv/lib/python3.5/site-packages
# [...]
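If you ever need to double-check which Ansible you’re actually running (and which Python interpreter and config file it picked up), ansible --version has you covered:

(.venv) cwtf@computers.wtf:~$ ansible --version
# [...]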

At the end of this article and once we’re done experimenting, I’ll show you how to deactivate the virtual environment and reset your terminal session to its normal state. (Tip: the bash script we sourced earlier also defines a deactivate function.)

Listing the nodes

The first thing we need to do after installing Ansible is create a listing with all the nodes we intend to manage. This listing is called an inventory in Ansible lingo, and usually takes the form of a hosts file. Here’s my ~/ansible/hosts file:

localhost

Yeah, it looks pretty useless right now, but I wanted to start with a simple example. To begin with, an Ansible managed node doesn’t necessarily need to be a remote node—nothing stops us from treating the control machine itself as a managed node. Next, we’ll run a test on that inventory to check if it works.

The raw module

To test that our new inventory file works, we can call Ansible like this:

$ ansible -i ~/ansible/hosts -c local localhost -m raw -a "pwd"
localhost | CHANGED | rc=0 >>
/home/cwtf

In this example, we asked Ansible to run a raw command on a managed node that happens to be the local host itself. That’s why we used -c local (-c is short for --connection), which tells Ansible that it can do without SSH just fine, since we wanted the command to run locally. The -i (--inventory) flag tells Ansible where to find the inventory file, which contains the nodes we want to manage (localhost). -m stands for --module-name and -a for --args (module arguments). In our case, we used the raw module, which simply executes the default shell on the target node, passing it the arguments specified in the -a flag as the command to be executed from within the shell. The end result is that Ansible ran the pwd command on localhost, printing the current directory (from Ansible’s perspective) in the console output.
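By the way, if the one-letter flags feel too cryptic, the exact same call can be spelled out with the long option names:

ansible --inventory ~/ansible/hosts --connection local localhost --module-name raw --args "pwd"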

An interesting fact about the raw module is that it doesn’t require Python in order to function. Normally, the two basic requirements for a remote managed node are a compatible version of Python (version 2.6 or later for Python 2 and version 3.5 or later for Python 3) and an SSH server. Naturally, if we don’t have a working SSH server on our remote node, Ansible won’t be able to communicate with it, not even through the raw module.1 But, assuming our remote node does have SSH installed but still lacks Python, we can use Ansible itself to install it:

$ ansible -i ~/ansible/hosts sherlock.home -K -b -m raw -a "yum install -y python"
SUDO password: ***
# [...]

Note that this time around we didn’t use -c local, since sherlock.home is a remote node. (Well, it’s actually on my LAN, but it still isn’t localhost; hence, “remote.”) We also had to specify the -b (--become) and -K (--ask-become-pass) flags, because the command passed to the raw module in this case requires root permissions. Together, those two flags tell Ansible to run the command with sudo privileges on the remote node, using the password entered at the prompt. Of course, for this to work, we’d first need to configure the SSH server at sherlock.home to accept connections from our control machine and also add sherlock.home to our inventory file.
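The SSH part usually boils down to key-based authentication. Assuming you already have a key pair on the control machine and an account on the remote node (I’m using my own username here; yours will differ), ssh-copy-id does the trick:

ssh-copy-id cwtf@sherlock.home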

Here’s my new ~/ansible/hosts file, now with some of the computers I have on my local network and without the dummy localhost:

dns.home
jenkins.home
minecraft.home
sherlock.home
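With everything in the inventory, a quick way to confirm Ansible can actually reach the nodes is the ping module (which, unlike raw, does need Python on the managed node). Output trimmed to a single host:

$ ansible -i ~/ansible/hosts all -m ping
sherlock.home | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
# [...]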

Once Python has been installed on the remote machine, we can start experimenting with full-blown Ansible modules, like the setup module:

$ ansible -i ~/ansible/hosts sherlock.home -m setup -a 'filter=ansible_os_family'
sherlock.home | SUCCESS => {
    "ansible_facts": {
        "ansible_os_family": "RedHat"
    },
    "changed": false
}

We’ll see more examples of the setup module later.

Python version

If you paid close attention to the yum install command we used with the raw module above, you’ll have noticed that the Python version we installed on that node is version two, not three:

$ ansible -i ~/ansible/hosts sherlock.home -m setup -a 'filter=ansible_python_version'
sherlock.home | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "2.7.5"
    },
    "changed": false
}

And now you may be wondering why Ansible is still able to function properly on that node. It still works because Ansible can run most (if not all) of its modules on both Python 2 and Python 3:

Ansible is pursuing a strategy of having one code base that runs on both Python-2 and Python-3 because we want Ansible to be able to manage a wide variety of machines.2

That’s right: the Python version on the managed node doesn’t have to match the Python version on the control machine. If you still want them to match, though, at least in the major version number, you can install Python 3 on your nodes:

ansible -i ~/ansible/hosts sherlock.home -K -b -m raw -a "yum install -y python3"

Just bear in mind that Python 3 will usually be placed under /usr/bin/python3, while Ansible by default uses the Python interpreter located at /usr/bin/python to run its modules. To make Ansible default to /usr/bin/python3 instead, you can set the ansible_python_interpreter variable in your hosts file. For example, the following hosts file forces Ansible to use Python 3 on all managed nodes:

dns.home
jenkins.home
minecraft.home
sherlock.home

[all:vars]
ansible_python_interpreter=/usr/bin/python3
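After that change, re-running the setup filter from the previous section is an easy way to confirm the switch took effect; ansible_python_version should now report a 3.x release (provided /usr/bin/python3 actually exists on every node):

ansible -i ~/ansible/hosts sherlock.home -m setup -a 'filter=ansible_python_version'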

Enough with configurations; let’s see some other examples of ad hoc commands.

Routine commands

Following are the ad hoc commands I use on a daily (or almost daily) basis.

Parse gathered facts

Every time I need to gather information from my servers and am too lazy to SSH into them, I use Ansible’s setup module. We’ve seen it twice in this article already. Unfortunately, its parsing capabilities are very limited, and “the filter option filters only the first level subkey below ansible_facts.”3 So, if we need a specific piece of information that resides deeper in the module’s JSON output, we can’t rely on the filter option.
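To make the limitation concrete: filtering on a first-level fact such as ansible_default_ipv4 works, but there’s no way to ask the filter for just the address key nested inside it (output trimmed; same node as before):

$ ansible -i ~/ansible/hosts sherlock.home -m setup -a 'filter=ansible_default_ipv4'
sherlock.home | SUCCESS => {
    "ansible_facts": {
        "ansible_default_ipv4": {
            "address": "192.168.0.5",
            "gateway": "192.168.0.1",
            "interface": "enp0s25",
            [...]
        }
    },
    "changed": false
}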

But here’s a hack: we can massage the output of the setup module a bit and pass it to the jq CLI tool, which will give us a beautifully parsed JSON document that we can query however we want:

$ ansible -i ~/ansible/hosts sherlock.home -m setup | sed "1 s/.*=> //" | jq '.ansible_facts.ansible_default_ipv4 | { ip: .address, interface: .interface, gateway: .gateway }'
{
  "ip": "192.168.0.5",
  "interface": "enp0s25",
  "gateway": "192.168.0.1"
}

I know… It’s a mouthful. To be honest, I never type this one from scratch like that—I keep it in a text file and copy-paste it whenever I need that type of information. (Such a cheater!)
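If the copy-paste routine bothers you, another option is to wrap the boring part in a tiny shell function, for example in ~/.bashrc (the name ipinfo is made up; adjust the inventory path to wherever yours lives):

# Quick helper built on the exact command above
ipinfo() {
    ansible -i ~/ansible/hosts "$1" -m setup \
        | sed "1 s/.*=> //" \
        | jq '.ansible_facts.ansible_default_ipv4 | { ip: .address, interface: .interface, gateway: .gateway }'
}

# Usage: ipinfo sherlock.home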

Manage services

The most common ad hoc commands I run have to do with services, and they’re pretty straightforward. As an example, the following command will restart the jenkins service unit on the jenkins.home node:

$ ansible -i ~/ansible/hosts jenkins.home -K -b -m service -a "name=jenkins state=restarted"
SUDO password: ***
jenkins.home | CHANGED => {
    "changed": true,
    "name": "jenkins",
    "state": "started",
    [...]
}

Other possible values for state are reloaded, started and stopped.4
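The service module is meant to change state rather than just report it, so for a read-only status check one option is to fall back to a plain command (assuming a systemd-based target; swap jenkins for whatever unit you care about):

$ ansible -i ~/ansible/hosts jenkins.home -m command -a "systemctl is-active jenkins"
jenkins.home | CHANGED | rc=0 >>
active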

Shut machines down

Finally, my favorite one—turning off all machines with one command line:

$ ansible -i ~/ansible/hosts all -K -b -m command -a "shutdown -h now"
SUDO password: ***
sherlock.home | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Shared connection to sherlock.home closed.\r\n",
    "unreachable": true
}
# [...]

In the above example we used all for hosts, which is a built-in Ansible group that matches every host defined in the inventory file. And the module used is command, which does just what it says—execute a command on the remote nodes.

As you can tell from the output, Ansible reported the node as unreachable because the SSH connection was dropped while the machine was shutting down. If you want the command to exit cleanly instead, replace now with +1; that will tell shutdown to wait exactly one minute before powering off:

$ ansible -i ~/ansible/hosts all -K -b -m command -a "shutdown -h +1"
SUDO password: ***
sherlock.home | CHANGED | rc=0 >>
Shutdown scheduled for Tue 2018-11-06 00:17:09 EST, use 'shutdown -c' to cancel.
# [...]
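And if you change your mind within that minute, the output above already points at the escape hatch: shutdown -c cancels the scheduled shutdown, and it can be sent the same way:

$ ansible -i ~/ansible/hosts all -K -b -m command -a "shutdown -c"
SUDO password: ***
# [...]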

Conclusion

That’s it for ad hoc commands. They can be pretty handy in some circumstances, so remember they exist and show them some love. Before I forget: in order to disable the Python virtual environment in your terminal session, just run deactivate from the command line.


  1. This is not entirely true. Ansible works with other connection types (like docker) as well. SSH is by far the most commonly used type of connection, though.
  2. In Ansible and Python 3.
  3. In setup - Gathers facts about remote hosts.
  4. See service - Manage services for more information.