Small Ansible Write-up
-
Just a quick write-up of some stuff I was doing with Ansible today.
This is all on a Fedora 23 machine.
Using an ssh key makes all of this easier:
ssh-keygen -t rsa
Then:
ssh-copy-id -i ~/.ssh/id_rsa.pub <ip address of remote server>
Install Ansible:
sudo dnf -y install ansible
First thing I did was make a new hosts file to clean it up.
sudo mv /etc/ansible/hosts /etc/ansible/hosts.old
Then make another one:
sudo touch /etc/ansible/hosts
The hosts file holds all of your host names and IP addresses. Make a couple of groups for your servers:
[webservers]
jhbcomputers.com
webserver.com

[local]
10.1.10.2
10.1.10.5
There are a couple of options I've used in my hosts file that may be helpful. My website is behind Cloudflare, so sshing to the domain name doesn't actually reach my server. You can set a host name and ssh port like this:
[webservers]
xxx.xxx.xxx.xxx ansible_ssh_port=<custom port> ansible_host=<domain>
Once the hosts file is set up we can start running commands.
ansible webservers -m ping
returns:
server1 | success >> {
    "changed": false,
    "ping": "pong"
}

server2 | success >> {
    "changed": false,
    "ping": "pong"
}
Another example would be getting uptime from all of your servers:
ansible webservers -m command -a 'uptime'

server1 | success | rc=0 >>
01:34:21 up 5:17, 1 user, load average: 0.00, 0.02, 0.05

server2 | success | rc=0 >>
01:34:23 up 5:15, 1 user, load average: 0.00, 0.01, 0.05
The -m argument tells Ansible which module to use, and the -a argument tells it what arguments to pass to the module.
You could run everything from ad hoc commands like this, but that could get old.
Here's an example playbook I created to update a couple webservers:
---
- hosts: webservers
  gather_facts: yes
  remote_user: john
  tasks:
    - name: Backup Drupal
      shell: chdir=/var/www/html/{{ ansible_host }} drush archive-dump
    - name: Update
      yum: name=* state=latest update_cache=yes
      become: yes
      become_method: sudo
The playbook is stored in a .yml file, which always starts with three dashes (---) at the top.
There are a couple of things going on here. First, the hosts line tells the playbook to run against the webservers group in our hosts file.
Gather Facts will grab a ton of info about the server and store it in variables. It will store things like the Linux distro, user directory, user id, user gid, disks, amount of free space, ssh key, and a ton more.
Remote_user is the user that you are running the commands with on the remote system.
The tasks section holds the tasks to be completed. Mine is really simple; it just has a couple of tasks. You can also define handlers, which run when a task notifies them, but I didn't need any for this.
The first task that runs is named Backup Drupal. It runs these commands:
cd /var/www/html/<sitename>
and then does
drush archive-dump
Each website is stored in a folder named after its domain. So {{ ansible_host }} grabs the host name from the hosts file (this is the ansible_host=<name> part) and places it in the command. Drush is a utility for Drupal that lets you run a ton of stuff from the CLI. archive-dump creates a backup of the web folder, does a mysql dump of the database, and saves it all in the home folder of the user who ran the command. This way, if something breaks after the system update in the second task, I can just run drush archive-restore and it will pull everything back in.
The next task that runs is the system update task. It uses the yum module and updates (state=latest) all packages (name=*). I don't know if I need update_cache=yes, but it was in someone else's write-up so I used it. I may not, because here's the Ansible doc on it:
"Run the equivalent of apt-get update before the operation. Can be run as part of the package installation or as a separate step."
The last two lines tell the task to run with sudo. It will ask you for the sudo password after you run the playbook (if it doesn't, add -K / --ask-sudo-pass to the command).
To run this playbook you would type:
ansible-playbook <name-of-playbook>.yml
-
Here's another simple playbook I was messing around with. It lets me create multiple Drupal containers in Docker:
---
- hosts: local
  gather_facts: yes
  remote_user: john
  tasks:
    - name: Drupal
      shell: docker run --name {{ item.name }} -p {{ item.port }}:80 -d drupal
      with_items:
        - { name: 'test1', port: '8081' }
        - { name: 'test2', port: '8082' }
        - { name: 'test3', port: '8083' }
        - { name: 'test4', port: '8084' }
        - { name: 'test5', port: '8085' }
      become: yes
      become_method: sudo
Here's the output:
PLAY [local] ******************************************************************

GATHERING FACTS ***************************************************************
ok: [10.0.0.6]

TASK: [Drupal] ****************************************************************
changed: [10.0.0.6] => (item={'name': 'test1', 'port': '8081'})
changed: [10.0.0.6] => (item={'name': 'test2', 'port': '8082'})
changed: [10.0.0.6] => (item={'name': 'test3', 'port': '8083'})
changed: [10.0.0.6] => (item={'name': 'test4', 'port': '8084'})
changed: [10.0.0.6] => (item={'name': 'test5', 'port': '8085'})

PLAY RECAP ********************************************************************
10.0.0.6                   : ok=2    changed=1    unreachable=0    failed=0
The task in this playbook takes each item's name and port and places them into the command. There's probably a better way to do this, but with only 5 containers, and how easy it is to copy and paste whole lines in vim, I just copied them and changed the numbers.
This playbook ran and set up 5 Drupal containers in around 10 seconds or so.
-
Rendering of the OP has freaked out here, is it okay for everyone else?
-
@mlnews said:
Rendering of the OP has freaked out here, is it okay for everyone else?
Went bad here too on FF.
-
@mlnews said:
Rendering of the OP has freaked out here, is it okay for everyone else?
I just fixed it. I don't know what happened. When I clicked edit, it showed up in the preview window correctly. I just clicked save and it's fine now.
-
Good here now, again.
-
So I did a little more with this. Remembering which port goes with which Docker app sucks; ain't nobody got time for that. So I added an NGINX reverse proxy with a server block for each host. I couldn't figure out how to make a single .conf file and have Ansible write the template into it multiple times, once per server block, so I just had it create a separate .conf file for each host.
Here's the tree for the playbook:
.
├── deploy.yml
├── tasks
│   ├── drupal.yml
│   └── install_nginx.yml
└── templates
    └── nginx_drupal.j2
deploy.yml runs the tasks in the tasks folder:
---
- hosts: local
  gather_facts: yes
  remote_user: john
  tasks:
    - include: tasks/drupal.yml
    - include: tasks/install_nginx.yml
  become: yes
  become_method: sudo
The drupal task creates the drupal sites (same one from above):
---
- name: Drupal
  shell: docker run --name {{ item.name }} -p {{ item.port }}:80 -d drupal
  with_items:
    - { name: 'test1', port: '8081' }
    - { name: 'test2', port: '8082' }
    - { name: 'test3', port: '8083' }
    - { name: 'test4', port: '8084' }
    - { name: 'test5', port: '8085' }
  become: yes
  become_method: sudo
The other task, install_nginx.yml, creates an nginx container and a volume to put the configs in. A side note: since I'm just testing, and I'd already spent way too much time getting this figured out, I set SELinux to permissive. It was giving me issues and the nginx container couldn't read from the volume.
---
- name: Create NGINX Container
  shell: docker run --name nginx -v /var/nginx/conf:/etc/nginx/conf.d/:ro -p 80:80 -d nginx
  become: yes
  become_method: sudo

- name: Copy template
  template: src=~/Playbooks/nginx/templates/nginx_drupal.j2 dest=/var/nginx/conf/{{ item.name }}.conf
  with_items:
    - { name: 'test1', port: '8081' }
    - { name: 'test2', port: '8082' }
    - { name: 'test3', port: '8083' }
    - { name: 'test4', port: '8084' }
    - { name: 'test5', port: '8085' }
This task takes the nginx_drupal.j2 Jinja2 template and renders it into a separate config file for each item (test1 with port 8081, test2 with port 8082, and so on).
Here's the template:
upstream {{ item.name }} {
    server 10.0.0.6:{{ item.port }};
}

server {
    listen 80;
    server_name {{ item.name }}.docky.com;

    location / {
        proxy_pass http://{{ item.name }};
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
You still need to open the ports in firewalld or iptables. I didn't have the playbook do this since I was just playing around and I'm lazy.
It took about 15-20 seconds to create 5 drupal containers and an nginx container with the config files. Not too bad.
Edit: this also did not create any DNS records or static host entries. I manually put those in my EdgeRouter.
-
Very nice, thanks!