Deploying Jenkins with Ansible

1.   Overview

In my previous article, I spoke a lot about Ansible and how we use it in our daily tasks. Probably one of the largest projects we have used it on at iQuest is our Jenkins CI setup, where we manage all aspects of its deployment and maintenance. This article explains our setup: how, where and how much we used Ansible, and what we gained from it.

To get a clear understanding of what we actually did, we first need to explain what we intended to achieve. Hope the awesome drawing below clears things up a bit. 🙂

[Diagram: the two Jenkins masters (active and standby), the replicated slaves, the backup servers, Artifactory and the Ansible laptop, spread across two data centers connected over a VPN]

The masters are Jenkins CI master nodes, one active and one standby, the latter acting as a backup for failover in a different data center. The slaves are also replicated in a different data center to provide redundancy. “The internet” is actually a very solid VPN connection between the two data centers. Artifactory is our artefact management system. Since this is an article about Jenkins and Ansible, I won’t go into details on how we achieved high availability and redundancy for Artifactory. However, here’s a hint: it looks very similar to this setup, and it’s also done via Ansible.

The two backup servers are used to store Jenkins backups, mostly job configurations and logs. They replicate between themselves, by their own rules, via Windows DFS.

And finally, we have the almighty Ansible-running laptop. As you will find out, this is the “one ring to rule them all”, as it is where the entire setup is created and maintained from. Before you ask, there is nothing special about this laptop, except for its awesome humor and the fact that it can run Ansible playbooks against our hosts and itself.

2.   The first steps

Before we can do anything with Ansible, we need to have the hosts up and running with a minimal OS setup and connectivity in place. Although possible, provisioning the hosts via Ansible is outside of the scope of this article.

All you need in order to run playbooks against a host is to set up SSH or WinRM access to it. That being said, we provisioned the hosts, installed a minimal OS (as minimal as it gets for Windows Server, anyway), exchanged SSH keys, configured WinRM, created the backup mount points, and made sure everything could reach everything.
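
To give a rough idea, the Windows connection settings can live in group_vars for the Windows group. The snippet below is only a minimal sketch using standard Ansible 2.x WinRM connection variables; the user name and the vaulted password variable are made up for illustration, not our actual values.

# group_vars/windows-slaves (illustrative WinRM connection settings)
ansible_user: jenkins_service
ansible_password: "{{ vault_jenkins_service_password }}"
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore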

3.   The Ansible structure

For easier maintenance in the future, we decided to split our roles based on the OS, their place in the architecture and what they provide. This resulted in quite a few roles, starting with ‘common’, ‘backup’, ‘control-services’, ‘control-vms’, ‘master’, ‘slave’, ‘tools-mssql2012’, ‘tools-msvs2012’, ‘tools-msvs2013’ and so on (I guess you can easily figure out which does what).

One of the most useful roles we use is ‘control-vms’, which allows us to control the power state of our hosts, without the clumsy vSphere console (yes, I said vSphere!). This is something we particularly use when performing upgrades and failovers.

The playbook is rather simple: it uses the vsphere_guest module from Ansible and takes the desired power state of the machines as an argument.

control-vms.yml

---
- hosts: 127.0.0.1
  connection: local
  pre_tasks:
  - raw: which python
    register: mypython
    ignore_errors: True
    tags: always
  - set_fact: ansible_python_interpreter={{ mypython.stdout.strip() | default(omit) }}
    tags: always
  roles:
  - control-vms

From role control-vms:

main.yml

---
- include: vms-common.yml
  vars:
    state: powered_on
  tags: start-vms

- include: vms-common.yml
  vars:
    state: powered_off
  tags: stop-vms

vms-common.yml

---
- name: ensure the VM is in the desired running state
  vsphere_guest:
    vcenter_hostname: "{{ vcenter_hostname }}"
    username: "domain\\{{ jenkins_service_username }}"
    password: "{{ jenkins_service_password }}"
    guest: "{{ item }}"
    state: "{{ state }}"
  with_items: "{{ groups['linux-slaves'] | union(groups['master']) | union(groups['windows-slaves']) }}"

Note that by using tags such as ‘stop-vms’ and ‘start-vms’, we can run only the tasks we specify:

$ ansible-playbook -i inventory_file control-vms.yml --tags start-vms


4.   Installing Jenkins master, slaves and tooling

With the instances up and running, installing the Jenkins master, slaves and tools is only a matter of installing the required roles and running our playbooks.

Since some of the roles we want to install are agnostic of the Jenkins setup, and we want to reuse as much of the playbooks as possible, we’ve moved some of the roles into external repositories that can be installed for any playbook.

requirements.yml

---
- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-ansible-common.git
  path: ../

- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-docker-common.git
  path: ../

- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-git-common.git
  path: ../

- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-homebrew-common.git
  path: ../

Installation of the external dependencies is done via ansible-galaxy:

$ ansible-galaxy install -r requirements.yml


We can finally start installing the master:

$ ansible-playbook -i production master.yml


the slaves:

$ ansible-playbook -i production slaves.yml


and the tools on the slaves:

$ ansible-playbook -i production tools.yml [--tags git[,ansible][,docker][,android-dependencies][,ksh][,rpmbuild]]

master.yml

---
- name: ensure the common provisioning steps are performed and the JDK is installed
  hosts: master
  roles:
  - common
  - backup
  - jdk

- name: ensure the nginx and the jenkins server are installed
  hosts: master
  sudo: yes
  roles:
  - nginx
  - master

slaves.yml

---
- name: ensure the slaves are provisioned
  hosts: slaves
  roles:
  - common
  - { role: ansible-homebrew-common,
      when: "ansible_os_family == 'Darwin'" }
  - jdk
  - slave

tools.yml

---
- name: ensure android dependencies are installed across the android-slaves
  hosts: android-slaves
  sudo: yes
  roles:
  - tools-android-dependencies

- name: ensure ansible is installed across the ansible-slaves
  hosts: ansible-slaves
  sudo: yes
  roles:
  - tools-ansible

- name: ensure docker is installed across the docker-slaves
  hosts: docker-slaves
  roles:
  - tools-docker

- name: ensure git is installed across the git-slaves
  hosts: git-slaves
  roles:
  - tools-git

- name: ensure ksh is installed across the ksh-slaves
  hosts: ksh-slaves
  roles:
  - tools-ksh

- name: ensure MS SQL Server 2012 is installed across the vs2012-slaves
  hosts: vs2012-slaves
  roles:
  - tools-mssql2012

- name: ensure MS SQL Server Data Tools for VS 2012 is installed across the vs2012-slaves
  hosts: vs2012-slaves
  roles:
  - tools-msssdt2012

- name: ensure MS Visual Studio 2012 is installed across the vs2012-slaves
  hosts: vs2012-slaves
  roles:
  - tools-msvs2012

- name: ensure MS Visual Studio 2013 is installed across the vs2013-slaves
  hosts: vs2013-slaves
  roles:
  - tools-msvs2013

By going through the YAML files, you’ll notice we’re installing everything on the Jenkins master (CentOS 6), the Windows slaves and the Mac Server slave.
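
For reference, the production inventory simply maps hosts to the groups referenced by the playbooks above. A trimmed-down, hypothetical example (the hostnames are made up, only the group names come from the playbooks) could look like this:

[master]
jenkins-master-01

[linux-slaves]
linux-slave-01

[windows-slaves]
windows-slave-01
windows-slave-02

[slaves:children]
linux-slaves
windows-slaves

[git-slaves]
linux-slave-01

[vs2012-slaves]
windows-slave-01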

We repeat the exact same steps for our backup site, only this time with a different inventory. Simply changing the inventory file from production to production_backup recreates the exact same setup as on the main site.
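
For example, provisioning the standby master is just a matter of pointing the same playbook at the other inventory:

$ ansible-playbook -i production_backup master.yml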

One small difference is how we start services. The backup site is only a hot standby: we want it to stay in sync with the main site, but not actually run anything, so we need to make sure services don’t start there, especially the Jenkins master service. For that, we simply ensure the service is stopped after the installation, and we only start the ones we want via a control-services playbook that starts the services and cron jobs on the specified inventory.
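
We won’t reproduce the whole control-services role here, but a minimal sketch of the idea, using the service module and the same tagging approach as control-vms, could look like this (the task names and tag names are illustrative):

---
- name: ensure the jenkins service is running on the active site
  service:
    name: jenkins
    state: started
  tags: start-services

- name: ensure the jenkins service is stopped on the standby site
  service:
    name: jenkins
    state: stopped
  tags: stop-services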

5.   Configuring the services

So far we’ve managed to get some hosts up, with some applications installed. Now it’s time to actually do the configuration that concerns the Jenkins setup itself.

We set up the Jenkins repository with its keys, install Jenkins, create some directories, assign proper permissions, modify Jenkins startup settings, configure logrotate and install a bunch of plugins that we later use in our jobs.

From the Jenkins-master role:
main.yml

---
- include: folders.yml

- name: ensure jenkins repository is configured
  get_url:
    url: '{{ jenkins_repository_url }}'
    dest: /etc/yum.repos.d/jenkins.repo
    owner: root
    group: root
  tags:
  - master
  - packages

- name: ensure jenkins repository key is available
  rpm_key:
    key: '{{ jenkins_repository_key }}'
    state: present
  tags:
  - master
  - packages

- name: ensure jenkins is installed
  yum:
    pkg: 'jenkins-{{ jenkins_version }}'
    state: present
  tags:
  - master
  - packages

- name: make sure jenkins is stopped and not configured to start on boot
  service:
    name: jenkins
    state: stopped
    enabled: no
  tags:
  - master

- include: config.yml

- include: plugins.yml

folders.yml

---
- name: make sure that the data folders are available
  file:
    path: '{{ item }}'
    state: directory
    owner: jenkins
    group: jenkins
    mode: 0750
  with_items:
  - '{{ jenkins_data }}'
  - '{{ jenkins_data }}/userContent'
  - '{{ jenkins_data }}/builds'
  tags:
  - master

config.yml

---
- name: ensure the jenkins options are set
  lineinfile:
    dest: /etc/sysconfig/jenkins
    regexp: '{{ item.regexp }}'
    line: '{{ item.line }}'
  with_items:
  - { regexp: '^JENKINS_JAVA_OPTIONS=.*',
      line: 'JENKINS_JAVA_OPTIONS=" {{ jenkins_memory_args }} {{ jenkins_gc_args }} {{ jenkins_java_opts|join(" ") }}"'}
  - { regexp: '^JENKINS_LISTEN_ADDRESS=.*',
      line: 'JENKINS_LISTEN_ADDRESS="127.0.0.1"'}
  - { regexp: '^JENKINS_AJP_LISTEN_ADDRESS=.*',
      line: 'JENKINS_AJP_LISTEN_ADDRESS="127.0.0.1"'}
  # - { regexp: '', line: ''}
  tags:
  - master
  - config

- name: ensure jenkins logrotate config is updated
  copy:
    src: jenkins.logrotate.conf
    dest: /etc/logrotate.d/jenkins
    owner: root
    group: root
    mode: 0644
  tags:
  - master
  - config

plugins.yml

---
- name: clean any previous plugin deployments
  file:
    path: '{{ jenkins_home }}/plugins'
    state: absent
  tags:
  - master

- name: ensure dir for jenkins plugins exists
  file:
    path: '{{ jenkins_home }}/plugins/'
    state: directory
    owner: jenkins
    group: jenkins
  tags:
  - master

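# use the plugin's own 'url' when one is set in jenkins_plugins; otherwise
# build the download URL from the public Jenkins mirror, using the plugin
# name and version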
- name: download jenkins plugins
  get_url:
    url: '{{ item.url is defined | ternary(item.url, ["http://mirrors.jenkins-ci.org/plugins/", item.name, "/", item.version, "/", item.name, ".hpi"]|join) }}'
    dest: '{{ jenkins_home }}/plugins/{{ item.name }}.hpi'
    force: yes
    timeout: 30
  with_items: jenkins_plugins
  sudo_user: jenkins
  tags:
  - master
  - plugins

- name: ensure jenkins plugins are pinned
  file:
    path: '{{ jenkins_home }}/plugins/{{ item.name }}.hpi.pinned'
    state: touch
    owner: jenkins
    group: jenkins
  with_items: jenkins_plugins
  sudo_user: jenkins
  tags:
  - master
  - plugins

Note the line saying

with_items: jenkins_plugins

That line iterates through all the desired plugins. We get the list from a vars file inside the role. A snippet from that file is below:

---
jenkins_data: /data
jenkins_repository_url: 'http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo' # Jenkins LTS (Long-Term Support release) repository
jenkins_repository_key: 'http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key' # Jenkins LTS (Long-Term Support release) key
jenkins_version: '1.642.1'
jenkins_plugins:
  - name: ace-editor
    version: '1.0.1'
  - name: active-directory
    version: '1.41'
  - name: additional-identities-plugin
    version: '1.1'
  - name: analysis-core
    version: '1.75'
  - name: android-lint
    version: '2.2'
  - name: ansible
    version: '0.4'
...

6.   Final steps

We are almost finished. All that remains to be done now is to link the slaves with the master, and configure jobs. We do this from the Jenkins GUI, as you would with any other Jenkins setup.

The playbook has also installed a backup job that will back up the Jenkins configuration and the job configuration files periodically. This is particularly useful when doing failovers or performing upgrades (which imply a failover step).
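
The actual backup job isn’t shown here, but to give an idea, a periodic configuration backup along those lines could be installed with Ansible’s cron module, roughly like this (the schedule, backup path and file selection are illustrative, not our actual job):

---
- name: ensure the jenkins configuration backup runs nightly
  cron:
    name: jenkins-config-backup
    user: jenkins
    hour: '2'
    minute: '0'
    job: "cd {{ jenkins_home }} && tar czf /backup/jenkins-config-$(date +\\%F).tar.gz *.xml jobs/*/config.xml"
  tags:
  - master
  - backup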

With this (almost fully) ‘ansibleized’ installation, future maintenance tasks (failovers, upgrades, etc.) become a breeze. It also brings consistency across multiple sites, guaranteeing we achieve the same result wherever we run the playbooks.

Hope you enjoyed our little setup and found some useful hints on how to use Ansible to automate and make your job easier. Also, please share your thoughts with us in the comments section below.


6 comments

[…] tasks with Ansible. Related to this I found two very good posts about using Ansible in real life. Deploying Jenkins with Ansible Maintaining Jenkins with […]

Thank you for sharing your experience!

It’s hard to come by knowledgeable people for this subject, but you sound like you know what you’re talking about! Thanks

Cheers, hope you liked it 🙂

Can you elaborate more on requirements.yml?

Hi Ashutosh,

The requirements.yml file is a list of role dependencies required for the playbook to operate properly.

Example snippet from one of our requirements.yml files:

- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-git-common.git
  path: ../

When using

ansible-galaxy install -r requirements.yml

Ansible will check out the role into a path that it will later look up.
In the main playbook I can then just say

...
roles:
  ...
  - ansible-git-common
...

which will install git on the target host with the defaults we have defined in the ansible-git-common role (a bit more than a simple yum install git obviously 🙂 )
If defaults need to be overwritten, you can do so in group_vars/ in your playbook or in your playbook’s role vars folder, or even when describing the dependency (see below).
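
For example, if the ansible-git-common role exposed a default such as git_version (a made-up variable name, purely for illustration), overriding it for a group would be as simple as:

# group_vars/git-slaves (hypothetical override of a role default)
git_version: '2.7.4'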

You will also need to install roles that are referenced as dependencies by other roles (in their meta/main.yml), so make sure you put all those referenced roles in your requirements file.
You might also consider splitting a certain role into two or more parts: a (default) installation and a configuration part.
The default installation can be a required role that takes no extra vars or configuration (so it relies on what is defined in its defaults), whilst the configuration part becomes a role in your playbook.

Here’s an example of how a postgresql role in a playbook requires a basic PostgreSQL server setup and an exporter (to be used with Prometheus), via meta/main.yml:

---
dependencies:
  - ansible-postgres-common
  - { role:                                 ansible-prometheus-postgres-exporter-common,
      postgres_exporter_version:            'v0.2.0',
      postgres_exporter_web_listen_address: ':9187',
      postgres_exporter_datasource_name:    'user=postgres host=/tmp sslmode=disable'}

What you see there is a simple “postgresql” role that defines two dependencies in its meta/main.yml, with some variable overrides:
- ansible-postgres-common simply installs PostgreSQL, with no customizations related to the service the playbook sets up
- ansible-prometheus-postgres-exporter-common installs the PostgreSQL exporter for Prometheus, and overrides variables like the version, the port and other runtime settings

For these two dependencies, I’d add in my requirements.yml file:

- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-postgres-common.git
  path: ../
- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-prometheus-postgres-exporter-common.git
  path: ../

Please note that role dependencies are executed before the tasks in the role that “asked” for them. So if the first task in my postgresql playbook role is to set up a user, I can easily do that, as the ansible-postgres-common dependency will already have installed my server and started it up.
Be careful how you tag things though, to keep them consistent :). I usually go with a set of two tags per role: one describing the service, and one describing the step(s) it executes (e.g. postgresql and install/configure/restart).
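
For example, a configuration task in such a postgresql role would carry both tags (the file paths and template name below are illustrative):

- name: ensure the postgresql configuration is deployed
  template:
    src: postgresql.conf.j2
    dest: /var/lib/pgsql/data/postgresql.conf
    owner: postgres
    group: postgres
    mode: 0600
  tags:
  - postgresql
  - configure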

Also, one nice feature you can use in more complex scenarios, where you’d version your service roles, is specifying the version of a dependency. As I mostly use Git, I can easily add

 version: master

or specify a Git tag, like so

 version: v1.2

in my requirements.yml file.
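
Putting it together, a pinned entry in requirements.yml would then look something like this:

- src: git+ssh://git@stash.iquestgroup.com/iqdvop/ansible-postgres-common.git
  path: ../
  version: v1.2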

Cheers!
