I recently needed to completely automate deploying a full ELK/EFK stack along with its clients and didn't find anything that suited my needs, so I wrote a set of Ansible playbooks to do it. This was my first real foray into automation with Ansible; I hope it's useful to others.
What is ELK?
ELK is an acronym for Elasticsearch, Logstash and Kibana, an extremely useful stack of log aggregation and datastore utilities from Elastic for centralized log management and analysis. I've written a few posts on the topic before, but here I'll show you how to easily deploy it on both servers and associated clients on Red Hat/CentOS based systems.
Update: I have added the Elastic X-Pack suite of utilities to install for you as an option.
Update: ELK has been upgraded to 5.5.x and an optional 2.4 git branch is available for anyone that wants to use that one.
Update: By request optional support for Fluentd instead of Logstash, or the EFK stack is available. Note that it will currently use rsyslog to send logs and ships with options to ferry the most common OpenStack logs located in /var/log/.
These Ansible playbooks will work on any CentOS 7 or RHEL 7+ Linux system, both as a server and as a client (sending logs to ELK). Fedora 23 and higher will need the yum, python2, python2-dnf and libselinux-python packages, as Ansible does not yet support Python 3.
What Does it Do?
- Installs and configures Elasticsearch, Logstash and Kibana on a target Linux server
- Sets up firewall rules depending on whether you're using a firewall and which type (firewalld or iptables-services)
- Uses nginx as a reverse proxy with htpasswd authentication
- Adjusts the Elasticsearch heap size to half of the system memory, up to a maximum of 32G
- Generates client and server SSL certificates for secure data transmission including SubjectAltName support (for test environments without proper DNS)
- Listening ports are configurable in install/group_vars/all.yml
- Can substitute Fluentd instead of the default Logstash
- Optionally install the Elasticsearch curator tool for managing indexes
- Optionally install the Elastic X-Pack suite of tools
- Installs the Filebeat client on target clients to send logs to your ELK server
- Sets up forwarding of most system services and OpenStack logs
- Immediately starts forwarding logs to your specified ELK stack
- Sets up rsyslog if you opt to use Fluentd instead of Logstash.
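The heap-size rule above can be sketched in shell; this is an illustrative approximation of the "half of RAM, capped at 32G" calculation, not the playbook's actual code (the variable names are mine):

```shell
# Illustrative only: compute an Elasticsearch heap of half the system
# RAM, capped at 32G, mirroring the rule the playbook applies.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
heap_mb=$(( mem_kb / 2 / 1024 ))
cap_mb=$(( 32 * 1024 ))
if [ "$heap_mb" -gt "$cap_mb" ]; then
  heap_mb=$cap_mb
fi
echo "Suggested heap: -Xms${heap_mb}m -Xmx${heap_mb}m"
```

Capping at 32G matters because above that the JVM loses compressed object pointers, so a bigger heap can actually hurt.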
ELK/EFK 5.5+ seems to need significantly more memory and CPU; my test VM has 8G of memory and 4 vCPUs. Size your server accordingly for best results.
You may also want to tune your system to not swap often:
echo "vm.swappiness=10" >> /etc/sysctl.conf
sysctl -p
First you’ll want to clone the git repo here:
git clone https://github.com/sadsfae/ansible-elk
Install Ansible. Substitute yum and the EPEL repo if you’re not using Fedora.
dnf install ansible -y
Edit the Ansible (inventory) hosts file with your target ELK server and ELK clients.
cd ansible-elk
sed -i 's/host-01/your_elk_server/' hosts
sed -i 's/host-02/your_elk_client/' hosts
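After the substitution, the inventory should look roughly like this (the group names here are illustrative; keep whatever groups ship in the repository's hosts file):

```ini
[elk]
your_elk_server

[elk-client]
your_elk_client
```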
Next, take a look at the optional parameters located in install/group_vars/all.yml. You can change things like default ports for services, or substitute Fluentd instead of Logstash. You can also install the optional curator tool for managing your indexes. For most people leaving the defaults will work fine, and you can skip this part.
logging_backend: fluentd <-- using an alternate backend in all.yml
install_curator_tool: true <-- install curator for index management
If you don’t want to automatically add either iptables or firewalld rules for your ELK/EFK server components you can change this to false.
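For example, the toggle looks something like this (the exact variable name is the one in install/group_vars/all.yml; this one is illustrative):

```yaml
# Illustrative: skip firewall rule management entirely.
# Check install/group_vars/all.yml for the exact variable name.
install_firewall: false
```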
I have also recently included support for the Elastic X-Pack suite of tools; you can enable this too if you like, but note that it will noticeably lengthen installation time. Don't worry, you can always run the playbook again later to install it.
install_elasticsearch_xpack: true
install_kibana_xpack: true
install_logstash_xpack: true
Install your ELK Server
If you are happy with the default options run the elk.yml playbook against your target ELK server. It takes about 3 to 4 minutes to have a full ELK/EFK up and running.
ansible-playbook -i hosts install/elk.yml
When this finishes you'll see some messages displayed, including the follow-up Ansible command to run against your clients so they start sending logs to the server.
I did not automate this part because I wanted to give people an opportunity to name their index prefix and type as they wish.
Navigate to the URL generated when the playbook completes (mine was http://host-01) and click the blue button to create your index. Use admin/admin to log in (you can change this later at your leisure in the install/group_vars/all.yml configuration).
Once this is done you should see some of the local logs sent to Elasticsearch via the “Discover” tab. You should now have a fully functioning ELK server stack.
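If you want to sanity-check the backend directly, you can query Elasticsearch's cluster health from the ELK server itself (9200 is the stock Elasticsearch port; adjust it if you changed the listening ports in install/group_vars/all.yml):

```shell
# Query cluster health on the ELK server; a working single-node stack
# typically reports "yellow" or "green" status.
health_url="http://127.0.0.1:9200/_cluster/health?pretty"
curl -s "$health_url" || echo "Elasticsearch not reachable at $health_url"
```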
Install Filebeat on Clients
Now you're ready to start sending remote logs to the ELK stack. Go back to your Ansible terminal and copy the command printed at the end; it should reflect how your ELK server was set up. For example, in my VM setup the command for RHEL7/CentOS7 clients was:
ansible-playbook -i hosts install/elk-client.yml \
  -e 'elk_server=192.168.122.82'
If you opted to use Fluentd instead of Logstash then it will install and setup rsyslog forwarding instead of the Filebeat client, capturing common logs used for OpenStack in /var/log/. Like all configs you can edit these in the playbook and re-run Ansible to take effect.
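For reference, the rsyslog forwarding that gets configured boils down to a rule of this shape in /etc/rsyslog.d/ (the hostname and port here are placeholders; the playbook uses the Fluentd port defined in install/group_vars/all.yml):

```
# Forward everything over TCP (@@ means TCP, a single @ means UDP)
# to the EFK server. Replace elk-server and 514 with your values.
*.* @@elk-server:514
```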
Fedora 23+ Clients
If you have any Fedora 23+ clients, you'll need a few extra Python 2 packages installed on them before running the above command, so run the command below first (this is needed until Ansible gains Python 3 support).
ansible fedora-client-01 -u root -m shell -i hosts -a \
  "dnf install yum python2 libsemanage-python python2-dnf -y"
Now you can run the above playbook command.
Watch it come together
Now any ELK clients you have should be set up in parallel. Just like the ELK server run, you'll start to see things come together – this should be pretty fast.
At this point you should see logs trickle in from all the client(s) you have set up. Note that your client /etc/filebeat/filebeat.yml file is configured for generic /var/log/*.log files and OpenStack services; edit it to pull in whatever else you want shipped to ELK.
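Adding another application's logs is just another path entry under the prospector in /etc/filebeat/filebeat.yml; a minimal Filebeat 5.x-style sketch (the myapp path is hypothetical), followed by a Filebeat restart:

```yaml
# Illustrative Filebeat 5.x prospector: one input watching several globs.
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/*.log
      - /var/log/myapp/*.log   # hypothetical extra application logs
```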
I’ve got a friend who had the same really creepy pizza delivery guy always say “we’ve done all the hard work, all you have to do is eat.” This is sort of like that except Ansible does all the heavy lifting and you just get to ELK it up.
Using Kibana Dashboards
Kibana 5 adds a lot of functionality to the Kibana dashboard experience, but basically everything that you can log you can graph and visualize with a high level of customization. We won’t cover all of that here, but I’ll be adding more details about further optimizing your Kibana dashboard, perhaps in a different post. For now the extra Kibana dashboards for Filebeat have also been included in these playbooks as a starting point.
ELK 2.4 Series
By default the playbooks install the current 5.x series, but you can use the 2.4 series by checking out that branch.
git clone https://github.com/sadsfae/ansible-elk
cd ansible-elk
git checkout 2.4
Below is a video demonstration of the Ansible automation in action.