Logstash
Download:
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.rpm
sudo rpm -iv logstash-5.2.2.rpm
Configure SSL (optional: if the logs are on the local server, skip this step)
Generate SSL Certificates
Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of the ELK Server. Create the directories that will store the certificate and private key with the following commands:
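- sudo mkdir -p /etc/pki/tls/certs
- sudo mkdir -p /etc/pki/tls/private

(These are the certs and private directories under /etc/pki/tls that the openssl commands later in this section write the certificate and key into.)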
Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
Option 1: IP Address
If you don't have a DNS setup that would allow the servers you will be gathering logs from to resolve the IP address of your ELK Server, you will have to add your ELK Server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:
- sudo vi /etc/pki/tls/openssl.cnf
Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server's private IP address):
- subjectAltName = IP: ELK_server_private_ip
Save and exit.
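For reference, the relevant portion of openssl.cnf should now look roughly like this (the other directives already present in the section vary by distribution and are left out here):

[ v3_ca ]
subjectAltName = IP: ELK_server_private_ip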
Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:
- cd /etc/pki/tls
- sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
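If you want to confirm that the SAN made it into the certificate, you can inspect it with openssl (the exact output formatting varies by OpenSSL version):

- sudo openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'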
The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip Option 2 and move on to Configure Logstash.
Option 2: FQDN (DNS)
If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server's private IP address; this domain name will be used in the next command to generate the SSL certificate. Alternatively, you can use a record that points to the server's public IP address. Just be sure that your servers (the ones you will be gathering logs from) will be able to resolve the domain name to your ELK Server.
Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/…) with the following command (substitute in the FQDN of the ELK Server):
- cd /etc/pki/tls
- sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
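To double-check that the certificate carries the FQDN you intended, you can print its subject:

- sudo openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject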
The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let's complete our Logstash configuration.
Configure Logstash
Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let's create a configuration file called 02-beats-input.conf and set up our "filebeat" input:
- sudo vi /etc/logstash/conf.d/02-beats-input.conf
Insert the following input configuration:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Save and quit. This specifies a beats input that will listen on TCP port 5044, and it will use the SSL certificate and private key that we created earlier.
Now let's create a configuration file called 10-syslog-filter.conf, where we will add a filter for syslog messages:
- sudo vi /etc/logstash/conf.d/10-syslog-filter.conf
Insert the following syslog filter configuration:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat), and it will try to use grok to parse incoming syslog logs to make them structured and queryable.
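As a rough illustration (the hostname, program, and PID below are invented), a syslog line such as:

Mar 15 10:14:32 webserver sshd[1234]: Failed password for invalid user admin from 203.0.113.5 port 51234 ssh2

would be split into fields roughly like this:

syslog_timestamp: Mar 15 10:14:32
syslog_hostname: webserver
syslog_program: sshd
syslog_pid: 1234
syslog_message: Failed password for invalid user admin from 203.0.113.5 port 51234 ssh2

The two add_field settings record when the event was received and from which host, and the date filter then uses syslog_timestamp to set the event's @timestamp.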
Lastly, we will create a configuration file called 30-elasticsearch-output.conf:
- sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf
Insert the following output configuration:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Save and exit. This output configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the beat used (filebeat, in our case).
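For example, an event shipped by Filebeat on March 15, 2017 would be stored in an index named filebeat-2017.03.15.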
If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).
Test your Logstash configuration with this command:
- sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/
It should display Configuration OK if there are no syntax errors. Otherwise, try to read the error output to see what's wrong with your Logstash configuration.
Restart and enable Logstash to put our configuration changes into effect:
- sudo systemctl restart logstash
- sudo systemctl enable logstash
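Once Logstash is back up, you can check that the Beats input from 02-beats-input.conf is actually listening (it can take a short while for the pipeline to start; ss is part of the iproute package that ships with CentOS 7):

- sudo ss -plnt | grep 5044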
Next, we’ll load the sample Kibana dashboards.
Load Kibana Dashboards
Elastic provides several sample Kibana dashboards and Beats index patterns that can help you get started with Kibana. Although we won't use the dashboards in this tutorial, we'll load them anyway so we can use the Filebeat index pattern that they include.
First, download the sample dashboards archive to your home directory:
cd ~
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.3.1.zip
Install the unzip package with this command:
- sudo yum -y install unzip
Next, extract the contents of the archive:
- unzip beats-dashboards-*.zip
And load the sample dashboards, visualizations and Beats index patterns into Elasticsearch with these commands:
- cd beats-dashboards-*
- ./load.sh
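As a quick sanity check, you can count the objects that load.sh wrote into the .kibana index (the exact number depends on the dashboards package version):

- curl -XGET 'http://localhost:9200/.kibana/_count?pretty'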
These are the index patterns that we just loaded:
- [packetbeat-]YYYY.MM.DD
- [topbeat-]YYYY.MM.DD
- [filebeat-]YYYY.MM.DD
- [winlogbeat-]YYYY.MM.DD
When we start using Kibana, we will select the Filebeat index pattern as our default.
Load Filebeat Index Template in Elasticsearch
Because we are planning on using Filebeat to ship logs to Elasticsearch, we should load a Filebeat index template. The index template will configure Elasticsearch to analyze incoming Filebeat fields in an intelligent way.
First, download the Filebeat index template to your home directory:
cd ~
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
Then load the template with this command:
- curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
If the template loaded properly, you should see a message like this:
{ "acknowledged" : true }
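To confirm that Elasticsearch registered the template, you can fetch it back:

- curl -XGET 'http://localhost:9200/_template/filebeat?pretty'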