Installing And Configuring Filebeat On CentOS/RHEL

Author: gryzli

Filebeat is a great tool for scraping your server logs and shipping them to Logstash or directly to Elasticsearch.

Below you will find some of my struggles with Filebeat and its proper configuration.


Installing Filebeat under CentOS/RHEL

As with all ELK products, the installation process is really easy and straightforward. Filebeat can be installed from the Elastic repo as follows:


1) Add the Elasticsearch repository to your yum.repos.d directory

The repo is intentionally added with "enabled=0", so you won't risk accidental updates of Filebeat (which can sometimes become a problem). The definition below follows the standard Elastic 6.x repository layout:

vim /etc/yum.repos.d/elastic.repo

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md


2) Install the Filebeat package

yum --enablerepo=elasticsearch-6.x install filebeat


3) Make Filebeat start at boot time

For CentOS 7:

systemctl enable filebeat


For CentOS 5/6:

chkconfig filebeat on


Next comes the part where we get things up and running…

1) [Essential] Configure Filebeat To Read Some Logs

If you just want to test how it works, you can enable the default log paths for Filebeat.

Uncomment or add the following section in your Filebeat configuration file, /etc/filebeat/filebeat.yml:

filebeat.inputs:
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /var/log/messages
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

The config above tells Filebeat to read all "*.log" files in /var/log, as well as /var/log/messages.

This is a pretty straightforward setup.

Now that we have our source of information configured, we need one more thing: to configure the destination, i.e. the receiver of the parsed logs.


2) [Essential] Configure Filebeat Output

Filebeat supports different types of outputs you can use to ship your processed log data.

Currently you can choose between the following outputs: Logstash, Kafka, Elasticsearch, Redis, File, Console, Cloud (Elastic Cloud)

You can have only one output configured at any given moment!

Most of the time you will want to use either Elasticsearch or Logstash as your output. You may also want to enable the File or Console output for debugging purposes (we will cover this later).
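As a quick preview, a File output for debugging can be sketched like this (a minimal sketch; the path and filename values are arbitrary choices for illustration, not defaults):

```yaml
# File output, useful for inspecting what Filebeat would ship
output.file:
  path: "/tmp/filebeat"
  filename: "filebeat-debug"
```

With this in place, Filebeat writes its events as JSON lines into the given file instead of sending them over the network, which makes it easy to verify your inputs before pointing the output at a real cluster.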

This is a basic configuration for using the Elasticsearch output in your Filebeat:

# Elasticsearch output with a single server, "localhost:9200"
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

If you happen to use authentication inside your Elastic cluster, you can add the following auth lines as well:

# Elasticsearch output with basic authentication enabled
output.elasticsearch:
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "elastic"
  password: "changeme"


Also, if you have multiple ES ingest nodes, you may want to load-balance your connections between them. That is as easy as changing the config like this:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["server1:9200","server2:9200","server3:9200"]
  loadbalance: true


At this point you are ready to run your Filebeat instance and ship your first log lines to Elasticsearch: just restart Filebeat (systemctl restart filebeat).


After the restart, Filebeat will start pushing data into the default Filebeat index, which will be named something like:

filebeat-6.6.0-2019.02.15

As you can see, the index name is created dynamically and contains the version of your Filebeat (6.6.0) plus the current date (2019.02.15).

For each log line pushed to ES, Filebeat adds extra meta information, such as the name/OS of your machine and the log path the line was extracted from. You can inspect the imported lines inside your ES cluster (in Kibana, for example).
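A single imported event then looks roughly like this (a hedged sketch following the Filebeat 6.x field layout; all values here are made up for illustration, and exact field names vary between versions):

```json
{
  "@timestamp": "2019-02-15T10:00:00.000Z",
  "message": "Feb 15 10:00:00 myhost sshd[1234]: Accepted publickey for root",
  "source": "/var/log/messages",
  "offset": 4096,
  "input": { "type": "log" },
  "beat": {
    "name": "myhost",
    "hostname": "myhost",
    "version": "6.6.0"
  }
}
```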


3) [Optional] Parsing Application-Specific Logs By Using Filebeat Modules

Filebeat comes with some pre-installed modules which can make your life easier, because:

  • Each module comes with pre-defined "Ingest Pipelines" for the specific log type
  • The ingest pipelines parse your logs, extract certain fields from them and store them as separate index fields
  • Later you have much more power over the data (aggregations, filtering, graphs, etc.)

The default modules Filebeat ships with include: apache2, auditd, elasticsearch, haproxy, icinga, iis, kafka, kibana, logstash, mongodb, mysql, nginx, osquery, postgresql, redis, suricata, system and traefik (each as a *.yml file under /etc/filebeat/modules.d).

In addition there are many custom modules, written by different people for different types of logs, which you can install and use as well.


Using Apache2 Module

[Caution] The current apache2 module requires that your Elastic cluster has the ingest-user-agent and ingest-geoip plugins installed.


Here is how to use the bundled "apache2" module for parsing your Apache access logs:

# Go to modules directory 
cd /etc/filebeat/modules.d

# Rename the apache module file 
mv apache2.yml.disabled apache2.yml


The default module configuration contains the following log paths:

      - /var/log/apache2/access.log*
      - /var/log/apache2/other_vhosts_access.log*

If your access logs reside in different paths, you will need to point to them inside the /etc/filebeat/modules.d/apache2.yml file (the var.paths directive).
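A module configuration pointing to custom log locations could look roughly like this (a sketch; the httpd paths are assumptions for a typical CentOS Apache install, and var.paths follows the Filebeat 6.x module syntax):

```yaml
# /etc/filebeat/modules.d/apache2.yml
- module: apache2
  # Apache access logs
  access:
    enabled: true
    var.paths: ["/var/log/httpd/access_log*"]

  # Apache error logs
  error:
    enabled: true
    var.paths: ["/var/log/httpd/error_log*"]
```

After changing the module config, restart Filebeat (systemctl restart filebeat) for the new paths to take effect.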


If you want to read some more advanced stuff about Filebeat, you can check this out:

Advanced Filebeat Configuration