Advanced Filebeat Configuration

Author: gryzli

If you have made it through the initial Filebeat installation, you may want to do some more interesting stuff with Filebeat.

Here I will share some of my experience with it.

 

Configuring Filebeat To Tail Files

This was one of the first things I wanted to make Filebeat do. The idea of ‘tail‘ is to tell Filebeat to read only new lines from a given log file, not the whole file.

That’s useful when you have big log files and you don’t want Filebeat to read all of them, just the new events.

 

The right way of configuring log tailing is as follows:

1| Stop filebeat
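On a systemd-based package install this is typically:

```shell
sudo systemctl stop filebeat
# or, on older SysV-style installs:
# sudo service filebeat stop
```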

 

2| Delete filebeat registry file

It is necessary to delete the registry if you have previously started Filebeat without the tail option enabled. If you don’t do this, “tail” won’t work and Filebeat will continue to read the log from the last position it has recorded.
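The registry location depends on your install method and version; for the DEB/RPM packages it is usually under /var/lib/filebeat (adjust the path for your setup):

```shell
# Filebeat 6.x and earlier keep a single registry file:
sudo rm /var/lib/filebeat/registry

# Filebeat 7.x+ keeps a registry directory instead:
# sudo rm -rf /var/lib/filebeat/registry
```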

 

3.a| Add the ‘tail_files‘ option to your Log definition

If you are using Log definitions, it needs to look something like this:
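A minimal sketch of a log input in /etc/filebeat/filebeat.yml (the path is just an example; Filebeat versions before 6.3 use `filebeat.prospectors:` with `input_type: log` instead of `filebeat.inputs:`):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log
    # Start reading at the end of each file instead of the beginning
    tail_files: true
```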

 

3.b| Add the ‘tail_files’ option to Filebeat module configuration

If you are using some of the modules, this is how the config should look (the example is for the apache2.yml module):
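A sketch for /etc/filebeat/modules.d/apache2.yml, assuming a Filebeat 6.x-style module config where the `input:` subsection passes options down to the underlying log input (older 5.x releases spell this `prospector:`):

```yaml
- module: apache2
  access:
    enabled: true
    input:
      tail_files: true
  error:
    enabled: true
    input:
      tail_files: true
```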

 

4| Finally, start Filebeat again
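Again, on a systemd-based install:

```shell
sudo systemctl start filebeat
```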

 

 

Generating Dynamic Index Names In Filebeat

Index names based on Modules / Filesets used

By default, Filebeat will push all the data it reads (from log files) into the same Elasticsearch index. This can become tedious to support and messy to navigate.

I prefer each of my logs (by type) to produce its messages in a separate Elastic index.

Here is how you can make Filebeat send logs to separate indexes based on the module name the log comes from.

The following definition will split your index both by module name and fileset name.
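One way to express this in the elasticsearch output section of filebeat.yml (a sketch, using the `fileset.module` and `fileset.name` fields that Filebeat 6.x modules populate; check your version's exported fields):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "filebeat-%{[fileset.module]}-%{[fileset.name]}-%{+yyyy.MM.dd}"

# When using a custom index name, Filebeat also requires matching
# template settings:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
```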

 

For example, if you are using the apache2.yml module for reading both access and error logs, this will result in 2 separate indexes:

filebeat-apache2-error-yyyy.mm.dd

and

filebeat-apache2-access-yyyy.mm.dd

 

Index names based on the log lines being read

Another useful method is to choose your index based on the content of the incoming log lines. Keep in mind that this method could require some extra CPU if you have a lot of logs to process.

This will send the logs to a “bad” or “good” index depending on whether the message content contains BAD or GOOD.
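A sketch using the `indices:` setting of the elasticsearch output, which picks the first index whose condition matches (index names here are just examples):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  indices:
    - index: "bad-%{+yyyy.MM.dd}"
      when.contains:
        message: "BAD"
    - index: "good-%{+yyyy.MM.dd}"
      when.contains:
        message: "GOOD"
```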

 

You can read more about using ‘indices:’ here:

https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html

 

 

Modifying Default Filebeat Template (when using ElasticSearch output)

By default, when you first run Filebeat, it will try to create a template with field mappings in your Elasticsearch cluster.

The template is called “filebeat” and applies to all “filebeat-*” indexes created.

You can see what mappings/definitions the template has by executing the following in your Kibana Console:
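For example (newer Filebeat versions may append a version suffix to the template name, hence the wildcard):

```
GET _template/filebeat*
```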

Sometimes there is a need to change field mappings or default index settings in that template.

Here are some ways of doing it.

 

Making custom template out of current FB template

1| Dump your current template
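A sketch with curl (note that the dump is wrapped in an outer "filebeat" key, which you need to strip before re-importing):

```shell
curl -s 'http://localhost:9200/_template/filebeat' > filebeat.template

# Pretty-printed, if you have jq installed:
# curl -s 'http://localhost:9200/_template/filebeat' | jq . > filebeat.template
```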

Now you can make whatever modifications you like to the filebeat.template file.

Then, copy the file somewhere in filebeat dir, for example to “/etc/filebeat/filebeat.template”

 

2| Overwrite the template in ElasticSearch

The command assumes your cluster is accessible on “localhost:9200”. Change this if needed.
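A sketch with curl (the request body must be the template definition itself, without the outer "filebeat" key from the dump):

```shell
curl -XPUT 'http://localhost:9200/_template/filebeat' \
     -H 'Content-Type: application/json' \
     -d @/etc/filebeat/filebeat.template
```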

 

3| Make sure Filebeat won’t override the template

After importing your custom template to override the default filebeat template, you should make sure Filebeat is not configured to overwrite templates.

There is a setting in /etc/filebeat/filebeat.yml for this, which looks like this:
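In recent versions the option looks like this (Filebeat 5.x spells it `output.elasticsearch.template.overwrite` instead; check your version’s reference):

```yaml
setup.template.overwrite: false
```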

If this option is set to “true”, whenever you restart Filebeat your template will be overwritten by the default one.

 

4| (Optional) Disable template creation by FB

You can also tell FB not to create the filebeat template at all, and make sure you manage it yourself.

The option is:
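In Filebeat 6.x+ syntax (a sketch; older versions use `output.elasticsearch.template.enabled`):

```yaml
setup.template.enabled: false
```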

 

Modifying FB Template By Editing /etc/filebeat/fields.yml

 

FB generates the template by parsing the file: /etc/filebeat/fields.yml

You could also make changes to this file and re-import the template to Elastic as follows:

1| Delete the current template from Kibana
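In the Kibana Console (assuming the template is named “filebeat”, as above):

```
DELETE _template/filebeat
```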

 

2| Make FB regenerate and reimport the template
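One way to do this in Filebeat 6.x+ is the setup command, which regenerates the template from fields.yml and loads it into Elasticsearch (flag names may differ in older versions):

```shell
sudo filebeat setup --template -E 'output.elasticsearch.hosts=["localhost:9200"]'
```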

 

Modifying existing ingest pipelines

If you are using Filebeat to directly ingest data into ElasticSearch, you may want to modify some of the existing ingest pipelines, or write new ones.

I’ve written a separate post about working with Filebeat pipelines.