If you have made it through the initial Filebeat installation, you may want to do some more interesting things with Filebeat.
Here I will share some of my experience with it.
Configuring Filebeat To Tail Files
This was one of the first things I wanted to make Filebeat do. The idea of ‘tail’ is to tell Filebeat to read only new lines from a given log file, not the whole file.
That’s useful when you have big log files and you don’t want Filebeat to read all of them, just the new events.
The right way of configuring log tailing is as follows:
1| Stop filebeat
systemctl stop filebeat
2| Delete filebeat registry file
rm -vf /var/lib/filebeat/registry
It is necessary to delete the registry if you have previously run Filebeat without the tail option enabled. If you don’t do this, “tail” won’t work and Filebeat will continue reading the log from the last position it has recorded.
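If you are curious what Filebeat has tracked so far, you can inspect the registry before deleting it. It is a plain JSON file; roughly, each entry records a source file and the byte offset Filebeat has read up to (the exact layout varies between Filebeat versions, so treat this as an illustration):

cat /var/lib/filebeat/registry
# e.g. [{"source":"/var/log/messages","offset":52188,"type":"log", ...}]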
3.a| Add the ‘tail_files’ option to your Log definition
If you are using Log definitions, your config needs to look something like this:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  tail_files: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /var/log/messages
3.b| Add the ‘tail_files’ option to Filebeat module configuration
If you are using some of the modules, this is how the config should look (the example is for the apache2 module, configured in apache2.yml):
- module: apache2
  # Access logs
  access:
    enabled: true
    input:
      tail_files: true
4| Finally, start Filebeat again
systemctl start filebeat
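To confirm that Filebeat came back up cleanly, you can check its service status and recent log output with the standard systemd tools:

systemctl status filebeat
journalctl -u filebeat -n 20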
Generating Dynamic Index Names In Filebeat
Index names based on Modules / Filesets used
By default, Filebeat will push all the data it reads (from log files) into the same Elasticsearch index. This can become tedious to support and messy to navigate.
I prefer each of my log types to write its messages to a separate Elasticsearch index.
Here is how you can make Filebeat send logs to separate indices based on the module the log comes from:
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "filebeat-%{[fileset.module]}-%{[fileset.name]}-%{+yyyy.MM.dd}"
The definition above will split your index by both module name and fileset name.
For example, if you are using the apache2 module for reading both access and error logs, this will result in 2 separate indices:
filebeat-apache2-error-yyyy.mm.dd
and
filebeat-apache2-access-yyyy.mm.dd
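Once some logs have been shipped, you can verify the resulting indices from your Kibana Console (the _cat API is part of Elasticsearch itself):

GET _cat/indices/filebeat-*?v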
Index names based on the log lines being read
Another useful method is to choose the index based on the content of the incoming log lines. Keep in mind that this method can require some extra CPU if you have a lot of logs to process.
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "bad-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "BAD"
    - index: "good-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        message: "GOOD"
This will send each log line to the “bad” or “good” index, depending on whether the message contains BAD or GOOD.
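A quick way to test the routing is to append a matching line to one of the watched logs (this assumes /var/log/test.log is among your configured paths) and then list the matching indices from your Kibana Console:

echo "something BAD happened" >> /var/log/test.log

GET _cat/indices/bad-*?v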
You can read more about using ‘indices:’ here:
https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
Modifying Default Filebeat Template (when using Elasticsearch output)
By default, when you first run Filebeat, it will try to create a template with field mappings in your Elasticsearch cluster.
The template is called “filebeat” and applies to all “filebeat-*” indices created.
You can see what mappings/definitions the template has by executing the following in your Kibana Console:
GET /_template/filebeat
Sometimes there is a need to change the field mappings or the default index settings in that template.
Here are some ways of doing it.
Making a custom template out of the current FB template
1| Dump your current template
filebeat export template > filebeat.template
Now you can make whatever modifications you like to the filebeat.template file.
Then copy the file somewhere into the Filebeat directory, for example to “/etc/filebeat/filebeat.template”.
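For example, a common edit on a small cluster is reducing the shard and replica counts. The exported template is plain JSON, so the relevant fragment could look roughly like this (illustrative values; the surrounding keys depend on your Filebeat version):

"settings": {
  "index": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}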
2| Overwrite the template in Elasticsearch
The command below assumes your cluster is accessible at “localhost:9200”. Change this if needed.
curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/filebeat -d@filebeat.template
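You can verify that your version of the template is now in place by fetching it back:

curl -XGET 'http://localhost:9200/_template/filebeat?pretty'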
3| Make sure Filebeat won’t override the template
After importing your custom template to replace the default Filebeat one, you should make sure Filebeat is not configured to overwrite templates.
There is a setting in /etc/filebeat/filebeat.yml for this, which looks like this:
setup.template.overwrite: false
If this option is set to “true”, your template will be overwritten by the default one whenever you restart Filebeat.
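For reference, the whole template block in /etc/filebeat/filebeat.yml could look like this (the name/pattern lines are only needed if you deviate from the defaults):

setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
setup.template.overwrite: false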
4| (Optional) Disable template creation by FB
You can also tell FB not to create the filebeat template at all, and manage it yourself.
The option is:
setup.template.enabled: false
Modifying FB Template By Editing /etc/filebeat/fields.yml
FB generates the template by parsing the file /etc/filebeat/fields.yml.
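As a rough illustration, a custom entry in fields.yml could look like this (the key and field names here are hypothetical; follow the schema of the existing entries in your fields.yml):

- key: myapp
  title: "My Application"
  description: Custom fields for my application logs.
  fields:
    - name: myapp.response_time
      type: long
      description: Response time in milliseconds.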
You can also make changes to this file and re-import the template into Elasticsearch as follows:
1| Delete the current template from Kibana
DELETE /_template/filebeat
2| Make FB regenerate and reimport the template
filebeat setup --template
Modifying existing ingest pipelines
If you are using Filebeat to ingest data directly into Elasticsearch, you may want to modify some of the existing ingest pipelines, or write new ones.
I’ve written a separate post about working with Filebeat pipelines.