ElasticSearch Example Queries
Searching with a wildcard query and changing the size of the search result to 100 items.

GET /filebeat-apache2-access-2019.01.10/_search
{
  "size": 100,
  "query": {
    "wildcard": {
      "apache2.access.url": "*CHAR(*"
    }
  }
}
Filter aggregation buckets by a minimum document count using the min_doc_count setting.

# Will print aggregation buckets only if they contain at least 5 documents
...
  "aggregations": {
    "my_aggregation": {
      "terms": {
        "field": "Field1",
        "min_doc_count": 5
      }
    }
  }
}
Check cluster health information

GET _cluster/health
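If you also need per-index detail, the same endpoint accepts a level parameter (valid values are cluster, indices and shards), for example:

GET _cluster/health?level=indices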
Check cluster settings

GET _cluster/settings
Check index mappings and settings

# Assuming the name of your index is "some_index-2019-01-15"
GET /some_index-2019-01-15
Check index recovery stats

GET /_cat/recovery?v
Check shard status

GET /_cat/shards
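If you are mainly hunting for problem shards, the _cat API also lets you pick and sort columns with the h and s parameters, for example:

GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state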
Debugging (with logs)
Debugging Master Discovery:
(Add this to elasticsearch.yml and restart Elasticsearch)

logger.org.elasticsearch.cluster.coordination.ClusterBootstrapService: TRACE
logger.org.elasticsearch.discovery: TRACE
Cluster Level
Temporary Disabling Shard Re-Allocation
This is probably one of the most frequent things you will do with your cluster during maintenance.
If you are planning any of the following:
- Restarting a node
- Stopping a node for an upgrade
- Stopping a node… for whatever reason
You should consider temporarily disabling shard reallocation. If you do not, shards will be reallocated between the remaining cluster nodes, which, depending on your index size, can be a very expensive operation.
Disable Shard Re-Allocation
# KIBANA
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}

# CURL API
curl -XPUT "https://es.example.com:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}'
At this point you may also want to sync/flush your indexes before shutting the node down.
You can do this by:
# KIBANA
POST _flush/synced
Finally, after your node has been restarted and has re-joined the cluster, you can re-enable shard allocation.
Enable Shard Re-Allocation
# KIBANA
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}

# CURL API
curl -XPUT "https://es.example.com:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}'
Index Level
Creating Index Aliases
Index aliases are a very nice feature. Basically, you can make something like this:
index_alias --> index1-01.01.2019,index1-02.01.2019....
Create/Delete index alias
The snippet below will create an index alias “shbeat-exim4-main” which will point to 3 different indexes.
POST /_aliases
{
  "actions" : [
    { "add" : { "index" : "shbeat-exim4-main-2019.02.02", "alias" : "shbeat-exim4-main" } },
    { "add" : { "index" : "shbeat-exim4-main-2019.02.03", "alias" : "shbeat-exim4-main" } },
    { "add" : { "index" : "shbeat-exim4-main-2019.02.04", "alias" : "shbeat-exim4-main" } }
  ]
}
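Once the alias is created, you can search across all of its member indexes by querying the alias just like a regular index, for example:

GET /shbeat-exim4-main/_search
{
  "query": {
    "match_all": {}
  }
}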
Removing is also pretty simple:
To remove only one index from the alias:
POST /_aliases
{
  "actions" : [
    { "remove" : { "index" : "shbeat-exim4-main-2019.02.02", "alias" : "shbeat-exim4-main" } }
  ]
}
You can also remove indexes from an alias by wildcard, or remove all of them (removing all indexes actually removes the alias itself):
POST /_aliases
{
  "actions" : [
    { "remove" : { "index" : "*", "alias" : "shbeat-exim4-main" } }
  ]
}
Good article on Index Aliases by Elastic:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html
Increase the timeout for shard re-allocation when a node is missing
If a node leaves your cluster, the cluster will automatically try (after index.unassigned.node_left.delayed_timeout has elapsed) to re-allocate that node's replica shards.
Depending on the size of your indexes, this can lead to very CPU/IO/network intensive operations while data is moved between your cluster nodes.
If you know that you are going to disconnect or restart nodes frequently, it is smart to set the index.unassigned.node_left.delayed_timeout setting for your index to a big enough value.
For example, you could set it to “60m” (60 minutes), during which your cluster will not do any shard re-allocation for the missing node.
If your index is called “my_index”, you can update the setting with the following command:
# Changing the option for a given index "my_index"

# CURL style
curl -XPUT "https://es-cluster:9200/my_index/_settings" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "60m"
  }
}'

# Kibana DevTools
PUT /my_index/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "60m"
  }
}
Keep in mind that this option applies only to your currently existing indexes.
If you want this option to persist for every newly created index (or a group of indexes), you should modify your index templates by adding the following snippet inside:
...
"settings" : {
  "unassigned": {
    "node_left": {
      "delayed_timeout": "60m"
    }
  }
}
...
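For reference, here is a minimal sketch of a complete (legacy) index template carrying this setting; the template name "my_template" and the index pattern below are just examples:

# "my_template" and "my_index-*" are placeholder names
PUT /_template/my_template
{
  "index_patterns": ["my_index-*"],
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "60m"
  }
}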
IMPORTANT: If you lose a node temporarily and then re-add it, you may want to speed up the allocation of the unassigned shards. To do that, you can temporarily modify the “index.unassigned.node_left.delayed_timeout” setting to a lower value (a couple of seconds), and then turn it back to the default.
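A quick sketch of that flow, again using “my_index” as the index name (setting the value to null resets it to its default):

# Temporarily lower the delay so the unassigned shards get allocated right away
PUT /my_index/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5s"
  }
}

# Once everything is assigned, reset the setting back to its default
PUT /my_index/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": null
  }
}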
Apply setting to all existing indexes
Applying a setting to all existing indexes is easy using the /_all/_settings API.
Let's say you want to update the refresh interval for all of your existing indexes to 60 seconds.
# KIBANA
PUT /_all/_settings
{
  "settings": {
    "refresh_interval": "60s"
  }
}
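To verify the change, you can read the setting back and filter the response down to that single setting:

GET /_all/_settings/index.refresh_interval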