ElasticSearch Security – Installing And Configuring Search-Guard How-To

Author: gryzli


Security is one of the major “missing” pieces of the free ELK Stack, so let’s talk about how to achieve it!

Sooner or later the time will come when you want a more “secure” ElasticSearch cluster, and by “secure” I mean some of the following:

  • Encrypted communication between cluster nodes
  • Encrypted communication between “Indexing application/servers” and Elastic nodes
  • Encrypted communication between Kibana and ElasticSearch
  • Authentication – having different users/passwords for connecting to Elastic
  • Authorization – having different user permissions and roles, things like:
    • Access for a given user to a specific index
    • Different access levels (write access to one index, read access to another), and so on

 

Current Solutions For Achieving ElasticSearch / ELK Stack Security

 

1) ElasticStack X-Pack Security

I have never tried X-Pack Security, but considering that it is made by the Elastic guys themselves, it should be pretty handy for achieving all kinds of Elastic-related security.

Reading the documentation, it seems to provide almost anything you may need to successfully secure your cluster. If you or your company are already paying for X-Pack, this may work perfectly for you.

Because X-Pack Security is NOT FREE, I will move on to other solutions for achieving security.

 

 

2) ReadOnlyREST

This is another solution which I have not tried yet, but it looks really promising.

The plugin is recommended in the ElasticSearch “Security Plugins” documentation section.

It also has all the shiny features like encryption, authentication and authorization, and there is a FREE TO USE version as well as paid plans.

Here you can find some more information about this plugin:

https://github.com/beshu-tech/readonlyrest-docs/blob/master/elasticsearch.md

 

3) Search-Guard

This is the solution which I have tried (and am still using), and for now it suits my needs best.

They also sell this product, but there is a community FREE TO USE version as well (the one I use).

More information about what is included in the free version can be found here:

https://search-guard.com/product/

 

The only thing I don’t like about Search-Guard is that the documentation is terrible. That’s why I will try to write a deep walk-through of the installation and configuration process, hoping to make some lives easier than mine was.

 

Installing and Configuring Search-Guard On Existing ElasticSearch Cluster

Here I assume you already have a working ElasticSearch cluster. In the examples that follow, I’m using an ES cluster consisting of 2 nodes (both being data and master-eligible nodes).

Also, the nodes are running ElasticSearch version 6.5.1.

My current configuration is as follows:

(ES node1) 
IP: 192.168.55.1 
DNS: node1.gryzli.info


(ES node2) 
IP: 192.168.55.2 
DNS: node2.gryzli.info

 

 

1) Generating certificates for Search-Guard

The first thing to do is to prepare your SSL certificates for Search-Guard. You will need the following certificates:

Admin Certificate: You will use it for administering the cluster and applying the initial settings after the search-guard install
Files: admin.crt / admin.key 

Node Certificates: 

Node1 data certificate: Certificate for encrypting communication on port 9300
Files: node1.crt | node1.key 

Node1 http certificate: Certificate for encrypting communication on port 9200 
Files: node1_http.crt | node1_http.key

Node2 data certificate: Certificate for encrypting communication on port 9300
Files: node2.crt | node2.key 

Node2 http certificate: Certificate for encrypting communication on port 9200 
Files: node2_http.crt | node2_http.key

For this tutorial I’m going to use the search-guard tlstool to do the following:

  1. Generate my own Certificate Authority (CA)
  2. Generate a self-signed signing (intermediate) certificate
  3. Generate CSR’s for all node certificates
  4. Sign all node certificates

 

(HINT) In this step you could actually proceed differently, based on your preferences:

  • You can use real CA-signed certificates (or maybe LetsEncrypt or something like that)
  • You may use one certificate for both the transport (data) and HTTP channels

 

So let’s generate the certificates by using the Search-Guard provided tools, which is really easy to do.

 

First you have to download the search-guard tlstool from here:

https://search.maven.org/search?q=a:search-guard-tlstool
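If you prefer fetching it from the command line, something like this should work (a sketch assuming the standard Maven Central directory layout and version 1.5 of the tool; adjust the version to whatever is current):

gryzli@localhost $ wget 'https://repo1.maven.org/maven2/com/floragunn/search-guard-tlstool/1.5/search-guard-tlstool-1.5.tar.gz'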

 

Extract the files 

gryzli@localhost $ tar xvzf search-guard-tlstool-1.5.tar.gz

 

Create a configuration file for the tlstool

gryzli@localhost $ vim config/test_cluster.yml

(Update the certificate information according to your setup.)

###
### Self-generated certificate authority
### 
# 
# If you want to create a new certificate authority, you must specify its parameters here. 
# You can skip this section if you only want to create CSRs
#
ca:
   root:
      # The distinguished name of this CA. You must specify a distinguished name.   
      dn: CN=root.ca.gryzli.info,OU=CA,O=BugBear.BG\, Ltd.,DC=BugBear,DC=com

      # The size of the generated key in bits
      keysize: 2048

      # The validity of the generated certificate in days from now
      validityDays: 3650
      
      # Password for private key
      #   Possible values: 
      #   - auto: automatically generated password, returned in config output; 
      #   - none: unencrypted private key; 
      #   - other values: other values are used directly as password   
      pkPassword: none
      
      # The name of the generated files can be changed here
      file: root-ca.pem
      
   # If you want to use an intermediate certificate as signing certificate,
   # please specify its parameters here. This is optional. If you remove this section,
   # the root certificate will be used for signing.         
   intermediate:
      # The distinguished name of this CA. You must specify a distinguished name.
      dn: CN=signing.ca.gryzli.info,OU=CA,O=BugBear.BG\, Ltd.,DC=BugBear,DC=com
   
      # The size of the generated key in bits   
      keysize: 2048
      
      # The validity of the generated certificate in days from now      
      validityDays: 3650
  
      pkPassword: none
            
      # If you have a certificate revocation list, you can specify its distribution points here      
      crlDistributionPoints: URI:https://raw.githubusercontent.com/floragunncom/unittest-assets/master/revoked.crl

### 
### Default values and global settings
###
defaults:

      # The validity of the generated certificate in days from now
      validityDays: 3650 
      
      # Password for private key
      #   Possible values: 
      #   - auto: automatically generated password, returned in config output; 
      #   - none: unencrypted private key; 
      #   - other values: other values are used directly as password   
      pkPassword: none
      
      # Specifies to recognize legitimate nodes by the distinguished names
      # of the certificates. This can be a list of DNs, which can contain wildcards.
      # Furthermore, it is possible to specify regular expressions by
      # enclosing the DN in //. 
      # Specification of this is optional. The tool will always include
      # the DNs of the nodes specified in the nodes section.            
      #nodesDn:
      #- "CN=*.example.com,OU=Ops,O=Example Com\\, Inc.,DC=example,DC=com"
      # - 'CN=node.other.com,OU=SSL,O=Test,L=Test,C=DE'
      # - 'CN=*.example.com,OU=SSL,O=Test,L=Test,C=DE'
      # - 'CN=elk-devcluster*'
      # - '/CN=.*regex/' 

      # If you want to use OIDs to mark legitimate node certificates, 
      # the OID can be included in the certificates by specifying the following
      # attribute
      
      # nodeOid: "1.2.3.4.5.5"

      # The length of auto generated passwords            
      generatedPasswordLength: 12
      
      # Set this to true in order to generate config and certificates for 
      # the HTTP interface of nodes
      httpsEnabled: true
      
      # Set this to true in order to re-use the node transport certificates
      # for the HTTP interfaces. Only recognized if httpsEnabled is true
      
      # reuseTransportCertificatesForHttp: false
      
      # Set this to true to enable hostname verification
      #verifyHostnames: false
      
      # Set this to true to resolve hostnames
      #resolveHostnames: false
      
      
###
### Nodes
###
#
# Specify the nodes of your ES cluster here
#      
nodes:
  - name: node1
    dn: CN=node1.gryzli.info,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
    dns: 
      - node1.gryzli.info
    ip: 
      - 192.168.55.1

  - name: node2
    dn: CN=node2.gryzli.info,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
    dns: 
      - node2.gryzli.info
    ip: 
      - 192.168.55.2

###
### Clients
###
#
# Specify the clients that shall access your ES cluster with certificate authentication here
#
# At least one client must be an admin user (i.e., a super-user). Admin users can
# be specified with the attribute admin: true    
#        
clients:
  - name: gryzli
    dn: CN=root.gryzli.info,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com
    admin: true

 

Now generate all the needed certificates

gryzli@localhost $ cd tools/

# Generate new signing authority
gryzli@localhost ~/tools $ ./sgtlstool.sh -c ../config/test_cluster.yml -v -ca

# Generate CSR's for node + admin certs
gryzli@localhost ~/tools $ ./sgtlstool.sh -c ../config/test_cluster.yml -v -csr

# Generate cert/keys
gryzli@localhost ~/tools $ ./sgtlstool.sh -f -o -c ../config/test_cluster.yml -v -crt

 

After executing the commands above, you should have a directory “out” created in your current dir (tools):

gryzli@localhost [~/tools]$ find out/ | sort
out/
out/client-certificates.readme
out/gryzli.csr
out/gryzli.key
out/gryzli.pem
out/node1.csr
out/node1_elasticsearch_config_snippet.yml
out/node1_http.csr
out/node1_http.key
out/node1_http.pem
out/node1.key
out/node1.pem
out/node2.csr
out/node2_elasticsearch_config_snippet.yml
out/node2_http.csr
out/node2_http.key
out/node2_http.pem
out/node2.key
out/node2.pem
out/root-ca.key
out/root-ca.pem
out/signing-ca.key
out/signing-ca.pem

 

Uploading necessary certificates to Elastic cluster nodes 

The next step is to upload the certificates each node will need to the appropriate cluster nodes.

 

For both nodes (node1 and node2), you should do the following:

gryzli@localhost [~/tools]$ cd out

# Upload node1 certificates
gryzli@localhost [~/tools/out]$ rsync -av --progress node1* root@node1.gryzli.info:/etc/elasticsearch/ssl/

# Upload root-ca.pem to node1
gryzli@localhost [~/tools/out]$ scp root-ca.pem root@node1.gryzli.info:/etc/elasticsearch/ssl/


# Upload node2 certificates
gryzli@localhost [~/tools/out]$ rsync -av --progress node2* root@node2.gryzli.info:/etc/elasticsearch/ssl/

# Upload root-ca.pem to node2
gryzli@localhost [~/tools/out]$ scp root-ca.pem root@node2.gryzli.info:/etc/elasticsearch/ssl/


# Additionally, upload your admin certificate to Node1; you will need it later
gryzli@localhost [~/tools/out]$ scp gryzli* root@node1.gryzli.info:/etc/elasticsearch/ssl/

 

Fix the permissions of the newly uploaded certificate files. Here I assume your ElasticSearch configuration lives in the default (for CentOS) directory (/etc/elasticsearch) and that the service runs as the local user “elasticsearch”.

Execute the following command on both elastic nodes:

root@node1$ chown elasticsearch: /etc/elasticsearch/ssl/* 
root@node2$ chown elasticsearch: /etc/elasticsearch/ssl/*
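Optionally, you may also want to tighten the permissions on the private keys so that only the elasticsearch user can read them (my own hardening preference, not something Search-Guard strictly requires):

root@node1$ chmod 600 /etc/elasticsearch/ssl/*.key
root@node2$ chmod 600 /etc/elasticsearch/ssl/*.key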

 

Now we are ready to proceed with installing the search-guard plugin for ElasticSearch and configuring it further.

 

2) Installing and configuring Search-Guard plugin for ElasticSearch

 

1) Disable cluster shard allocation 

curl -Ss -XPUT 'http://node1.gryzli.info:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d '{"persistent":{"cluster.routing.allocation.enable": "none" }}'
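If you want to double-check that the setting was applied, you can read the cluster settings back (the cluster is still reachable over plain HTTP at this point):

curl -Ss 'http://node1.gryzli.info:9200/_cluster/settings?pretty'
# Should show "cluster" -> "routing" -> "allocation" -> "enable" : "none"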

 

2) Check which search-guard plugin version you need to install

Before installing the plugin, you need to find out which exact version you should use, based on your ElasticSearch cluster version. This can be checked here:

https://docs.search-guard.com/latest/search-guard-versions

For my current example I’m using ElasticSearch 6.5.1 and the corresponding plugin version is: com.floragunn:search-guard-6:6.5.1-23.2
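If you are not sure which ElasticSearch version your nodes are running, you can ask the cluster itself (still over plain HTTP, since Search-Guard is not installed yet):

curl -Ss 'http://node1.gryzli.info:9200/' | jq -r .version.number
# 6.5.1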

 

3) Stop ElasticSearch server on your cluster nodes 

root@node1$ systemctl stop elasticsearch

root@node2$ systemctl stop elasticsearch

 

4) Install search-guard plugin on both Node1/Node2

root@node1$ cd /usr/share/elasticsearch/
root@node1$ ./bin/elasticsearch-plugin install -b com.floragunn:search-guard-6:6.5.1-23.2


root@node2$ cd /usr/share/elasticsearch/
root@node2$ ./bin/elasticsearch-plugin install -b com.floragunn:search-guard-6:6.5.1-23.2

5) Add search-guard configuration to elasticsearch.yml  on Node1

We are going to configure Node1 first; from there we will re-enable cluster shard allocation and then initialize the search-guard index, which is needed for storing search-guard roles/permissions and other related data.

At this point ElasticSearch is still stopped on both nodes!

5.1) Add search-guard configuration to Node1->elasticsearch.yml

Open the config and add the following to the end of the configuration file.

root@node1$ vim /etc/elasticsearch/elasticsearch.yml

If your ElasticSearch comes with X-Pack integrated (the default for the CentOS 6/7 elasticsearch.rpm install), make sure to DISABLE X-PACK SECURITY!

.......
.......
# ADD THIS TO THE END OF YOUR elasticsearch.yml 
.......
.......
# Disable XPack -> Security plugin 
xpack.security.enabled: false


searchguard.ssl.transport.pemcert_filepath: ssl/node1.pem
searchguard.ssl.transport.pemkey_filepath: ssl/node1.key
searchguard.ssl.transport.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: ssl/node1_http.pem
searchguard.ssl.http.pemkey_filepath: ssl/node1_http.key
searchguard.ssl.http.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.nodes_dn:
- CN=node1.gryzli.info,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
- CN=node2.gryzli.info,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
searchguard.authcz.admin_dn:
- CN=root.gryzli.info,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com


 

5.2) Start ElasticSearch server on Node1
root@node1$ systemctl start elasticsearch
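If the node does not come up cleanly (wrong certificate paths or permissions are the usual suspects), the ElasticSearch logs are the first place to look. A quick way to follow them on a systemd-based install like the one used here:

root@node1$ journalctl -u elasticsearch -f

# or tail the log file directly (its name depends on your cluster.name)
root@node1$ tail -f /var/log/elasticsearch/*.log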

 

5.3) Re-enable cluster shard allocation by using sgadmin tool
# Go to search-guard plugin, tools directory
root@node1$ cd /usr/share/elasticsearch/plugins/search-guard-6/tools/

# Re-enable cluster shard allocation
[root@node1 tools]$ bash sgadmin.sh --enable-shard-allocation -key /etc/elasticsearch/ssl/gryzli.key -cert /etc/elasticsearch/ssl/gryzli.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -icl -nhnv -h node1.gryzli.info

----
Search Guard Admin v6
Will connect to node1.gryzli.info:9300 ... done
Elasticsearch Version: 6.5.1
Search Guard Version: 6.5.1-23.2
Connected as CN=root.gryzli.info,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com
Persistent and transient shard allocation enabled

 

5.4) (Optional) Change the default admin password before initializing search-guard index

By default search-guard comes with pre-defined users and passwords, which are described in the following configuration file:

/usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_internal_users.yml

If your cluster is publicly accessible, you may want to first change the default passwords for the users or maybe comment out some of the pre-defined users as well.

Here is how to change the default admin password, before starting Elastic and importing Search-Guard index.

1) Generate hash for the new password by using search-guard hash.sh

[root@node1 tools]$ bash /usr/share/elasticsearch/plugins/search-guard-6/tools/hash.sh 
...
...
[Password:]
$2y$12$yfGVke3Xsik1f7X4qap6vu2h4ScQk2vNHtbVRP7xKsK1xbzXUqhYW

2) Copy the hash and replace it inside the user configuration file:

/usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_internal_users.yml

 

This is everything you need to do; later, after initializing the SG index, your new passwords will take effect.
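For reference, after pasting the new hash the admin entry in sg_internal_users.yml should look roughly like this (a sketch using the example hash generated above; keep whatever other keys your stock file already has and only swap the hash value):

admin:
  hash: $2y$12$yfGVke3Xsik1f7X4qap6vu2h4ScQk2vNHtbVRP7xKsK1xbzXUqhYW
  roles:
    - admin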

 

 

5.5) Initialize the search-guard index by using the sgadmin tool
[root@node1]$ cd /usr/share/elasticsearch/plugins/search-guard-6/tools

[root@node1 tools]$ bash sgadmin.sh -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig -icl -key /etc/elasticsearch/ssl/gryzli.key -cert /etc/elasticsearch/ssl/gryzli.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -nhnv -h node1.gryzli.info

---
Search Guard Admin v6
Will connect to node1.gryzli.info:9300 ... done
Elasticsearch Version: 6.5.1
Search Guard Version: 6.5.1-23.2
Connected as CN=root.gryzli.info,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: test_cluster
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1
searchguard index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/
Will update 'sg/config' with /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_config.yml
SUCC: Configuration for 'config' created or updated
Will update 'sg/roles' with /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_roles.yml
SUCC: Configuration for 'roles' created or updated
Will update 'sg/rolesmapping' with /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_roles_mapping.yml
SUCC: Configuration for 'rolesmapping' created or updated
Will update 'sg/internalusers' with /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_internal_users.yml
SUCC: Configuration for 'internalusers' created or updated
Will update 'sg/actiongroups' with /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_action_groups.yml
SUCC: Configuration for 'actiongroups' created or updated
Done with success

 

At this point, your search-guard plugin is initially configured!

 

From now on, you should always use “https://” when communicating with the HTTP channel, and you will also need to provide a basic-auth user and password to communicate with the cluster.

 

The default search-guard configuration is stored here:

/usr/share/elasticsearch/plugins/search-guard-6/sgconfig

 

The default admin credentials are user: admin and password: admin.

5.6) Validate the cluster status by using HTTPS and the admin/admin credentials

(Here I’m piping the output of curl to “jq”, which prettifies JSON. If you don’t have it installed, you can remove it from the pipe, or just install it.)

[root@node1 sgconfig]# curl -Ss -k https://admin:admin@node1.gryzli.info:9200/_cluster/health | jq 
...
...

{
  "cluster_name": "test_cluster",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 2,
  "active_primary_shards": 1,
  "active_shards": 2,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}

 

5.7) Add search-guard configuration settings to Node2 and start ElasticSearch

I assume you have already uploaded the node2* certificates and root-ca.pem to Node2 (inside /etc/elasticsearch/ssl).

The only thing left is to add configuration to elasticsearch.yml and start ElasticSearch.

 

Add the following to your /etc/elasticsearch/elasticsearch.yml on Node2:

# Disable xpack-security
xpack.security.enabled: false

# Add search-guard settings
searchguard.ssl.transport.pemcert_filepath: ssl/node2.pem
searchguard.ssl.transport.pemkey_filepath: ssl/node2.key
searchguard.ssl.transport.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: ssl/node2_http.pem
searchguard.ssl.http.pemkey_filepath: ssl/node2_http.key
searchguard.ssl.http.pemtrustedcas_filepath: ssl/root-ca.pem
searchguard.nodes_dn:
- CN=node1.gryzli.info,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
- CN=node2.gryzli.info,OU=Ops,O=BugBear BG\, Ltd.,DC=BugBear,DC=com
searchguard.authcz.admin_dn:
- CN=root.gryzli.info,OU=Ops,O=BugBear Com\, Inc.,DC=example,DC=com

 

Finally, start Elastic there:

root@node2$ systemctl start elasticsearch

 

5.8) Update the configuration of all your ElasticSearch clients (Logstash, Kibana, scripts, etc.)

After enabling search-guard, all your existing clients that connect to ElasticSearch over a plain connection will no longer be able to connect.

You will need to either create separate users/user roles or use the existing ones, and configure your clients appropriately.

Authentication from clients happens by using HTTP Basic-Auth, so you should basically just update your Elastic URLs to look like this:

# Before search-guard 

$url="http://node1.gryzli.info:9200" 

# After search-guard 
# Here I'm using the default admin:admin user/pass
# Also the connection now uses HTTPS instead of plain HTTP
# (clients talk to the HTTP port 9200; port 9300 is the node-to-node transport channel)
$url="https://admin:admin@node1.gryzli.info:9200"
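If you would rather not skip certificate verification (the -k flag used in the earlier curl examples), you can point the client at your root CA instead. With curl that would look like this (assuming you have copied root-ca.pem to the client machine):

curl -Ss --cacert /etc/elasticsearch/ssl/root-ca.pem -u admin:admin 'https://node1.gryzli.info:9200/_cluster/health?pretty'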

 

3) (Optional) Installing Search-Guard Kibana Plugin

By default search-guard comes with a pre-created user for Kibana (if you have used the default sgconfig before initializing your SG index), the credentials for which are:

user: kibanaserver
pass: kibanaserver

 

You are also able to install the search-guard plugin for Kibana, which will give you the following benefits:

  • GUI Interface for managing your SG roles / users and mappings
  • SG Login page for accessing Kibana through HTTP

Also keep in mind that if you are using X-Pack->Monitoring (which is free), it won’t work unless you install the Kibana search-guard plugin (at least it didn’t work for me before that; I was getting constant redirects).

 

1) Downloading Kibana Search-Guard plugin

The first thing to do is to download the correct version of the search-guard plugin for Kibana.

You can search for the plugin here:

https://search.maven.org/search?q=g:com.floragunn%20AND%20a:search-guard-kibana-plugin&core=gav

 

I’m going to download and install the plugin on Node1, which in the current scenario is also used for hosting Kibana.

Download the plugin:

root@node1$ cd /usr/src
root@node1 /usr/src$ wget 'https://search.maven.org/remotecontent?filepath=com/floragunn/search-guard-kibana-plugin/6.5.1-16/search-guard-kibana-plugin-6.5.1-16.zip'  -O kibana-plugin.zip

 

Make sure Kibana is stopped

root@node1$ systemctl stop kibana

 

Install the plugin (the “Optimizing and caching browser bundles” part may take a few minutes to complete):

[root@node1 /usr/src]$ /usr/share/kibana/bin/kibana-plugin install file:///usr/src/kibana-plugin.zip 
Attempting to transfer from file:///usr/src/kibana-plugin.zip
Transferring 2059943 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
.....
Plugin installation complete

 

Now add the following configuration to your kibana.yml (usually /etc/kibana/kibana.yml)

 

# Disable xpack.security plugin
xpack.security.enabled: false

elasticsearch.url: "https://node1.gryzli.info:9200"
elasticsearch.ssl.verificationMode: none
elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"

Finally, start Kibana and go to your login URL, where you should see something like this:

Kibana Search Guard login screen

For login you should use the default admin credentials (user: admin , pass: admin).
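For completeness, the start itself plus a quick check that the service came up (assuming a systemd-based Kibana install, like the ElasticSearch one above):

root@node1$ systemctl start kibana
root@node1$ systemctl status kibana

# follow the logs if the login page does not show up
root@node1$ journalctl -u kibana -f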

 

 

4) (Optional) Configure Logstash to authenticate in ElasticSearch + Search-Guard

If you are already using Logstash, you should add some additional configuration to it in order for it to be able to communicate with the Elastic cluster after installing Search-Guard.

The steps you should take are as follows.

 

4.1) Upload root-ca.pem to your Logstash server

You could upload it to “/etc/logstash/root-ca.pem”

 

4.2) Update the configuration of all pipeline files, which use “elasticsearch” output

Your output definition should look like this:

  elasticsearch {
         hosts => ["https://node1.gryzli.info:9200"]
         user => "logstash"
         password => "logstash"
         ssl => true
         ssl_certificate_verification => true
         cacert => "/etc/logstash/root-ca.pem"
  }

I assume you are using the default search-guard configuration, where there is already a user created for Logstash with the following creds:

user: logstash
pass: logstash
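For context, a minimal complete pipeline using this output might look like the following (a sketch: the beats input on port 5044 and the index name are just illustrative placeholders, while the elasticsearch block is the one from above):

input {
  # example input only - replace with whatever your pipeline actually uses
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://node1.gryzli.info:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "logstash"
    password => "logstash"
    ssl => true
    ssl_certificate_verification => true
    cacert => "/etc/logstash/root-ca.pem"
  }
}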

 

4.3) Restart your Logstash and make sure it is connecting to your cluster (by observing /var/log/logstash/logstash-plain.log )

 

5) (BONUS) Creating Write Only User for ElasticSearch with Search-Guard

This was my original goal back when I first started searching for authentication/authorization plugins for ElasticSearch.

Understanding how roles_mapping and roles work in SG can be difficult, so I’m going to explain it here.

 

The Task

Let’s assume we have the following task

1) Generate password hash for our new user  (writeonly)

2) Create user called “writeonly”

3) Give user “writeonly” write-only permissions on indexes matching the pattern “write_only*”

4) Apply the permissions to our ES cluster

 

Doing it

1) Generate password hash

First we need to generate a hash for our password, which can be done by using the search-guard hash.sh:

# Generating hash for password "my_password"
[root@node1 tools]# bash /usr/share/elasticsearch/plugins/search-guard-6/tools/hash.sh  -p my_password
....
$2y$12$FLheAZHSNprNs4YRG5w22O7AI5dUt8nqQCK5NPO4AnVSNfAkgz4ZW

 

2) Add user information to sg_internal_users.yml

Here I’m using the Hash generated in the previous step.

root@node1$ vim /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_internal_users.yml
...
# Add the following
...

writeonly:
  hash: $2y$12$FLheAZHSNprNs4YRG5w22O7AI5dUt8nqQCK5NPO4AnVSNfAkgz4ZW
  roles:
    - write_only

Note that here the role name is written as “write_only” instead of “sg_write_only”. That is how search-guard works: the “sg_” prefix is stripped from role names when they are referenced.

 

3) Create a backend role granting write-only access to our “write_only*” indexes

The role name should be prefixed with “sg_”, which is stripped when the role is referenced from other files (I don’t know why it works this way).

root@node1$ vim /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_roles.yml

...
# Add the following
... 

sg_write_only:
  cluster:
     - indices:data/write/index
  indices:
    'write_only*':
         '*':
           - "indices:data/write*"
           - indices:data/write/index
           - indices:admin/mapping/put
           - indices:admin/create

 

4) Create role mapping to connect our user with our newly created backend role 

Here again, “sg_” is stripped when the backend role is referenced.

root@node1$ vim /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/sg_roles_mapping.yml

...
# Add the following
...

sg_write_only: 
  backendroles:
    - write_only
  users: 
    - writeonly

 

5) Apply the new search-guard settings 

Finally, we need to re-apply our search-guard configuration by executing the following:

root@node1$ cd /usr/share/elasticsearch/plugins/search-guard-6/tools 

root@node1$ bash sgadmin.sh -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig -icl -key /etc/elasticsearch/ssl/gryzli.key -cert /etc/elasticsearch/ssl/gryzli.pem -cacert /etc/elasticsearch/ssl/root-ca.pem -nhnv -h node1.gryzli.info

 

6) Test our user 

 

Try to index a document into the “write_only” index:

[root@node1]$ curl -X PUT   -k https://writeonly:my_password@node1.gryzli.info:9200/write_only/_doc/1 -H"Content-Type: application/json" -d'{"data":"test_data"}'   | jq 
...
...

{
  "_index": "write_only",
  "_type": "_doc",
  "_id": "1",
  "_version": 3,
  "result": "updated",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 2,
  "_primary_term": 1
}

The request is successful.

 

Now try to read a document from the index (write_only), which should fail:

[root@node1 sgconfig]$ curl -sS -X GET  -k    https://writeonly:my_password@node1.gryzli.info:9200/write_only/_search | jq 
{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "no permissions for [indices:data/read/search] and User [name=writeonly, roles=[write_only], requestedTenant=null]"
      }
    ],
    "type": "security_exception",
    "reason": "no permissions for [indices:data/read/search] and User [name=writeonly, roles=[write_only], requestedTenant=null]"
  },
  "status": 403
}

The request has failed, as expected.
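As a final sanity check, Search Guard exposes an authinfo endpoint that shows how the authenticated user was mapped to backend roles, which is handy whenever a permission does not behave the way you expect:

[root@node1]$ curl -sS -k https://writeonly:my_password@node1.gryzli.info:9200/_searchguard/authinfo | jq
# Should list "writeonly" as the user and "write_only" among the backend roles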