Elasticsearch 8.X Cluster
In this step-by-step guide, we will show you how to set up an Elasticsearch cluster with 3 nodes and Kibana. In addition, an NGINX reverse proxy and Let's Encrypt are used for SSL termination.
Prerequisites
- At least 3 Elasticsearch servers (Debian, pre-installed variant, with a pre-installed NGINX reverse proxy in the standard configuration)
- SSH access as root is required for these instructions
We recommend using a private network or one of our VPC networks for clustering Elasticsearch.
This article does not cover distributing the different Elasticsearch roles across separate servers; in these instructions, all servers are set up as master-eligible data nodes.
Preparation Node 1
Limiting the usable memory (RAM)
In order to keep the node operational even under heavier workloads, we recommend always limiting the memory available to Elasticsearch so that no OOM kill (Out-Of-Memory) can occur.
We use the formula (total server RAM - 3 GiB = Elasticsearch memory limit); for a server with 8 GiB RAM, this results in the following limits:
nano /etc/elasticsearch/jvm.options.d/creoline.options
Insert the following part and adjust the values accordingly if necessary:
# minimum JVM heap size
-Xms5g
# maximum JVM heap size
-Xmx5g
Customize basic Elasticsearch configuration Node 1:
First, the first node of the cluster must be configured. Note that the following changes are made only on the first node and that the Elasticsearch service is enabled and started only on this node for now.
Open the Elasticsearch configuration:
nano /etc/elasticsearch/elasticsearch.yml
Change the configuration as follows:
cluster.name: <cluster-name>
node.name: <server-name>
node.roles: [ master, data ]
network.host: <VPC IP address>
discovery.seed_hosts: ["<node-1-VPC-IP>:9300", "<node-2-VPC-IP>:9300", "<node-3-VPC-IP>:9300"]
cluster.initial_master_nodes: ["<node-1-fqdn>", "<node-2-fqdn>", "<node-3-fqdn>"]
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
http.host: <VPC IP address>
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
Save the changed configuration and enable the Elasticsearch service:
systemctl enable elasticsearch.service
You can then start it:
systemctl restart elasticsearch.service
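Optionally, you can check that the node responds on its VPC address. Replace the placeholder with the actual VPC IP; the -k flag skips verification of the self-signed certificate:
curl -k -u elastic "https://<VPC IP address>:9200"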
Customize NGINX configuration Node 1-3:
If a domain other than the server's working domain is to be used, this must be added as the server name in the NGINX configuration. Alternatively, you can simply replace the working domain with your custom domain:
nano /etc/nginx/conf.d/elasticsearch-proxy.conf
server {
    server_name localhost sXXXXX.creoline.cloud <Custom-Domain.tld>;
The proxy target for port 9200 must then be changed to the VPC IP address of the server, and SSL verification for proxy connections must be disabled. This prevents errors caused by the internal, SSL-secured communication between the Elasticsearch nodes and Kibana:
server {
    server_name localhost sXXXXX.creoline.cloud <Custom-Domain.tld>;

    location / {
        # proxy_pass http://127.0.0.1:9200;
        proxy_pass https://<VPC-IP-address>:9200;
        proxy_ssl_verify off;
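Before reloading, you can optionally validate the changed configuration:
nginx -t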
The NGINX service must be reloaded for the changes to take effect:
systemctl reload nginx.service
Issue a Let's Encrypt certificate for the custom domain:
You can use the following command to issue a free SSL certificate from Let's Encrypt for your custom domain:
certbot --nginx -d <custom-domain> --non-interactive --agree-tos -m <email technical contact customer>
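On Debian, certbot normally takes care of renewals automatically via its systemd timer; if you want to confirm that renewal also works for the new domain, you can run a dry run:
certbot renew --dry-run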
Custom domain for Kibana Node 1:
When using a custom domain, it must also be set in the Kibana configuration:
nano /etc/kibana/kibana.yml
#server.name: "XXXXXX.creoline.cloud"
server.name: "<Custom-Domain.tld>"
In addition, the IP address of the Elasticsearch host must be adjusted in the following two lines of the configuration so that Kibana addresses the Elasticsearch server via the VPC IP address of the server:
# This section was automatically generated during setup.
#elasticsearch.hosts: ['https://127.0.0.1:9200']
elasticsearch.hosts: ['https://<VPC-IP-Address>:9200']
#xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://127.0.0.1:9200'], ca_trusted_fingerprint: e69ac43bd506dfae1d65962422a0746e899a9449751b0199f464e73d25028510}]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://<VPC-IP-Address>:9200'], ca_trusted_fingerprint: e69ac43bd506dfae1d65962422a0746e899a9449751b0199f464e73d25028510}]
Restart the Kibana service for the changes to take effect:
systemctl restart kibana.service
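If Kibana does not respond afterwards, the service status and log output usually point to the cause:
systemctl status kibana.service
journalctl -u kibana.service -e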
Cluster setup
In the next step, generate a cluster enrollment token on Node 1 and save it temporarily.
This will be needed later to integrate Node 2 and Node 3 into the cluster.
You can generate the enrollment token as follows:
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
eyJ2ZXIiOiI4LjEyLjIiLCJhZHIiOlsiNS4xLjc4LjE3OjkyMDAiXSwiZmdyIjoiMDJjMGRkMDdlNDNjM2NmYTFlMzk3YTFmM
GM2MjIzOTM2YjMwYjYwZTk0ZGM5ODA3YTgxNzA0NzM3NTIxNzljZSIsImtleSI6IktRNHU5WTBCdUN2VlJ5bm0yRVJ4Ojd4Sj
c3UWoxUVdLQ054UXdSTDRWNUEifQ==
You must then create an enrollment token for the subsequent integration of the Kibana instances of Node 2 and Node 3. Also save this token separately:
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana
eyJ2ZXIiOiI4LjEyLjIiLCJhZHIiOlsiNS4xLjc4LjE3OjkyMDAiXSwiZmdyIjoiMDJjMGRkMDdlNDNjM2NmYTFlMzk3YTFmMGM
2MjIzOTM2YjMwYjYwZTk0ZGM5ODA3YTgxNzA0NzM3NTIxNzljZSIsImtleSI6IlVLTjc5WTBCVWtnSDVsMndzM3FIOnJibFk4Q2
5TU3ppXzdxNjlYT0lqSEEifQ==
Elasticsearch Cluster Join
If the Elasticsearch service has already been configured and started once on Node 2 or 3, it must be completely reinstalled, as a cluster join is no longer possible once the node has been started after installation.
Execute the following commands to uninstall Elasticsearch and Kibana, clean up any remaining files and reinstall them:
apt purge elasticsearch kibana
rm -rf /etc/elasticsearch
rm -rf /var/lib/elasticsearch
rm -rf /etc/kibana
rm -rf /var/lib/kibana
apt update && apt install -y elasticsearch kibana
Enable automatic service restart
Copy the following commands in full and execute them (the mkdir ensures that the drop-in directory exists):
mkdir -p /etc/systemd/system/elasticsearch.service.d
echo "[Service]
Restart=on-failure" > /etc/systemd/system/elasticsearch.service.d/override.conf
Then execute the following command so that the changed configuration becomes active:
systemctl daemon-reload
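You can optionally verify that systemd has picked up the drop-in; the override should be listed below the packaged unit file:
systemctl cat elasticsearch.service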
Cluster Join Node 2 and 3
Execute the following command directly after the installation on Node 2 and 3 to perform the cluster join. Replace <enrollment-token-node-1> with the previously generated enrollment token for the cluster join:
/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token-node-1>
The following entry should then have been added to the configuration file /etc/elasticsearch/elasticsearch.yml:
discovery.seed_hosts: ["<VPC-IP-Address>:9300"]   # the IP address is that of Node 1
Please note that after the cluster join, only the credentials of the user elastic that were set on Node 1 can be used.
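If the password of the elastic user is no longer known, it can be reset on Node 1 at any time with the bundled tool:
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic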
Setup Node 2 + Node 3
After the successful cluster join, the configuration of Elasticsearch and Kibana can be adjusted.
Limiting the usable memory (RAM)
Example for a server with 8 GiB memory:
nano /etc/elasticsearch/jvm.options.d/creoline.options
Insert the following:
# minimum JVM heap size
-Xms5g
# maximum JVM heap size
-Xmx5g
Customize basic Elasticsearch configuration Node 2 & 3:
The Elasticsearch configuration can now be customized in a similar way to Node 1:
nano /etc/elasticsearch/elasticsearch.yml
Define the settings as follows and save them:
cluster.name: <cluster-name>
node.name: <server-name>
node.roles: [ master, data ]
network.host: <10.20.0.x (VPC network/IP address)>
cluster.initial_master_nodes: ["<node-1-fqdn>", "<node-2-fqdn>", "<node-3-fqdn>"]
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
http.host: <VPC IP address>
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
Enable and start Elasticsearch + Kibana:
For the changes to take effect, you must enable and start Elasticsearch (Kibana is enabled and started after its configuration below):
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
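Optionally, you can check against Node 1 whether the new node has joined the cluster; the placeholder is the VPC IP address of Node 1:
curl -k -XGET "https://<VPC-IP-Node-1>:9200/_cat/nodes?v" -u elastic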
Generate the Kibana Encryption Keys
Generate the Kibana encryption keys as follows; Kibana needs them later for encrypting saved objects, reports and session data:
/usr/share/kibana/bin/kibana-encryption-keys generate -q >> /etc/kibana/kibana.yml
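The command appends three keys (xpack.encryptedSavedObjects.encryptionKey, xpack.reporting.encryptionKey and xpack.security.encryptionKey) to the end of the file; you can confirm that they are present:
grep "encryptionKey" /etc/kibana/kibana.yml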
Basic configuration Kibana:
Open the Kibana configuration on Node 2 and Node 3:
nano /etc/kibana/kibana.yml
Customize the configuration as follows.
If a custom domain is used, it must also be set for Kibana:
server.host: "127.0.0.1"
server.port: 5601
#server.name: "sXXXXX.creoline.cloud"
server.name: "<Custom-Domain.tld>"
server.basePath: "/kibana"
server.rewriteBasePath: true
server.publicBaseUrl: "http://127.0.0.1/kibana"
Enable and start Kibana for the changes to take effect:
systemctl enable kibana.service
systemctl start kibana.service
Connect Elasticsearch and Kibana on Node 2 & 3 to Node 1
If you are using the working domain of the server, you can call Kibana in the browser as follows:
https://XXXXXX.creoline.cloud/kibana
If you are using a custom domain, you can call Kibana as follows:
https://<custom-domain.tld>/kibana
In the following view, insert the enrollment token previously generated for Kibana and confirm the details.
Generate Kibana confirmation code to complete the setup
You can generate the confirmation code to complete the Kibana setup as follows. Execute the command on the node on which you are currently setting up Kibana (Node 2 or Node 3):
/usr/share/kibana/bin/kibana-verification-code
Enter the 6-digit code and wait until the setup is complete.
After completing the setup, the following should have been added to kibana.yml:
# This section was automatically generated during setup.
elasticsearch.hosts: ['https://<VPC-IP-Node-1>:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3NDQ4Nzc5MjI0NTY6TWxDY2JtUEtTa0NONkxkeTVaN0VkUQ
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1744877923188.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://<VPC-IP-Node-1>:9200'], ca_trusted_fingerprint: e69ac43bd506dfae1d65962422a0746e899a9449751b0199f464e73d25028510}]
Finalize cluster setup
Once Kibana has been set up, the cluster setup is almost complete.
The following final changes must now be made to elasticsearch.yml on all nodes:
These adjustments must be made no later than before the cluster is put into productive operation, and no earlier than after the first restart of all Elasticsearch nodes following the initial cluster setup, as otherwise a potential security risk may arise.
nano /etc/elasticsearch/elasticsearch.yml
→ Comment out cluster.initial_master_nodes
# Additional nodes can still join the cluster later
#cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
→ Customize discovery.seed_hosts (add Node 2 & 3)
# Discover existing nodes in the cluster
discovery.seed_hosts: ["<VPC-IP-Node-1>:9300", "<VPC-IP-Node-2>:9300", "<VPC-IP-Node-3>:9300"]
Restart Elasticsearch:
systemctl restart elasticsearch.service
Perform checks
Check inter-node communication via curl:
curl -k -XGET "https://<FQDN>/_cat/nodes?v" -u elastic
Enter host password for user 'elastic':
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
<VPC-IP-Node-1> 9 99 1 0.00 0.03 0.06 dm * Node-1
<VPC-IP-Node-2> 11 91 3 0.17 0.11 0.09 dm - Node-2
<VPC-IP-Node-3> 4 92 2 0.16 0.18 0.12 dm - Node-3
Perform health check via curl:
curl -k -XGET "https://<FQDN>/_cat/health?v" -u elastic
Enter host password for user 'elastic':
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1710497165 10:06:05 cluster-name green 3 3 61 30 0 0 0 0 0 - 100.0%
Alternatively, you can also call up the following URLs in your browser to receive the corresponding output after successful authentication:
https://XXXXX.creoline.cloud/_cat/nodes?v
https://XXXXX.creoline.cloud/_cat/health?v
Cloud firewall recommendations
We recommend that you store the following firewall rules, in this order, in the cloud firewall for all cluster nodes (a host-level sketch follows after the list):
- Allow access to port 443 only for certain IPv4/IPv6 addresses
- Allow access to port 22 only for certain IPv4/IPv6 addresses
- Explicitly block access to port 9200 on the WAN interface
- Explicitly block access to port 9300 on the WAN interface
- Explicitly block access to port 5601 on the WAN interface
- Prohibit all further access
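If you additionally want to mirror these rules on the hosts themselves, the following is a minimal sketch using ufw. It is not part of the standard setup; the interface names, the admin address and the VPC subnet (<wan-if>, <vpc-if>, <admin-ip>, 10.20.0.0/24) are assumptions that you must adapt to your environment:
# Assumption: ufw is not yet installed or configured on the node
apt install -y ufw
ufw default deny incoming
# Allow cluster, transport and Kibana traffic over the VPC interface
ufw allow in on <vpc-if> from 10.20.0.0/24
# Allow HTTPS and SSH only from trusted addresses
ufw allow from <admin-ip> to any port 443 proto tcp
ufw allow from <admin-ip> to any port 22 proto tcp
# Explicitly block Elasticsearch and Kibana ports on the WAN interface
ufw deny in on <wan-if> to any port 9200 proto tcp
ufw deny in on <wan-if> to any port 9300 proto tcp
ufw deny in on <wan-if> to any port 5601 proto tcp
ufw enable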