How to dockerize Elasticsearch and Kibana

Elasticsearch and Kibana are two powerful tools commonly used together to manage and analyze large volumes of data, particularly for search, log and event data, and real-time analytics. Both are part of the Elastic Stack, formerly known as the ELK Stack (Elasticsearch, Logstash, Kibana); the name changed as more components were added beyond the original three.

Elasticsearch

  • Elasticsearch is a distributed, RESTful search and analytics engine built on top of Apache Lucene. It is designed for horizontally scaling data storage and retrieval. Elasticsearch is particularly well-suited for full-text search, log and event data analysis, and other types of data exploration.
  • It stores data in a schema-less JSON format, making it flexible and adaptable to various data types, as the cURL sketch after this list illustrates.
  • Elasticsearch is known for its speed and scalability, making it a popular choice for applications that require fast and efficient searching and data analysis.
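
Because Elasticsearch's interface is RESTful and its documents are plain JSON, we can index and search data with nothing more than cURL. The snippet below is only a sketch: the logs index and its fields are hypothetical, and it assumes an Elasticsearch node listening on localhost:9200 with security disabled.

# Index a JSON document into a hypothetical "logs" index
curl -X POST "http://localhost:9200/logs/_doc" \
  -H "Content-Type: application/json" \
  -d '{"service": "checkout", "level": "error", "message": "payment timeout"}'

# Full-text search for documents whose message mentions "timeout"
curl -X GET "http://localhost:9200/logs/_search" \
  -H "Content-Type: application/json" \
  -d '{"query": {"match": {"message": "timeout"}}}'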

Kibana

  • Kibana is an open-source data visualization and exploration tool that is often used alongside Elasticsearch. It provides a user-friendly web interface for interacting with Elasticsearch data.
  • With Kibana, we can create interactive dashboards, charts, and graphs to visualize and explore our data. It supports a wide range of data visualization options, including bar charts, pie charts, maps, and more.
  • Kibana is commonly used for log and event data analysis, monitoring infrastructure and application performance, and creating custom dashboards for real-time insights.

Docker setup

Let’s have a look at the Dockerfile used to set up Elasticsearch and Kibana.

FROM ubuntu:20.04

# Installing helper packages
RUN apt-get update &&\
    apt-get install -y gnupg2 &&\
    apt-get install -y curl

# Installing Elasticsearch
RUN curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add - &&\
    echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list &&\
    apt-get update &&\
    apt-get install -y elasticsearch

# Installing Kibana
RUN apt-get update &&\
    apt-get install -y kibana
  • Line 1: The Dockerfile starts with specifying the base image, ubuntu:20.04 in our case.

    • Please avoid using the latest tag for the base image. Instead, specify the exact version, as we’ve specified 20.04.

  • Lines 4–6: After that, we installed a couple of helper packages, gnupg2 and curl.

  • Line 9: We used cURL to import the Elasticsearch public GPG key into APT. Note that we use the -fsSL flags to hide progress output while still reporting errors, to fail cleanly on a server error, and to allow cURL to follow a redirect to a new location. We pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT.

  • Line 10: We added the Elastic source list to the sources.list.d directory, where APT will look for new sources.

  • Line 11: We updated the package lists so APT will read the new Elastic source.

  • Line 12: Finally, we installed Elasticsearch.

  • Lines 15–16: We updated the package lists and installed Kibana. The sketch after this list shows how the resulting image might be built and run.
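
With the Dockerfile in place, we can build the image and start a container from it. The commands below are a minimal sketch: the image name elastic-kibana is arbitrary, and the port mappings simply publish Elasticsearch's default port 9200 and Kibana's default port 5601 on the host.

# Build the image from the directory containing the Dockerfile
docker build -t elastic-kibana .

# Start an interactive container, publishing Elasticsearch (9200) and Kibana (5601)
docker run -it -p 9200:9200 -p 5601:5601 elastic-kibana bash

Note that the image does not define a default command, so the services still have to be started inside the container, as described in the “Start services” section below.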

Configure Kibana

To configure Kibana, we need to edit its main configuration file, kibana.yml, where most of its configuration options are stored. An easy way of doing this is to modify a local copy of kibana.yml and then replace the default file inside the image using the following Dockerfile instruction:

COPY kibana.yml /etc/kibana/

Note: We can follow the same strategy to configure Elasticsearch’s configuration file, elasticsearch.yml, which resides in the /etc/elasticsearch directory.

For example, on the Educative platform, the servers are bound to the host 0.0.0.0. Therefore, we use the kibana.yml file shown with the demo widget below. In addition, please note that Kibana runs on port 5601.
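
To double-check that our modified configuration actually ends up inside the image, we can inspect the copied file in a throwaway container. This is a small sketch, assuming the image is tagged elastic-kibana as in the earlier build command:

# Print the Kibana host setting baked into the image
docker run --rm elastic-kibana grep "^server.host" /etc/kibana/kibana.yml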

Start services

To start Elasticsearch and Kibana, we need to:

  1. Start the Elasticsearch service using nohup. We use nohup (“no hang up”) so that the output that would normally go to the terminal goes to a file called nohup.out instead. This frees the terminal so that we can execute the next command.

  2. Start the Kibana service.

The following command can be used to start both services:

nohup service elasticsearch start && service kibana start
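
Both services need a little time to come up after this command returns. A quick way to confirm they are responding is to query them directly; this sketch assumes the default ports, 9200 for Elasticsearch and 5601 for Kibana:

# Elasticsearch answers on port 9200 with basic cluster information
curl -s http://localhost:9200

# Kibana exposes a status endpoint on port 5601 once it has finished starting
curl -s http://localhost:5601/api/status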

Demo

Press the “Run” button to start running Elasticsearch and Kibana. You can click the URL shown next to the “Your app can be found at:” field under the widget below to see Elasticsearch and Kibana running.

Note: Kibana takes around a minute to start, so you might see “Your app refused to connect” at first. Just refresh the page after a short wait to see the output.

# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana as if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Kibana may use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en (the default) and Chinese - zh-CN.
#i18n.locale: "en"

Demo: Elasticsearch and Kibana

