How To Set Up an Elasticsearch, Fluentd, and Kibana (EFK) Logging Stack on Kubernetes

Muhammad Azmi Farih
Dec 7, 2021


I wrote this to complete the Kubernetes Challenge from DigitalOcean. I executed all commands from Windows 11 (PowerShell).

Step 1 — Creating a Kubernetes Cluster

First, create a Kubernetes cluster:

doctl k8s cluster create k8s-chl `
--auto-upgrade=false `
--node-pool "name=k8s-chl;size=s-4vcpu-8gb-amd;count=3;tag=k8s-chl;label=type=basic;auto-scale=true;min-nodes=2;max-nodes=4" `
--region sgp1

This is the output:

Notice: Cluster is provisioning, waiting for cluster to be running
………………………………………………………………
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "C:\Users\comsy/.kube/config"
Notice: Setting current-context to do-sgp1-k8s-chl
ID                                     Name      Region    Version       Auto Upgrade    Status     Node Pools
cb8a6f5c-716d-4d93-aaf0-4378e6cf5f2d   k8s-chl   sgp1      1.21.5-do.0   false           running    k8s-chl

Next, set the kubectl context to point to my cluster:

doctl kubernetes cluster kubeconfig save k8s-chl

The output looks similar to this:

Notice: Adding cluster credentials to kubeconfig file found in "C:\Users\comsy/.kube/config"
Notice: Setting current-context to do-sgp1-k8s-chl

Test the connection to my cluster:

kubectl get namespaces

This is the output:

NAME              STATUS   AGE
default           Active   8m39s
kube-node-lease   Active   8m40s
kube-public       Active   8m40s
kube-system       Active   8m40s

Step 2 — Creating a Namespace

Create and edit the file logging.yaml:
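The file only needs to define the namespace. A minimal sketch, assuming nothing beyond the name logging used throughout this guide:

kind: Namespace
apiVersion: v1
metadata:
  name: logging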

Create the namespace:

kubectl create -f .\logging.yaml

The output should be like the following:

namespace/logging created

Check that the namespace was successfully created:

kubectl get namespaces

The output:

NAME              STATUS   AGE
default           Active   33m
kube-node-lease   Active   33m
kube-public       Active   33m
kube-system       Active   33m
logging           Active   2m35s

Step 3 — Creating the Headless Service

Create and edit the file es_svc.yaml:
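A sketch of es_svc.yaml: a headless ClusterIP service exposing ports 9200 and 9300. The app: elasticsearch selector label is an assumption that must match the pod labels used in the StatefulSet from the next step; the name, namespace, and ports match the output that follows:

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node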

Create the service:

kubectl create -f .\es_svc.yaml

This is the output:

service/elasticsearch created

Check the service:

kubectl get services -n logging

The output should be like this:

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   90s

Step 4 — Creating the Elasticsearch StatefulSet

Create and edit the file es_statefulset.yaml:

I use Elasticsearch 7.16.0.
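A sketch of es_statefulset.yaml, roughly following DigitalOcean's EFK tutorial with the image bumped to 7.16.0. The cluster name, resource requests, JVM heap size, and the 10Gi do-block-storage volume are assumptions; the name es-cluster, the namespace, the serviceName, the replica count, and the ports match the outputs in this guide:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      # Give the elasticsearch user (uid 1000) ownership of the data directory
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # Raise mmap counts, which Elasticsearch requires
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      # Raise the open file descriptor limit
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.16.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Each pod discovers its peers through the headless service
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi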

Deploy the StatefulSet:

kubectl create -f es_statefulset.yaml

This is the output:

statefulset.apps/es-cluster created

Monitor the StatefulSet:

kubectl rollout status sts/es-cluster -n logging

The output should be like the following:

Waiting for 3 pods to be ready…
Waiting for 2 pods to be ready…
Waiting for 1 pods to be ready…
partitioned roll out complete: 3 new pods have been updated…

To check that my Elasticsearch cluster is working properly, forward local port 9200 to port 9200 on the pod es-cluster-0:

kubectl port-forward es-cluster-0 9200:9200 -n logging

Then open this URL in a browser:

http://localhost:9200/_cluster/state?pretty

The output should be like the following:

[Screenshot: checking the Elasticsearch cluster state]
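Alternatively, with the port-forward still running, you can query the cluster health endpoint from a second terminal (assuming curl is available). A status of green means all three nodes joined the cluster and the shards are allocated:

curl "http://localhost:9200/_cluster/health?pretty"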

Step 5 — Creating the Kibana Deployment and Service

Create and edit the file kibana.yaml:

I use Kibana 7.16.0.
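A sketch of kibana.yaml: a ClusterIP service on port 5601 plus a single-replica Deployment. Pointing ELASTICSEARCH_HOSTS at the elasticsearch service from Step 3 is the key setting; the labels and resource limits are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.16.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        # Point Kibana at the headless Elasticsearch service in the same namespace
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601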

Deploy Kibana:

kubectl create -f .\kibana.yaml

The output:

service/kibana created
deployment.apps/kibana created

Get the Kibana pod details:

kubectl get pods -n logging

An example of the output:

NAME                      READY   STATUS    RESTARTS   AGE
es-cluster-0              1/1     Running   0          15m
es-cluster-1              1/1     Running   0          15m
es-cluster-2              1/1     Running   0          14m
kibana-5555fffb64-qvxzv   1/1     Running   0          60s

Forward local port 5601 to port 5601 on the Kibana pod:

kubectl port-forward kibana-5555fffb64-qvxzv 5601:5601 -n logging

Then open this URL in a browser:

http://localhost:5601/

The output should be like this:

[Screenshot: Kibana GUI]

Step 6 — Creating the Fluentd DaemonSet

Create and edit the file fluentd.yaml:
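A sketch of fluentd.yaml covering the four resources shown in the output below (ServiceAccount, ClusterRole, ClusterRoleBinding, DaemonSet), roughly following DigitalOcean's EFK tutorial. The image tag, resource limits, and toleration are assumptions; the Elasticsearch host points at the headless service created in Step 3:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd
      tolerations:
      # Also collect logs from control-plane nodes if any are schedulable
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.14-debian-elasticsearch7-1
        env:
          # Ship logs to the headless Elasticsearch service in the logging namespace
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.logging.svc.cluster.local"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        # Read container log files from the node
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers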

Execute this to create all four resources:

kubectl create -f fluentd.yaml

The output:

serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created
daemonset.apps/fluentd created

While the Kibana GUI is still open, navigate to http://localhost:5601.

Click Discover:

[Screenshot: Discover tab in the Kibana GUI]

Click “Create index pattern”:

[Screenshot: Create index pattern page in the Kibana GUI]

Fill Name with logstash-*, fill Timestamp field with @timestamp, and then click "Create index pattern".

Click Discover again and it will show:

[Screenshot: the logstash-* index pattern in the Kibana GUI]

Step 7 — Testing Container Logging

Create and edit the file counter.yaml:
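A sketch of counter.yaml: a single busybox pod that echoes an incrementing counter with a timestamp every second, so Fluentd has something to ship. The exact message format is an assumption; only the pod name counter matters for the Kibana search below:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']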

Deploy the counter pod:

kubectl create -f counter.yaml

The output:

pod/counter created
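Optionally, before switching to Kibana, confirm the counter pod is actually writing log lines (the --tail flag just limits the output):

kubectl logs counter --tail=5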

From the Discover page, fill in the search bar with kubernetes.pod_name:counter.

[Screenshot: the counter pod's logs on the Discover page]

This shows that our logging stack is working properly.
