
Maximizing Data Resiliency with Kubernetes Zone Awareness for Elasticsearch


 

Kubernetes provides a robust, scalable, and maintainable platform for orchestrating containerized application workloads. Spectric Labs deploys applications such as Elasticsearch on Kubernetes using Helm charts, which offer a customizable baseline for deployments across clusters with different purposes. That experience led us to identify a zone-awareness gap in our Helm charts, and we quickly developed a solution that injects the host's Kubernetes zone into Elasticsearch so data is replicated properly, keeping the cluster resilient and fault tolerant.


 

At Spectric Labs, we leverage Kubernetes to run multiple Elasticsearch clusters to collect, enrich, and visualize data. Elasticsearch is designed for data resilience and availability: it distributes replicas of data across multiple logical nodes within a cluster. During a node failure or downtime, replicas ensure that cluster operations can continue without noticeable impact to users. As long as at least one copy of each shard remains available, the data remains available.
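
The number of copies is controlled per index via the index.number_of_replicas setting; for instance, a single replica of every primary shard could be requested as follows (the index name and endpoint are illustrative):

# Create an index that keeps one replica of each primary shard
curl -X PUT "localhost:9200/logs-example" -H 'Content-Type: application/json' -d '
{
  "settings": {
    "index.number_of_replicas": 1
  }
}'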


For example, in a multi-node cluster the primary copy of your data may reside on node 1 and be replicated on node 2. At first glance the data appears to be replicated according to best practices for fault tolerance, but within Kubernetes these nodes might both reside on the same physical host, and a single failure of that host would cause a complete loss of availability for the data. We can ensure that replicas are routed to maintain physical-host independence by making cluster routing aware of the Kubernetes node name, using the following Elasticsearch configuration:


node.attr.k8s_node_name: ${NODE_NAME}
cluster.routing.allocation.awareness.attributes: k8s_node_name
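
The NODE_NAME environment variable referenced above can be supplied through the Kubernetes Downward API, which exposes the name of the node a pod is scheduled on; a minimal sketch of the relevant part of the container spec (container name is illustrative):

containers:
  - name: elasticsearch
    env:
      # Expose the Kubernetes node name to the Elasticsearch process
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName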

Because we utilize very large Kubernetes clusters, the clusters are split into multiple availability zones. Availability zones separate the IT infrastructure into independent groups so that maintenance and failures in one zone do not affect the others. During a zone upgrade event, we encountered a situation where multiple replicas were lost simultaneously. Troubleshooting revealed that Elasticsearch was replicating data across the available nodes without considering which Kubernetes zone each node was running in. Although data was successfully distributed across physical hosts, the replicas were all routed to the same Kubernetes zone, so a zone upgrade or other zone failure could lead to data loss or application unavailability. To mitigate this disruption of service, we can add zone information to the routing awareness configuration:


node.attr.k8s_node_name: ${NODE_NAME}
node.attr.zone: ${MYZONE}
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
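
If the full set of zones is known in advance, Elasticsearch's forced awareness setting can additionally be applied so that, when an entire zone is lost, the surviving zones are not overloaded with relocated replicas; a minimal sketch using the zone names from the example file shown later in this post:

cluster.routing.allocation.awareness.force.zone.values: zone-pod1,zone-pod2,zone-pod3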

Unfortunately, populating pod environment variables with the zone is not so simple: while the Downward API exposes the name of the node a pod is scheduled on, it does not expose node labels such as the zone topology label. We needed a mechanism, compatible with our Helm charts, that would allow us to inject the desired node attributes into our Elasticsearch configuration. Helm is a package manager for Kubernetes; its charts are how we manage our clusters, keep configurations uniform, and automate updates. Helm charts let developers deploy applications consistently across clusters and configure the settings necessary to ensure data resiliency.

Our approach was to customize the Elasticsearch launch process with shell commands referenced from our Helm chart. The commands look up the pod's Kubernetes zone in a mounted configuration file and export it as an environment variable inside the container for Elasticsearch to reference at startup.


startUpCommands:
  - |
    # Resolve this pod's name from the node.name environment variable
    # (awk is used because the variable name contains a dot).
    nodename=$(awk 'BEGIN {print ENVIRON["node.name"]}')
    # Look up this pod's zone in zones.cfg and export it for Elasticsearch to use.
    while IFS=', ' read -r HOSTNAME ZONENAME; do
        if [ "$HOSTNAME" = "$nodename" ]; then
            echo "$ZONENAME"
            export MYZONE=$ZONENAME
        fi
    done < config/zones.cfg
    printenv

The commands above are injected into the pod through a series of steps. The first step was to create a Zones directory in our Helm chart. This directory contains a text file for each deployed Kubernetes cluster, mapping each pod's name to its Kubernetes zone. The file prod-zoning.txt below shows an example of a production cluster's pod names and Kubernetes zone information to be ingested.


prod-zoning.txt
---
pod-es-data-1, zone-pod1
pod-es-data-2, zone-pod2
pod-es-data-3, zone-pod3
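
For orientation, the resulting chart layout looks roughly like the sketch below (file names other than the Zones directory, prod-zoning.txt, and the values files are illustrative):

elasticsearch/
├── Chart.yaml
├── values.yaml
├── prod-values.yaml
├── Zones/
│   ├── prod-zoning.txt
│   ├── testing-zoning.txt
│   └── development-zoning.txt
└── templates/
    ├── zoneconfig-configmap.yaml
    └── statefulset.yaml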

Once this was finished, we needed a ConfigMap to inject the cluster-specific zone data from the Zones directory into the container so the commands have a data source to read from. In the template below, the zones.cfg key is populated by {{ .Files.Get (printf "Zones/%s-zoning.txt" .Values.cluster.name) | indent 4 }}, which selects the cluster-specific zoning.txt file from the Zones directory dynamically: the %s placeholder is replaced by the .Values.cluster.name value supplied by your deployment configuration, and the file contents are then indented four spaces so they nest correctly under the zones.cfg key.


apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "uname" . }}-zoneconfig
  labels:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "{{ template "uname" . }}"
data:
  zones.cfg: |-
{{ .Files.Get (printf "Zones/%s-zoning.txt" .Values.cluster.name) | indent 4 }}
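
For the startup commands to find config/zones.cfg inside the container, this ConfigMap also has to be mounted into the Elasticsearch configuration directory. A minimal sketch of the StatefulSet volume wiring, assuming the default image layout under /usr/share/elasticsearch:

containers:
  - name: elasticsearch
    volumeMounts:
      # Make the cluster-specific zone mapping readable by the startup commands
      - name: zoneconfig
        mountPath: /usr/share/elasticsearch/config/zones.cfg
        subPath: zones.cfg
volumes:
  - name: zoneconfig
    configMap:
      name: {{ template "uname" . }}-zoneconfig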

Next, we added the commands to the prod-values.yaml file deployed with our Helm chart so they can be referenced from the StatefulSet. Helm allows multiple values files, which helps with visibility and cluster-specific deployments: there is usually one uniform values.yaml for all deployments, plus cluster-specific files such as testing-values.yaml or development-values.yaml that add an extra layer of customization on top of it. The value below can then be referenced through {{ .Values.startUpCommands }} within a Deployment or StatefulSet.


startUpCommands:
  - |
    nodename=$(awk 'BEGIN {print ENVIRON["node.name"]}')
    while IFS=', ' read -r HOSTNAME ZONENAME; do
        if [ "$HOSTNAME" = "$nodename" ]; then
            echo "$ZONENAME"
            export MYZONE=$ZONENAME
        fi
    done < config/zones.cfg
    printenv
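
At deploy time, the cluster-specific file is layered on top of the shared values file, with later files taking precedence; for example (release name and chart path are illustrative):

helm upgrade --install es-prod ./elasticsearch -f values.yaml -f prod-values.yaml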

Lastly, our pods are orchestrated by a Kubernetes object called a StatefulSet, as mentioned above. The StatefulSet dictates the underlying configuration of the application in an ordered manner. The command field within the StatefulSet overrides the container entrypoint, the first command run in the image, which allowed us to run our zone-awareness commands (referenced via the Helm templating language) and export their result into the container before application startup.


command:
- "sh"
- "-c"
- |
   {{- range .Values.startUpCommands }}
   {{ . | indent 3 | trim }}
   {{- end }}
   /bin/tini -- /usr/local/bin/docker-entrypoint.sh

By adding these commands to our Helm chart, the cluster becomes node and zone aware. Data is routed according to the underlying Kubernetes zone topology, ensuring data resilience and fault tolerance against zone failures.
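
Once the pods restart with the new configuration, the node attributes and the resulting shard placement can be spot-checked against the cluster API (the endpoint shown is illustrative), for example:

# List the custom attributes, including zone, reported by each Elasticsearch node
curl -s "localhost:9200/_cat/nodeattrs?v&h=node,attr,value"

# Confirm that primary (p) and replica (r) copies of each shard sit in different zones
curl -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,node"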


Spectric Labs Github: https://github.com/spectriclabs




For more information/questions contact info@spectric.com


 


