December-31-2022
In this blog, we will use the Grafana loki-stack Helm chart to install components like Loki, Promtail, Prometheus, and Grafana on an EKS cluster.
Before we move forward, let's take a quick look at the tools we will be using: Loki is the log aggregation backend, Promtail is the agent that ships container logs to Loki, Prometheus collects metrics, and Grafana visualizes both.
Prerequisites: an AWS account with the AWS CLI configured, plus Helm and kubectl, which we install below.
Let's start by installing the latest version of Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
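Once the script completes, we can confirm the installation:
helm version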
Install kubectl
Download the latest release of kubectl with the following command:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
Check to ensure the version you installed is up-to-date:
kubectl version --client
Now, we will create an EKS cluster and a node group; a sketch of this step is shown below.
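A minimal sketch using eksctl; the cluster name matches the rest of this post, while the region, node type, and node count are illustrative assumptions:
eksctl create cluster --name cluster-01 --region us-east-1 --nodegroup-name ng-01 --node-type t3.medium --nodes 2
Cluster creation takes a few minutes; we can check its status with the following command: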
aws eks --region us-east-1 describe-cluster --name cluster-01 --query cluster.status
Here, cluster-01 is my cluster name. We also have to update the kubeconfig:
aws eks --region us-east-1 update-kubeconfig --name cluster-01
Check the node group status:
kubectl get nodes
After our cluster is up and running and the node group has started, we need to install Loki, Promtail, Grafana, and the rest of the stack in the cluster. We will use the loki-stack Helm chart for this.
First, let's add the Grafana chart repository:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install loki grafana/loki-stack \
  --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false
Notice that in the command above persistentVolume is set to false; we will enable it later.
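Equivalently, the same settings can live in a values file instead of a long --set string. A sketch, where loki-values.yaml is an arbitrary file name:
cat > loki-values.yaml <<'EOF'
grafana:
  enabled: true
prometheus:
  enabled: true
  alertmanager:
    persistentVolume:
      enabled: false
  server:
    persistentVolume:
      enabled: false
EOF
helm upgrade --install loki grafana/loki-stack -f loki-values.yaml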
Now, let's check whether the required pods are up and running:
kubectl get pods
You should see pods for Loki, Promtail, Grafana, and the Prometheus components.
Let's now create a test deployment that echoes a “testing” string in a loop:
kubectl create deploy loki-medium-logs --image=busybox -- sh -c 'for run in $(seq 1 10000); do echo "testing $run"; sleep 2; done'
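To confirm the deployment is emitting logs, we can tail it directly:
kubectl logs deploy/loki-medium-logs --tail=5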
We can now apply a patch to change Loki's Grafana service type from ClusterIP to LoadBalancer:
kubectl patch svc loki-grafana -p '{"spec": {"type": "LoadBalancer"}}'
We can get the Grafana admin login credentials with the following command:
kubectl get secret loki-grafana -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
Let's wait for the load balancer to provision.
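One way is to watch the service until the EXTERNAL-IP column shows the load balancer's hostname:
kubectl get svc loki-grafana -w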
Once the load balancer is provisioned, we can open its hostname in the browser and log in with the admin credentials.
The data source in Grafana should be set to Loki by default; if not, change it to Loki.
Check the logs in the Explore view; it can take a few minutes for the initial logs to show up.
In Grafana, we will import the dashboard with ID **12019**.
Edit the logs panels and change the logs query to:
{namespace="$namespace", pod=~"$pod"} |~ "$search"
Here $namespace, $pod, and $search are dashboard variables, and |~ filters the log lines by regex. You should now see the logs of your pods.
To enable a persistent volume for Loki, we can use this command while installing the Loki stack:
helm upgrade --install loki grafana/loki-stack \
  --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true
To set a specific storage class and size:
helm upgrade --install loki grafana/loki-stack --namespace=monitoring \
  --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=gp2,loki.persistence.size=5Gi
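We can verify that the persistent volume claim was created and bound; in the loki-stack chart the Loki PVC is typically named storage-loki-0:
kubectl get pvc -n monitoring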
We also need to install the aws-ebs-csi-driver add-on on the cluster and attach the AmazonEBSCSIDriverPolicy to both the worker node role (EKSworkerNodePolicy) and the cluster role (eksClusterRole); otherwise the persistent volumes will not provision. A sketch of these steps follows.
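A sketch using the AWS CLI; the role names follow this post and will differ in your account:
aws eks create-addon --region us-east-1 --cluster-name cluster-01 --addon-name aws-ebs-csi-driver
aws iam attach-role-policy --role-name EKSworkerNodePolicy --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
aws iam attach-role-policy --role-name eksClusterRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy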