Part-1: Setup Prometheus, Kube State metrics and Integrate Grafana with Kubernetes

When many servers, services, and applications run on a Kubernetes cluster and interact in a distributed microservice environment, monitoring the pods, containers, and servers becomes critical to ensure the system stays healthy and enough instances remain available.
Prometheus is one of the most popular monitoring tools for Docker and Kubernetes. This post will show you how to use Prometheus to monitor Kubernetes clusters. You’ll learn how to set up a Prometheus server, metrics exporters, and Kube-State-Metrics, as well as how to pull, scrape, and collect metrics, configure Alertmanager alerts, and build Grafana dashboards.
Prerequisites
Before you begin with this guide, below are the prerequisites:
- A Kubernetes cluster (you can refer to the k3d docs or use any other Kubernetes engine)
- A fully configured kubectl command-line interface on your local machine
Create a Namespace
We’ll start by creating a Kubernetes namespace for all of our monitoring components. A namespace is a logical partition that groups Kubernetes resources; unless one is specified, resources land in the default namespace. Creating a dedicated monitoring namespace gives us more control over the cluster monitoring process.
For easy reference, we are going to name the namespace: monitoring.
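The manifest itself is tiny; a minimal sketch (with the filename matching the command below) looks like this:

# monitoring-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring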
Apply changes by running the below command
kubectl create -f monitoring-namespace.yaml
Create Roles & Service Account
Next, we’ll create a role that grants access to the Kubernetes resources Prometheus needs, and a service account in the monitoring namespace to which the role will be bound. We’ll create a ClusterRole, since a regular Role only allows access to resources within its own namespace, while Prometheus must reach nodes and pods across the entire cluster to gather all of the metrics we want.
A ServiceAccount provides an identity for the processes running in a pod. If no ServiceAccount is specified, a pod uses the default service account of its namespace, so we bind the role to the default service account of the monitoring namespace; Prometheus will then use it automatically.
In the end, we apply a ClusterRoleBinding to bind the role to the service account.
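Here’s a sketch of what clusterRole.yaml can look like; the rule list below is a common minimal set (read access to nodes, services, endpoints, and pods, plus the API server’s /metrics endpoint), so adjust it to what your setup actually scrapes:

# clusterRole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
# Read-only access to the objects Prometheus discovers and scrapes
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
# Allow scraping the /metrics endpoint exposed by the API server
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
subjects:
# Bind to the default service account of the monitoring namespace
- kind: ServiceAccount
  name: default
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: prometheus
  apiGroup: rbac.authorization.k8s.io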
Apply changes by running the below command
kubectl create -f clusterRole.yaml
Create a ConfigMap
In Kubernetes, a ConfigMap provides configuration data to the pods in a deployment. Our ConfigMap contains two data keys:
- prometheus.rules
- prometheus.yml
In the first section, we create a ConfigMap named prometheus-server-conf in the monitoring namespace, where the Prometheus Deployment will also be running.
prometheus.rules will contain all the alert rules for sending alerts to Alertmanager.
Below that, in the data section, there’s a very simple prometheus.yml file. It contains all the configuration needed to dynamically discover pods and services running in the Kubernetes cluster. We have the following scrape jobs in our Prometheus scrape configuration, and a trimmed-down sketch of the ConfigMap follows the list.
- kubernetes-apiservers: Collects all the metrics from the API servers.
- kubernetes-nodes: Collects all Kubernetes node metrics.
- kubernetes-pods: All the pod metrics will be discovered if the pod metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations.
- kubernetes-cadvisor: Collects all cAdvisor metrics.
- kubernetes-service-endpoints: All the service endpoints will be scraped if the service metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations. This acts as black-box monitoring.
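Here’s a trimmed-down sketch of config-map.yaml with one placeholder alert rule and just two of the five scrape jobs shown for brevity; the Alertmanager target is an assumption, since Alertmanager isn’t deployed until a later part:

# config-map.yaml (trimmed sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.rules: |-
    groups:
    - name: demo-alerts
      rules:
      # Placeholder rule so the rules file is valid; replace with real alerts
      - alert: HighPodMemory
        expr: sum(container_memory_usage_bytes) > 1
        for: 1m
        labels:
          severity: slack
        annotations:
          summary: High Memory Usage
  prometheus.yml: |-
    global:
      scrape_interval: 20s
      evaluation_interval: 20s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
      - scheme: http
        static_configs:
        # Assumed Alertmanager service name; set up in a later part
        - targets: ["alertmanager.monitoring.svc:9093"]
    scrape_configs:
      - job_name: kubernetes-apiservers
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        # Keep only the HTTPS endpoint of the default/kubernetes service
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        # Only scrape pods annotated with prometheus.io/scrape: "true"
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        # Honor prometheus.io/port if the pod declares it
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__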
Apply changes by running the below command
kubectl create -f config-map.yaml
Create a Prometheus Deployment
The Prometheus ConfigMap is mounted as a file under /etc/prometheus, and we use the official Prometheus image from Docker Hub. This file may vary depending on how you configured and deployed Prometheus; the prometheus.yml key containing Prometheus’s configuration ends up mounted at /etc/prometheus/prometheus.yml.
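Here’s a sketch of prometheus-deployment.yaml under those assumptions; the app: prometheus-server label and emptyDir storage are demo choices, not requirements:

# prometheus-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            # prometheus.rules and prometheus.yml from the ConfigMap land here
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-server-conf
        # emptyDir: demo-only storage, lost when the pod is rescheduled
        - name: prometheus-storage-volume
          emptyDir: {}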
Apply changes by running the below command
kubectl apply -f prometheus-deployment.yaml
Connection to Prometheus Dashboard
There are two ways to access the Prometheus dashboard that has been deployed.
- Using Kubectl Port Forwarding
- Using NodePort/Load Balancer by exposing the Prometheus Deployment
If you want to go with the port-forward approach, use the command below:
kubectl port-forward <your-prometheus-deployment-podName> 8080:9090 -n monitoring
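If you don’t know the Prometheus pod name yet, list the pods in the namespace first:
kubectl get pods --namespace=monitoring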
Otherwise, go with the second option: we create prometheus-service.yaml, a Kubernetes Service that exposes the Prometheus dashboard over an IP or a DNS name.
We will expose Prometheus on all Kubernetes node IPs on port 30000.
The annotations on this service ensure that Prometheus ignores its own service endpoint during discovery:
- prometheus.io/scrape: The default configuration scrapes all pods; setting this annotation to false excludes the pod from the scraping process.
- prometheus.io/port: Scrapes the pod on the indicated port instead of the pod’s declared ports (the default is a port-free target if none are declared).
There is also prometheus.io/path, which is used if the metrics path is not /metrics.
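Here’s a sketch of prometheus-service.yaml; the app: prometheus-server selector assumes the label from the Deployment sketch above, and the scrape annotation is set to false to match the note about Prometheus ignoring this endpoint:

# prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    # 'false' keeps the kubernetes-service-endpoints job from scraping this service
    prometheus.io/scrape: 'false'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    # Exposed on every node's IP at port 30000, forwarding to Prometheus on 9090
    - port: 8080
      targetPort: 9090
      nodePort: 30000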
Apply changes by running the below command:
kubectl create -f prometheus-service.yaml --namespace=monitoring
Once the service is up, we can view the dashboard at <node-ip>:30000 (or at localhost:8080 if you used the port-forward option).

Conclusion
In this tutorial, we set up a ClusterRole and ServiceAccount, created a ConfigMap with the Prometheus configuration, and mounted it into a Prometheus Deployment. We also set up a Prometheus Service to access the dashboard from outside the cluster.
In the next part, we will see what Kube State Metrics is and how to set it up. We’ll also look at how to integrate Grafana with our Prometheus instance as a data source.
Follow the next part here
Part-2: Setup Prometheus, Kube State metrics and Integrate Grafana with Kubernetes