I am using Prometheus to monitor my Kubernetes cluster. I have set up Prometheus in a separate namespace. I have multiple namespaces, and multiple pods are running. Each pod container exposes custom metrics at the endpoint :80/data/metrics. I am getting the pods' CPU, memory metrics, etc., but how do I configure Prometheus to pull data from :80/data/metrics in each available pod? I have used this tutorial to set up Prometheus, Link
You have to add these three annotations to your pods:
prometheus.io/scrape: 'true'
prometheus.io/path: '/data/metrics'
prometheus.io/port: '80'
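For example, here is a minimal Deployment sketch showing where the annotations go (note they belong on the pod template's metadata, not on the Deployment itself; the name my-app and image my-app:latest are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/data/metrics'
        prometheus.io/port: '80'
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          ports:
            - containerPort: 80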
How does it work? Look at the kubernetes-pods job in the config-map.yaml you are using to configure Prometheus:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
Check these three relabel configurations:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
  action: keep
  regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
  action: replace
  target_label: __metrics_path__
  regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
  action: replace
  regex: ([^:]+)(?::\d+)?;(\d+)
  replacement: $1:$2
  target_label: __address__
Here, the __metrics_path__, the port, and whether to scrape metrics from this pod at all are read from the pod's annotations.
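As a concrete illustration (the pod IP 10.244.1.5 below is made up), a pod carrying the three annotations above ends up with roughly these target labels after relabeling:

__address__      = 10.244.1.5:80   # host part of the discovered address joined with prometheus.io/port
__metrics_path__ = /data/metrics   # taken from prometheus.io/path

so Prometheus scrapes http://10.244.1.5:80/data/metrics. Pods that do not set prometheus.io/scrape: 'true' are dropped by the keep rule.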
For more details on how to configure Prometheus, see here.