# Grafana dashboards for GreptimeDB

## Overview
This repository maintains the Grafana dashboards for GreptimeDB. It has two types of dashboards:
- `cluster/dashboard.json`: The Grafana dashboard for the GreptimeDB cluster. Read the dashboard.md for more details.
- `standalone/dashboard.json`: The Grafana dashboard for the standalone GreptimeDB instance. It's generated from `cluster/dashboard.json` by removing the instance filter through the `make dashboards` command. Read the dashboard.md for more details.
As GreptimeDB is under rapid development, the metrics may change. Please feel free to submit your feedback and/or contributions to these dashboards 🤗
**NOTE**:

- The Grafana version should be greater than 9.0.
- If you want to modify the dashboards, you only need to modify `cluster/dashboard.json` and run the `make dashboards` command to generate `standalone/dashboard.json` and other related files.
To make the dashboards easier to maintain, we use the dac tool to generate the intermediate dashboards and markdown documents:

- `cluster/dashboard.yaml`: The intermediate dashboard for the GreptimeDB cluster.
- `standalone/dashboard.yaml`: The intermediate dashboard for the standalone GreptimeDB instance.
## Data Sources
There are two data sources for the dashboards to fetch the metrics:
- Prometheus: The Prometheus instance that scrapes the GreptimeDB metrics.
- Information Schema: The MySQL port of the currently monitored instance. The `overview` dashboard uses this datasource to show the information schema of the current instance.
## Instance Filters
To deploy the dashboards for multiple scenarios (K8s, bare metal, etc.), we prefer to use the `instance` label when filtering instances.
Additionally, we recommend including the `pod` label in the legend to make it easier to identify each instance, even though this field will be empty in bare-metal scenarios.
For example, the following query is recommended:
```promql
sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)
```
And the legend will look like: `[{{instance}}]-[{{pod}}]`.
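The same convention carries over to other metrics. As a hedged sketch, a CPU panel for the datanodes might follow the pattern below; `process_cpu_seconds_total` is a standard Prometheus process metric, and `$datanode` is the same dashboard variable used above:

```promql
# Hypothetical CPU query following the instance/pod convention;
# group by both labels so each instance is identifiable in the legend.
sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval])) by (instance, pod)
```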
## Deployment

### Helm
If you use the Helm chart to deploy a GreptimeDB cluster, you can enable self-monitoring by setting the following values in your Helm chart:
- `monitoring.enabled=true`: Deploys a standalone GreptimeDB instance dedicated to monitoring the cluster;
- `grafana.enabled=true`: Deploys Grafana and automatically imports the monitoring dashboard.
The standalone GreptimeDB instance will collect metrics from your cluster, and the dashboard will be available in the Grafana UI. For detailed deployment instructions, please refer to our Kubernetes deployment guide.
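For reference, the two values above could also be set in a values file. This is a minimal sketch; the exact value layout may differ between chart versions, so check the chart's own documentation:

```yaml
# Hypothetical values.yaml excerpt for the GreptimeDB cluster Helm chart;
# enables the self-monitoring instance and the bundled Grafana.
monitoring:
  enabled: true
grafana:
  enabled: true
```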
### Self-host Prometheus and import dashboards manually
- Configure Prometheus to scrape the cluster

  The following is an example configuration (please modify it according to your actual situation):

  ```yaml
  # example config
  # only to indicate how to assign labels to each target
  # modify yours accordingly
  scrape_configs:
    - job_name: metasrv
      static_configs:
        - targets: ['<metasrv-ip>:<port>']
    - job_name: datanode
      static_configs:
        - targets: ['<datanode0-ip>:<port>', '<datanode1-ip>:<port>', '<datanode2-ip>:<port>']
    - job_name: frontend
      static_configs:
        - targets: ['<frontend-ip>:<port>']
  ```
- Configure the data sources in Grafana

  You need to add two data sources in Grafana:

  - Prometheus: The Prometheus instance that scrapes the GreptimeDB metrics.
  - Information Schema: The MySQL port of the currently monitored instance. The dashboard uses this datasource to show the information schema of the current instance.
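Instead of adding the data sources through the UI, you can also use Grafana's datasource provisioning. The sketch below assumes a file such as `provisioning/datasources/greptimedb.yaml`; the names, hosts, ports, and credentials are placeholders you must adapt:

```yaml
# Hypothetical Grafana provisioning file; adjust URLs and credentials
# to your environment before use.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://<prometheus-ip>:<port>
  - name: Information Schema
    type: mysql
    url: <frontend-ip>:<mysql-port>
    database: information_schema
    user: <user>
```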
- Import the dashboards based on your deployment scenario

  - Cluster: Import the `cluster/dashboard.json` dashboard.
  - Standalone: Import the `standalone/dashboard.json` dashboard.