---
title: Prometheus
description: Accessing TDengine with Prometheus
slug: /third-party-tools/data-collection/prometheus
---
import Prometheus from "./_prometheus.mdx"
Prometheus is a popular open-source monitoring and alerting system. It joined the Cloud Native Computing Foundation (CNCF) in 2016, becoming the second hosted project after Kubernetes, and it has a very active community of developers and users.
Prometheus provides `remote_write` and `remote_read` interfaces that allow other database products to serve as its storage engine. To let users in the Prometheus ecosystem take advantage of TDengine's efficient writing and querying capabilities, TDengine also supports these two interfaces.

With appropriate configuration, Prometheus data can be stored in TDengine through the `remote_write` interface, and data stored in TDengine can be queried through the `remote_read` interface, fully leveraging TDengine's efficient storage and querying performance for time-series data as well as its cluster processing capabilities.
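As a sketch of what this looks like on the Prometheus side, the `remote_write` and `remote_read` sections of `prometheus.yml` point at the taosAdapter endpoints. The endpoint paths, port, database name, and credentials below are assumptions for illustration; consult the taosAdapter documentation for the exact values in your deployment:

```yaml
# Hypothetical prometheus.yml fragment.
# Port 6041, the endpoint paths, database name "prometheus_data",
# and the root/taosdata credentials are assumptions for illustration.
remote_write:
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
    basic_auth:
      username: root
      password: taosdata

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
    remote_timeout: 10s
    read_recent: true
```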
## Prerequisites
To write Prometheus data into TDengine, the following preparations are required:
- The TDengine cluster has been deployed and is running normally.
- The taosAdapter has been installed and is running normally. For detailed information, please refer to the taosAdapter User Manual.
- Prometheus has been installed. For installation, please refer to the official documentation.
## Configuration Steps

<Prometheus />
## Verification Method
After restarting Prometheus, you can use the following examples to verify that data is written from Prometheus to TDengine and can be read correctly.
### Query Written Data Using TDengine CLI
```
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 prometheus_data                |
Query OK, 3 row(s) in set (0.000585s)

taos> use prometheus_data;
Database changed.

taos> show stables;
              name              |
=================================
 metrics                        |
Query OK, 1 row(s) in set (0.000487s)

taos> select * from metrics limit 10;
              ts               |          value           |            labels             |
=============================================================================================
 2022-04-20 07:21:09.193000000 |              0.000024996 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:14.193000000 |              0.000024996 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:19.193000000 |              0.000024996 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:24.193000000 |              0.000024996 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:29.193000000 |              0.000024996 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:09.193000000 |              0.000054249 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:14.193000000 |              0.000054249 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:19.193000000 |              0.000054249 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:24.193000000 |              0.000054249 | {"__name__":"go_gc_duration... |
 2022-04-20 07:21:29.193000000 |              0.000054249 | {"__name__":"go_gc_duration... |
Query OK, 10 row(s) in set (0.011146s)
```
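Because `labels` is stored as a JSON tag, individual series can also be filtered with TDengine's JSON `->` operator; a minimal sketch, where the metric name is illustrative and should be replaced with one present in your data:

```sql
-- Illustrative query; "go_gc_duration_seconds" is an assumed metric name.
select * from metrics where labels->'__name__' = 'go_gc_duration_seconds' limit 5;
```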
### Use promql-cli to Read Data from TDengine via `remote_read`
Install promql-cli:

```shell
go install github.com/nalbury/promql-cli@latest
```
Query Prometheus data while TDengine and taosAdapter services are running:
```shell
ubuntu@shuduo-1804 ~ $ promql-cli --host "http://127.0.0.1:9090" "sum(up) by (job)"
JOB           VALUE    TIMESTAMP
prometheus    1        2022-04-20T08:05:26Z
node          1        2022-04-20T08:05:26Z
```
After stopping the taosAdapter service, query the Prometheus data again; the result set is now empty:

```shell
ubuntu@shuduo-1804 ~ $ sudo systemctl stop taosadapter.service
ubuntu@shuduo-1804 ~ $ promql-cli --host "http://127.0.0.1:9090" "sum(up) by (job)"
VALUE    TIMESTAMP
```
:::note
By default, the subtable names generated by TDengine are unique IDs derived from specific rules.
:::
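To inspect the generated subtable names, the `tbname` pseudocolumn can be queried on the supertable; a minimal sketch, assuming the `prometheus_data` database shown above:

```sql
-- Lists distinct auto-generated subtable names under the metrics supertable.
select distinct tbname from prometheus_data.metrics limit 5;
```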