docs: update grafana doc for 3.0 (#21300)

* docs: fix tab value and label
* docs: add dbname note in grafana and fix lots of unicode characters in english doc

This commit is contained in:
parent 9e8cc7d18c
commit ef80b5b05b
@@ -5,7 +5,7 @@ description: This website contains the user manuals for TDengine, an open-source
 slug: /
 ---

-TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It’s written mainly for architects, developers, and system administrators.
+TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators.

 To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.
@@ -57,7 +57,7 @@ By making full use of [characteristics of time series data](https://tdengine.com
 - **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.

-- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub. There is an active developer community, and over 140k running instances worldwide.
+- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub. There is an active developer community, and over 140k running instances worldwide.

 With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced.
@@ -109,8 +109,8 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
 | **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
 | ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------- |
-| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
-| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
+| Very large total processing capacity | | | √ | TDengine's cluster functions can easily improve processing capacity via multi-server coordination. |
+| Extremely high-speed data processing | | | √ | TDengine's storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
 | Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |

 ### System Maintenance Requirements
@@ -127,7 +127,7 @@ To make full use of time-series data characteristics, TDengine adopts a strategy
 If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**

-TDengine suggests using DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the `current`, `voltage`, `phase` as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
+TDengine suggests using DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the `current`, `voltage`, `phase` as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index, and won't build the index on any metrics stored. Column wise storage is used.

 Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
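For orientation, here is a minimal sketch of the one-table-per-DCP layout described in the hunk above; it mirrors the d1001 meter example, but the exact column types are an assumption:

```sql
-- One table per data collection point: timestamp first (it becomes the index),
-- then one column per collected metric.
CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.31);
```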
@@ -12,4 +12,4 @@ When using REST connection, the feature of bulk pulling can be enabled if the si
 {{#include docs/examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}}
 ```

-More configuration about connection，please refer to [Java Connector](/reference/connector/java)
+More configuration about connection, please refer to [Java Connector](/reference/connector/java)
@@ -1,3 +1,3 @@
-```php title="原生连接"
+```php title="native"
 {{#include docs/examples/php/connect.php}}
 ```
@@ -33,7 +33,7 @@ There are two ways for a connector to establish connections to TDengine:
 For REST and native connections, connectors provide similar APIs for performing operations and running SQL statements on your databases. The main difference is the method of establishing the connection, which is not visible to users.

-Key differences：
+Key differences:

-3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade.
+1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
@@ -198,7 +198,7 @@ The sample code below are based on dotnet6.0, they may need to be adjusted if yo
 <TabItem label="R" value="r">

 1. Download [taos-jdbcdriver-version-dist.jar](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/3.0.0/).
-2. Install the dependency package `RJDBC`：
+2. Install the dependency package `RJDBC`:

 ```R
 install.packages("RJDBC")
@@ -213,7 +213,7 @@ If the client driver (taosc) is already installed, then the C connector is alrea
 </TabItem>
 <TabItem label="PHP" value="php">

-**Download Source Code Package and Unzip：**
+**Download Source Code Package and Unzip:**

 ```shell
 curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
@@ -223,13 +223,13 @@ curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive
 > Version number `v1.0.2` is only for example, it can be replaced to any newer version, please check available version from [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).

-**Non-Swoole Environment：**
+**Non-Swoole Environment:**

 ```shell
 phpize && ./configure && make -j && make install
 ```

-**Specify TDengine Location：**
+**Specify TDengine Location:**

 ```shell
 phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
@@ -238,7 +238,7 @@ phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 &&
 > `--with-tdengine-dir=` is followed by the TDengine installation location.
 > This way is useful in case TDengine location can't be found automatically, or on macOS.

-**Swoole Environment：**
+**Swoole Environment:**

 ```shell
 phpize && ./configure --enable-swoole && make -j && make install
@@ -69,7 +69,7 @@ For more details please refer to [InfluxDB Line Protocol](https://docs.influxdat
 ## Query Examples

-If you want query the data of `location=California.LosAngeles,groupid=2`，here is the query SQL:
+If you want to query the data of `location=California.LosAngeles,groupid=2`, here is the query SQL:

 ```sql
 SELECT * FROM meters WHERE location = "California.LosAngeles" AND groupid = 2;
@@ -84,7 +84,7 @@ Query OK, 4 row(s) in set (0.005399s)
 ## Query Examples

-If you want query the data of `location=California.LosAngeles groupid=3`，here is the query SQL:
+If you want to query the data of `location=California.LosAngeles groupid=3`, here is the query SQL:

 ```sql
 SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 3;
@@ -97,7 +97,7 @@ Query OK, 2 row(s) in set (0.004076s)
 ## Query Examples

-If you want query the data of "tags": {"location": "California.LosAngeles", "groupid": 1}，here is the query SQL:
+If you want to query the data of "tags": {"location": "California.LosAngeles", "groupid": 1}, here is the query SQL:

 ```sql
 SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 1;
@@ -49,7 +49,7 @@ If the data source is Kafka, then the application program is a consumer of Kafka
 On the server side, database configuration parameter `vgroups` needs to be set carefully to maximize the system performance. If it's set too low, the system capability can't be utilized fully; if it's set too big, unnecessary resource competition may be produced. A normal recommendation for the `vgroups` parameter is 2 times the number of CPU cores. However, depending on the actual system resources, it may still need to be tuned.

-For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config)。
+For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config).

 ## Sample Programs
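A minimal sketch of the `vgroups` recommendation above; the core count and database name are hypothetical:

```sql
-- Assumption: the server has 4 CPU cores, so vgroups = 2 * 4 = 8.
CREATE DATABASE kafka_sink VGROUPS 8;
```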
@@ -98,7 +98,7 @@ The main Program is responsible for:
 3. Start reading threads
 4. Output writing speed every 10 seconds

-The main program provides 4 parameters for tuning：
+The main program provides 4 parameters for tuning:

 1. The number of reading threads, default value is 1
 2. The number of writing threads, default value is 2
@@ -192,7 +192,7 @@ TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
 If you want to launch the sample program on a remote server, please follow below steps:

-1. Package the sample programs. Execute below command under directory `TDengine/docs/examples/java` :
+1. Package the sample programs. Execute below command under directory `TDengine/docs/examples/java`:

 ```
 mvn package
 ```
@@ -385,7 +385,7 @@ SQLWriter class encapsulates the logic of composing SQL and writing data. Please
 pip3 install faster-fifo
 ```

-3. Click the "Copy" in the above sample programs to copy `fast_write_example.py` 、 `sql_writer.py` and `mockdatasource.py`.
+3. Click the "Copy" in the above sample programs to copy `fast_write_example.py`, `sql_writer.py`, and `mockdatasource.py`.

 4. Execute the program
@@ -1,4 +1,4 @@
-### python Kafka 客户端
+### python Kafka client

 For python kafka client, please refer to [kafka client](https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Python). In this document, we use [kafka-python](http://github.com/dpkp/kafka-python).
@@ -88,7 +88,7 @@ In addition to python's built-in multithreading and multiprocessing library, we
 <details>
 <summary>kafka_example_consumer</summary>

-`kafka_example_consumer` is `consumer`，which is responsible for consuming data from kafka and writing it to TDengine.
+`kafka_example_consumer` is `consumer`, which is responsible for consuming data from kafka and writing it to TDengine.

 ```py
 {{#include docs/examples/python/kafka_example_consumer.py}}
@@ -20,10 +20,10 @@ import CAsync from "./_c_async.mdx";
 ## Introduction

-SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine：
+SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:

 - Query on single column or multiple columns
-- Filter on tags or data columns:>, <, =, <\>, like
+- Filter on tags or data columns: >, <, =, <\>, like
 - Grouping of results: `Group By`
 - Sorting of results: `Order By`
 - Limit the number of results: `Limit/Offset`
 - Windowed aggregate queries for time windows (interval), session windows (session), and state windows (state_window)
 - Arithmetic on columns of numeric types or aggregate results
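A short sketch that combines several of the capabilities listed above; it assumes the `meters` supertable used throughout these docs:

```sql
-- Tag filter, 10-minute windowed aggregation, sorted and limited output.
SELECT _wstart, AVG(current), MAX(voltage)
FROM meters
WHERE groupid = 2 AND ts > NOW - 1d
INTERVAL(10m)
ORDER BY _wstart DESC
LIMIT 10;
```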
@@ -160,7 +160,7 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
 :::note

 1. With either REST connection or native connection, the above sample code works well.
-2. Please note that `use db` can't be used in case of REST connection because it's stateless.
+2. Please note that `use db` can't be used in case of REST connection because it's stateless. You can specify the database name by either the REST endpoint's parameter or <db_name>.<table_name> in the SQL command.

 :::
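A hedged sketch of the workaround in note 2 above, assuming the `power` database from the docs' running example:

```sql
-- REST connections are stateless, so qualify the table instead of `USE power;`.
SELECT COUNT(*) FROM power.meters;
```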
@@ -252,9 +252,9 @@ create table battery(ts timestamp, vol1 float, vol2 float, vol3 float, deviceId
 ```
 Create the UDF:
 ```bash
-create aggregate function max_vol as '/root/udf/libmaxvol.so' outputtype binary(64) bufsize 10240 language 'C'；
+create aggregate function max_vol as '/root/udf/libmaxvol.so' outputtype binary(64) bufsize 10240 language 'C';
 ```
-Use the UDF in the query：
+Use the UDF in the query:
 ```bash
 select max_vol(vol1,vol2,vol3,deviceid) from battery;
 ```
@@ -271,9 +271,9 @@ select max_vol(vol1,vol2,vol3,deviceid) from battery;
 ## Implement a UDF in Python

 Implement the specified interface functions when implementing a UDF in Python.
-- implement `process` function for the scalar UDF。
-- implement `start`, `reduce`, `finish` for the aggregate UDF。
-- implement `init` for initialization and `destroy` for termination。
+- implement `process` function for the scalar UDF.
+- implement `start`, `reduce`, `finish` for the aggregate UDF.
+- implement `init` for initialization and `destroy` for termination.

 ### Implement a Scalar UDF in Python
@@ -11,7 +11,7 @@ When using TDengine to store and query data, the most important part of the data
 - The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`.
 - Internal function `NOW` can be used to get the current timestamp on the client side.
 - The current timestamp of the client side is applied when `NOW` is used to insert data.
-- Epoch Time：timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
+- Epoch Time: timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
 - Add/subtract operations can be carried out on timestamps. For example `NOW-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `SELECT * FROM t1 WHERE ts > NOW-2w AND ts <= NOW-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.

 Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
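A brief sketch of the precision and timestamp arithmetic described above; the database and table names are hypothetical:

```sql
-- Nanosecond-precision database.
CREATE DATABASE nsdb PRECISION 'ns';
-- Rows between two weeks ago and one week ago, per the NOW arithmetic above.
SELECT * FROM t1 WHERE ts > NOW - 2w AND ts <= NOW - 1w;
```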
@@ -72,8 +72,8 @@ database_option: {
 - 0: The database can contain multiple supertables.
 - 1: The database can contain only one supertable.
 - STT_TRIGGER: specifies the number of file merges triggered by flushed files. The default is 8, ranging from 1 to 16. For high-frequency scenarios with few tables, it is recommended to use the default configuration or a smaller value for this parameter; for multi-table low-frequency scenarios, it is recommended to configure this parameter with a larger value.
-- TABLE_PREFIX: The prefix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the prefix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then "0001" is used if TSDB_PREFIX is set to 2 but "v3" is used if TSDB_PREFIX is set to -2; It can help you to control the distribution of tables.
-- TABLE_SUFFIX:The suffix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the suffix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then "v300" is used if TSDB_SUFFIX is set to 2 but "01" is used if TSDB_SUFFIX is set to -2; It can help you to control the distribution of tables.
+- TABLE_PREFIX: The prefix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the prefix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then "0001" is used if TSDB_PREFIX is set to 2 but "v3" is used if TSDB_PREFIX is set to -2; It can help you to control the distribution of tables.
+- TABLE_SUFFIX: The suffix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the suffix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then "v300" is used if TSDB_SUFFIX is set to 2 but "01" is used if TSDB_SUFFIX is set to -2; It can help you to control the distribution of tables.
 - TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB.
 - WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that WAL files are not required to be kept for consumption. Set it to a proper value before creating topics.
 - WAL_RETENTION_SIZE: specifies the maximum total size for which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files to keep for consumption has no upper limit.
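To illustrate, a minimal sketch combining the options above; the database name and values are assumptions:

```sql
-- Ignore a 2-character prefix of table names when assigning vgroups, and keep
-- WAL files for one day (86400 s) so topics can consume them.
CREATE DATABASE db_sensor TABLE_PREFIX 2 WAL_RETENTION_PERIOD 86400;
```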
@@ -82,7 +82,7 @@ One or multiple rows can be inserted into multiple tables in a single SQL statem
 ```sql
 INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
-d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
+d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
 ```

 ## Automatically Create Table When Inserting
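As a hedged preview of the auto-create feature named in the heading above (the `meters` supertable and its tags follow the docs' example schema):

```sql
-- Creates subtable d2001 on the fly if it does not exist yet.
INSERT INTO d2001 USING meters TAGS ('California.SanFrancisco', 2)
    VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32);
```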
@@ -373,7 +373,7 @@ FROM temp_stable t1, temp_stable t2
 WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
 ```

-For sub-table and super table：
+For sub-table and super table:

 ```sql
 SELECT *
@@ -6,14 +6,14 @@ description: Use Tag Index to Improve Query Performance
 ## Introduction

-Prior to TDengine 3.0.3.0 (excluded)，only one index is created by default on the first tag of each super table, but it's not allowed to dynamically create index on any other tags. From version 3.0.30, you can dynamically create index on any tag of any type. The index created automatically by TDengine is still valid. Query performance can benefit from indexes if you use properly.
+Prior to TDengine 3.0.3.0 (excluded), only one index is created by default on the first tag of each super table, but it's not allowed to dynamically create index on any other tags. From version 3.0.3.0, you can dynamically create index on any tag of any type. The index created automatically by TDengine is still valid. Query performance can benefit from indexes if you use them properly.

 ## Syntax

 1. The syntax of creating an index

 ```sql
-CREATE INDEX index_name ON tbl_name (tagColName)
+CREATE INDEX index_name ON tbl_name (tagColName)
 ```

 In the above statement, `index_name` is the name of the index, `tbl_name` is the name of the super table, and `tagColName` is the name of the tag on which the index is being created. `tagColName` can be any type supported by TDengine.
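A concrete sketch of the syntax above; the index name and the `meters`/`location` pair are illustrative assumptions:

```sql
CREATE INDEX idx_location ON meters (location);
-- Remove the index when it is no longer useful.
DROP INDEX idx_location;
```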
@@ -434,7 +434,7 @@ TO_ISO8601(expr [, timezone])
 **More explanations**:

-- You can specify a time zone in the following format: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]。 For example, TO_ISO8601(1, "+00:00").
+- You can specify a time zone in the following format: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example, TO_ISO8601(1, "+00:00").
 - If the input is a UNIX timestamp, the precision of the returned value is determined by the digits of the input timestamp
 - If the input is a column of TIMESTAMP type, the precision of the returned value is same as the precision set for the current database in use
@@ -626,7 +626,7 @@ algo_type: {
 **Applicable table types**: standard tables and supertables

-**Explanations**：
+**Explanations**:
 - _p_ is in range [0,100], when _p_ is 0, the result is same as using function MIN; when _p_ is 100, the result is same as function MAX.
 - `algo_type` can only be input as `default` or `t-digest`. Enter `default` to use a histogram-based algorithm. Enter `t-digest` to use the t-digest algorithm to calculate the approximation of the quantile. `default` is used by default.
 - The approximation result of `t-digest` algorithm is sensitive to input data order. For example, when querying STable with different input data order there might be minor differences in calculated results.
@@ -672,7 +672,7 @@ If you input a specific column, the number of non-null values in the column is r
 ELAPSED(ts_primary_key [, time_unit])
 ```

-**Description**：`elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calculated time length within each time window. If it's used without `INTERVAL` caluse, the returned result is the calculated time length within the specified time range. Please be noted that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
+**Description**: `elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calculated time length within each time window. If it's used without `INTERVAL` clause, the returned result is the calculated time length within the specified time range. Please note that the return value of `elapsed` is the number of `time_unit` in the calculated time length.

 **Return value type**: Double if the input value is not NULL;
@@ -680,7 +680,7 @@ ELAPSED(ts_primary_key [, time_unit])
 **Applicable tables**: table, STable, outer in nested query

-**Explanations**：
+**Explanations**:
 - `ts_primary_key` parameter can only be the first column of a table, i.e. timestamp primary key.
 - The minimum value of `time_unit` is the time precision of the database. If `time_unit` is not specified, the time precision of the database is used as the default time unit. Time unit specified by `time_unit` can be:
 1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), or 1w (weeks)
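A hedged sketch of `ELAPSED` used with `INTERVAL`, assuming the d1001 example table and an arbitrary time range:

```sql
-- Continuous time length covered by data in each 1-minute window, in seconds.
SELECT ELAPSED(ts, 1s) FROM d1001
WHERE ts >= '2021-07-13 00:00:00' AND ts < '2021-07-14 00:00:00'
INTERVAL(1m);
```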
@@ -758,7 +758,7 @@ SUM(expr)
 HYPERLOGLOG(expr)
 ```

-**Description**：
+**Description**:
 The cardinal number of a specific column is returned by using hyperloglog algorithm. The benefit of using hyperloglog algorithm is that the memory usage is under control when the data volume is huge.
 However, when the data volume is very small, the result may be not accurate, it's recommended to use `select count(data) from (select unique(col) as data from table)` in this case.
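For example, a one-line sketch against the docs' `meters` supertable:

```sql
-- Approximate count of distinct voltage values; memory stays bounded.
SELECT HYPERLOGLOG(voltage) FROM meters;
```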
@@ -772,10 +772,10 @@ HYPERLOGLOG(expr)
 ### HISTOGRAM

 ```sql
-HISTOGRAM(expr，bin_type, bin_description, normalized)
+HISTOGRAM(expr, bin_type, bin_description, normalized)
 ```

-**Description**：Returns count of data points in user-specified ranges.
+**Description**: Returns count of data points in user-specified ranges.

 **Return value type**: If normalized is set to 1, a DOUBLE is returned; otherwise a BIGINT is returned
@@ -783,18 +783,18 @@ HISTOGRAM(expr,bin_type, bin_description, normalized)
 **Applicable table types**: table, STable

-**Explanations**：
-- bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin"。
-- bin_description: parameter to describe how to generate buckets，can be in the following JSON formats for each bin_type respectively:
+**Explanations**:
+- bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin".
+- bin_description: parameter to describe how to generate buckets, which can be in the following JSON formats for each bin_type respectively:
   - "user_input": "[1, 3, 5, 7]":
     User specified bin values.

   - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
-    "start" - bin starting point. "width" - bin offset. "count" - number of bins generated. "infinity" - whether to add（-inf, inf）as start/end point in generated set of bins.
+    "start" - bin starting point. "width" - bin offset. "count" - number of bins generated. "infinity" - whether to add (-inf, inf) as start/end point in generated set of bins.
     The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf].

   - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}"
-    "start" - bin starting point. "factor" - exponential factor of bin offset. "count" - number of bins generated. "infinity" - whether to add（-inf, inf）as start/end point in generated range of bins.
+    "start" - bin starting point. "factor" - exponential factor of bin offset. "count" - number of bins generated. "infinity" - whether to add (-inf, inf) as start/end point in generated range of bins.
     The above "log_bin" descriptor generates a set of bins: [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf].
 - normalized: setting to 1/0 to turn on/off result normalization. Valid values are 0 or 1.
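A minimal sketch of a `linear_bin` histogram on the docs' `meters` data (the bin boundaries are arbitrary):

```sql
-- Five 5-unit buckets starting at 0, plus (-inf, inf) edges, normalized.
SELECT HISTOGRAM(current, 'linear_bin',
  '{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}', 1)
FROM meters;
```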
@@ -1107,7 +1107,7 @@ ignore_negative: {
 **More explanation**:

 - It can be used together with `PARTITION BY tbname` against a STable.
-- It can be used together with a selected column. For example: select \_rowts, DERIVATIVE() from。
+- It can be used together with a selected column. For example: select \_rowts, DERIVATIVE() from.

 ### DIFF
@@ -1131,7 +1131,7 @@ ignore_negative: {
 **More explanation**:

 - The number of result rows is the number of rows subtracted by one, no output for the first row
-- It can be used together with a selected column. For example: select \_rowts, DIFF() from。
+- It can be used together with a selected column. For example: select \_rowts, DIFF() from.

 ### IRATE
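For example, a sketch pairing `DIFF` with the `_rowts` pseudocolumn as noted above (table name assumed):

```sql
-- Row-over-row delta of current, with the timestamp of each result row.
SELECT _rowts, DIFF(current) FROM d1001;
```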
@@ -1183,7 +1183,7 @@ STATECOUNT(expr, oper, val)
 **Applicable parameter values**:

 - oper : Can be one of `'LT'` (lower than), `'GT'` (greater than), `'LE'` (lower than or equal to), `'GE'` (greater than or equal to), `'NE'` (not equal to), `'EQ'` (equal to), the value is case insensitive, the value must be in quotes.
-- val : Numeric types
+- val: Numeric types

 **Return value type**: Integer
@@ -1210,7 +1210,7 @@ STATEDURATION(expr, oper, val, unit)
 **Applicable parameter values**:

 - oper : Can be one of `'LT'` (lower than), `'GT'` (greater than), `'LE'` (lower than or equal to), `'GE'` (greater than or equal to), `'NE'` (not equal to), `'EQ'` (equal to), the value is case insensitive, the value must be in quotes.
-- val : Numeric types
+- val: Numeric types
 - unit: The unit of time interval. Enter one of the following options: 1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), or 1w (weeks). If you do not enter a unit of time, the precision of the current database is used by default.

 **Return value type**: Integer
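A hedged sketch of the two state functions, assuming the d1001 example table and an arbitrary threshold:

```sql
-- Consecutive rows (count) and duration (seconds) with voltage above 205.
SELECT STATECOUNT(voltage, 'GT', 205) FROM d1001;
SELECT STATEDURATION(voltage, 'GT', 205, 1s) FROM d1001;
```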
@@ -69,19 +69,20 @@ These pseudocolumns occur after the aggregation clause.
 `FILL` clause is used to specify how to fill when there is data missing in any window, including:

 1. NONE: No fill (the default fill mode)
-2. VALUE：Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)` Note: The value filled depends on the data type. For example, if you run FILL(VALUE 1.23) on an integer column, the value 1 is filled.
-3. PREV：Fill with the previous non-NULL value, `FILL(PREV)`
-4. NULL：Fill with NULL, `FILL(NULL)`
-5. LINEAR：Fill with the closest non-NULL value, `FILL(LINEAR)`
-6. NEXT：Fill with the next non-NULL value, `FILL(NEXT)`
+2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)` Note: The value filled depends on the data type. For example, if you run FILL(VALUE 1.23) on an integer column, the value 1 is filled.
+3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
+4. NULL: Fill with NULL, `FILL(NULL)`
+5. LINEAR: Fill with the closest non-NULL value, `FILL(LINEAR)`
+6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`

 In the above filling modes, except for `NONE` mode, the `fill` clause will be ignored if there is no data in the defined time range, i.e. no data would be filled and the query result would be empty. This behavior is reasonable when the filling mode is `PREV`, `NEXT`, `LINEAR`, because filling can't be performed if there is not any data. For filling modes `NULL` and `VALUE`, however, filling can be performed even though there is not any data, filling or not depends on the choice of user's application. To accomplish the need of this force filling behavior and not break the behavior of existing filling modes, TDengine added two new filling modes since version 3.0.3.0.

 1. NULL_F: Fill `NULL` by force
 2. VALUE_F: Fill `VALUE` by force

-The detailed beaviors of `NULL`, `NULL_F`, `VALUE`, and VALUE_F are described below：
-- When used with `INTERVAL`: `NULL_F` and `VALUE_F` are filling by force；`NULL` and `VALUE` don't fill by force. The behavior of each filling mode is exactly same as what the name suggests.
+The detailed behaviors of `NULL`, `NULL_F`, `VALUE`, and `VALUE_F` are described below:
+
+- When used with `INTERVAL`: `NULL_F` and `VALUE_F` are filling by force; `NULL` and `VALUE` don't fill by force. The behavior of each filling mode is exactly same as what the name suggests.
 - When used with `INTERVAL` in stream processing: `NULL_F` and `NULL` are same, i.e. don't fill by force; `VALUE_F` and `VALUE` are same, i.e. don't fill by force. It's suggested that there is no filling by force in stream processing.
 - When used with `INTERP`: `NULL` and `NULL_F` are same, i.e. filling by force; `VALUE` and `VALUE_F` are same, i.e. filling by force. It's suggested that there is always filling by force when used with `INTERP`.
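A small sketch contrasting the forced and non-forced modes, assuming the docs' `meters` table and an arbitrary time range:

```sql
-- VALUE fills only windows inside the data range; VALUE_F also fills by force.
SELECT _wstart, AVG(current) FROM meters
WHERE ts >= '2021-07-13 00:00:00' AND ts < '2021-07-13 12:00:00'
INTERVAL(10m) FILL(VALUE_F, 0);
```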
@@ -97,7 +98,7 @@ The detailed beaviors of `NULL`, `NULL_F`, `VALUE`, and VALUE_F are described be
 There are two kinds of time windows: sliding window and flip time/tumbling window.

-The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure blow, [t0s, t0e] ,[t1s , t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
+The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.

 ![TDengine Database Time Window](./timewindow-1.webp)
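A minimal sketch of overlapping sliding windows as described above:

```sql
-- 10-minute windows that advance every 5 minutes, so windows overlap.
SELECT _wstart, MAX(current) FROM meters INTERVAL(10m) SLIDING(5m);
```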
@@ -121,7 +122,7 @@ Please note that the `timezone` parameter should be configured to be the same va
 ### State Window

-In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two state windows according to status, [2019-04-28 14:22:07，2019-04-28 14:22:10] and [2019-04-28 14:22:11，2019-04-28 14:22:12].
+In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two state windows according to status, [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12].

 ![TDengine Database Status Window](./timewindow-3.webp)
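A short sketch of a state-window query; the table and `status` column are assumptions:

```sql
-- One window per run of identical status values.
SELECT _wstart, _wend, COUNT(*) FROM t STATE_WINDOW(status);
```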
@@ -145,7 +146,7 @@ SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE
 ### Session Window

-The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10，2019-04-28 14:22:30] and [2019-04-28 14:23:10，2019-04-28 14:23:30] because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
+The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30] because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.

 ![TDengine Database Session Window](./timewindow-2.webp)
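A matching sketch of the 12-second session window described above (table name assumed):

```sql
-- Rows whose timestamps are within 12s of the previous row share a window.
SELECT _wstart, COUNT(*) FROM t SESSION(ts, 12s);
```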
@@ -178,7 +179,7 @@ select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c
 ### Examples

-A table of intelligent meters can be created by the SQL statement below：
+A table of intelligent meters can be created by the SQL statement below:

 ```
 CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
@@ -112,7 +112,7 @@ SHOW STREAMS;
 When you create a stream, you can use the TRIGGER parameter to specify triggering conditions for it.

-For non-windowed processing, triggering occurs in real time. For windowed processing, there are three methods of triggering，the default value is AT_ONCE:
+For non-windowed processing, triggering occurs in real time. For windowed processing, there are three methods of triggering, the default value is AT_ONCE:

 1. AT_ONCE: triggers on write
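A hedged sketch of attaching a trigger mode to a stream; the stream name, target table, and window are assumptions, not the docs' exact example:

```sql
-- Emit per-minute averages only when each window closes.
CREATE STREAM avg_vol_stream TRIGGER WINDOW_CLOSE
  INTO avg_vol AS
  SELECT _wstart, AVG(voltage) FROM meters
  PARTITION BY tbname INTERVAL(1m);
```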
@@ -67,7 +67,7 @@ description: This document describes the JSON data type in TDengine.
 - The maximum length of keys in JSON is 256 bytes, and key must be printable ASCII characters. The maximum total length of a JSON is 4,096 bytes.

-- JSON format：
+- JSON format:

   - The input string for JSON can be empty, i.e. "", "\t", or NULL, but it can't be non-NULL string, bool or array.
   - object can be {}, and the entire JSON is empty if so. Key can be "", and it's ignored if so.
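A compact sketch of JSON as a tag type (the names are hypothetical); note the `->` operator for key access:

```sql
CREATE STABLE s1 (ts TIMESTAMP, v1 FLOAT) TAGS (info JSON);
CREATE TABLE s1_1 USING s1 TAGS ('{"k1": "v1"}');
-- Filter subtables by a key inside the JSON tag.
SELECT * FROM s1 WHERE info->'k1' = 'v1';
```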
@@ -20,7 +20,7 @@ description: This document describes the usage of escape characters in TDengine.
 1. If there are escape characters in identifiers (database name, table name, column name)
    - Identifier without ``: Error will be returned because identifier must be constituted of digits, ASCII characters or underscore and can't be started with digits
-   - Identifier quoted with ``: Original content is kept, no escaping
+   - Identifier quoted with ``: Original content is kept, no escaping
 2. If there are escape characters in values
    - The escape characters will be escaped as the above table. If the escape character doesn't match any supported one, the escape character "\" will be ignored.
    - "%" and "\_" are used as wildcards in `like`. `\%` and `\_` should be used to represent literal "%" and "\_" in `like`. If `\%` and `\_` are used out of `like` context, the evaluation result is "`\%`" and "`\_`", instead of "%" and "\_".
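A one-line sketch of the `LIKE` escaping rule above (table and column are assumptions):

```sql
-- `\%` is a literal percent sign; the trailing `%` is the wildcard.
SELECT * FROM t WHERE name LIKE 'ab\%%';
```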
@@ -184,7 +184,7 @@ Provides information about standard tables and subtables.
 ## INS_COLUMNS

-| # | **列名** | **数据类型** | **说明** |
+| # | **Column** | **Data Type** | **Description** |
 | --- | :---------: | ------------- | ---------------------- |
 | 1 | table_name | BINARY(192) | Table name |
 | 2 | db_name | BINARY(64) | Database name |
@@ -4,7 +4,7 @@ sidebar_label: SHOW Statement
 description: This document describes how to use the SHOW statement in TDengine.
 ---

-`SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.
+`SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.

 ## SHOW APPS
@@ -187,10 +187,10 @@ SHOW TABLE DISTRIBUTED table_name;
 Shows how table data is distributed.

-Examples: Below is an example of this command to display the block distribution of table `d0` in detailed format.
+Examples: Below is an example of this command to display the block distribution of table `d0` in detailed format.

 ```sql
-show table distributed d0\G;
+show table distributed d0\G;
 ```

 <details>
@@ -201,31 +201,31 @@ _block_dist: Total_Blocks=[5] Total_Size=[93.65 KB] Average_size=[18.73 KB] Comp
 Total_Blocks: Table `d0` contains total 5 blocks

-Total_Size: The total size of all the data blocks in table `d0` is 93.65 KB
+Total_Size: The total size of all the data blocks in table `d0` is 93.65 KB

 Average_size: The average size of each block is 18.73 KB

 Compression_Ratio: The data compression rate is 23.98%

 *************************** 2.row ***************************
 _block_dist: Total_Rows=[20000] Inmem_Rows=[0] MinRows=[3616] MaxRows=[4096] Average_Rows=[4000]

 Total_Rows: Table `d0` contains 20,000 rows

-Inmem_Rows: The rows still in memory, i.e. not committed in disk, is 0, i.e. none such rows
+Inmem_Rows: The rows still in memory, i.e. not yet committed to disk, is 0, i.e. no such rows

-MinRows: The minimum number of rows in a block is 3,616
+MinRows: The minimum number of rows in a block is 3,616

-MaxRows: The maximum number of rows in a block is 4,096B
+MaxRows: The maximum number of rows in a block is 4,096

-Average_Rows: The average number of rows in a block is 4,000
+Average_Rows: The average number of rows in a block is 4,000

 *************************** 3.row ***************************
 _block_dist: Total_Tables=[1] Total_Files=[2]

-Total_Tables: The number of child tables, 1 in this example
+Total_Tables: The number of child tables, 1 in this example

-Total_Files: The number of files storing the table's data, 2 in this example
+Total_Files: The number of files storing the table's data, 2 in this example

 *************************** 4.row ***************************
@@ -361,7 +361,7 @@ SHOW VARIABLES;
 SHOW DNODE dnode_id VARIABLES;
 ```

-Shows the working configuration of the parameters that must be the same on each node. You can also specify a dnode to show the working configuration for that node.
+Shows the working configuration of the parameters that must be the same on each node. You can also specify a dnode to show the working configuration for that node.

 ## SHOW VGROUPS
@@ -369,7 +369,7 @@ Shows the working configuration of the parameters that must be the same on each
 SHOW [db_name.]VGROUPS;
 ```

-Shows information about all vgroups in the current database.
+Shows information about all vgroups in the current database.

 ## SHOW VNODES
@@ -13,7 +13,7 @@ Syntax Specifications used in this chapter:
 - Information that you input is given in lowercase.
 - \[ \] means optional input, excluding [] itself.
 - | means one of a few options, excluding | itself.
-- … means the item prior to it can be repeated multiple times.
+- ... means the item prior to it can be repeated multiple times.

 To better demonstrate the syntax, usage and rules of TDengine SQL, hereinafter it's assumed that there is a data set of data from electric meters. Each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
@@ -22,11 +22,11 @@ wget https://github.com/taosdata/grafanaplugin/raw/master/dashboards/TDinsight.s
 chmod +x TDinsight.sh
 ```

-Prepare：
+Prepare:

 1. TDengine Server

-   - The URL of REST service：for example `http://localhost:6041` if TDengine is deployed locally
+   - The URL of REST service: for example `http://localhost:6041` if TDengine is deployed locally
    - User name and password

 2. Grafana Alert Notification
@@ -9,13 +9,13 @@ When a TDengine client is unable to access a TDengine server, the network connec
 Diagnostics for network connections can be executed between Linux/Windows/macOS.

-Diagnostic steps：
+Diagnostic steps:

 1. If the port range to be diagnosed is being occupied by a `taosd` server process, please first stop `taosd`.
 2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server".
 3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.

--l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000.
+-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000.
 Please note that the package length must be same in the above 2 commands executed on server side and client side respectively.

 Output of the server side for the example is below:
@@ -83,13 +83,13 @@ For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:60
 TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.

-- authentication information is shown below：
+- authentication information is shown below:

   ```text
   Authorization: Taosd <TOKEN>
   ```

-- Basic authentication information is shown below：
+- Basic authentication information is shown below:

   ```text
   Authorization: Basic <TOKEN>
@@ -12,9 +12,9 @@ C/C++ developers can use TDengine's client driver and the C/C++ connector, to de
 After TDengine server or client installation, `taos.h` is located at

-- Linux：`/usr/local/taos/include`
-- Windows：`C:\TDengine\include`
-- macOS：`/usr/local/include`
+- Linux: `/usr/local/taos/include`
+- Windows: `C:\TDengine\include`
+- macOS: `/usr/local/include`

 The dynamic libraries for the TDengine client driver are located in.
@@ -412,7 +412,8 @@ In addition to writing data using the SQL method or the parameter binding API, w
 Note that the timestamp resolution parameter only takes effect when the protocol type is `SML_LINE_PROTOCOL`.
 For OpenTSDB's text protocol, timestamp resolution follows its official resolution rules - time precision is confirmed by the number of characters contained in the timestamp.

-schemaless 其他相关的接口
+schemaless interfaces:

 - `TAOS_RES *taos_schemaless_insert_with_reqid(TAOS *taos, char *lines[], int numLines, int protocol, int precision, int64_t reqid)`
 - `TAOS_RES *taos_schemaless_insert_raw(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision)`
 - `TAOS_RES *taos_schemaless_insert_raw_with_reqid(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision, int64_t reqid)`
@@ -970,8 +970,8 @@ TaosConsumer consumer = new TaosConsumer<>(config);
 - group.id: consumer: Specifies the group that the consumer is in.
 - value.deserializer: To deserialize the results, you can inherit `com.taosdata.jdbc.tmq.ReferenceDeserializer` and specify the result set bean. You can also inherit `com.taosdata.jdbc.tmq.Deserializer` and perform custom deserialization based on the SQL result set.
 - td.connect.type: Specifies the type connect with TDengine, `jni` or `WebSocket`. default is `jni`
-- httpConnectTimeout：WebSocket connection timeout in milliseconds, the default value is 5000 ms. It only takes effect when using WebSocket type.
-- messageWaitTimeout：socket timeout in milliseconds, the default value is 10000 ms. It only takes effect when using WebSocket type.
+- httpConnectTimeout: WebSocket connection timeout in milliseconds, the default value is 5000 ms. It only takes effect when using WebSocket type.
+- messageWaitTimeout: socket timeout in milliseconds, the default value is 10000 ms. It only takes effect when using WebSocket type.
 - For more information, see [Consumer Parameters](../../../develop/tmq).

 #### Subscribe to consume data
@@ -1267,9 +1267,9 @@ The source code of the sample application is under `TDengine/examples/JDBC`:
 5. java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; ... taos-jdbcdriver-3.0.1.jar

-   **Cause**：taos-jdbcdriver 3.0.1 is compiled on JDK 11.
+   **Cause**: taos-jdbcdriver 3.0.1 is compiled on JDK 11.

-   **Solution**: Use taos-jdbcdriver 3.0.2.
+   **Solution**: Use taos-jdbcdriver 3.0.2.

 For additional troubleshooting, see [FAQ](../../../train-faq/faq).
@@ -121,7 +121,7 @@ The parameters are described as follows:
 - **username/password**: Username and password used to create connections.
 - **host/port**: Specifies the server and port to establish a connection. If you do not specify a hostname or port, native connections default to `localhost:6030` and Websocket connections default to `localhost:6041`.
 - **database**: Specify the default database to connect to. It's optional.
-- **params**：Optional parameters.
+- **params**: Optional parameters.

 A sample DSN description string is as follows:
@@ -255,7 +255,7 @@ The `connect()` function returns a `taos.TaosConnection` instance. In client-sid
 All arguments to the `connect()` function are optional keyword arguments. The following are the connection parameters specified.

-- `url`: The URL of taosAdapter REST service. The default is <http://localhost:6041>.
+- `url`: The URL of taosAdapter REST service. The default is <http://localhost:6041>.
 - `user`: TDengine user name. The default is `root`.
 - `password`: TDengine user password. The default is `taosdata`.
 - `timeout`: HTTP request timeout. Enter a value in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
@@ -321,18 +321,18 @@ let cursor = conn.cursor();
 | package name | version | TDengine version | Description |
 |------------------|---------|---------------------|------------------------------------------------------------------|
 | @tdengine/client | 3.0.0 | 3.0.0 | Supports TDengine 3.0. Not compatible with TDengine 2.x. |
-| td2.0-connector | 2.0.12 | 2.4.x;2.5.x;2.6.x | Fixed cursor.close() bug. |
-| td2.0-connector | 2.0.11 | 2.4.x;2.5.x;2.6.x | Supports parameter binding, JSON tags and schemaless interface |
-| td2.0-connector | 2.0.10 | 2.4.x;2.5.x;2.6.x | Supports connection management, standard queries, connection queries, system information, and data subscription |
+| td2.0-connector | 2.0.12 | 2.4.x; 2.5.x; 2.6.x | Fixed cursor.close() bug. |
+| td2.0-connector | 2.0.11 | 2.4.x; 2.5.x; 2.6.x | Supports parameter binding, JSON tags and schemaless interface |
+| td2.0-connector | 2.0.10 | 2.4.x; 2.5.x; 2.6.x | Supports connection management, standard queries, connection queries, system information, and data subscription |

 ### REST Connector

 | package name | version | TDengine version | Description |
 |----------------------|---------|---------------------|---------------------------------------------------------------------------|
 | @tdengine/rest | 3.0.0 | 3.0.0 | Supports TDengine 3.0. Not compatible with TDengine 2.x. |
-| td2.0-rest-connector | 1.0.7 | 2.4.x;2.5.x;2.6.x | Removed default port 6041。 |
-| td2.0-rest-connector | 1.0.6 | 2.4.x;2.5.x;2.6.x | Fixed affectRows bug with create, insert, update, and alter. |
-| td2.0-rest-connector | 1.0.5 | 2.4.x;2.5.x;2.6.x | Support cloud token |
-| td2.0-rest-connector | 1.0.3 | 2.4.x;2.5.x;2.6.x | Supports connection management, standard queries, system information, error information, and continuous queries |
+| td2.0-rest-connector | 1.0.7 | 2.4.x; 2.5.x; 2.6.x | Removed default port 6041 |
+| td2.0-rest-connector | 1.0.6 | 2.4.x; 2.5.x; 2.6.x | Fixed affectRows bug with create, insert, update, and alter. |
+| td2.0-rest-connector | 1.0.5 | 2.4.x; 2.5.x; 2.6.x | Support cloud token |
+| td2.0-rest-connector | 1.0.3 | 2.4.x; 2.5.x; 2.6.x | Supports connection management, standard queries, system information, error information, and continuous queries |

 ## API Reference
@ -165,7 +165,7 @@ The parameters are described as follows:
|
|||
* **username/password**: Username and password used to create connections.
|
||||
* **host/port**: Specifies the server and port to establish a connection. Websocket connections default to `localhost:6041`.
|
||||
* **database**: Specify the default database to connect to. It's optional.
|
||||
* **params**:Optional parameters.
|
||||
* **params**: Optional parameters.
|
||||
|
||||
A sample DSN description string is as follows:
@@ -279,7 +279,7 @@ ws://localhost:6041/test

| TDengine.Connector | Description |
|--------------------|--------------------------------|
| 3.0.2 | Supports .NET Framework 4.5 and above. Supports .NET Standard 2.0. Nuget package includes dynamic library for WebSocket. |
| 3.0.1 | Support WebSocket and Cloud,With function query, insert, and parameter binding|
| 3.0.1 | Supports WebSocket and Cloud, with query, insert, and parameter binding functions |
| 3.0.0 | Supports TDengine 3.0.0.0. TDengine 2.x is not supported. Added `TDengine.Impl.GetData()` interface to deserialize query results. |
| 1.0.7 | Fixed TDengine.Query() memory leak. |
| 1.0.6 | Fixed schemaless bug in 1.0.4 and 1.0.5. |
@@ -8,23 +8,23 @@ description: This document describes the TDengine PHP connector.

The PHP connector relies on the TDengine client driver.

Project Repository:<https://github.com/Yurunsoft/php-tdengine>
Project Repository: <https://github.com/Yurunsoft/php-tdengine>

After TDengine client or server is installed, `taos.h` is located at:

- Linux:`/usr/local/taos/include`
- Windows:`C:\TDengine\include`
- macOS:`/usr/local/include`
- Linux: `/usr/local/taos/include`
- Windows: `C:\TDengine\include`
- macOS: `/usr/local/include`

TDengine client driver is located at:

- Linux: `/usr/local/taos/driver/libtaos.so`
- Windows: `C:\TDengine\taos.dll`
- macOS:`/usr/local/lib/libtaos.dylib`
- macOS: `/usr/local/lib/libtaos.dylib`

## Supported Platforms

- Windows、Linux、MacOS
- Windows, Linux, and macOS

- PHP >= 7.4

@@ -44,7 +44,7 @@ Regarding how to install TDengine client driver please refer to [Install Client

### Install php-tdengine

**Download Source Code Package and Unzip:**

```shell
curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
@@ -54,13 +54,13 @@ curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive

> Version number `v1.0.2` is only an example; it can be replaced with any newer version. Please find available versions in [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).
**Non-Swoole Environment:**

```shell
phpize && ./configure && make -j && make install
```

**Specify TDengine location:**

```shell
phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
@@ -69,7 +69,7 @@ phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 &&

> `--with-tdengine-dir=` is followed by the TDengine installation location.
> It's useful in case the TDengine installation location can't be found automatically, or on macOS.
**Swoole Environment:**

```shell
phpize && ./configure --enable-swoole && make -j && make install
@@ -245,7 +245,7 @@ The parameters listed in this section apply to all function modes.

- **trying_interval**: Specifies the interval between insert retries. The valid value is a positive number. Only takes effect when keep trying is enabled. Available with v3.0.9+.
- **childtable_from and childtable_to**: Specify the range of child tables to create. The range is [childtable_from, childtable_to).
- **continue_if_fail**: Allows the user to specify the reaction if insertion fails.
- "continue_if_fail" : "no" // means taosBenchmark will exit if it fails to insert as default reaction behavior.
@@ -233,7 +233,7 @@ After the importing is done, `TDinsight for 3.x` dashboard is available on the p

In the `TDinsight for 3.x` dashboard, choose the database used by taosKeeper to store monitoring data, and you can see the monitoring results.

|
||||

|
||||
|
||||
## TDinsight dashboard details
@@ -151,7 +151,7 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo

| -------- | -------------------------------------------- |
| Applicable | Server Only |
| Meaning | Switch for allowing TDengine to collect and report crash related information |
| Value Range | 0,1 0: Not allowed;1:allowed |
| Value Range | 0,1 0: Not allowed; 1: allowed |
| Default Value | 1 |
@@ -183,7 +183,7 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo

| -------- | -------------------------------- |
| Applicable | Server only |
| Meaning | Whether count()/hyperloglog() returns a value when the input data is empty or NULL |
| Vlue Range | 0:Return empty line,1:Return 0 |
| Value Range | 0: Return empty line, 1: Return 0 |
| Default | 1 |
| Notes | When this parameter is set to 1, for queries containing GROUP BY, PARTITION BY, and INTERVAL clauses, if the input data in certain groups or windows is empty or NULL, the corresponding groups or windows return no value |
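
As a hedged illustration of the effect (assuming the `taosrest` module from the Python connector and a hypothetical table `demo.t1` with no rows in the queried range):

```python
# With countAlwaysReturnValue = 1 (default), the query below returns one row
# containing 0; with countAlwaysReturnValue = 0, it returns an empty result set.
import taosrest

conn = taosrest.connect(url="http://localhost:6041", user="root", password="taosdata")
result = conn.query("SELECT COUNT(*) FROM demo.t1 WHERE ts > now")
print(result.data)
```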
@@ -661,7 +661,7 @@ The charset that takes effect is UTF-8.

## 3.0 Parameters

| # | **参数** | **Applicable to 2.x ** | **Applicable to 3.0 ** | Current behavior in 3.0 |
| # | **Parameter** | **Applicable to 2.x** | **Applicable to 3.0** | Current behavior in 3.0 |
| --- | :---------------------: | --------------- | --------------- | ------------------------------------------------- |
| 1 | firstEp | Yes | Yes | |
| 2 | secondEp | Yes | Yes | |
@@ -200,11 +200,16 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c

- Group by column name(s): `group by` or `partition by` column names, separated by commas. If the SQL statement is a `group by` or `partition by` query, setting `Group by column name(s)` displays multi-dimensional data. For example, data can be shown per `dnode_ep` if the SQL is `select _wstart as ts, avg(mem_system), dnode_ep from log.dnodes_info where ts>=$from and ts<=$to partition by dnode_ep interval($interval)` and `Group by column name(s)` is `dnode_ep`.
- Format to: the legend format for `group by` or `partition by` queries. For example, with the SQL above, series data can be labeled per `dnode_ep` if `Group by column name(s)` is `dnode_ep` and `Format to` is `mem_system_{{dnode_ep}}`.
:::note
Since the REST connection is stateless, the Grafana plugin can use <db_name>.<table_name> in the SQL command to specify the database name.
:::
Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.

查询每台 TDengine 服务器指定间隔系统内存平均使用量如下.
The following example queries the average system memory usage for the specified interval on each server.

@@ -217,7 +222,7 @@ You can install TDinsight dashboard in data source configuration page (like `htt


A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167)) 。 Check the [TDinsight User Manual](/reference/tdinsight/) for the details.
A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). Check the [TDinsight User Manual](/reference/tdinsight/) for the details.
For more dashboards using the TDengine data source, [search here in Grafana](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a partial list:
@@ -47,7 +47,7 @@ Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !

### Edit SQL fields
Copy the SQL below and paste it into the SQL edit area:

```sql
SELECT
@@ -76,7 +76,8 @@ Select "WebHook" and fill in the request URL as the address and port of the serv

### Edit "action"
Edit the resource configuration to add the key/value pair for Authorization. If you use the default TDengine username and password, then the value of the key Authorization is:

```
Basic cm9vdDp0YW9zZGF0YQ==
```
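
For reference, this token is simply the standard HTTP Basic encoding of `<username>:<password>`; a short Python sketch:

```python
# Derive the Authorization header value for the default credentials root:taosdata.
import base64

token = base64.b64encode(b"root:taosdata").decode()
print(f"Basic {token}")  # -> Basic cm9vdDp0YW9zZGF0YQ==
```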
@@ -10,7 +10,7 @@ TDengine is a high-performance, scalable time-series database that supports SQL.

The TDengine team immediately saw the benefits of using TDengine to process time-series data and Data Studio to analyze it, and they got to work to create a connector for Data Studio.
With the release of the TDengine connector in Data Studio, you can now get even more out of your data. To obtain the connector, first go to the Data Studio Connector Gallery, click Connect to Data, and search for “TDengine”.
With the release of the TDengine connector in Data Studio, you can now get even more out of your data. To obtain the connector, first go to the Data Studio Connector Gallery, click Connect to Data, and search for "TDengine".

@@ -30,8 +30,8 @@ After the connection is established, you can use Data Studio to process your dat


In Data Studio, TDengine timestamps and tags are considered dimensions, and all other items are considered metrics. You can create all kinds of custom charts with your data – some examples are shown below.
In Data Studio, TDengine timestamps and tags are considered dimensions, and all other items are considered metrics. You can create all kinds of custom charts with your data - some examples are shown below.

With the ability to process petabytes of data per day and provide monitoring and alerting in real time, TDengine is a great solution for time-series data management. Now, with the Data Studio connector, we’re sure you’ll be able to gain new insights and obtain even more value from your data.
With the ability to process petabytes of data per day and provide monitoring and alerting in real time, TDengine is a great solution for time-series data management. Now, with the Data Studio connector, we're sure you'll be able to gain new insights and obtain even more value from your data.
@@ -26,9 +26,9 @@ A complete TDengine system runs on one or more physical nodes. Logically, a comp

**Management node (mnode)**: A virtual logical unit (M in the figure) responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes. At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 3) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). The mnode group adopts the RAFT protocol to guarantee high data availability and reliability. Any data operation can only be performed through the Leader in the RAFT group. The first mnode in the mnode RAFT group is created automatically when the first dnode of the cluster is deployed. The other two follower mnodes need to be created through SQL commands in the TDengine CLI. There can be at most one mnode in a single dnode, and the mnode is identified by the EP of the dnode where it's located. Dnodes communicate with each other to automatically get the EPs of all mnodes.
**Computation node (qnode)**: A virtual logical unit (Q in the figure) responsible for executing query and computing tasks, including the `show` commands based on system built-in tables. Multiple qnodes can be configured in a TDengine cluster to share the query and computing tasks. A qnode is not coupled with a specific database, which means each qnode can execute query tasks for multiple databases in parallel. There can be at most one qnode in a single dnode, and the qnode is identified by the EP of the dnode. The TDengine client driver can get the list of qnodes through communication with the mnode. If there is no qnode available in the system, query and computing tasks are executed by vnodes. When a query task is executed, one or more qnodes may be scheduled by the scheduler, according to the execution plan, to execute the task. A qnode can get data from a vnode and send execution results to other qnodes for further processing. By introducing qnodes, TDengine achieves the separation of storage and computing.
**Stream Processing node (snode)**: A virtual logical unit (S in the figure) responsible for stream processing tasks. Multiple snodes can be configured in a TDengine cluster to share the burden of stream processing tasks. An snode is not coupled with a specific stream, which means a single snode can execute the tasks of multiple streams. There can be at most one snode in a single dnode; it's identified by the EP of the dnode. The mnode schedules available snodes to perform stream processing tasks. If there is no snode available in the system, stream processing tasks are executed in vnodes.
**Virtual node group (VGroup)**: Vnodes on different data nodes can form a virtual node group to ensure the high availability of the system. The virtual node group is managed using the RAFT protocol. Write operations can only be performed on the leader vnode, and are then replicated to follower vnodes, thus ensuring that each single replica of data is copied on multiple physical nodes. The number of virtual nodes in a vgroup equals the number of data replicas. If the number of replicas of a DB is N, the system must have at least N data nodes. The number of replicas can be specified by the parameter `replica` when creating a DB; the default is 1. Using the multiple replication feature of TDengine, the same high data reliability can be achieved without the need for expensive storage devices such as disk arrays. Virtual node groups are created and managed by the management node, which assigns a cluster-unique ID, the VGroup ID, to each vgroup. Virtual nodes with the same vnode group ID belong to the same vgroup. If `replica` is set to 1, it means no data replication. The number of replicas of a database can be dynamically changed to 3 for high data reliability. Even if a virtual node group is deleted, its ID will not be reused.
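
As a hedged sketch (not from the original document) of how these logical units are typically created, assuming TDengine 3.0 SQL and the `taosrest` Python module; the dnode IDs and database name are illustrative:

```python
# Illustrative sketch: create additional mnodes/qnodes/snodes and a 3-replica
# database via SQL. Dnode IDs 2 and 3 are assumed to already exist in the cluster.
import taosrest

conn = taosrest.connect(url="http://localhost:6041", user="root", password="taosdata")
conn.execute("CREATE MNODE ON DNODE 2")         # follower mnode (up to 3 mnodes per cluster)
conn.execute("CREATE MNODE ON DNODE 3")
conn.execute("CREATE QNODE ON DNODE 2")         # at most one qnode per dnode
conn.execute("CREATE SNODE ON DNODE 2")         # at most one snode per dnode
conn.execute("CREATE DATABASE demo REPLICA 3")  # each vgroup in demo gets 3 vnodes
```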
@@ -59,7 +59,7 @@ After obtaining the mnode EP list, the data node initiates the connection. It wi

- Step 1: Connect to an existing working data node using the TDengine CLI, and then add the End Point of the new data node with the command "create dnode"
- Step 2: In the system configuration parameter file `taos.cfg` of the new data node, set the `firstEp` and `secondEp` parameters to the EP of any two data nodes in the existing cluster. If there is only one existing data node in the system, skip parameter `secondEp`. Please refer to the user tutorial for detailed steps. In this way, the cluster will be established step by step.
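
A short, hedged sketch of these two steps (the host names are placeholders; the `taosrest` usage assumes the TDengine Python connector):

```python
# Step 1: register a new data node from any node already in the cluster.
import taosrest

conn = taosrest.connect(url="http://existing-node:6041", user="root", password="taosdata")
conn.execute('CREATE DNODE "new-node:6030"')  # End Point of the new data node

# Step 2 is then done in the NEW node's taos.cfg, for example:
#   firstEp   existing-node:6030
#   secondEp  another-node:6030
```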
**Redirection**: Regardless of dnode or TAOSC, the connection to the mnode is initiated first. The mnode is automatically created and maintained by the system, so the user does not know which dnode is running the mnode. TDengine only requires a connection to any working dnode in the system. Because any running dnode maintains the currently running mnode EP List, when receiving a connecting request from the newly started dnode or TAOSC, if it’s not an mnode itself, it will reply to the connection initiator with the mnode EP List. After receiving this list, TAOSC or the newly started dnode will try to establish the connection again with mnode. When the mnode EP List changes, each data node quickly obtains the latest list and notifies TAOSC through messaging interaction among nodes.
**Redirection**: Regardless of dnode or TAOSC, the connection to the mnode is initiated first. The mnode is automatically created and maintained by the system, so the user does not know which dnode is running the mnode. TDengine only requires a connection to any working dnode in the system. Because any running dnode maintains the currently running mnode EP List, when receiving a connecting request from the newly started dnode or TAOSC, if it's not an mnode itself, it will reply to the connection initiator with the mnode EP List. After receiving this list, TAOSC or the newly started dnode will try to establish the connection again with mnode. When the mnode EP List changes, each data node quickly obtains the latest list and notifies TAOSC through messaging interaction among nodes.
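
The redirection flow can be pictured with a small conceptual sketch (illustrative pseudologic, not TDengine source; `probe` is a duck-typed stand-in for the real transport):

```python
# Conceptual sketch of redirection: ask any working dnode; if it is not an
# mnode, it answers with the current mnode EP list and the caller retries.
def locate_mnode(entry_ep, probe):
    pending, seen = [entry_ep], set()
    while pending:
        ep = pending.pop(0)
        if ep in seen:
            continue
        seen.add(ep)
        reply = probe(ep)                   # dict: {"is_mnode": bool, "mnode_eps": [...]}
        if reply["is_mnode"]:
            return ep                       # reached an mnode directly
        pending.extend(reply["mnode_eps"])  # redirected: retry with the EP list
    raise ConnectionError("no mnode reachable")
```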
### A Typical Data Writing Process
@@ -107,7 +107,7 @@ For large-scale data management, to achieve scale-out, it is generally necessary

VNode (Virtual Data Node) is responsible for providing writing, query and computing functions for collected time-series data. To facilitate load balancing, data recovery and support heterogeneous environments, TDengine splits a data node into multiple vnodes according to its computing and storage resources. The management of these vnodes is done automatically by TDengine and is completely transparent to the application.
For a single data collection point, regardless of the amount of data, a vnode (or vnode group, if the number of replicas is greater than 1) has enough computing resource and storage resource to process (if a 16-byte record is generated per second, the original data generated in one year will be less than 0.5 G). So TDengine stores all the data of a table (a data collection point) in one vnode instead of distributing the data to two or more dnodes. Moreover, a vnode can store data from multiple data collection points (tables), and the upper limit of the tables’ quantity for a vnode is one million. By design, all tables in a vnode belong to the same DB. On a data node, unless specially configured, the number of vnodes owned by a DB will not exceed the number of system cores.
For a single data collection point, regardless of the amount of data, a vnode (or vnode group, if the number of replicas is greater than 1) has enough computing resource and storage resource to process (if a 16-byte record is generated per second, the original data generated in one year will be less than 0.5 G). So TDengine stores all the data of a table (a data collection point) in one vnode instead of distributing the data to two or more dnodes. Moreover, a vnode can store data from multiple data collection points (tables), and the upper limit of the tables' quantity for a vnode is one million. By design, all tables in a vnode belong to the same DB. On a data node, unless specially configured, the number of vnodes owned by a DB will not exceed the number of system cores.
When creating a DB, the system does not allocate resources immediately. However, when creating a table, the system will check if there is an allocated vnode with free tablespace. If so, the table will be created in the vacant vnode immediately. If not, the system will create a new vnode on a dnode from the cluster according to the current workload, and then create the table there. If there are multiple replicas of a DB, the system creates not just one vnode but a vgroup (virtual data node group). The system itself imposes no limit on the number of vnodes; it is limited only by the computing and storage resources of the physical nodes.
@@ -132,9 +132,9 @@ Leader Vnode uses a writing process as follows:

<center> Figure 3: TDengine Leader writing process </center>
1. Leader vnode receives the application data insertion request, verifies, and moves to next step;
2. Leader vnode will write the original request packet into database log file WAL. If the database configuration parameter `“wal_level”` is set to 1, vnode doesn't invoked fsync. If `wal_level` is set to 2, fsync is invoked according to another database parameter `wal_fsync_period`.
2. Leader vnode will write the original request packet into the database log file WAL. If the database configuration parameter `"wal_level"` is set to 1, vnode doesn't invoke fsync. If `wal_level` is set to 2, fsync is invoked according to another database parameter `wal_fsync_period`.
3. If there are multiple replicas, the leader vnode will forward data packet to follower vnodes in the same virtual node group, and the forwarded packet has a version number with data;
4. Leader vnode Writes the data into memory and add the record to “skip list”;
4. Leader vnode writes the data into memory and adds the record to "skip list";
5. Leader vnode returns a confirmation message to the application, indicating a successful write.
6. If any of Step 2, 3 or 4 fails, the error is returned directly to the application.
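
For illustration, the write path above can be condensed into the following sketch (simplified pseudologic, not TDengine source; `wal`, `replicas`, and `memtable` are duck-typed stand-ins for the real components):

```python
# Conceptual sketch of steps 1-6 of the leader vnode write path.
def leader_write(request, wal, memtable, replicas, wal_level, wal_fsync_period=3000):
    if not request.is_valid():                       # step 1: verify the request
        raise ValueError("invalid insert request")   # step 6: errors go straight back
    wal.append(request.packet)                       # step 2: log the raw packet in the WAL
    if wal_level == 2:
        wal.fsync(period_ms=wal_fsync_period)        # fsync per wal_fsync_period
    version = wal.version + 1
    for follower in replicas:                        # step 3: forward with a version number
        follower.replicate(request.packet, version)
    memtable.insert(request.rows)                    # step 4: write to memory ("skip list")
    return "ok"                                      # step 5: confirm success to the app
```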
@@ -148,7 +148,7 @@ For a follower vnode, the write process as follows:

1. Follower vnode receives a data insertion request forwarded by Leader vnode;
2. The behavior regarding `wal_level` and `wal_fsync_period` in a follower vnode is the same as in the leader vnode.
3. Write into memory and add the record to “skip list”.
3. Write into memory and add the record to "skip list".
Compared with Leader vnode, follower vnode has no forwarding or reply confirmation step. But writing into memory and WAL is exactly the same.
@@ -156,7 +156,7 @@ Compared with Leader vnode, follower vnode has no forwarding or reply confirmati

Vnode maintains a version number. When memory data is persisted, the version number is also persisted. For each data update operation, whether it is time-series data or metadata, this version number will be increased by one.
When a vnode starts, its role (leader, follower) is uncertain, and the data is in an unsynchronized state. It’s necessary to establish TCP connections with other vnodes in the virtual node group and exchange status, including version and its own role. Through the exchange, the system implements a leader-selection process according to standard RAFT protocol.
When a vnode starts, its role (leader, follower) is uncertain, and the data is in an unsynchronized state. It's necessary to establish TCP connections with other vnodes in the virtual node group and exchange status, including version and its own role. Through the exchange, the system implements a leader-selection process according to standard RAFT protocol.
### Synchronous Replication
@@ -192,7 +192,7 @@ When data is written to disk, the system decides whether to compress the data ba

### Tiered Storage
By default, TDengine saves all data in /var/lib/taos directory, and the data files of each vnode are saved in a different directory under this directory. In order to expand the storage space, minimize the bottleneck of file reading and improve the data throughput rate, TDengine can configure the system parameter “dataDir” to allow multiple mounted hard disks to be used by system at the same time. In addition, TDengine also provides the function of tiered data storage, i.e. storage on different storage media according to the time stamps of data files. For example, the latest data is stored on SSD, the data older than a week is stored on local hard disk, and data older than four weeks is stored on network storage device. This reduces storage costs and ensures efficient data access. The movement of data on different storage media is automatically done by the system and is completely transparent to applications. Tiered storage of data is also configured through the system parameter “dataDir”.
By default, TDengine saves all data in /var/lib/taos directory, and the data files of each vnode are saved in a different directory under this directory. In order to expand the storage space, minimize the bottleneck of file reading and improve the data throughput rate, TDengine can configure the system parameter "dataDir" to allow multiple mounted hard disks to be used by system at the same time. In addition, TDengine also provides the function of tiered data storage, i.e. storage on different storage media according to the time stamps of data files. For example, the latest data is stored on SSD, the data older than a week is stored on local hard disk, and data older than four weeks is stored on network storage device. This reduces storage costs and ensures efficient data access. The movement of data on different storage media is automatically done by the system and is completely transparent to applications. Tiered storage of data is also configured through the system parameter "dataDir".
dataDir format is as follows:
@@ -202,7 +202,7 @@ dataDir data_path [tier_level]

Where data_path is the folder path of the mount point and tier_level is the media storage tier. The higher the storage tier, the older the data files. Multiple hard disks can be mounted at the same storage tier, and data files on the same storage tier are distributed across all hard disks within the tier. TDengine supports up to 3 tiers of storage, so tier_level values are 0, 1, and 2. When configuring dataDir, there must be exactly one mount path without a tier_level, which is called the special mount disk (path). The mount path defaults to level 0 storage media and contains special file links, which cannot be removed; otherwise it will have a devastating impact on the written data.
Suppose there is a physical node with six mountable hard disks/mnt/disk1,/mnt/disk2, …,/mnt/disk6, where disk1 and disk2 need to be designated as level 0 storage media, disk3 and disk4 are level 1 storage media, and disk5 and disk6 are level 2 storage media. Disk1 is a special mount disk, you can configure it in/etc/taos/taos.cfg as follows:
Suppose there is a physical node with six mountable hard disks /mnt/disk1, /mnt/disk2, ..., /mnt/disk6, where disk1 and disk2 need to be designated as level 0 storage media, disk3 and disk4 are level 1 storage media, and disk5 and disk6 are level 2 storage media. Disk1 is the special mount disk; you can configure it in /etc/taos/taos.cfg as follows:
```
dataDir /mnt/disk1/taos
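# A hedged completion of this example (not in the original excerpt), following
# the prose above: disk2 at tier 0, disk3/disk4 at tier 1, disk5/disk6 at tier 2.
# /mnt/disk1/taos above is the special mount (tier 0, no tier_level given).
dataDir /mnt/disk2/taos 0
dataDir /mnt/disk3/taos 1
dataDir /mnt/disk4/taos 1
dataDir /mnt/disk5/taos 2
dataDir /mnt/disk6/taos 2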
@@ -200,7 +200,7 @@ After migrating via DataX, we found that we can significantly improve the effici

### 2. Manual data migration
Suppose you need to use the multi-value model for data writing. In that case, you need to develop a tool to export data from OpenTSDB, confirm which timelines can be merged and imported into the same timeline, and then pass the time to import simultaneously through the SQL statement—written to the database.
Suppose you need to use the multi-value model for data writing. In that case, you need to develop a tool to export data from OpenTSDB, confirm which timelines can be merged and imported into the same timeline, and then write the merged timelines to the database through SQL INSERT statements.
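
A minimal, hedged sketch of such a tool (not an official utility; the export format and child-table naming scheme are illustrative assumptions):

```python
# Read an OpenTSDB-style JSON export, merge points that share a metric and tag
# set into one timeline, and emit one INSERT statement per timeline.
import json
from collections import defaultdict

def to_inserts(export_path):
    timelines = defaultdict(list)
    with open(export_path) as f:
        for p in json.load(f):  # [{"metric": ..., "timestamp": ..., "value": ..., "tags": {...}}, ...]
            key = (p["metric"], tuple(sorted(p["tags"].items())))
            timelines[key].append((p["timestamp"], p["value"]))
    for (metric, tags), rows in timelines.items():
        table = "_".join([metric] + [v for _, v in tags])  # assumed child-table naming
        values = " ".join(f"({ts}, {val})" for ts, val in sorted(rows))
        yield f"INSERT INTO {table} VALUES {values};"

for stmt in to_inserts("opentsdb_export.json"):
    print(stmt)
```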
Manual migration of data requires attention to the following two issues:
@@ -258,7 +258,7 @@ Equivalent function: apercentile

Example:

```sql
Select apercentile(col1, 50, “t-digest”) from table_name
select apercentile(col1, 50, "t-digest") from table_name
```

Remark:
@@ -161,7 +161,7 @@ Query OK, 6 rows in database (0.005515s)

:::note

1. The example code above works with connectors using either the REST connection or the native connection.
2. The only thing to note is that because the REST interface is stateless, the `use db` statement cannot be used to switch databases.
2. The only thing to note is that because the REST interface is stateless, the `use db` statement cannot be used to switch databases. Besides specifying the database in the REST parameters, you can also use <db_name>.<table_name> in SQL statements to specify the database.

:::
@@ -200,6 +200,12 @@ docker run -d \

- Group by column name(s): `group by` or `partition by` column names separated by half-width commas. For a `group by` or `partition by` query statement, setting the `Group by` column displays multi-dimensional data. For example: with the INPUT SQL `select _wstart as ts, avg(mem_system), dnode_ep from log.dnodes_info where ts>=$from and ts<=$to partition by dnode_ep interval($interval)` and the Group by column name set to `dnode_ep`, data can be displayed per `dnode_ep`.
- Format to: the legend format for multi-dimensional data in Group by or Partition by scenarios. For example, with the INPUT SQL above, setting Format to to `mem_system_{{dnode_ep}}` displays the legend names as the formatted column names.

:::note

Since the REST interface is stateless, the `use db` statement cannot be used to switch databases. In the Grafana plugin, <db_name>.<table_name> can be used in SQL statements to specify the database.

:::

Following the default prompt, query the average system memory usage for the specified interval on the server where the current TDengine deployment is located, as follows:

