other:merge 3.0

Haojun Liao 2022-09-19 10:29:39 +08:00
commit e7a1f34808
65 changed files with 822 additions and 391 deletions

View File

@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER) IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER}) SET(TD_VER_NUMBER ${VERNUMBER})
ELSE () ELSE ()
SET(TD_VER_NUMBER "3.0.1.0") SET(TD_VER_NUMBER "3.0.1.1")
ENDIF () ENDIF ()
IF (DEFINED VERCOMPATIBLE) IF (DEFINED VERCOMPATIBLE)

View File

@ -2,7 +2,7 @@
# taosadapter # taosadapter
ExternalProject_Add(taosadapter ExternalProject_Add(taosadapter
GIT_REPOSITORY https://github.com/taosdata/taosadapter.git GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
GIT_TAG 71e7ccf GIT_TAG 05fb2ff
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter" SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
BINARY_DIR "" BINARY_DIR ""
#BUILD_IN_SOURCE TRUE #BUILD_IN_SOURCE TRUE

View File

@ -2,7 +2,7 @@
# taos-tools # taos-tools
ExternalProject_Add(taos-tools ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 3588b3d GIT_TAG 125c77a
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools" SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR "" BINARY_DIR ""
#BUILD_IN_SOURCE TRUE #BUILD_IN_SOURCE TRUE

View File

@ -4,7 +4,7 @@ sidebar_label: Documentation Home
slug: / slug: /
--- ---
TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) time-series database optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators. TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators.
To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section. To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.
@ -22,6 +22,8 @@ If you want to know more about TDengine tools, the REST API, and connectors for
If you are very interested in the internal design of TDengine, please read the chapter [Inside TDengine](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully. If you are very interested in the internal design of TDengine, please read the chapter [Inside TDengine](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.
For a more general introduction to time-series databases, please read through [a series of articles](https://tdengine.com/tsdb/). To learn more about the competitive advantages of TDengine, please read through [a series of blogs](https://tdengine.com/tdengine/).
TDengine is an open-source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly. TDengine is an open-source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.
Together, we make a difference! Together, we make a difference!

View File

@ -3,7 +3,7 @@ title: Introduction
toc_max_heading_level: 2 toc_max_heading_level: 2
--- ---
TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/stream), [data subscription](../develop/tmq) and other functionalities to reduce the system complexity and cost of development and operation. TDengine is an [open source](https://tdengine.com/tdengine/open-source-time-series-database/), [high-performance](https://tdengine.com/tdengine/high-performance-time-series-database/), [cloud native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/stream), [data subscription](../develop/tmq) and other functionalities to reduce the system complexity and cost of development and operation.
This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine. This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
@ -43,7 +43,7 @@ For more details on features, please read through the entire documentation.
## Competitive Advantages ## Competitive Advantages
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other time series databases, with the following advantages. By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb), with the following advantages.
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression. - **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
@ -127,3 +127,8 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html) - [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html) - [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html) - [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
## Further Reading
- [Introduction to Time-Series Databases](https://tdengine.com/tsdb/)
- [Introduction to TDengine's competitive advantages](https://tdengine.com/tdengine/)

View File

@ -111,7 +111,7 @@ Note: TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the W
</Tabs> </Tabs>
:::info :::info
For information about TDengine releases, see [Release History](../../releases). For information about TDengine releases, see [Release History](../../releases/tdengine).
::: :::
:::note :::note

View File

@ -348,19 +348,15 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
:::info :::info
- Only one layer of nesting is allowed, that means no sub query is allowed within a sub query - The result of a nested query is returned as a virtual table used by the outer query. It's recommended to give an alias to this table for the convenience of using it in the outer query.
- The result set returned by the inner query will be used as a "virtual table" by the outer query. The "virtual table" can be renamed using `AS` keyword for easy reference in the outer query.
- Sub query is not allowed in continuous query.
- JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query. - JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query.
- UNION operation is not allowed in either inner query or outer query. - The features that can be used in the inner query are the same as those that can be used in a non-nested query.
- The functions that can be used in the inner query are the same as those that can be used in a non-nested query.
- `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query. - `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query.
- Compared to the non-nested query, the functionality that can be used in the outer query has the following restrictions: - Compared to the non-nested query, the functionality that can be used in the outer query has the following restrictions:
- Functions - Functions
- If the result set returned by the inner query doesn't contain timestamp column, then functions relying on timestamp can't be used in the outer query, like `TOP`, `BOTTOM`, `FIRST`, `LAST`, `DIFF`. - If the result set returned by the inner query doesn't contain a timestamp column, then functions relying on timestamp can't be used in the outer query, like INTERP, DERIVATIVE, IRATE, LAST_ROW, FIRST, LAST, TWA, STATEDURATION, TAIL, UNIQUE.
- Functions that need to scan the data twice can't be used in the outer query, like `STDDEV`, `PERCENTILE`. - If the result set returned by the inner query is not sorted in order by timestamp, then functions relying on data ordered by timestamp can't be used in the outer query, like LEASTSQUARES, ELAPSED, INTERP, DERIVATIVE, IRATE, TWA, DIFF, STATECOUNT, STATEDURATION, CSUM, MAVG, TAIL, UNIQUE.
- `IN` operator is not allowed in the outer query but can be used in the inner query. - Functions that need to scan the data twice can't be used in the outer query, like PERCENTILE.
- `GROUP BY` is not supported in the outer query.
::: :::
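As a quick illustration of these rules, here is a minimal sketch of a nested query; the table and column names (`d1001`, `voltage`) are hypothetical, the inner result set keeps its timestamp column, and the virtual table is aliased as `t` for use in the outer query:

```sql
-- inner query produces a virtual table; outer query aggregates over it
SELECT AVG(v) FROM (SELECT ts, voltage AS v FROM d1001 WHERE voltage > 200) t;
```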

View File

@ -126,7 +126,7 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause]; SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
``` ```
**Description**: The rounded down value of a specific field **Description**: The rounded down value of a specific field
**More explanations**: The restrictions are the same as those of the `CEIL` function. **More explanations**: The restrictions are the same as those of the `CEIL` function.
#### LOG #### LOG
@ -173,7 +173,7 @@ SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
``` ```
**Description**: The rounded value of a specific field. **Description**: The rounded value of a specific field.
**More explanations**: The restrictions are the same as those of the `CEIL` function. **More explanations**: The restrictions are the same as those of the `CEIL` function.
@ -434,7 +434,7 @@ SELECT TO_ISO8601(ts[, timezone]) FROM { tb_name | stb_name } [WHERE clause];
**More explanations**: **More explanations**:
- You can specify a time zone in the following format: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example, TO_ISO8601(1, "+00:00"). - You can specify a time zone in the following format: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example, TO_ISO8601(1, "+00:00").
- If the input is a UNIX timestamp, the precision of the returned value is determined by the digits of the input timestamp - If the input is a UNIX timestamp, the precision of the returned value is determined by the digits of the input timestamp
- If the input is a column of TIMESTAMP type, the precision of the returned value is the same as the precision set for the current database in use - If the input is a column of TIMESTAMP type, the precision of the returned value is the same as the precision set for the current database in use
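For instance, a small sketch of the timezone argument described above; the output comment assumes a millisecond-precision input and is illustrative rather than authoritative:

```sql
SELECT TO_ISO8601(1, "+00:00");
-- expected output (assuming millisecond precision): 1970-01-01T00:00:00.001+00:00
```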
@ -769,14 +769,14 @@ SELECT HISTOGRAM(field_name, bin_type, bin_description, normalized) FROM tb_nam
**Explanations** **Explanations**
- bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin". - bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin".
- bin_description: parameter to describe how to generate buckets; can be in the following JSON formats for each bin_type respectively: - bin_description: parameter to describe how to generate buckets; can be in the following JSON formats for each bin_type respectively:
- "user_input": "[1, 3, 5, 7]": - "user_input": "[1, 3, 5, 7]":
User specified bin values. User specified bin values.
- "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}" - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
"start" - bin starting point. "width" - bin offset. "count" - number of bins generated. "infinity" - whether to add-inf, infas start/end point in generated set of bins. "start" - bin starting point. "width" - bin offset. "count" - number of bins generated. "infinity" - whether to add-inf, infas start/end point in generated set of bins.
The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]. The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf].
- "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}" - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}"
"start" - bin starting point. "factor" - exponential factor of bin offset. "count" - number of bins generated. "infinity" - whether to add-inf, infas start/end point in generated range of bins. "start" - bin starting point. "factor" - exponential factor of bin offset. "count" - number of bins generated. "infinity" - whether to add-inf, infas start/end point in generated range of bins.
The above "linear_bin" descriptor generates a set of bins: [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]. The above "linear_bin" descriptor generates a set of bins: [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf].
@ -862,9 +862,9 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] RA
- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists, an interpolation value will be returned based on the `FILL` parameter. - `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists, an interpolation value will be returned based on the `FILL` parameter.
- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input. - The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input.
- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified. - The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter. - The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter.
- Interpolation is performed based on `FILL` parameter. - Interpolation is performed based on `FILL` parameter.
- `INTERP` can only be used to interpolate within a single timeline, so it must be used with `partition by tbname` when it's used on a STable. - `INTERP` can only be used to interpolate within a single timeline, so it must be used with `partition by tbname` when it's used on a STable.
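Putting these rules together, a minimal `INTERP` sketch on a hypothetical supertable `meters`; the time range and interval are illustrative only:

```sql
-- one linearly interpolated value every 10 minutes, per subtable
SELECT INTERP(current) FROM meters
  PARTITION BY tbname
  RANGE('2022-09-01 00:00:00', '2022-09-01 01:00:00')
  EVERY(10m)
  FILL(LINEAR);
```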
### LAST ### LAST
@ -917,7 +917,7 @@ SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
**Return value type**: Same as the data type of the column being operated upon **Return value type**: Same as the data type of the column being operated upon
**Applicable data types**: Numeric, Timestamp **Applicable data types**: Numeric
**Applicable table types**: standard tables and supertables **Applicable table types**: standard tables and supertables
@ -932,7 +932,7 @@ SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
**Return value type**: Same as the data type of the column being operated upon **Return value type**: Same as the data type of the column being operated upon
**Applicable data types**: Numeric, Timestamp **Applicable data types**: Numeric
**Applicable table types**: standard tables and supertables **Applicable table types**: standard tables and supertables
@ -968,7 +968,7 @@ SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**Applicable table types**: standard tables and supertables **Applicable table types**: standard tables and supertables
**More explanations**: **More explanations**:
This function cannot be used in expression calculation. This function cannot be used in expression calculation.
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline - Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
@ -1046,10 +1046,10 @@ SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Applicable table types**: standard tables and supertables **Applicable table types**: standard tables and supertables
**More explanations**: **More explanations**:
- Arithmetic operation can't be performed on the result of `csum` function - Arithmetic operation can't be performed on the result of `csum` function
- Can only be used with aggregate functions This function can be used with supertables and standard tables. - Can only be used with aggregate functions This function can be used with supertables and standard tables.
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline - Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
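A hedged sketch of `CSUM` that follows the constraints above, again assuming the hypothetical supertable `meters`:

```sql
-- cumulative sum computed separately on each subtable's timeline
SELECT CSUM(current) FROM meters PARTITION BY tbname;
```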
@ -1067,8 +1067,8 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER
**Applicable table types**: standard tables and supertables **Applicable table types**: standard tables and supertables
**More explanation**: **More explanation**:
- It can be used together with `PARTITION BY tbname` against a STable. - It can be used together with `PARTITION BY tbname` against a STable.
- It can be used together with a selected column. For example: select \_rowts, DERIVATIVE() from. - It can be used together with a selected column. For example: select \_rowts, DERIVATIVE() from.
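A brief sketch completing the truncated example above; the subtable name `d1001` and the arguments are hypothetical, with `DERIVATIVE` taking the column, a time interval, and the ignore_negative flag:

```sql
-- rate of change of current per 10-minute interval, keeping negative rates
SELECT _rowts, DERIVATIVE(current, 10m, 0) FROM d1001;
```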
@ -1086,7 +1086,7 @@ SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHER
**Applicable table types**: standard tables and supertables **Applicable table types**: standard tables and supertables
**More explanation**: **More explanation**:
- The number of result rows is the number of rows subtracted by one, no output for the first row - The number of result rows is the number of rows subtracted by one, no output for the first row
- It can be used together with a selected column. For example: select \_rowts, DIFF() from. - It can be used together with a selected column. For example: select \_rowts, DIFF() from.
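Similarly, a minimal `DIFF` sketch on the hypothetical subtable `d1001`:

```sql
-- row-to-row difference; the first row produces no output
SELECT _rowts, DIFF(current) FROM d1001;
```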
@ -1123,9 +1123,9 @@ SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**Applicable table types**: standard tables and supertables **Applicable table types**: standard tables and supertables
**More explanations**: **More explanations**:
- Arithmetic operation can't be performed on the result of `MAVG`. - Arithmetic operation can't be performed on the result of `MAVG`.
- Can only be used with data columns, can't be used with tags. - Can't be used with aggregate functions. - Can only be used with data columns, can't be used with tags. - Can't be used with aggregate functions.
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline - Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
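And a hedged sketch of a 5-point moving average computed per subtable, consistent with the rules above:

```sql
SELECT MAVG(current, 5) FROM meters PARTITION BY tbname;
```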

View File

@ -5,11 +5,11 @@ title: Time-Series Extensions
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL. As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.
These extensions include tag-partitioned queries and windowed queries. These extensions include partitioned queries and windowed queries.
## Tag-Partitioned Queries ## Partitioned Queries
When you query a supertable, you may need to partition the supertable by tag and perform additional operations on a specific partition. In this case, you can use the following SQL clause: When you query a supertable, you may need to partition the supertable by some dimensions and perform additional operations on a specific partition. In this case, you can use the following SQL clause:
```sql ```sql
PARTITION BY part_list PARTITION BY part_list
@ -17,22 +17,24 @@ PARTITION BY part_list
part_list can be any scalar expression, such as a column, constant, scalar function, or a combination of the preceding items. part_list can be any scalar expression, such as a column, constant, scalar function, or a combination of the preceding items.
A PARTITION BY clause with a tag is processed as follows: A PARTITION BY clause is processed as follows:
- The PARTITION BY clause must occur after the WHERE clause and cannot be used with a JOIN clause. - The PARTITION BY clause must occur after the WHERE clause.
- The PARTITION BY clause partitions the super table by the specified tag group, and the specified calculation is performed on each partition. The calculation performed is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause. - The PARTITION BY clause partitions the data according to the specified dimensions, then performs computation on each partition. The computation performed is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
- You can use PARTITION BY together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value: - The PARTITION BY clause can be used together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value:
```sql ```sql
select max(current) from meters partition by location interval(10m) select max(current) from meters partition by location interval(10m)
``` ```
The most common usage of PARTITION BY is partitioning the data in subtables by tags and then performing computation when querying data in a supertable. More specifically, `PARTITION BY TBNAME` partitions the data of each subtable into a single timeline, which facilitates statistical analysis in many time-series processing use cases.
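For instance, a minimal sketch of the `PARTITION BY TBNAME` pattern described above, computing a windowed aggregate per subtable on the `meters` supertable from the earlier example:

```sql
-- 10-minute maximum current, computed on each subtable's own timeline
SELECT MAX(current) FROM meters PARTITION BY tbname INTERVAL(10m);
```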
## Windowed Queries ## Windowed Queries
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window. The query syntax is as follows: Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window. The query syntax is as follows:
```sql ```sql
SELECT function_list FROM tb_name SELECT select_list FROM tb_name
[WHERE where_condition] [WHERE where_condition]
[SESSION(ts_col, tol_val)] [SESSION(ts_col, tol_val)]
[STATE_WINDOW(col)] [STATE_WINDOW(col)]
@ -42,15 +44,9 @@ SELECT function_list FROM tb_name
The following restrictions apply: The following restrictions apply:
### Restricted Functions
- Aggregate functions and select functions can be used in `function_list`, with each function having only one output. For example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF or arithmetic operations can't be used.
- `LAST_ROW` can't be used together with window aggregate.
- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
### Other Rules ### Other Rules
- The window clause must occur after the PARTITION BY clause and before the GROUP BY clause. It cannot be used with a GROUP BY clause. - The window clause must occur after the PARTITION BY clause. It cannot be used with a GROUP BY clause.
- SELECT clauses on windows can contain only the following expressions: - SELECT clauses on windows can contain only the following expressions:
- Constants - Constants
- Aggregate functions - Aggregate functions
@ -82,7 +78,7 @@ These pseudocolumns occur after the aggregation clause.
1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000. 1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
2. The result set is in ascending order of timestamp when you aggregate by time window. 2. The result set is in ascending order of timestamp when you aggregate by time window.
3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group. 3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `PARTITION BY` is not used in the query, the result set will be returned in strict ascending order of timestamp; otherwise the result set will be returned in the order of ascending timestamp in each group.
::: :::
@ -112,9 +108,9 @@ When using time windows, note the following:
Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side. Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side.
- The result set is in ascending order of timestamp when you aggregate by time window. - The result set is in ascending order of timestamp when you aggregate by time window.
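As an illustration of these notes, a hedged sliding-window sketch over the table `temp_tb_1` used below; every 5 minutes it computes the 10-minute average temperature and fills empty windows with the previous value (the time range is illustrative):

```sql
SELECT _wstart, AVG(temperature) FROM temp_tb_1
  WHERE ts >= '2019-04-28 00:00:00' AND ts < '2019-04-29 00:00:00'
  INTERVAL(10m) SLIDING(5m) FILL(PREV);
```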
### Status Window ### State Window
In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. Status window is not applicable to STable for now. In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a state window. Once the status changes, the state window closes. As shown in the following figure, there are two state windows according to status, [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12].
![TDengine Database Status Window](./timewindow-3.webp) ![TDengine Database Status Window](./timewindow-3.webp)
@ -124,13 +120,19 @@ In case of using integer, bool, or string to represent the status of a device at
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
``` ```
If you only care about the state windows in which the status is 2, you can use a nested query. For example:
```
SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 STATE_WINDOW(status)) t WHERE status = 2;
```
### Session Window ### Session Window
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds. The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
![TDengine Database Session Window](./timewindow-2.webp) ![TDengine Database Session Window](./timewindow-2.webp)
If the time interval between two continuous rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now. If the time interval between two continuous rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically.
```
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
```

View File

@ -5,7 +5,9 @@ title: Reserved Keywords
## Keyword List ## Keyword List
There are about 200 keywords reserved by TDengine; they can't be used as the name of a database, STable or table in upper case, lower case or mixed case. The following list shows all reserved keywords: There are more than 200 keywords reserved by TDengine; they can't be used as the name of a database, table, STable, subtable, column or tag in upper case, lower case or mixed case. If you need to use these keywords, use the symbol `` ` `` to enclose the keywords, e.g. \`ADD\`.
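For example, a hypothetical sketch showing how backticks make a reserved word usable as a column name (`DELETE` appears in the list below):

```sql
CREATE TABLE tb1 (ts TIMESTAMP, `delete` INT);
```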
The following list shows all reserved keywords:
### A ### A
@ -14,15 +16,20 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- ACCOUNTS - ACCOUNTS
- ADD - ADD
- AFTER - AFTER
- AGGREGATE
- ALL - ALL
- ALTER - ALTER
- ANALYZE
- AND - AND
- APPS
- AS - AS
- ASC - ASC
- AT_ONCE
- ATTACH - ATTACH
### B ### B
- BALANCE
- BEFORE - BEFORE
- BEGIN - BEGIN
- BETWEEN - BETWEEN
@ -32,19 +39,27 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- BITNOT - BITNOT
- BITOR - BITOR
- BLOCKS - BLOCKS
- BNODE
- BNODES
- BOOL - BOOL
- BUFFER
- BUFSIZE
- BY - BY
### C ### C
- CACHE - CACHE
- CACHELAST - CACHEMODEL
- CACHESIZE
- CASCADE - CASCADE
- CAST
- CHANGE - CHANGE
- CLIENT_VERSION
- CLUSTER - CLUSTER
- COLON - COLON
- COLUMN - COLUMN
- COMMA - COMMA
- COMMENT
- COMP - COMP
- COMPACT - COMPACT
- CONCAT - CONCAT
@ -52,15 +67,18 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- CONNECTION - CONNECTION
- CONNECTIONS - CONNECTIONS
- CONNS - CONNS
- CONSUMER
- CONSUMERS
- CONTAINS
- COPY - COPY
- COUNT
- CREATE - CREATE
- CTIME - CURRENT_USER
### D ### D
- DATABASE - DATABASE
- DATABASES - DATABASES
- DAYS
- DBS - DBS
- DEFERRED - DEFERRED
- DELETE - DELETE
@ -69,18 +87,23 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- DESCRIBE - DESCRIBE
- DETACH - DETACH
- DISTINCT - DISTINCT
- DISTRIBUTED
- DIVIDE - DIVIDE
- DNODE - DNODE
- DNODES - DNODES
- DOT - DOT
- DOUBLE - DOUBLE
- DROP - DROP
- DURATION
### E ### E
- EACH
- ENABLE
- END - END
- EQ - EVERY
- EXISTS - EXISTS
- EXPIRED
- EXPLAIN - EXPLAIN
### F ### F
@ -88,18 +111,20 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- FAIL - FAIL
- FILE - FILE
- FILL - FILL
- FIRST
- FLOAT - FLOAT
- FLUSH
- FOR - FOR
- FROM - FROM
- FSYNC - FUNCTION
- FUNCTIONS
### G ### G
- GE
- GLOB - GLOB
- GRANT
- GRANTS - GRANTS
- GROUP - GROUP
- GT
### H ### H
@ -110,15 +135,18 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- ID - ID
- IF - IF
- IGNORE - IGNORE
- IMMEDIA - IMMEDIATE
- IMPORT - IMPORT
- IN - IN
- INITIAL - INDEX
- INDEXES
- INITIALLY
- INNER
- INSERT - INSERT
- INSTEAD - INSTEAD
- INT - INT
- INTEGER - INTEGER
- INTERVA - INTERVAL
- INTO - INTO
- IS - IS
- ISNULL - ISNULL
@ -126,6 +154,7 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### J ### J
- JOIN - JOIN
- JSON
### K ### K
@ -135,46 +164,57 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### L ### L
- LE - LAST
- LAST_ROW
- LICENCES
- LIKE - LIKE
- LIMIT - LIMIT
- LINEAR - LINEAR
- LOCAL - LOCAL
- LP
- LSHIFT
- LT
### M ### M
- MATCH - MATCH
- MAX_DELAY
- MAXROWS - MAXROWS
- MERGE
- META
- MINROWS - MINROWS
- MINUS - MINUS
- MNODE
- MNODES - MNODES
- MODIFY - MODIFY
- MODULES - MODULES
### N ### N
- NE - NCHAR
- NEXT
- NMATCH
- NONE - NONE
- NOT - NOT
- NOTNULL - NOTNULL
- NOW - NOW
- NULL - NULL
- NULLS
### O ### O
- OF - OF
- OFFSET - OFFSET
- ON
- OR - OR
- ORDER - ORDER
- OUTPUTTYPE
### P ### P
- PARTITION - PAGES
- PAGESIZE
- PARTITIONS
- PASS - PASS
- PLUS - PLUS
- PORT
- PPS - PPS
- PRECISION - PRECISION
- PREV - PREV
@ -182,47 +222,63 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### Q ### Q
- QNODE
- QNODES
- QTIME - QTIME
- QUERIE - QUERIES
- QUERY - QUERY
- QUORUM
### R ### R
- RAISE - RAISE
- REM - RANGE
- RATIO
- READ
- REDISTRIBUTE
- RENAME
- REPLACE - REPLACE
- REPLICA - REPLICA
- RESET - RESET
- RESTRIC - RESTRICT
- RETENTIONS
- REVOKE
- ROLLUP
- ROW - ROW
- RP
- RSHIFT
### S ### S
- SCHEMALESS
- SCORES - SCORES
- SELECT - SELECT
- SEMI - SEMI
- SERVER_STATUS
- SERVER_VERSION
- SESSION - SESSION
- SET - SET
- SHOW - SHOW
- SLASH - SINGLE_STABLE
- SLIDING - SLIDING
- SLIMIT - SLIMIT
- SMALLIN - SMA
- SMALLINT
- SNODE
- SNODES
- SOFFSET - SOFFSET
- STable - SPLIT
- STableS - STABLE
- STABLES
- STAR - STAR
- STATE - STATE
- STATEMEN - STATE_WINDOW
- STATE_WI - STATEMENT
- STORAGE - STORAGE
- STREAM - STREAM
- STREAMS - STREAMS
- STRICT
- STRING - STRING
- SUBSCRIPTIONS
- SYNCDB - SYNCDB
- SYSINFO
### T ### T
@ -233,20 +289,24 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- TBNAME - TBNAME
- TIMES - TIMES
- TIMESTAMP - TIMESTAMP
- TIMEZONE
- TINYINT - TINYINT
- TO
- TODAY
- TOPIC - TOPIC
- TOPICS - TOPICS
- TRANSACTION
- TRANSACTIONS
- TRIGGER - TRIGGER
- TRIM
- TSERIES - TSERIES
- TTL - TTL
### U ### U
- UMINUS
- UNION - UNION
- UNSIGNED - UNSIGNED
- UPDATE - UPDATE
- UPLUS
- USE - USE
- USER - USER
- USERS - USERS
@ -256,8 +316,11 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- VALUE - VALUE
- VALUES - VALUES
- VARCHAR
- VARIABLE - VARIABLE
- VARIABLES - VARIABLES
- VERBOSE
- VGROUP
- VGROUPS - VGROUPS
- VIEW - VIEW
- VNODES - VNODES
@ -265,14 +328,25 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### W ### W
- WAL - WAL
- WAL_FSYNC_PERIOD
- WAL_LEVEL
- WAL_RETENTION_PERIOD
- WAL_RETENTION_SIZE
- WAL_ROLL_PERIOD
- WAL_SEGMENT_SIZE
- WATERMARK
- WHERE - WHERE
- WINDOW_CLOSE
- WITH
- WRITE
### \_ ### \_
- \_C0 - \_C0
- \_QSTART
- \_QSTOP
- \_QDURATION - \_QDURATION
- \_WSTART - \_QEND
- \_WSTOP - \_QSTART
- \_ROWTS
- \_WDURATION - \_WDURATION
- \_WEND
- \_WSTART

View File

@ -9,15 +9,54 @@ This document describes how to manage permissions in TDengine.
## Create a User ## Create a User
```sql ```sql
CREATE USER use_name PASS 'password'; CREATE USER user_name PASS 'password' [SYSINFO {1|0}];
``` ```
This statement creates a user account. This statement creates a user account.
The maximum length of use_name is 23 bytes. The maximum length of user_name is 23 bytes.
The maximum length of password is 128 bytes. The password can include letters, digits, and special characters excluding single quotation marks, double quotation marks, backticks, backslashes, and spaces. The password cannot be empty. The maximum length of password is 128 bytes. The password can include letters, digits, and special characters excluding single quotation marks, double quotation marks, backticks, backslashes, and spaces. The password cannot be empty.
`SYSINFO` indicates whether the user is allowed to view system information. `1` means allowed, `0` means not allowed. System information includes the server configuration and information about dnodes, vnodes, and storage. The default value is `1`.
For example, the following statement creates a user whose password is `123456` and who is able to view system information.
```sql
taos> create user test pass '123456' sysinfo 1;
Query OK, 0 of 0 rows affected (0.001254s)
```
## View Users
To show the users in the system, please use:
```sql
SHOW USERS;
```
This is an example:
```sql
taos> show users;
name | super | enable | sysinfo | create_time |
================================================================================
test | 0 | 1 | 1 | 2022-08-29 15:10:27.315 |
root | 1 | 1 | 1 | 2022-08-29 15:03:34.710 |
Query OK, 2 rows in database (0.001657s)
```
Alternatively, you can get the user information by querying a built-in table, INFORMATION_SCHEMA.INS_USERS. For example:
```sql
taos> select * from information_schema.ins_users;
name | super | enable | sysinfo | create_time |
================================================================================
test | 0 | 1 | 1 | 2022-08-29 15:10:27.315 |
root | 1 | 1 | 1 | 2022-08-29 15:03:34.710 |
Query OK, 2 rows in database (0.001953s)
```
## Delete a User ## Delete a User
```sql ```sql
@ -40,6 +79,13 @@ alter_user_clause: {
- ENABLE: Specify whether the user is enabled or disabled. 1 indicates enabled and 0 indicates disabled. - ENABLE: Specify whether the user is enabled or disabled. 1 indicates enabled and 0 indicates disabled.
- SYSINFO: Specify whether the user can query system information. 1 indicates that the user can query system information and 0 indicates that the user cannot query system information. - SYSINFO: Specify whether the user can query system information. 1 indicates that the user can query system information and 0 indicates that the user cannot query system information.
For example, you can use the command below to disable the user `test`:
```sql
taos> alter user test enable 0;
Query OK, 0 of 0 rows affected (0.001160s)
```
## Grant Permissions ## Grant Permissions
@ -62,7 +108,7 @@ priv_level : {
} }
``` ```
Grant permissions to a user. Grant permissions to a user. This feature is only available in the enterprise edition.
Permissions are granted on the database level. You can grant read or write permissions. Permissions are granted on the database level. You can grant read or write permissions.
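For example, a hedged sketch granting read permission on a hypothetical database `test_db` to the user `test` created earlier:

```sql
GRANT READ ON test_db.* TO test;
```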
@ -92,4 +138,4 @@ priv_level : {
``` ```
Revoke permissions from a user. Revoke permissions from a user. This feature is only available in the enterprise edition.
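And the corresponding sketch revoking that permission:

```sql
REVOKE READ ON test_db.* FROM test;
```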

View File

@ -2,7 +2,7 @@
title: TDengine Monitoring title: TDengine Monitoring
--- ---
After TDengine is started, a database named `log` is created automatically to help with monitoring. Information that includes CPU, memory and disk usage, bandwidth, number of requests, disk I/O speed, slow queries, is written into the `log` database at a predefined interval. Additionally, some important system operations, like logon, create user, drop database, and alerts and warnings generated in TDengine are written into the `log` database too. A system operator can view the data in `log` database from TDengine CLI or from a web console. After TDengine is started, it automatically writes monitoring data including CPU, memory and disk usage, bandwidth, number of requests, disk I/O speed, slow queries, into a designated database at a predefined interval through taosKeeper. Additionally, some important system operations, like logon, create user, drop database, and alerts and warnings generated in TDengine are written into the `log` database too. A system operator can view the data in `log` database from TDengine CLI or from a web console.
The collection of the monitoring information is enabled by default, but can be disabled by parameter `monitor` in the configuration file. The collection of the monitoring information is enabled by default, but can be disabled by parameter `monitor` in the configuration file.
@ -10,7 +10,7 @@ The collection of the monitoring information is enabled by default, but can be d
TDinsight is a complete solution which uses the monitoring database `log` mentioned previously, and Grafana, to monitor a TDengine cluster. TDinsight is a complete solution which uses the monitoring database `log` mentioned previously, and Grafana, to monitor a TDengine cluster.
From version 2.3.3.0, more monitoring data has been added in the `log` database. Please refer to [TDinsight Grafana Dashboard](https://grafana.com/grafana/dashboards/15167) to learn more details about using TDinsight to monitor TDengine. Please refer to [TDinsight Grafana Dashboard](../../reference/tdinsight) to learn more details about using TDinsight to monitor TDengine.
A script `TDinsight.sh` is provided to deploy TDinsight automatically. A script `TDinsight.sh` is provided to deploy TDinsight automatically.
@ -30,31 +30,14 @@ Prepare
2. Grafana Alert Notification 2. Grafana Alert Notification
There are two ways to set up Grafana alert notification. You can use the command below to set up Grafana alert notification.
- An existing Grafana Notification Channel can be specified with parameter `-E`; the notifier uid of the channel can be obtained by `curl -u admin:admin localhost:3000/api/alert-notifications |jq` An existing Grafana Notification Channel can be specified with parameter `-E`; the notifier uid of the channel can be obtained by `curl -u admin:admin localhost:3000/api/alert-notifications |jq`
```bash ```bash
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid> sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid>
``` ```
- The AliCloud SMS alert built in TDengine data source plugin can be enabled with parameter `-s`, the parameters of enabling this plugin are listed below:
- `-I`: AliCloud SMS Key ID
- `-K`: AliCloud SMS Key Secret
- `-S`: AliCloud SMS Signature
- `-C`: SMS notification template
- `-T`: Input parameters in JSON format for the SMS notification template, for example`{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}`
- `-B`: List of mobile numbers to be notified
Below is an example of the full command using the AliCloud SMS alert.
```bash
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -s \
-I XXXXXXX -K XXXXXXXX -S taosdata -C SMS_1111111 -B 18900000000 \
-T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
```
Launch `TDinsight.sh` with the command above and restart Grafana, then open Dashboard `http://localhost:3000/d/tdinsight`. Launch `TDinsight.sh` with the command above and restart Grafana, then open Dashboard `http://localhost:3000/d/tdinsight`.
For more use cases and restrictions please refer to [TDinsight](/reference/tdinsight/). For more use cases and restrictions please refer to [TDinsight](/reference/tdinsight/).

View File

@ -4,7 +4,7 @@ import PkgListV3 from "/components/PkgListV3";
<PkgListV3 type={1} sys="Linux" /> <PkgListV3 type={1} sys="Linux" />
[All Downloads](../../releases) [All Downloads](../../releases/tdengine)
2. Unzip 2. Unzip

View File

@ -4,7 +4,7 @@ import PkgListV3 from "/components/PkgListV3";
<PkgListV3 type={4} sys="Windows" /> <PkgListV3 type={4} sys="Windows" />
[All Downloads](../../releases) [All Downloads](../../releases/tdengine)
2. Execute the installer, select the default value as prompted, and complete the installation 2. Execute the installer, select the default value as prompted, and complete the installation
3. Installation path 3. Installation path

View File

@ -30,7 +30,7 @@ taosAdapter provides the following features.
### Install taosAdapter ### Install taosAdapter
If you use the TDengine server, you don't need additional steps to install taosAdapter. You can visit [TDengine 3.0 released versions](../../releases) to download the TDengine server installation package, which includes taosAdapter. If you need to deploy taosAdapter separately on another server other than the TDengine server, you should install the full TDengine server package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) documentation. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can visit [TDengine 3.0 released versions](../../releases/tdengine) to download the TDengine server installation package, which includes taosAdapter. If you need to deploy taosAdapter separately on another server other than the TDengine server, you should install the full TDengine server package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) documentation.
### Start/Stop taosAdapter ### Start/Stop taosAdapter

View File

@ -211,7 +211,7 @@
], ],
"timeFrom": null, "timeFrom": null,
"timeShift": null, "timeShift": null,
"title": "Leader MNode", "title": "Master MNode",
"transformations": [ "transformations": [
{ {
"id": "filterByValue", "id": "filterByValue",
@ -221,7 +221,7 @@
"config": { "config": {
"id": "regex", "id": "regex",
"options": { "options": {
"value": "leader" "value": "master"
} }
}, },
"fieldName": "role" "fieldName": "role"
@ -300,7 +300,7 @@
], ],
"timeFrom": null, "timeFrom": null,
"timeShift": null, "timeShift": null,
"title": "Leader MNode Create Time", "title": "Master MNode Create Time",
"transformations": [ "transformations": [
{ {
"id": "filterByValue", "id": "filterByValue",
@ -310,7 +310,7 @@
"config": { "config": {
"id": "regex", "id": "regex",
"options": { "options": {
"value": "leader" "value": "master"
} }
}, },
"fieldName": "role" "fieldName": "role"

Binary file not shown (27 KiB → 19 KiB)

Binary file not shown (14 KiB → 5.5 KiB)

Binary file not shown (8.1 KiB → 6.9 KiB)

Binary file not shown (94 KiB → 16 KiB)

Binary file not shown (7.5 KiB → 7.1 KiB)

Binary file not shown (16 KiB → 10 KiB)

Binary file not shown (11 KiB → 16 KiB)

Binary file not shown (16 KiB → 6.2 KiB)

Binary file not shown (new file, 5.3 KiB)

Binary file not shown (new file, 23 KiB)

View File

@ -153,7 +153,7 @@
], ],
"timeFrom": null, "timeFrom": null,
"timeShift": null, "timeShift": null,
"title": "Leader MNode", "title": "Master MNode",
"transformations": [ "transformations": [
{ {
"id": "filterByValue", "id": "filterByValue",
@ -163,7 +163,7 @@
"config": { "config": {
"id": "regex", "id": "regex",
"options": { "options": {
"value": "leader" "value": "master"
} }
}, },
"fieldName": "role" "fieldName": "role"
@ -246,7 +246,7 @@
], ],
"timeFrom": null, "timeFrom": null,
"timeShift": null, "timeShift": null,
"title": "Leader MNode Create Time", "title": "Master MNode Create Time",
"transformations": [ "transformations": [
{ {
"id": "filterByValue", "id": "filterByValue",
@ -256,7 +256,7 @@
"config": { "config": {
"id": "regex", "id": "regex",
"options": { "options": {
"value": "leader" "value": "master"
} }
}, },
"fieldName": "role" "fieldName": "role"

View File

@ -5,15 +5,23 @@ sidebar_label: TDinsight
TDinsight is a solution for monitoring TDengine using the builtin native monitoring database and [Grafana]. TDinsight is a solution for monitoring TDengine using the builtin native monitoring database and [Grafana].
After TDengine starts, it will automatically create a monitoring database `log`. TDengine will automatically write many metrics in specific intervals into the `log` database. The metrics may include the server's CPU, memory, hard disk space, network bandwidth, number of requests, disk read/write speed, slow queries, other information like important system operations (user login, database creation, database deletion, etc.), and error alarms. With [Grafana] and [TDengine Data Source Plugin](https://github.com/taosdata/grafanaplugin/releases), TDinsight can visualize cluster status, node information, insertion and query requests, resource usage, vnode, dnode, and mnode status, exception alerts and many other metrics. This is very convenient for developers who want to monitor TDengine cluster status in real-time. This article will guide users to install the Grafana server, automatically install the TDengine data source plug-in, and deploy the TDinsight visualization panel using the `TDinsight.sh` installation script. After TDengine starts, it automatically writes many metrics in specific intervals into a designated database. The metrics may include the server's CPU, memory, hard disk space, network bandwidth, number of requests, disk read/write speed, slow queries, other information like important system operations (user login, database creation, database deletion, etc.), and error alarms. With [Grafana] and [TDengine Data Source Plugin](https://github.com/taosdata/grafanaplugin/releases), TDinsight can visualize cluster status, node information, insertion and query requests, resource usage, vnode, dnode, and mnode status, exception alerts and many other metrics. This is very convenient for developers who want to monitor TDengine cluster status in real-time. This article will guide users to install the Grafana server, automatically install the TDengine data source plug-in, and deploy the TDinsight visualization panel using the `TDinsight.sh` installation script.
## System Requirements ## System Requirements
To deploy TDinsight, a single-node TDengine server or a multi-node TDengine cluster and a [Grafana] server are required. This dashboard requires TDengine 2.3.3.0 and above, with the `log` database enabled (`monitor = 1`). To deploy TDinsight, we need:
- a single-node TDengine server or a multi-node TDengine cluster, and a [Grafana] server. This dashboard requires TDengine 3.0.1.0 or above, with the monitoring feature enabled. For detailed configuration, please refer to [TDengine monitoring configuration](../config/#monitoring-parameters).
- taosAdapter installed and running; please refer to [taosAdapter](../taosadapter).
- taosKeeper installed and running; please refer to [taosKeeper](../taoskeeper).
Please record the following information:
- The endpoint of the taosAdapter REST service, for example `http://tdengine.local:6041`
- The authentication credentials for taosAdapter, e.g. user name and password
- The database name used by taosKeeper to store monitoring data
## Installing Grafana ## Installing Grafana
We recommend using the latest [Grafana] version 7 or 8 here. You can install Grafana on any [supported operating system](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-operating-systems) by following the [official Grafana documentation Instructions](https://grafana.com/docs/grafana/latest/installation/) to install [Grafana]. We recommend using the latest [Grafana] version 8 or 9 here. You can install Grafana on any [supported operating system](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-operating-systems) by following the [official Grafana documentation Instructions](https://grafana.com/docs/grafana/latest/installation/) to install [Grafana].
### Installing Grafana on Debian or Ubuntu ### Installing Grafana on Debian or Ubuntu
@ -71,7 +79,7 @@ chmod +x TDinsight.sh
./TDinsight.sh ./TDinsight.sh
``` ```
This script will automatically download the latest [Grafana TDengine data source plugin](https://github.com/taosdata/grafanaplugin/releases/latest) and [TDinsight dashboard](https://grafana.com/grafana/dashboards/15167) with configurable parameters for command-line options to the [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) configuration file to automate deployment and updates, etc. With the alert setting options provided by this script, you can also get built-in support for AliCloud SMS alert notifications. This script will automatically download the latest [Grafana TDengine data source plugin](https://github.com/taosdata/grafanaplugin/releases/latest) and [TDinsight dashboard](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsightV3.json) with configurable parameters for command-line options to the [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) configuration file to automate deployment and updates, etc. With the alert setting options provided by this script, you can also get built-in support for AliCloud SMS alert notifications.
Assume you use TDengine and Grafana's default services on the same host. Run `./TDinsight.sh` and open the Grafana browser window to see the TDinsight dashboard. Assume you use TDengine and Grafana's default services on the same host. Run `./TDinsight.sh` and open the Grafana browser window to see the TDinsight dashboard.
@ -106,18 +114,6 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
-E, --external-notifier <string> Apply external notifier uid to TDinsight dashboard. -E, --external-notifier <string> Apply external notifier uid to TDinsight dashboard.
Alibaba Cloud SMS as Notifier:
-s, --sms-enabled To enable tdengine-datasource plugin builtin Alibaba Cloud SMS webhook.
-N, --sms-notifier-name <string> Provisioning notifier name.[default: TDinsight Builtin SMS]
-U, --sms-notifier-uid <string> Provisioning notifier uid, use lowercase notifier name by default.
-D, --sms-notifier-is-default Set notifier as default.
-I, --sms-access-key-id <string> Alibaba Cloud SMS access key id
-K, --sms-access-key-secret <string> Alibaba Cloud SMS access key secret
-S, --sms-sign-name <string> Sign name
-C, --sms-template-code <string> Template code
-T, --sms-template-param <string> Template param, a escaped JSON string like '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
-B, --sms-phone-numbers <string> Comma-separated numbers list, eg "189xxxxxxxx,132xxxxxxxx"
-L, --sms-listen-addr <string> [default: 127.0.0.1:9100]
``` ```
Most command-line options can equally be set through environment variables. Most command-line options can equally be set through environment variables.
@ -136,17 +132,6 @@ Most command-line options can take effect the same as environment variables.
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [Default: TDinsight] | -e | -tdinsight-title | -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [Default: TDinsight] | -e | -tdinsight-title
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the dashboard is configured to be editable. [Default: false] | -e | --external | -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the dashboard is configured to be editable. [Default: false] | -e | --external
| -E | --external-notifier | EXTERNAL_NOTIFIER | Apply the external notifier uid to the TDinsight dashboard. | -s | -E | --external-notifier | EXTERNAL_NOTIFIER | Apply the external notifier uid to the TDinsight dashboard. | -s
| -s | --sms-enabled | SMS_ENABLED | Enable the tdengine-datasource plugin built into Alibaba Cloud SMS webhook. | -s
| -N | --sms-notifier-name | SMS_NOTIFIER_NAME | The name of the provisioning notifier. [Default: `TDinsight Builtin SMS`] | -U
| -U | --sms-notifier-uid | SMS_NOTIFIER_UID | "Notification Channel" `uid`, lowercase of the program name is used by default, other characters are replaced by "-". |-sms
| -D | --sms-notifier-is-default | SMS_NOTIFIER_IS_DEFAULT | Set built-in SMS notification to default value. |-sms-notifier-is-default
| -I | --sms-access-key-id | SMS_ACCESS_KEY_ID | Alibaba Cloud SMS access key id |
| -K | --sms-access-key-secret | SMS_ACCESS_KEY_SECRET | AliCloud SMS-access-secret-key |
| -S | --sms-sign-name | SMS_SIGN_NAME | Signature |
| -C | --sms-template-code | SMS_TEMPLATE_CODE | Template code |
| -T | --sms-template-param | SMS_TEMPLATE_PARAM | JSON template for template parameters |
| -B | --sms-phone-numbers | SMS_PHONE_NUMBERS | A comma-separated list of phone numbers, e.g. `"189xxxxxxxx,132xxxxxxxx"` |
| -L | --sms-listen-addr | SMS_LISTEN_ADDR | Built-in SMS webhook listener address, default is `127.0.0.1:9100` |
Suppose you start a TDengine database on host `tdengine` with HTTP API port `6041`, user `root1`, and password `pass5ord`. Execute the script. Suppose you start a TDengine database on host `tdengine` with HTTP API port `6041`, user `root1`, and password `pass5ord`. Execute the script.
@ -166,24 +151,10 @@ Use the `uid` value obtained above as `-E` input.
sudo ./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord -E existing-notifier sudo ./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord -E existing-notifier
``` ```
If you want to use the [Alibaba Cloud SMS](https://www.aliyun.com/product/sms) service as a notification channel, you should enable it with the `-s` flag add the following parameters.
- `-N`: Notification Channel name, default is `TDinsight Builtin SMS`.
- `-U`: Channel uid, default is lowercase of `name`, any other character is replaced with -, for the default `-N`, its uid is `tdinsight-builtin-sms`.
- `-I`: Alibaba Cloud SMS access key id.
- `-K`: Alibaba Cloud SMS access secret key.
- `-S`: Alibaba Cloud SMS signature.
- `-C`: Alibaba Cloud SMS template id.
- `-T`: Alibaba Cloud SMS template parameters, for JSON format template, example is as follows `'{"alarm_level":"%s", "time":"%s", "name":"%s", "content":"%s"}'`. There are four parameters: alarm level, time, name and alarm content.
- `-B`: a list of phone numbers, separated by a comma `,`.
If you want to monitor multiple TDengine clusters, you need to set up a separate TDinsight dashboard for each of them. Setting up a non-default TDinsight requires some changes: the `-n`, `-i`, and `-t` options need to be changed to non-default names, and `-N` and `-L` should also be changed if using the built-in SMS alerting feature. If you want to monitor multiple TDengine clusters, you need to set up a separate TDinsight dashboard for each of them. Setting up a non-default TDinsight requires some changes: the `-n`, `-i`, and `-t` options need to be changed to non-default names, and `-N` and `-L` should also be changed if using the built-in SMS alerting feature.
```bash ```bash
sudo ./TDinsight.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -i tdinsight-env1 -t 'TDinsight Env1' sudo ./TDinsight.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -i tdinsight-env1 -t 'TDinsight Env1'
# If using built-in SMS notifications
sudo . /TDengine.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -i tdinsight-env1 -t 'TDinsight Env1' \
-s -N 'Env1 SMS' -I xx -K xx -S xx -C SMS_XX -T '' -B 00000000000 -L 127.0.0.01:10611
``` ```
Please note that the configuration data source, notification channel, and dashboard are not changeable on the front end. You should update the configuration again via this script or manually change the configuration file in the `/etc/grafana/provisioning` directory (this is the default directory for Grafana, use the `-P` option to change it as needed). Please note that the configuration data source, notification channel, and dashboard are not changeable on the front end. You should update the configuration again via this script or manually change the configuration file in the `/etc/grafana/provisioning` directory (this is the default directory for Grafana, use the `-P` option to change it as needed).
@ -249,21 +220,23 @@ Save and test. It will report 'TDengine Data source is working' under normal cir
### Importing dashboards ### Importing dashboards
Point to **+** / **Create** - **import** (or `/dashboard/import` url). On the data source configuration page, click the **Dashboards** tab.
![TDengine Database TDinsight Import Dashboard and Configuration](./assets/import_dashboard.webp) ![TDengine Database TDinsight Import Dashboard and Configuration](./assets/import_dashboard.webp)
Type the dashboard ID `15167` in the **Import via grafana.com** location and **Load**. Choose `TDengine for 3.x` and click `import`.
![TDengine Database TDinsight Import via grafana.com](./assets/import-dashboard-15167.webp) After the import is complete, the `TDinsight for 3.x` dashboard is available on the `search dashboards by name` page.
Once the import is complete, the full page view of TDinsight is shown below. ![TDengine Database TDinsight Import via grafana.com](./assets/import_dashboard_view.webp)
![TDengine Database TDinsight show](./assets/TDinsight-full.webp) In the `TDinsight for 3.x` dashboard, choose the database used by taosKeeper to store monitoring data, and you can see the monitoring results.
![TDengine Database TDinsight select database](./assets/select_dashboard_db.webp)
## TDinsight dashboard details ## TDinsight dashboard details
The TDinsight dashboard is designed to provide the usage and status of TDengine-related resources [dnodes, mnodes, vnodes](../../taos-sql/node/) or databases. The TDinsight dashboard is designed to provide the usage and status of TDengine-related resources, e.g. dnodes, mnodes, vnodes and databases.
Details of the metrics are as follows. Details of the metrics are as follows.
@ -285,7 +258,6 @@ This section contains the current information and status of the cluster, the ale
- **Measuring Points Used**: The number of measuring points used to enable the alert rule (no data available in the community version, healthy by default). - **Measuring Points Used**: The number of measuring points used to enable the alert rule (no data available in the community version, healthy by default).
- **Grants Expire Time**: the expiration time of the enterprise version of the enabled alert rule (no data available for the community version, healthy by default). - **Grants Expire Time**: the expiration time of the enterprise version of the enabled alert rule (no data available for the community version, healthy by default).
- **Error Rate**: Aggregate error rate (average number of errors per second) for alert-enabled clusters. - **Error Rate**: Aggregate error rate (average number of errors per second) for alert-enabled clusters.
- **Variables**: `show variables` table display.
### DNodes Status ### DNodes Status
@ -294,7 +266,6 @@ This section contains the current information and status of the cluster, the ale
- **DNodes Status**: simple table view of `show dnodes`. - **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created. - **DNodes Lifetime**: the time elapsed since the dnode was created.
- **DNodes Number**: the number of DNodes changes. - **DNodes Number**: the number of DNodes changes.
- **Offline Reason**: if any dnode status is offline, the reason for offline is shown as a pie chart.
### MNode Overview ### MNode Overview
@ -309,7 +280,6 @@ This section contains the current information and status of the cluster, the ale
1. **Requests Rate(Inserts per Second)**: average number of inserts per second. 1. **Requests Rate(Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and change rate (count of second). 2. **Requests (Selects)**: number of query requests and change rate (count of second).
3. **Requests (HTTP)**: number of HTTP requests and request rate (count of second).
### Database ### Database
@ -319,9 +289,8 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl
1. **STables**: number of super tables. 1. **STables**: number of super tables.
2. **Total Tables**: number of all tables. 2. **Total Tables**: number of all tables.
3. **Sub Tables**: the number of all super table subtables. 3. **Tables**: number of normal tables.
4. **Tables**: graph of all normal table numbers over time. 4. **Table number for each vgroup**: number of tables per vgroup.
5. **Tables Number Foreach VGroups**: The number of tables contained in each VGroups.
### DNode Resource Usage ### DNode Resource Usage
@ -356,12 +325,11 @@ Currently, only the number of logins per minute is reported.
Supports monitoring of taosAdapter request statistics and status details, including the following. Supports monitoring of taosAdapter request statistics and status details, including the following.
1. **http_request**: contains the total number of requests, the number of failed requests, and the number of requests being processed 1. **http_request_inflight**: number of requests currently being processed.
2. **top 3 request endpoint**: data of the top 3 requests by endpoint group 2. **http_request_total**: number of total requests.
3. **Memory Used**: taosAdapter memory usage 3. **http_request_fail**: number of failed requests.
4. **latency_quantile(ms)**: quantile of (1, 2, 5, 9, 99) stages 4. **CPU Used**: CPU usage of taosAdapter.
5. **top 3 failed request endpoint**: data of the top 3 failed requests by endpoint grouping 5. **Memory Used**: Memory usage of taosAdapter.
6. **CPU Used**: taosAdapter CPU usage
## Upgrade ## Upgrade
@ -403,13 +371,6 @@ services:
TDENGINE_API: ${TDENGINE_API} TDENGINE_API: ${TDENGINE_API}
TDENGINE_USER: ${TDENGINE_USER} TDENGINE_USER: ${TDENGINE_USER}
TDENGINE_PASS: ${TDENGINE_PASS} TDENGINE_PASS: ${TDENGINE_PASS}
SMS_ACCESS_KEY_ID: ${SMS_ACCESS_KEY_ID}
SMS_ACCESS_KEY_SECRET: ${SMS_ACCESS_KEY_SECRET}
SMS_SIGN_NAME: ${SMS_SIGN_NAME}
SMS_TEMPLATE_CODE: ${SMS_TEMPLATE_CODE}
SMS_TEMPLATE_PARAM: '${SMS_TEMPLATE_PARAM}'
SMS_PHONE_NUMBERS: $SMS_PHONE_NUMBERS
SMS_LISTEN_ADDR: ${SMS_LISTEN_ADDR}
ports: ports:
- 3000:3000 - 3000:3000
volumes: volumes:

docs/en/20-third-party/13-Jupyter.md vendored Normal file
View File

@ -0,0 +1,98 @@
---
sidebar_label: JupyterLab
title: Connect JupyterLab to TDengine
---
JupyterLab is the next generation of the ubiquitous Jupyter Notebook. In this note we show you how to install the TDengine Python connector to connect to TDengine in JupyterLab. You can then insert data and perform queries against the TDengine instance within JupyterLab.
## Install JupyterLab
Installing JupyterLab is very easy. Installation instructions can be found at:
https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html.
If you don't feel like clicking on the link, here are the instructions.
Jupyter's preferred Python package manager is pip, so we show the instructions for pip.
You can also use **conda** or **pipenv** if you are managing Python environments.
````
pip install jupyterlab
````
For **conda** you can run:
````
conda install -c conda-forge jupyterlab
````
For **pipenv** you can run:
````
pipenv install jupyterlab
pipenv shell
````
## Run JupyterLab
You can start JupyterLab from the command line by running:
````
jupyter lab
````
This will automatically launch your default browser and connect to your JupyterLab instance, usually on port 8888.
## Install the TDengine Python connector
You can now install the TDengine Python connector as follows.
Start a new Python kernel in JupyterLab.
If using **conda** run the following:
````
# Install a conda package in the current Jupyter kernel
import sys
!conda install --yes --prefix {sys.prefix} taospy
````
If using **pip** run the following:
````
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install taospy
````
## Connect to TDengine
You can find detailed examples of how to use the Python connector in the TDengine documentation here.
Once you have installed the TDengine Python connector in your JupyterLab kernel, the process of connecting to TDengine is the same as that you would use if you weren't using JupyterLab.
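For instance, the minimal sketch below opens a native connection and prints the server version; the host and the default `root`/`taosdata` credentials are assumptions to adjust for your deployment.

````
import taos

# Assumed defaults -- adjust host, user, and password for your deployment
conn = taos.connect(host="localhost", user="root", password="taosdata")
print(conn.server_info)  # TDengine server version string
conn.close()
````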
Each TDengine instance has a database called "log" which has monitoring information about the TDengine instance.
In the "log" database there is a [supertable](https://docs.tdengine.com/taos-sql/stable/) called "disks_info".
The structure of this table is as follows:
````
taos> desc disks_info;
Field | Type | Length | Note |
=================================================================================
ts | TIMESTAMP | 8 | |
datadir_l0_used | FLOAT | 4 | |
datadir_l0_total | FLOAT | 4 | |
datadir_l1_used | FLOAT | 4 | |
datadir_l1_total | FLOAT | 4 | |
datadir_l2_used | FLOAT | 4 | |
datadir_l2_total | FLOAT | 4 | |
dnode_id | INT | 4 | TAG |
dnode_ep | BINARY | 134 | TAG |
Query OK, 9 row(s) in set (0.000238s)
````
The code below is used to fetch data from this table into a pandas DataFrame.
````
import taos
import pandas

def sqlQuery(conn):
    # Load the query result directly into a pandas DataFrame
    df: pandas.DataFrame = pandas.read_sql("select * from log.disks_info limit 500", conn)
    return df

# Connect with default parameters (localhost, default port and credentials)
conn = taos.connect()
result = sqlQuery(conn)
print(result)
````
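Because this runs inside JupyterLab, the DataFrame can be visualized right away. The following is a small sketch that assumes `matplotlib` is installed in the kernel:

````
# Plot level-0 data directory usage over time (assumes matplotlib is installed)
ax = result.plot(x="ts", y=["datadir_l0_used", "datadir_l0_total"], figsize=(10, 4))
ax.set_title("level-0 data directory usage")
````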
TDengine has connectors for various languages including Node.js, Go, and PHP, and there are Jupyter kernels for these languages, which can be found [here](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels).

View File

@ -1,9 +0,0 @@
---
sidebar_label: Releases
title: Released Versions
---
import Release from "/components/ReleaseV3";
<Release versionPrefix="3.0" />

View File

@ -0,0 +1,17 @@
---
sidebar_label: TDengine
title: TDengine
description: TDengine release history, Release Notes and download links.
---
import Release from "/components/ReleaseV3";
## 3.0.1.1
<Release type="tdengine" version="3.0.1.1" />
## 3.0.1.0
<Release type="tdengine" version="3.0.1.0" />

View File

@ -0,0 +1,15 @@
---
sidebar_label: taosTools
title: taosTools
description: taosTools release history, Release Notes, download links.
---
import Release from "/components/ReleaseV3";
## 2.2.0
<Release type="tools" version="2.2.0" />
## 2.1.3
<Release type="tools" version="2.1.3" />

View File

@ -0,0 +1 @@
label: Releases

View File

@ -73,6 +73,7 @@ During execution, the install.sh script asks questions through an interactive command-line interface
</TabItem> </TabItem>
<TabItem value="apt-get" label="apt-get"> <TabItem value="apt-get" label="apt-get">
You can use the `apt-get` tool to install from the official repository. You can use the `apt-get` tool to install from the official repository.
**Configure the package repository** **Configure the package repository**

View File

@ -218,7 +218,7 @@ void Close()
```sql ```sql
DROP DATABASE IF EXISTS tmqdb; DROP DATABASE IF EXISTS tmqdb;
CREATE DATABASE tmqdb; CREATE DATABASE tmqdb;
CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16) TAGS(t1 INT, t3 VARCHAR(16)); CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16));
CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0"); CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0");
CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1"); CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1");
INSERT INTO tmqdb.ctb0 VALUES(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00'); INSERT INTO tmqdb.ctb0 VALUES(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00');

View File

@ -4,7 +4,7 @@ sidebar_label: REST API
description: A detailed introduction to the RESTful API provided by TDengine. description: A detailed introduction to the RESTful API provided by TDengine.
--- ---
To support development on all kinds of platforms, TDengine provides an API that conforms to REST design standards, namely the REST API. To minimize the learning cost, and unlike the REST API designs of other databases, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request, requiring only a single URL. See the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1965.html) for how to use the REST connector. To support development on all kinds of platforms, TDengine provides an API that conforms to RESTful design standards, namely the REST API. To minimize the learning cost, and unlike the REST API designs of other databases, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request, requiring only a single URL. See the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1965.html) for how to use the REST API.
:::note :::note
One difference from the native connectors is that the RESTful interface is stateless, so the `USE db_name` command has no effect; every reference to a table or supertable name must be prefixed with the database name. Specifying db_name in the RESTful URL is supported; in that case, if an SQL statement does not specify a database name prefix, the db_name given in the URL is used. One difference from the native connectors is that the RESTful interface is stateless, so the `USE db_name` command has no effect; every reference to a table or supertable name must be prefixed with the database name. Specifying db_name in the RESTful URL is supported; in that case, if an SQL statement does not specify a database name prefix, the db_name given in the URL is used.
@ -18,7 +18,7 @@ The RESTful interface does not depend on any TDengine library, so the client does not need to install
If the TDengine server side is already installed, you can verify it as follows. If the TDengine server side is already installed, you can verify it as follows.
The following uses the curl tool in an Ubuntu environment (confirm that it is installed) to verify that the RESTful interface is working. Before verifying, please confirm that the taosAdapter service is running; on Linux this service is managed by systemd by default and can be started with the command `systemctl start taosadapter`. The following uses the `curl` tool in an Ubuntu environment (please confirm that it is installed) to verify whether the RESTful interface is working properly. Before verifying, please confirm that the taosAdapter service is running; on Linux this service is managed by systemd by default and can be started with the command `systemctl start taosadapter`.
The following example lists all databases; replace h1.taosdata.com and 6041 (the default) with the FQDN and port number of your running TDengine service: The following example lists all databases; replace h1.taosdata.com and 6041 (the default) with the FQDN and port number of your running TDengine service:
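The curl example itself falls outside this hunk; as an equivalent illustration, the same request can be issued from Python. This is a sketch assuming the default `root`/`taosdata` credentials:

```python
import requests

# Replace h1.taosdata.com and 6041 with the FQDN and port of your TDengine service
url = "http://h1.taosdata.com:6041/rest/sql"

# Default credentials are an assumption -- use your own
resp = requests.post(url, data="show databases;", auth=("root", "taosdata"))
print(resp.json())
```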

View File

@ -0,0 +1,38 @@
---
title: Schemaless API
sidebar_label: Schemaless API
description: A detailed introduction to the Schemaless API provided by TDengine.
---
TDengine provides a Schemaless API compatible with the InfluxDB (v1) and OpenTSDB line protocols. Third-party software that writes data via the InfluxDB (v1) or OpenTSDB line protocol can write its data directly into a TDengine database without any code changes, simply by changing the configured EndPoint URL.
### Writing data compatible with the InfluxDB line protocol
You can configure any application that supports the InfluxDB (v1) line protocol to access the address `http://<fqdn>:6041/<APIEndPoint>` to write data in InfluxDB-compatible format into TDengine. The EndPoint is as follows:
```text
/influxdb/v1/write?<param1=value1>?<param2=value2>...
```
The following InfluxDB query parameters are supported:
- `db` specifies the database name used by TDengine
- `precision` the time precision used by TDengine
- `u` TDengine user name
- `p` TDengine password
Note: InfluxDB token authentication is not supported at present; only Basic authentication and query-parameter authentication are supported.
Reference: [InfluxDB v1 write interface](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
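As a sketch of how a client might use this endpoint (the database name `test`, the sample line, and the credentials below are illustrative assumptions):

```python
import requests

# Write one line of InfluxDB v1 line protocol into database "test"
# (database name and credentials are illustrative; replace with your own)
url = "http://localhost:6041/influxdb/v1/write"
line = "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221 1648432611249000000"

resp = requests.post(url, params={"db": "test", "precision": "ns"},
                     data=line, auth=("root", "taosdata"))
print(resp.status_code)
```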
### Writing data compatible with the OpenTSDB line protocol
You can configure any application that supports the OpenTSDB line protocol to access the address `http://<fqdn>:6041/<APIEndPoint>` to write data in OpenTSDB-compatible format into TDengine. The EndPoint takes either of the following forms:
```text
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
```
Reference links:
- [OpenTSDB JSON](http://opentsdb.net/docs/build/html/api_http/put.html)
- [OpenTSDB Telnet](http://opentsdb.net/docs/build/html/api_telnet/put.html)
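Similarly, a sketch of writing one OpenTSDB-style JSON point (database name, payload, and credentials are illustrative assumptions):

```python
import requests

# Write one OpenTSDB-style JSON data point into database "test"
url = "http://localhost:6041/opentsdb/v1/put/json/test"
point = {
    "metric": "sys.cpu.nice",
    "timestamp": 1648432611,
    "value": 18,
    "tags": {"host": "web01", "dc": "lga"},
}

resp = requests.post(url, json=point, auth=("root", "taosdata"))
print(resp.status_code)
```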

View File

@ -168,7 +168,7 @@ Query OK, 8 row(s) in set (0.001154s)
## Removing a data node ## Removing a data node
First stop the taosd process of the data node to be removed, then start the CLI program taos and execute: Start the CLI program taos and execute:
```sql ```sql
DROP DNODE "fqdn:port"; DROP DNODE "fqdn:port";

View File

@ -104,7 +104,7 @@ SELECT location, groupid, current FROM d1001 LIMIT 2;
### Deduplicating results ### Deduplicating results
The `DISINTCT` keyword deduplicates one or more columns of the result set; the deduplicated columns can be either tag columns or data columns. The `DISTINCT` keyword deduplicates one or more columns of the result set; the deduplicated columns can be either tag columns or data columns.
Deduplicating tag columns: Deduplicating tag columns:
@ -356,7 +356,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
- Compared with non-nested queries, the features supported by the outer query are subject to the following restrictions: - Compared with non-nested queries, the features supported by the outer query are subject to the following restrictions:
- For computed functions: - For computed functions:
- If the result data of the inner query does not provide a timestamp, functions whose computation implicitly depends on a timestamp will not work properly in the outer query, e.g. INTERP, DERIVATIVE, IRATE, LAST_ROW, FIRST, LAST, TWA, STATEDURATION, TAIL, UNIQUE. - If the result data of the inner query does not provide a timestamp, functions whose computation implicitly depends on a timestamp will not work properly in the outer query, e.g. INTERP, DERIVATIVE, IRATE, LAST_ROW, FIRST, LAST, TWA, STATEDURATION, TAIL, UNIQUE.
- If the result data of the inner query is not a valid time series, functions whose computation depends on the data being a time series will not work properly in the outer query, e.g. LEASTSQUARES, ELAPSED, INTERP, DERIVATIVE, IRATE, TWA, DIFF, STATECOUNT, STATEDURATION, CSUM, MAVG, TAIL, UNIQUE. - If the result data of the inner query is not ordered by timestamp, functions whose computation depends on the data being ordered by time will not work properly in the outer query, e.g. LEASTSQUARES, ELAPSED, INTERP, DERIVATIVE, IRATE, TWA, DIFF, STATECOUNT, STATEDURATION, CSUM, MAVG, TAIL, UNIQUE.
- Functions that require two scans of the data will not work properly in the outer query. Such functions include, for example, PERCENTILE. - Functions that require two scans of the data will not work properly in the outer query. Such functions include, for example, PERCENTILE.
::: :::

View File

@ -127,7 +127,7 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause]; SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
``` ```
**Description**: Returns the specified field rounded down to the nearest integer. **Description**: Returns the specified field rounded down to the nearest integer.
For other usage notes, see the description of the CEIL function. For other usage notes, see the description of the CEIL function.
#### LOG #### LOG
@ -174,7 +174,7 @@ SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
``` ```
**Description**: Returns the specified field rounded to the nearest integer. **Description**: Returns the specified field rounded to the nearest integer.
For other usage notes, see the description of the CEIL function. For other usage notes, see the description of the CEIL function.
@ -435,7 +435,7 @@ SELECT TO_ISO8601(ts[, timezone]) FROM { tb_name | stb_name } [WHERE clause];
**Usage notes** **Usage notes**
- The timezone parameter accepts time zones in the formats [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example: TO_ISO8601(1, "+00:00"). - The timezone parameter accepts time zones in the formats [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example: TO_ISO8601(1, "+00:00").
- If the input is an integer representing a UNIX timestamp, the precision of the returned format is determined by the number of digits of the timestamp; - If the input is an integer representing a UNIX timestamp, the precision of the returned format is determined by the number of digits of the timestamp;
- If the input is a column of TIMESTAMP type, the timestamp precision of the returned format is the same as the time precision configured for the current DATABASE. - If the input is a column of TIMESTAMP type, the timestamp precision of the returned format is the same as the time precision configured for the current DATABASE.
@ -770,14 +770,14 @@ SELECT HISTOGRAM(field_namebin_type, bin_description, normalized) FROM tb_nam
**Details** **Details**
- bin_type: the user-specified bucket type; valid values are "user_input", "linear_bin", and "log_bin". - bin_type: the user-specified bucket type; valid values are "user_input", "linear_bin", and "log_bin".
- bin_description: describes how to generate the bucket intervals; for the three bucket types the description formats (all JSON-format strings) are as follows (a small derivation sketch follows this list): - bin_description: describes how to generate the bucket intervals; for the three bucket types the description formats (all JSON-format strings) are as follows (a small derivation sketch follows this list):
- "user_input": "[1, 3, 5, 7]" - "user_input": "[1, 3, 5, 7]"
The user specifies the exact bin values. The user specifies the exact bin values.
- "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}" - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
"start" is the starting point of the data, "width" is the width of each bin, "count" is the total number of bins, and "infinity" indicates whether (-inf, inf) are added as the start and end of the intervals. "start" is the starting point of the data, "width" is the width of each bin, "count" is the total number of bins, and "infinity" indicates whether (-inf, inf) are added as the start and end of the intervals.
The generated intervals are [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]. The generated intervals are [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf].
- "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}" - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}"
"start" is the starting point of the data, "factor" is the exponential growth factor, "count" is the total number of bins, and "infinity" indicates whether (-inf, inf) are added as the start and end of the intervals. "start" is the starting point of the data, "factor" is the exponential growth factor, "count" is the total number of bins, and "infinity" indicates whether (-inf, inf) are added as the start and end of the intervals.
The generated intervals are [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]. The generated intervals are [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf].
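To make the generated interval lists above concrete, the boundary derivation can be sketched as follows (an illustrative reading of the rules above, not TDengine code):

```python
def linear_bin(start, width, count, infinity):
    # "count" boundary points: start, start+width, ..., start+(count-1)*width
    edges = [start + i * width for i in range(count)]
    if infinity:
        edges = [float("-inf")] + edges + [float("inf")]
    return edges

def log_bin(start, factor, count, infinity):
    # "count" boundary points: start, start*factor, ..., start*factor**(count-1)
    edges = [start * factor ** i for i in range(count)]
    if infinity:
        edges = [float("-inf")] + edges + [float("inf")]
    return edges

print(linear_bin(0.0, 5.0, 5, True))  # [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, inf]
print(log_bin(1.0, 2.0, 5, True))     # [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, inf]
```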
@ -918,7 +918,7 @@ SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
**Return data type**: same as the field it is applied to. **Return data type**: same as the field it is applied to.
**Applicable data types**: numeric types and timestamp type. **Applicable data types**: numeric types.
**Applies to**: tables and supertables. **Applies to**: tables and supertables.
@ -933,7 +933,7 @@ SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
**Return data type**: same as the field it is applied to. **Return data type**: same as the field it is applied to.
**Applicable data types**: numeric types and timestamp type. **Applicable data types**: numeric types.
**Applies to**: tables and supertables. **Applies to**: tables and supertables.
@ -969,7 +969,7 @@ SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**Applies to**: tables and supertables. **Applies to**: tables and supertables.
**Usage notes** **Usage notes**
- Cannot take part in expression computation; the function can be applied to both normal tables and supertables; - Cannot take part in expression computation; the function can be applied to both normal tables and supertables;
- When used on a supertable, it must be combined with PARTITION by tbname to force the result down to a single timeline. - When used on a supertable, it must be combined with PARTITION by tbname to force the result down to a single timeline.
@ -1047,10 +1047,10 @@ SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Applies to**: tables and supertables. **Applies to**: tables and supertables.
**Usage notes** **Usage notes**
- The +, -, *, and / operations are not supported, e.g. csum(col1) + csum(col2). - The +, -, *, and / operations are not supported, e.g. csum(col1) + csum(col2).
- Can only be used together with aggregation functions. The function can be applied to both normal tables and supertables. - Can only be used together with aggregation functions. The function can be applied to both normal tables and supertables.
- When used on a supertable, it must be combined with PARTITION BY tbname to force the result down to a single timeline. - When used on a supertable, it must be combined with PARTITION BY tbname to force the result down to a single timeline.
@ -1068,8 +1068,8 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER
**Applies to**: tables and supertables. **Applies to**: tables and supertables.
**Usage notes**: **Usage notes**:
- The DERIVATIVE function can be used on a supertable when PARTITION BY carves out individual timelines (i.e. PARTITION BY tbname). - The DERIVATIVE function can be used on a supertable when PARTITION BY carves out individual timelines (i.e. PARTITION BY tbname).
- Can be used together with the associated selected columns, e.g.: select \_rowts, DERIVATIVE() from. - Can be used together with the associated selected columns, e.g.: select \_rowts, DERIVATIVE() from.
@ -1087,7 +1087,7 @@ SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHER
**Applies to**: tables and supertables. **Applies to**: tables and supertables.
**Usage notes**: **Usage notes**:
- The number of output rows is the total number of rows in the range minus one; the first row produces no output. - The number of output rows is the total number of rows in the range minus one; the first row produces no output.
- Can be used together with the associated selected columns, e.g.: select \_rowts, DIFF() from. - Can be used together with the associated selected columns, e.g.: select \_rowts, DIFF() from.
@ -1124,9 +1124,9 @@ SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**Applies to**: tables and supertables. **Applies to**: tables and supertables.
**Usage notes** **Usage notes**
- The +, -, *, and / operations are not supported, e.g. mavg(col1, k1) + mavg(col2, k1); - The +, -, *, and / operations are not supported, e.g. mavg(col1, k1) + mavg(col2, k1);
- Can only be used together with normal columns and selection and projection functions, not with aggregation functions; - Can only be used together with normal columns and selection and projection functions, not with aggregation functions;
- When used on a supertable, it must be combined with PARTITION BY tbname to force the result down to a single timeline. - When used on a supertable, it must be combined with PARTITION BY tbname to force the result down to a single timeline.

View File

@ -46,7 +46,7 @@ SELECT select_list FROM tb_name
### Rules for window clauses ### Rules for window clauses
- The window clause comes after the data partitioning clause and before the GROUP BY clause, and cannot be used together with a GROUP BY clause. - The window clause comes after the data partitioning clause and cannot be used together with a GROUP BY clause.
- The window clause partitions the data by window and evaluates the expressions in the SELECT list over each window. The expressions in the SELECT list may only contain: - The window clause partitions the data by window and evaluates the expressions in the SELECT list over each window. The expressions in the SELECT list may only contain:
- Constants. - Constants.
- The _wstart, _wend, and _wduration pseudocolumns. - The _wstart, _wend, and _wduration pseudocolumns.
@ -71,7 +71,7 @@ The FILL clause specifies the fill mode when data is missing in a window interval. The fill
1. Using the FILL clause may generate a large amount of filled output, so be sure to specify the time range of the query. For each query, the system can return no more than 10 million interpolated results. 1. Using the FILL clause may generate a large amount of filled output, so be sure to specify the time range of the query. For each query, the system can return no more than 10 million interpolated results.
2. In time-dimension aggregation, the time series in the returned results increases strictly monotonically. 2. In time-dimension aggregation, the time series in the returned results increases strictly monotonically.
3. If the query target is a supertable, the aggregate functions apply to the data of all tables under the supertable that satisfy the value filter conditions. If the query does not use a PARTITION BY clause, the returned results increase strictly monotonically by time series; if the query uses PARTITION BY for grouping, the results within each PARTITION increase strictly monotonically by time series. 3. If the query target is a supertable, the aggregate functions apply to the data of all tables under the supertable that satisfy the value filter conditions. If the query does not use a PARTITION BY clause, the returned results increase strictly monotonically by time series; if the query uses PARTITION BY for grouping, the results within each PARTITION increase strictly monotonically by time series.
::: :::
@ -113,6 +113,12 @@ SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
``` ```
If you only care about the state-window information where status is 2, for example:
```
SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 STATE_WINDOW(status)) t WHERE status = 2;
```
### Session windows ### Session windows
A session window is determined by the values of the timestamp primary key of the records. As shown in the figure below, if the maximum continuous timestamp interval is set to 12 seconds or less, the following 6 records form 2 session windows: [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the interval between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the continuous interval (12 seconds). A session window is determined by the values of the timestamp primary key of the records. As shown in the figure below, if the maximum continuous timestamp interval is set to 12 seconds or less, the following 6 records form 2 session windows: [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the interval between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the continuous interval (12 seconds).
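The 12-second gap rule can be sketched in Python as follows; the two intermediate timestamps are assumptions standing in for the figure, which is not reproduced in this hunk:

```python
from datetime import datetime, timedelta

def session_windows(timestamps, gap=timedelta(seconds=12)):
    # A record joins the current window if its distance to the previous
    # record is <= gap; otherwise it starts a new session window.
    windows = []
    for ts in sorted(timestamps):
        if windows and ts - windows[-1][-1] <= gap:
            windows[-1].append(ts)
        else:
            windows.append([ts])
    return [(w[0], w[-1]) for w in windows]

d = lambda h, m, s: datetime(2019, 4, 28, h, m, s)
ts = [d(14, 22, 10), d(14, 22, 20), d(14, 22, 30),
      d(14, 23, 10), d(14, 23, 20), d(14, 23, 30)]
print(session_windows(ts))  # two windows, split at the 40-second gap
```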

View File

@ -6,7 +6,8 @@ description: A detailed list of TDengine reserved keywords
## Reserved keywords ## Reserved keywords
TDengine currently has nearly 200 internal reserved keywords. If these keywords, regardless of case, need to be used as database names, table names, STable names, data column names, tag column names, and so on, they must be wrapped in `` ` `` backquotes, e.g. `ADD`. TDengine currently has more than 200 internal reserved keywords. If these keywords need to be used as database names, table names, supertable names, subtable names, data column names, tag column names, and so on, regardless of case, they must be wrapped in the `` ` `` symbol, e.g. \`ADD\`.
The list of keywords is as follows: The list of keywords is as follows:
### A ### A
@ -16,15 +17,20 @@ description: TDengine 保留关键字的详细列表
- ACCOUNTS - ACCOUNTS
- ADD - ADD
- AFTER - AFTER
- AGGREGATE
- ALL - ALL
- ALTER - ALTER
- ANALYZE
- AND - AND
- APPS
- AS - AS
- ASC - ASC
- AT_ONCE
- ATTACH - ATTACH
### B ### B
- BALANCE
- BEFORE - BEFORE
- BEGIN - BEGIN
- BETWEEN - BETWEEN
@ -34,19 +40,27 @@ description: TDengine 保留关键字的详细列表
- BITNOT - BITNOT
- BITOR - BITOR
- BLOCKS - BLOCKS
- BNODE
- BNODES
- BOOL - BOOL
- BUFFER
- BUFSIZE
- BY - BY
### C ### C
- CACHE - CACHE
- CACHELAST - CACHEMODEL
- CACHESIZE
- CASCADE - CASCADE
- CAST
- CHANGE - CHANGE
- CLIENT_VERSION
- CLUSTER - CLUSTER
- COLON - COLON
- COLUMN - COLUMN
- COMMA - COMMA
- COMMENT
- COMP - COMP
- COMPACT - COMPACT
- CONCAT - CONCAT
@ -54,15 +68,18 @@ description: TDengine 保留关键字的详细列表
- CONNECTION - CONNECTION
- CONNECTIONS - CONNECTIONS
- CONNS - CONNS
- CONSUMER
- CONSUMERS
- CONTAINS
- COPY - COPY
- COUNT
- CREATE - CREATE
- CTIME - CURRENT_USER
### D ### D
- DATABASE - DATABASE
- DATABASES - DATABASES
- DAYS
- DBS - DBS
- DEFERRED - DEFERRED
- DELETE - DELETE
@ -71,18 +88,23 @@ description: TDengine 保留关键字的详细列表
- DESCRIBE - DESCRIBE
- DETACH - DETACH
- DISTINCT - DISTINCT
- DISTRIBUTED
- DIVIDE - DIVIDE
- DNODE - DNODE
- DNODES - DNODES
- DOT - DOT
- DOUBLE - DOUBLE
- DROP - DROP
- DURATION
### E ### E
- EACH
- ENABLE
- END - END
- EQ - EVERY
- EXISTS - EXISTS
- EXPIRED
- EXPLAIN - EXPLAIN
### F ### F
@ -90,18 +112,20 @@ description: TDengine 保留关键字的详细列表
- FAIL - FAIL
- FILE - FILE
- FILL - FILL
- FIRST
- FLOAT - FLOAT
- FLUSH
- FOR - FOR
- FROM - FROM
- FSYNC - FUNCTION
- FUNCTIONS
### G ### G
- GE
- GLOB - GLOB
- GRANT
- GRANTS - GRANTS
- GROUP - GROUP
- GT
### H ### H
@ -112,15 +136,18 @@ description: TDengine 保留关键字的详细列表
- ID - ID
- IF - IF
- IGNORE - IGNORE
- IMMEDIA - IMMEDIATE
- IMPORT - IMPORT
- IN - IN
- INITIAL - INDEX
- INDEXES
- INITIALLY
- INNER
- INSERT - INSERT
- INSTEAD - INSTEAD
- INT - INT
- INTEGER - INTEGER
- INTERVA - INTERVAL
- INTO - INTO
- IS - IS
- ISNULL - ISNULL
@ -128,6 +155,7 @@ description: TDengine 保留关键字的详细列表
### J ### J
- JOIN - JOIN
- JSON
### K ### K
@ -137,46 +165,57 @@ description: TDengine 保留关键字的详细列表
### L ### L
- LE - LAST
- LAST_ROW
- LICENCES
- LIKE - LIKE
- LIMIT - LIMIT
- LINEAR - LINEAR
- LOCAL - LOCAL
- LP
- LSHIFT
- LT
### M ### M
- MATCH - MATCH
- MAX_DELAY
- MAXROWS - MAXROWS
- MERGE
- META
- MINROWS - MINROWS
- MINUS - MINUS
- MNODE
- MNODES - MNODES
- MODIFY - MODIFY
- MODULES - MODULES
### N ### N
- NE - NCHAR
- NEXT
- NMATCH
- NONE - NONE
- NOT - NOT
- NOTNULL - NOTNULL
- NOW - NOW
- NULL - NULL
- NULLS
### O ### O
- OF - OF
- OFFSET - OFFSET
- ON
- OR - OR
- ORDER - ORDER
- OUTPUTTYPE
### P ### P
- PARTITION - PAGES
- PAGESIZE
- PARTITIONS
- PASS - PASS
- PLUS - PLUS
- PORT
- PPS - PPS
- PRECISION - PRECISION
- PREV - PREV
@ -184,47 +223,63 @@ description: TDengine 保留关键字的详细列表
### Q ### Q
- QNODE
- QNODES
- QTIME - QTIME
- QUERIE - QUERIES
- QUERY - QUERY
- QUORUM
### R ### R
- RAISE - RAISE
- REM - RANGE
- RATIO
- READ
- REDISTRIBUTE
- RENAME
- REPLACE - REPLACE
- REPLICA - REPLICA
- RESET - RESET
- RESTRIC - RESTRICT
- RETENTIONS
- REVOKE
- ROLLUP
- ROW - ROW
- RP
- RSHIFT
### S ### S
- SCHEMALESS
- SCORES - SCORES
- SELECT - SELECT
- SEMI - SEMI
- SERVER_STATUS
- SERVER_VERSION
- SESSION - SESSION
- SET - SET
- SHOW - SHOW
- SLASH - SINGLE_STABLE
- SLIDING - SLIDING
- SLIMIT - SLIMIT
- SMALLIN - SMA
- SMALLINT
- SNODE
- SNODES
- SOFFSET - SOFFSET
- STable - SPLIT
- STableS - STABLE
- STABLES
- STAR - STAR
- STATE - STATE
- STATEMEN - STATE_WINDOW
- STATE_WI - STATEMENT
- STORAGE - STORAGE
- STREAM - STREAM
- STREAMS - STREAMS
- STRICT
- STRING - STRING
- SUBSCRIPTIONS
- SYNCDB - SYNCDB
- SYSINFO
### T ### T
@ -235,20 +290,24 @@ description: TDengine 保留关键字的详细列表
- TBNAME - TBNAME
- TIMES - TIMES
- TIMESTAMP - TIMESTAMP
- TIMEZONE
- TINYINT - TINYINT
- TO
- TODAY
- TOPIC - TOPIC
- TOPICS - TOPICS
- TRANSACTION
- TRANSACTIONS
- TRIGGER - TRIGGER
- TRIM
- TSERIES - TSERIES
- TTL - TTL
### U ### U
- UMINUS
- UNION - UNION
- UNSIGNED - UNSIGNED
- UPDATE - UPDATE
- UPLUS
- USE - USE
- USER - USER
- USERS - USERS
@ -258,8 +317,11 @@ description: TDengine 保留关键字的详细列表
- VALUE - VALUE
- VALUES - VALUES
- VARCHAR
- VARIABLE - VARIABLE
- VARIABLES - VARIABLES
- VERBOSE
- VGROUP
- VGROUPS - VGROUPS
- VIEW - VIEW
- VNODES - VNODES
@ -267,14 +329,25 @@ description: TDengine 保留关键字的详细列表
### W ### W
- WAL - WAL
- WAL_FSYNC_PERIOD
- WAL_LEVEL
- WAL_RETENTION_PERIOD
- WAL_RETENTION_SIZE
- WAL_ROLL_PERIOD
- WAL_SEGMENT_SIZE
- WATERMARK
- WHERE - WHERE
- WINDOW_CLOSE
- WITH
- WRITE
### \_ ### \_
- \_C0 - \_C0
- \_QSTART
- \_QSTOP
- \_QDURATION - \_QDURATION
- \_WSTART - \_QEND
- \_WSTOP - \_QSTART
- \_ROWTS
- \_WDURATION - \_WDURATION
- \_WEND
- \_WSTART

View File

@ -196,7 +196,7 @@ AllowWebSockets
- `u` TDengine user name - `u` TDengine user name
- `p` TDengine password - `p` TDengine password
Note: InfluxDB token authentication is not supported at present; Basic authentication and query-parameter authentication are supported. Note: InfluxDB token authentication is not supported at present; only Basic authentication and query-parameter authentication are supported.
### OpenTSDB ### OpenTSDB

View File

@ -79,7 +79,7 @@ password = "taosdata"
# the taosAdapter instances to be monitored # the taosAdapter instances to be monitored
[taosAdapter] [taosAdapter]
address = ["127.0.0.1:6041","192.168.1.95:6041"] address = ["127.0.0.1:6041"]
[metrics] [metrics]
# prefix of the monitoring metrics # prefix of the monitoring metrics
@ -92,7 +92,7 @@ cluster = "production"
database = "log" database = "log"
# normal tables to be monitored # normal tables to be monitored
tables = ["normal_table"] tables = []
``` ```
### Fetching monitoring metrics ### Fetching monitoring metrics
@ -141,4 +141,4 @@ taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_first_ep # HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge # TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1 taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
``` ```
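These metrics are exposed in Prometheus text format, so they can be scraped with any HTTP client. A small sketch in Python, assuming taosKeeper's default listen address `127.0.0.1:6043` (adjust to your configuration):

```python
import requests

# taosKeeper's default listen address is assumed; adjust to your configuration
resp = requests.get("http://127.0.0.1:6043/metrics")
for line in resp.text.splitlines():
    if line.startswith("taos_cluster_info"):
        print(line)
```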

View File

@ -26,7 +26,7 @@ The logical structure of the TDengine distributed architecture is shown below:
**Management node (mnode):** A virtual logical unit responsible for monitoring and maintaining the running state of all data nodes and for load balancing among nodes (M in the figure). It also stores and manages metadata, including users, databases, and supertables, and is therefore also called the Meta Node. A TDengine cluster can be configured with multiple mnodes (at most 3), which automatically form a virtual management node group (M1, M2, M3 in the figure). mnode supports multiple replicas and uses the RAFT consensus protocol to guarantee high availability and reliability; any data update operation can only be performed on the Leader. The first node of the mnode cluster is created automatically when the cluster is deployed; the creation and deletion of the other nodes is done by the user through SQL commands. Each dnode hosts at most one mnode, which is uniquely identified by the EP of the data node it belongs to. Each dnode automatically obtains, through internal message exchange, the EPs of all the dnodes in the cluster that host an mnode. **Management node (mnode):** A virtual logical unit responsible for monitoring and maintaining the running state of all data nodes and for load balancing among nodes (M in the figure). It also stores and manages metadata, including users, databases, and supertables, and is therefore also called the Meta Node. A TDengine cluster can be configured with multiple mnodes (at most 3), which automatically form a virtual management node group (M1, M2, M3 in the figure). mnode supports multiple replicas and uses the RAFT consensus protocol to guarantee high availability and reliability; any data update operation can only be performed on the Leader. The first node of the mnode cluster is created automatically when the cluster is deployed; the creation and deletion of the other nodes is done by the user through SQL commands. Each dnode hosts at most one mnode, which is uniquely identified by the EP of the data node it belongs to. Each dnode automatically obtains, through internal message exchange, the EPs of all the dnodes in the cluster that host an mnode.
**Elastic compute node (qnode):** A virtual logical unit that runs query computation tasks, including the show commands implemented on top of system tables (Q in the figure). A cluster can be configured with multiple qnodes, which are shared across the whole cluster (Q1, Q2, Q3 in the figure). A qnode is not bound to a specific DB, i.e. a single qnode can execute query tasks for multiple DBs at the same time. Each dnode hosts at most one qnode, which is uniquely identified by the EP of the data node it belongs to. The client obtains the list of available qnodes by interacting with the mnode; when no qnode is available, computation tasks are executed in vnodes. **Compute node (qnode):** A virtual logical unit that runs query computation tasks, including the show commands implemented on top of system tables (Q in the figure). A cluster can be configured with multiple qnodes, which are shared across the whole cluster (Q1, Q2, Q3 in the figure). A qnode is not bound to a specific DB, i.e. a single qnode can execute query tasks for multiple DBs at the same time. Each dnode hosts at most one qnode, which is uniquely identified by the EP of the data node it belongs to. The client obtains the list of available qnodes by interacting with the mnode; when no qnode is available, computation tasks are executed in vnodes. When a query is executed, the scheduler, depending on the execution plan, arranges one or more qnodes to execute it together. A qnode can fetch data from vnodes and can also send its own computation results to other qnodes for further processing. By introducing standalone compute nodes, TDengine separates storage from compute.
**Stream compute node (snode):** A virtual logical unit that runs only stream computation tasks (S in the figure). A cluster can be configured with multiple snodes, which are shared across the whole cluster (S1, S2, S3 in the figure). An snode is not bound to a specific stream, i.e. a single snode can execute computation tasks for multiple streams at the same time. Each dnode hosts at most one snode, which is uniquely identified by the EP of the data node it belongs to. The mnode schedules available snodes to perform stream computation tasks; when no snode is available, stream computation tasks are executed in vnodes. **Stream compute node (snode):** A virtual logical unit that runs only stream computation tasks (S in the figure). A cluster can be configured with multiple snodes, which are shared across the whole cluster (S1, S2, S3 in the figure). An snode is not bound to a specific stream, i.e. a single snode can execute computation tasks for multiple streams at the same time. Each dnode hosts at most one snode, which is uniquely identified by the EP of the data node it belongs to. The mnode schedules available snodes to perform stream computation tasks; when no snode is available, stream computation tasks are executed in vnodes.

View File

@ -6,6 +6,11 @@ description: TDengine release history, Release Notes, and download links
import Release from "/components/ReleaseV3"; import Release from "/components/ReleaseV3";
## 3.0.1.1
<Release type="tdengine" version="3.0.1.1" />
## 3.0.1.0 ## 3.0.1.0
<Release type="tdengine" version="3.0.1.0" /> <Release type="tdengine" version="3.0.1.0" />

View File

@ -6,6 +6,10 @@ description: taosTools release history, Release Notes, and download links
import Release from "/components/ReleaseV3"; import Release from "/components/ReleaseV3";
## 2.2.0
<Release type="tools" version="2.2.0" />
## 2.1.3 ## 2.1.3
<Release type="tools" version="2.1.3" /> <Release type="tools" version="2.1.3" />

View File

@ -45,8 +45,8 @@ enum {
// clang-format on // clang-format on
typedef struct { typedef struct {
TSKEY ts;
uint64_t groupId; uint64_t groupId;
TSKEY ts;
} SWinKey; } SWinKey;
static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) { static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) {
@ -68,6 +68,37 @@ static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, i
return 0; return 0;
} }
typedef struct {
uint64_t groupId;
TSKEY ts;
int32_t exprIdx;
} STupleKey;
static inline int STupleKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) {
STupleKey* pTuple1 = (STupleKey*)pKey1;
STupleKey* pTuple2 = (STupleKey*)pKey2;
if (pTuple1->groupId > pTuple2->groupId) {
return 1;
} else if (pTuple1->groupId < pTuple2->groupId) {
return -1;
}
if (pTuple1->ts > pTuple2->ts) {
return 1;
} else if (pTuple1->ts < pTuple2->ts) {
return -1;
}
if (pTuple1->exprIdx > pTuple2->exprIdx) {
return 1;
} else if (pTuple1->exprIdx < pTuple2->exprIdx) {
return -1;
}
return 0;
}
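As an aside, this comparator defines a lexicographic order over (groupId, ts, exprIdx), which is the same order Python's tuple comparison gives, as this illustrative sketch shows:

```python
# STupleKeyCmpr is equivalent to lexicographic tuple comparison:
# keys sort by groupId first, then ts, then exprIdx (illustration only)
keys = [(2, 100, 0), (1, 200, 1), (1, 200, 0), (1, 100, 0)]
print(sorted(keys))  # [(1, 100, 0), (1, 200, 0), (1, 200, 1), (2, 100, 0)]
```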
enum { enum {
TMQ_MSG_TYPE__DUMMY = 0, TMQ_MSG_TYPE__DUMMY = 0,
TMQ_MSG_TYPE__POLL_RSP, TMQ_MSG_TYPE__POLL_RSP,

View File

@ -36,8 +36,13 @@ typedef struct STSRow2 STSRow2;
typedef struct STSRowBuilder STSRowBuilder; typedef struct STSRowBuilder STSRowBuilder;
typedef struct STagVal STagVal; typedef struct STagVal STagVal;
typedef struct STag STag; typedef struct STag STag;
typedef struct SColData SColData;
// bitmap #define HAS_NONE ((uint8_t)0x1)
#define HAS_NULL ((uint8_t)0x2)
#define HAS_VALUE ((uint8_t)0x4)
// bitmap ================================
const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0}, const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0},
{0b00000000, 0b00000100, 0b00001000, 2}, {0b00000000, 0b00000100, 0b00001000, 2},
{0b00000000, 0b00010000, 0b00100000, 4}, {0b00000000, 0b00010000, 0b00100000, 4},
@ -51,21 +56,21 @@ const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0},
#define SET_BIT2(p, i, v) ((p)[(i) >> 2] = (p)[(i) >> 2] & N1(BIT2_MAP[(i)&3][3]) | BIT2_MAP[(i)&3][(v)]) #define SET_BIT2(p, i, v) ((p)[(i) >> 2] = (p)[(i) >> 2] & N1(BIT2_MAP[(i)&3][3]) | BIT2_MAP[(i)&3][(v)])
#define GET_BIT2(p, i) (((p)[(i) >> 2] >> BIT2_MAP[(i)&3][3]) & ((uint8_t)3)) #define GET_BIT2(p, i) (((p)[(i) >> 2] >> BIT2_MAP[(i)&3][3]) & ((uint8_t)3))
// STSchema // STSchema ================================
int32_t tTSchemaCreate(int32_t sver, SSchema *pSchema, int32_t nCols, STSchema **ppTSchema); int32_t tTSchemaCreate(int32_t sver, SSchema *pSchema, int32_t nCols, STSchema **ppTSchema);
void tTSchemaDestroy(STSchema *pTSchema); void tTSchemaDestroy(STSchema *pTSchema);
// SValue // SValue ================================
int32_t tPutValue(uint8_t *p, SValue *pValue, int8_t type); int32_t tPutValue(uint8_t *p, SValue *pValue, int8_t type);
int32_t tGetValue(uint8_t *p, SValue *pValue, int8_t type); int32_t tGetValue(uint8_t *p, SValue *pValue, int8_t type);
int tValueCmprFn(const SValue *pValue1, const SValue *pValue2, int8_t type); int tValueCmprFn(const SValue *pValue1, const SValue *pValue2, int8_t type);
// SColVal // SColVal ================================
#define COL_VAL_NONE(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNone = 1}) #define COL_VAL_NONE(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNone = 1})
#define COL_VAL_NULL(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNull = 1}) #define COL_VAL_NULL(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNull = 1})
#define COL_VAL_VALUE(CID, TYPE, V) ((SColVal){.cid = (CID), .type = (TYPE), .value = (V)}) #define COL_VAL_VALUE(CID, TYPE, V) ((SColVal){.cid = (CID), .type = (TYPE), .value = (V)})
// STSRow2 // STSRow2 ================================
#define TSROW_LEN(PROW, V) tGetI32v((uint8_t *)(PROW)->data, (V) ? &(V) : NULL) #define TSROW_LEN(PROW, V) tGetI32v((uint8_t *)(PROW)->data, (V) ? &(V) : NULL)
#define TSROW_SVER(PROW, V) tGetI32v((PROW)->data + TSROW_LEN(PROW, NULL), (V) ? &(V) : NULL) #define TSROW_SVER(PROW, V) tGetI32v((PROW)->data + TSROW_LEN(PROW, NULL), (V) ? &(V) : NULL)
@ -77,7 +82,7 @@ int32_t tTSRowToArray(STSRow2 *pRow, STSchema *pTSchema, SArray **ppArray);
int32_t tPutTSRow(uint8_t *p, STSRow2 *pRow); int32_t tPutTSRow(uint8_t *p, STSRow2 *pRow);
int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow); int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow);
// STSRowBuilder // STSRowBuilder ================================
#define tsRowBuilderInit() ((STSRowBuilder){0}) #define tsRowBuilderInit() ((STSRowBuilder){0})
#define tsRowBuilderClear(B) \ #define tsRowBuilderClear(B) \
do { \ do { \
@ -86,7 +91,7 @@ int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow);
} \ } \
} while (0) } while (0)
// STag // STag ================================
int32_t tTagNew(SArray *pArray, int32_t version, int8_t isJson, STag **ppTag); int32_t tTagNew(SArray *pArray, int32_t version, int8_t isJson, STag **ppTag);
void tTagFree(STag *pTag); void tTagFree(STag *pTag);
bool tTagIsJson(const void *pTag); bool tTagIsJson(const void *pTag);
@ -100,7 +105,16 @@ void tTagSetCid(const STag *pTag, int16_t iTag, int16_t cid);
void debugPrintSTag(STag *pTag, const char *tag, int32_t ln); // TODO: remove void debugPrintSTag(STag *pTag, const char *tag, int32_t ln); // TODO: remove
int32_t parseJsontoTagData(const char *json, SArray *pTagVals, STag **ppTag, void *pMsgBuf); int32_t parseJsontoTagData(const char *json, SArray *pTagVals, STag **ppTag, void *pMsgBuf);
// STRUCT ================= // SColData ================================
void tColDataDestroy(void *ph);
void tColDataInit(SColData *pColData, int16_t cid, int8_t type, int8_t smaOn);
void tColDataClear(SColData *pColData);
int32_t tColDataAppendValue(SColData *pColData, SColVal *pColVal);
void tColDataGetValue(SColData *pColData, int32_t iVal, SColVal *pColVal);
uint8_t tColDataGetBitValue(SColData *pColData, int32_t iVal);
int32_t tColDataCopy(SColData *pColDataSrc, SColData *pColDataDest);
// STRUCT ================================
struct STColumn { struct STColumn {
col_id_t colId; col_id_t colId;
int8_t type; int8_t type;
@ -166,6 +180,18 @@ struct SColVal {
SValue value; SValue value;
}; };
struct SColData {
int16_t cid;
int8_t type;
int8_t smaOn;
int32_t nVal;
uint8_t flag;
uint8_t *pBitMap;
int32_t *aOffset;
int32_t nData;
uint8_t *pData;
};
#pragma pack(push, 1) #pragma pack(push, 1)
struct STagVal { struct STagVal {
// char colName[TSDB_COL_NAME_LEN]; // only used for tmq_get_meta // char colName[TSDB_COL_NAME_LEN]; // only used for tmq_get_meta

View File

@ -2956,7 +2956,7 @@ static FORCE_INLINE void* tDecodeSMqSubTopicEp(void* buf, SMqSubTopicEp* pTopicE
} }
static FORCE_INLINE void tDeleteSMqSubTopicEp(SMqSubTopicEp* pSubTopicEp) { static FORCE_INLINE void tDeleteSMqSubTopicEp(SMqSubTopicEp* pSubTopicEp) {
// taosMemoryFree(pSubTopicEp->schema.pSchema); if (pSubTopicEp->schema.nCols) taosMemoryFreeClear(pSubTopicEp->schema.pSchema);
taosArrayDestroy(pSubTopicEp->vgs); taosArrayDestroy(pSubTopicEp->vgs);
} }

View File

@ -34,66 +34,69 @@ typedef struct SFuncExecEnv {
int32_t calcMemSize; int32_t calcMemSize;
} SFuncExecEnv; } SFuncExecEnv;
typedef bool (*FExecGetEnv)(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); typedef bool (*FExecGetEnv)(struct SFunctionNode *pFunc, SFuncExecEnv *pEnv);
typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInfo* pResultCellInfo); typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInfo *pResultCellInfo);
typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx); typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx);
typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock* pBlock); typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock *pBlock);
typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput); typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx); typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx);
typedef struct SScalarFuncExecFuncs { typedef struct SScalarFuncExecFuncs {
FExecGetEnv getEnv; FExecGetEnv getEnv;
FScalarExecProcess process; FScalarExecProcess process;
} SScalarFuncExecFuncs; } SScalarFuncExecFuncs;
typedef struct SFuncExecFuncs { typedef struct SFuncExecFuncs {
FExecGetEnv getEnv; FExecGetEnv getEnv;
FExecInit init; FExecInit init;
FExecProcess process; FExecProcess process;
FExecFinalize finalize; FExecFinalize finalize;
FExecCombine combine; FExecCombine combine;
} SFuncExecFuncs; } SFuncExecFuncs;
#define MAX_INTERVAL_TIME_WINDOW 1000000 // maximum allowed time windows in final results #define MAX_INTERVAL_TIME_WINDOW 1000000 // maximum allowed time windows in final results
#define TOP_BOTTOM_QUERY_LIMIT 100 #define TOP_BOTTOM_QUERY_LIMIT 100
#define FUNCTIONS_NAME_MAX_LENGTH 16 #define FUNCTIONS_NAME_MAX_LENGTH 16
typedef struct SResultRowEntryInfo { typedef struct SResultRowEntryInfo {
bool initialized:1; // output buffer has been initialized bool initialized : 1; // output buffer has been initialized
bool complete:1; // query has completed bool complete : 1; // query has completed
uint8_t isNullRes:6; // the result is null uint8_t isNullRes : 6; // the result is null
uint16_t numOfRes; // num of output result in current buffer. NOT NULL RESULT uint16_t numOfRes; // num of output result in current buffer. NOT NULL RESULT
} SResultRowEntryInfo; } SResultRowEntryInfo;
// determine the real data needed to calculate the result // determine the real data needed to calculate the result
enum { enum {
BLK_DATA_NOT_LOAD = 0x0, BLK_DATA_NOT_LOAD = 0x0,
BLK_DATA_SMA_LOAD = 0x1, BLK_DATA_SMA_LOAD = 0x1,
BLK_DATA_DATA_LOAD = 0x3, BLK_DATA_DATA_LOAD = 0x3,
BLK_DATA_FILTEROUT = 0x4, // discard current data block since it is not qualified for filter BLK_DATA_FILTEROUT = 0x4, // discard current data block since it is not qualified for filter
}; };
enum { enum {
MAIN_SCAN = 0x0u, MAIN_SCAN = 0x0u,
REVERSE_SCAN = 0x1u, // todo remove it REVERSE_SCAN = 0x1u, // todo remove it
REPEAT_SCAN = 0x2u, //repeat scan belongs to the master scan REPEAT_SCAN = 0x2u, // repeat scan belongs to the master scan
MERGE_STAGE = 0x20u, MERGE_STAGE = 0x20u,
}; };
typedef struct SPoint1 { typedef struct SPoint1 {
int64_t key; int64_t key;
union{double val; char* ptr;}; union {
double val;
char *ptr;
};
} SPoint1; } SPoint1;
struct SqlFunctionCtx; struct SqlFunctionCtx;
struct SResultRowEntryInfo; struct SResultRowEntryInfo;
//for selectivity query, the corresponding tag value is assigned if the data is qualified // for selectivity query, the corresponding tag value is assigned if the data is qualified
typedef struct SSubsidiaryResInfo { typedef struct SSubsidiaryResInfo {
int16_t num; int16_t num;
int32_t rowLen; int32_t rowLen;
char* buf; // serialize data buffer char *buf; // serialize data buffer
struct SqlFunctionCtx **pCtx; struct SqlFunctionCtx **pCtx;
} SSubsidiaryResInfo; } SSubsidiaryResInfo;
@ -106,69 +109,70 @@ typedef struct SResultDataInfo {
} SResultDataInfo; } SResultDataInfo;
#define GET_RES_INFO(ctx) ((ctx)->resultInfo) #define GET_RES_INFO(ctx) ((ctx)->resultInfo)
#define GET_ROWCELL_INTERBUF(_c) ((void*) ((char*)(_c) + sizeof(SResultRowEntryInfo))) #define GET_ROWCELL_INTERBUF(_c) ((void *)((char *)(_c) + sizeof(SResultRowEntryInfo)))
typedef struct SInputColumnInfoData { typedef struct SInputColumnInfoData {
int32_t totalRows; // total rows in current columnar data int32_t totalRows; // total rows in current columnar data
int32_t startRowIndex; // handle started row index int32_t startRowIndex; // handle started row index
int32_t numOfRows; // the number of rows needs to be handled int32_t numOfRows; // the number of rows needs to be handled
int32_t numOfInputCols; // PTS is not included int32_t numOfInputCols; // PTS is not included
bool colDataAggIsSet;// if agg is set or not bool colDataAggIsSet; // if agg is set or not
SColumnInfoData *pPTS; // primary timestamp column SColumnInfoData *pPTS; // primary timestamp column
SColumnInfoData **pData; SColumnInfoData **pData;
SColumnDataAgg **pColumnDataAgg; SColumnDataAgg **pColumnDataAgg;
uint64_t uid; // table uid, used to set the tag value when building the final query result for selectivity functions. uint64_t uid; // table uid, used to set the tag value when building the final query result for selectivity functions.
} SInputColumnInfoData; } SInputColumnInfoData;
typedef struct SSerializeDataHandle { typedef struct SSerializeDataHandle {
struct SDiskbasedBuf* pBuf; struct SDiskbasedBuf *pBuf;
int32_t currentPage; int32_t currentPage;
void *pState;
} SSerializeDataHandle; } SSerializeDataHandle;
// sql function runtime context // sql function runtime context
typedef struct SqlFunctionCtx { typedef struct SqlFunctionCtx {
SInputColumnInfoData input; SInputColumnInfoData input;
SResultDataInfo resDataInfo; SResultDataInfo resDataInfo;
uint32_t order; // data block scanner order: asc|desc uint32_t order; // data block scanner order: asc|desc
uint8_t scanFlag; // record current running step, default: 0 uint8_t scanFlag; // record current running step, default: 0
int16_t functionId; // function id int16_t functionId; // function id
char *pOutput; // final result output buffer, point to sdata->data char *pOutput; // final result output buffer, point to sdata->data
int32_t numOfParams; int32_t numOfParams;
SFunctParam *param; // input parameter, e.g., top(k, 20), the number of results for top query is kept in param SFunctParam *param; // input parameter, e.g., top(k, 20), the number of results for top query is kept in param
SColumnInfoData *pTsOutput; // corresponding output buffer for timestamp of each result, e.g., top/bottom*/ SColumnInfoData *pTsOutput; // corresponding output buffer for timestamp of each result, e.g., top/bottom*/
int32_t offset; int32_t offset;
struct SResultRowEntryInfo *resultInfo; struct SResultRowEntryInfo *resultInfo;
SSubsidiaryResInfo subsidiaries; SSubsidiaryResInfo subsidiaries;
SPoint1 start; SPoint1 start;
SPoint1 end; SPoint1 end;
SFuncExecFuncs fpSet; SFuncExecFuncs fpSet;
SScalarFuncExecFuncs sfp; SScalarFuncExecFuncs sfp;
struct SExprInfo *pExpr; struct SExprInfo *pExpr;
struct SSDataBlock *pSrcBlock; struct SSDataBlock *pSrcBlock;
struct SSDataBlock *pDstBlock; // used by indefinite rows function to set selectivity struct SSDataBlock *pDstBlock; // used by indefinite rows function to set selectivity
SSerializeDataHandle saveHandle; SSerializeDataHandle saveHandle;
bool isStream; bool isStream;
char udfName[TSDB_FUNC_NAME_LEN]; char udfName[TSDB_FUNC_NAME_LEN];
} SqlFunctionCtx; } SqlFunctionCtx;
enum { enum {
TEXPR_BINARYEXPR_NODE= 0x1, TEXPR_BINARYEXPR_NODE = 0x1,
TEXPR_UNARYEXPR_NODE = 0x2, TEXPR_UNARYEXPR_NODE = 0x2,
}; };
typedef struct tExprNode { typedef struct tExprNode {
int32_t nodeType; int32_t nodeType;
union { union {
struct {// function node struct { // function node
char functionName[FUNCTIONS_NAME_MAX_LENGTH]; // todo refactor char functionName[FUNCTIONS_NAME_MAX_LENGTH]; // todo refactor
int32_t functionId; int32_t functionId;
int32_t num; int32_t num;
struct SFunctionNode *pFunctNode; struct SFunctionNode *pFunctNode;
} _function; } _function;
struct { struct {
struct SNode* pRootNode; struct SNode *pRootNode;
} _optrRoot; } _optrRoot;
}; };
} tExprNode; } tExprNode;
@ -182,17 +186,18 @@ struct SScalarParam {
int32_t numOfRows; int32_t numOfRows;
}; };
void cleanupResultRowEntry(struct SResultRowEntryInfo* pCell); void cleanupResultRowEntry(struct SResultRowEntryInfo *pCell);
int32_t getNumOfResult(SqlFunctionCtx* pCtx, int32_t num, SSDataBlock* pResBlock); int32_t getNumOfResult(SqlFunctionCtx *pCtx, int32_t num, SSDataBlock *pResBlock);
bool isRowEntryCompleted(struct SResultRowEntryInfo* pEntry); bool isRowEntryCompleted(struct SResultRowEntryInfo *pEntry);
bool isRowEntryInitialized(struct SResultRowEntryInfo* pEntry); bool isRowEntryInitialized(struct SResultRowEntryInfo *pEntry);
typedef struct SPoint { typedef struct SPoint {
int64_t key; int64_t key;
void * val; void *val;
} SPoint; } SPoint;
int32_t taosGetLinearInterpolationVal(SPoint* point, int32_t outputType, SPoint* point1, SPoint* point2, int32_t inputType); int32_t taosGetLinearInterpolationVal(SPoint *point, int32_t outputType, SPoint *point1, SPoint *point2,
int32_t inputType);
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// udf api // udf api

View File

@ -151,6 +151,8 @@ typedef struct SVnodeModifyLogicNode {
SArray* pDataBlocks; SArray* pDataBlocks;
SVgDataBlocks* pVgDataBlocks; SVgDataBlocks* pVgDataBlocks;
SNode* pAffectedRows; // SColumnNode SNode* pAffectedRows; // SColumnNode
SNode* pStartTs; // SColumnNode
SNode* pEndTs; // SColumnNode
uint64_t tableId; uint64_t tableId;
uint64_t stableId; uint64_t stableId;
int8_t tableType; // table type int8_t tableType; // table type
@ -525,6 +527,8 @@ typedef struct SDataDeleterNode {
char tsColName[TSDB_COL_NAME_LEN]; char tsColName[TSDB_COL_NAME_LEN];
STimeWindow deleteTimeRange; STimeWindow deleteTimeRange;
SNode* pAffectedRows; SNode* pAffectedRows;
SNode* pStartTs;
SNode* pEndTs;
} SDataDeleterNode; } SDataDeleterNode;
typedef struct SSubplan { typedef struct SSubplan {

View File

@ -315,6 +315,8 @@ typedef struct SDeleteStmt {
SNode* pFromTable; // FROM clause SNode* pFromTable; // FROM clause
SNode* pWhere; // WHERE clause SNode* pWhere; // WHERE clause
SNode* pCountFunc; // count the number of rows affected SNode* pCountFunc; // count the number of rows affected
SNode* pFirstFunc; // the start timestamp when the data was actually deleted
SNode* pLastFunc; // the end timestamp when the data was actually deleted
SNode* pTagCond; // pWhere divided into pTagCond and timeRange SNode* pTagCond; // pWhere divided into pTagCond and timeRange
STimeWindow timeRange; STimeWindow timeRange;
uint8_t precision; uint8_t precision;

View File

@ -0,0 +1,75 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "tdatablock.h"
#include "tdbInt.h"
#ifdef __cplusplus
extern "C" {
#endif
#ifndef _STREAM_STATE_H_
#define _STREAM_STATE_H_
typedef struct SStreamTask SStreamTask;
// incremental state storage
typedef struct {
SStreamTask* pOwner;
TDB* db;
TTB* pStateDb;
TTB* pFuncStateDb;
TXN txn;
} SStreamState;
SStreamState* streamStateOpen(char* path, SStreamTask* pTask);
void streamStateClose(SStreamState* pState);
int32_t streamStateBegin(SStreamState* pState);
int32_t streamStateCommit(SStreamState* pState);
int32_t streamStateAbort(SStreamState* pState);
typedef struct {
TBC* pCur;
} SStreamStateCur;
int32_t streamStateFuncPut(SStreamState* pState, const STupleKey* key, const void* value, int32_t vLen);
int32_t streamStateFuncGet(SStreamState* pState, const STupleKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateFuncDel(SStreamState* pState, const STupleKey* key);
int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateDel(SStreamState* pState, const SWinKey* key);
int32_t streamStateAddIfNotExist(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateReleaseBuf(SStreamState* pState, const SWinKey* key, void* pVal);
void streamFreeVal(void* val);
SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const SWinKey* key);
void streamStateFreeCur(SStreamStateCur* pCur);
int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurNext(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurPrev(SStreamState* pState, SStreamStateCur* pCur);
#ifdef __cplusplus
}
#endif
#endif /* ifndef _STREAM_STATE_H_ */
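
The new header declares a transactional key-value API for incremental stream state, backed by TDB. A hedged sketch of the intended call sequence; in practice a task presumably keeps its state handle open for its lifetime, and the path and error handling here are illustrative assumptions:

#include "streamState.h"

/* Persist one window's aggregation buffer atomically. */
static int32_t demoSaveWindow(SStreamTask *pTask, const SWinKey *pKey,
                              const void *pBuf, int32_t len) {
  SStreamState *pState = streamStateOpen("/tmp/stream-state", pTask); /* hypothetical path */
  if (pState == NULL) return -1;

  int32_t code = streamStateBegin(pState);      /* open a TDB transaction */
  if (code == 0) code = streamStatePut(pState, pKey, pBuf, len);
  if (code == 0) {
    code = streamStateCommit(pState);           /* make the write durable */
  } else {
    streamStateAbort(pState);                   /* discard partial state */
  }
  streamStateClose(pState);
  return code;
}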

View File

@ -16,6 +16,7 @@
#include "executor.h" #include "executor.h"
#include "os.h" #include "os.h"
#include "query.h" #include "query.h"
#include "streamState.h"
#include "tdatablock.h" #include "tdatablock.h"
#include "tdbInt.h" #include "tdbInt.h"
#include "tmsg.h" #include "tmsg.h"
@ -263,14 +264,6 @@ typedef struct {
  SArray* checkpointVer;
} SStreamRecoveringState;
// incremental state storage
typedef struct {
SStreamTask* pOwner;
TDB* db;
TTB* pStateDb;
TXN txn;
} SStreamState;
typedef struct SStreamTask {
  int64_t streamId;
  int32_t taskId;
@ -540,39 +533,6 @@ int32_t streamMetaCommit(SStreamMeta* pMeta);
int32_t streamMetaRollBack(SStreamMeta* pMeta);
int32_t streamLoadTasks(SStreamMeta* pMeta);
SStreamState* streamStateOpen(char* path, SStreamTask* pTask);
void streamStateClose(SStreamState* pState);
int32_t streamStateBegin(SStreamState* pState);
int32_t streamStateCommit(SStreamState* pState);
int32_t streamStateAbort(SStreamState* pState);
typedef struct {
TBC* pCur;
} SStreamStateCur;
#if 1
int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateDel(SStreamState* pState, const SWinKey* key);
int32_t streamStateAddIfNotExist(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateReleaseBuf(SStreamState* pState, const SWinKey* key, void* pVal);
void streamFreeVal(void* val);
SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const SWinKey* key);
void streamStateFreeCur(SStreamStateCur* pCur);
int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurNext(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurPrev(SStreamState* pState, SStreamStateCur* pCur);
#endif
#ifdef __cplusplus
}
#endif

View File

@ -69,6 +69,14 @@ void tfsUpdateSize(STfs *pTfs);
 */
SDiskSize tfsGetSize(STfs *pTfs);
/**
* @brief Get level of multi-tier storage.
*
* @param pTfs
* @return int32_t
*/
int32_t tfsGetLevel(STfs *pTfs);
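
A small sketch of how a caller might use the new accessor to drive tier placement; the 30-day retention rule and the helper are invented purely for illustration:

#include <stdint.h>

typedef struct STfs STfs;               /* opaque handle, as in tfs.h */
extern int32_t tfsGetLevel(STfs *pTfs);

/* Map data age to a storage tier: hot data stays on level 0, and
 * sufficiently old data goes to the coldest configured level. */
static int32_t demoPickLevel(STfs *pTfs, int32_t daysOld) {
  int32_t nLevel = tfsGetLevel(pTfs);   /* number of configured tiers */
  if (nLevel <= 1 || daysOld < 30) return 0;
  return nLevel - 1;
}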
/**
 * @brief Allocate an existing available tier level from fs.
 *

View File

@ -285,6 +285,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_MND_TOPIC_SUBSCRIBED      TAOS_DEF_ERROR_CODE(0, 0x03EB)
#define TSDB_CODE_MND_CGROUP_USED           TAOS_DEF_ERROR_CODE(0, 0x03EC)
#define TSDB_CODE_MND_TOPIC_MUST_BE_DELETED TAOS_DEF_ERROR_CODE(0, 0x03ED)
#define TSDB_CODE_MND_IN_REBALANCE          TAOS_DEF_ERROR_CODE(0, 0x03EF)

// mnode-stream
#define TSDB_CODE_MND_STREAM_ALREADY_EXIST  TAOS_DEF_ERROR_CODE(0, 0x03F0)
@ -577,6 +578,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_FUNC_FUNTION_PARA_TYPE    TAOS_DEF_ERROR_CODE(0, 0x2802)
#define TSDB_CODE_FUNC_FUNTION_PARA_VALUE   TAOS_DEF_ERROR_CODE(0, 0x2803)
#define TSDB_CODE_FUNC_NOT_BUILTIN_FUNTION  TAOS_DEF_ERROR_CODE(0, 0x2804)
#define TSDB_CODE_FUNC_DUP_TIMESTAMP        TAOS_DEF_ERROR_CODE(0, 0x2805)

//udf
#define TSDB_CODE_UDF_STOPPING              TAOS_DEF_ERROR_CODE(0, 0x2901)
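
Both new codes follow the TAOS_DEF_ERROR_CODE(module, code) scheme. A self-contained sketch of how such codes are typically composed; the macro body below mirrors the common pattern (error flag bit set, module in the upper half, local code in the low 16 bits) but should be treated as an assumption, not the verified source:

#include <stdint.h>
#include <stdio.h>

/* Demo stand-in for TAOS_DEF_ERROR_CODE: flag bit + module + local code. */
#define DEMO_DEF_ERROR_CODE(mod, code) \
  ((int32_t)(0x80000000u | ((uint32_t)(mod) << 16) | (uint32_t)(code)))

#define DEMO_CODE_MND_IN_REBALANCE   DEMO_DEF_ERROR_CODE(0, 0x03EF)
#define DEMO_CODE_FUNC_DUP_TIMESTAMP DEMO_DEF_ERROR_CODE(0, 0x2805)

int main(void) {
  printf("MND_IN_REBALANCE   = 0x%08X\n", (unsigned)DEMO_CODE_MND_IN_REBALANCE);   /* 0x800003EF */
  printf("FUNC_DUP_TIMESTAMP = 0x%08X\n", (unsigned)DEMO_CODE_FUNC_DUP_TIMESTAMP); /* 0x80002805 */
  return 0;
}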