Merge branch '3.0' of https://github.com/taosdata/TDengine into enh/TS-5445-3.0

This commit is contained in:
Hongze Cheng 2025-02-25 09:47:38 +08:00
commit f85ddb6633
137 changed files with 17198 additions and 1051 deletions


@ -7,36 +7,24 @@ Power BI is a business analytics tool provided by Microsoft. By configuring the
## Prerequisites
- TDengine 3.3.4.0 or a later version is installed and running normally (both Enterprise and Community editions are available).
- taosAdapter is running normally; refer to the [taosAdapter Reference](../../../tdengine-reference/components/taosadapter/).
- Power BI Desktop is installed and running (if not installed, please download the latest Windows OS 32/64-bit version from its official site).
- Download the latest Windows OS X64 client driver from the TDengine official website and install it on the machine running Power BI. After successful installation, the TDengine driver can be seen in the "ODBC Data Sources (32-bit)" or "ODBC Data Sources (64-bit)" management tool.
## Configure Data Source
**Step 1**, Search for and open the [ODBC Data Source (64 bit)] management tool in the Start menu of the Windows operating system and configure it; refer to [Install ODBC Driver](../../../tdengine-reference/client-libraries/odbc/#Installation).
**Step 2**, Open Power BI and log in, then click [Home] -> [Get Data] -> [Other] -> [ODBC] -> [Connect] to add the data source.
**Step 3**, Select the data source name just created, such as [MyTDengine]. If you need to enter SQL, click the [Advanced options] tab and enter the SQL statement in the expanded dialog box. Click the [OK] button to connect to the configured data source.
**Step 4**, Enter the [Navigator], where you can browse the corresponding database's tables/views and load data.
## Data Analysis
### Instructions for use
To fully leverage Power BI's advantages in analyzing data from TDengine, users need to first understand core concepts such as dimensions, metrics, window split queries, data split queries, time-series, and correlation, and then import data through custom SQL.
@ -47,7 +35,7 @@ To fully leverage Power BI's advantages in analyzing data from TDengine, users n
- Time-Series: When drawing curves or aggregating data by time, it is usually necessary to introduce a date table. Date tables can be imported from Excel spreadsheets, or obtained in TDengine by executing SQL like `select _wstart date, count(*) cnt from test.meters where ts between A and B interval(1d) fill(0)`, where the fill clause specifies the fill mode in case of missing data, and the pseudocolumn `_wstart` is the date column to be obtained.
- Correlation: Tells how data is related; for example, metrics and dimensions can be associated through the tbname column, and date tables and metrics through the date column, combining to form visual reports.
### Smart Meter Example
TDengine employs a unique data model to optimize the storage and query performance of time-series data. This model uses supertables as templates to create an independent table for each device. Each table is designed with high scalability in mind, supporting up to 4096 data columns and 128 tag columns. This design enables TDengine to efficiently handle large volumes of time-series data while maintaining flexibility and ease of use.
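The supertable-plus-subtable model described above can be sketched in SQL. The schema below is an illustrative assumption (it mirrors the default smart meter schema created by taosBenchmark; verify the column names and types against your own deployment):

```sql
-- Assumes a database named test already exists.
-- Supertable as a template: data columns plus tag columns (assumed demo schema).
CREATE STABLE test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (groupId INT, location VARCHAR(24));
-- One independent table per device, created from the template.
CREATE TABLE test.d0 USING test.meters TAGS (1, 'California.SanFrancisco');
```

Each device table inherits the supertable's column layout; only the tag values differ per device.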
@ -56,24 +44,35 @@ Taking smart meters as an example, suppose each meter generates one record per s
In Power BI, users can map the tag columns in TDengine tables to dimension columns for grouping and filtering data. Meanwhile, the aggregated results of the data columns can be imported as measure columns for calculating key indicators and generating reports. In this way, Power BI helps decision-makers quickly obtain the information they need, gain a deeper understanding of business operations, and make more informed decisions.
Follow the steps below to experience the functionality of generating time-series data reports through Power BI.
**Step 1**, Use TDengine's taosBenchmark to quickly generate data for 1,000 smart meters over 3 days, with a collection frequency of 1s.
```shell
taosBenchmark -t 1000 -n 259200 -S 1000 -y
```
**Step 2**, Import dimension data. In Power BI, import the tag columns of the table, named as tags, using the following SQL to get the tag data of all smart meters under the supertable.
```sql
select distinct tbname device, groupId, location from test.meters
```
**Step 3**, Import measure data. In Power BI, import the average current, voltage, and phase of each smart meter in 1-hour time windows, named as data, with the following SQL.
```sql
select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)
```
**Step 4**, Import date data. Using a 1-day time window, obtain the time range and data count of the time-series data, with the following SQL. In the Power Query editor, convert the format of the date column from "text" to "date".
```sql
select _wstart date, count(*) from test.meters interval(1d) having count(*)>0
```
**Step 5**, Establish the relationship between dimensions and measures. Open the model view and establish the relationship between the tags and data tables, setting tbname as the relationship data column.
**Step 6**, Establish the relationship between date and measures. Open the model view and establish the relationship between the date dataset and data, with the relationship data columns being date and datatime.
**Step 7**, Create reports. Use these data in bar charts, pie charts, and other controls.
Due to TDengine's superior performance in handling time-series data, users can enjoy a very good experience during data import and daily regular data refreshes. For more information on building Power BI visual effects, please refer to the official Power BI documentation.


@ -11,43 +11,42 @@ import imgStep04 from '../../assets/seeq-04.png';
Seeq is advanced analytics software for the manufacturing and Industrial Internet of Things (IIOT). Seeq supports innovative new features using machine learning in process manufacturing organizations. These features enable organizations to deploy their own or third-party machine learning algorithms to advanced analytics applications used by frontline process engineers and subject matter experts, thus extending the efforts of a single data scientist to many frontline staff.
Through the `TDengine Java connector`, Seeq can easily support querying time-series data provided by TDengine and offer data presentation, analysis, prediction, and other functions.
## Prerequisites
- TDengine 3.1.0.3 or a later version is installed and running normally (both Enterprise and Community editions are available).
- taosAdapter is running normally, refer to [taosAdapter Reference](../../../tdengine-reference/components/taosadapter/).
- Seeq has been installed. Download the relevant software from [Seeq's official website](https://www.seeq.com/customer-download), such as `Seeq Server` and `Seeq Data Lab`, etc. `Seeq Data Lab` needs to be installed on a different server from `Seeq Server` and interconnected through configuration. For detailed installation and configuration instructions, refer to the [Seeq Knowledge Base](https://support.seeq.com/kb/latest/cloud/).
- Install the JDBC driver. Download the `TDengine JDBC connector` file `taos-jdbcdriver-3.2.5-dist.jar` or a higher version from `maven.org`.
## Configure Data Source
### Configuration of JDBC Connector
**Step 1**, Check the data storage location
```shell
sudo seeq config get Folders/Data
```
**Step 2**, Download the TDengine Java connector package from `maven.org` and copy it to the `plugins\lib` directory in the data storage location.
**Step 3**, Restart the Seeq server
```shell
sudo seeq restart
```
**Step 4**, Enter License
Use a browser to visit `ip:34216` and follow the instructions to enter the license.
## Load TDengine Time-Series Data
This section demonstrates how to use the Seeq software to load TDengine time-series data.
**Step 1**, Create tables in TDengine.
```sql
CREATE STABLE meters (ts TIMESTAMP, num INT, temperature FLOAT, goods INT) TAGS (device NCHAR(20));
@ -58,7 +57,7 @@ CREATE TABLE goods (ts1 TIMESTAMP, ts2 TIMESTAMP, goods FLOAT);
<Image img={imgStep01} alt=""/>
</figure>
**Step 2**, Construct data in TDengine.
```shell
python mockdata.py
@ -67,11 +66,7 @@ taos -s "insert into power.goods select _wstart, _wstart + 10d, avg(goods) from
The source code is hosted on [GitHub Repository](https://github.com/sangshuduo/td-forecasting).
**Step 3**, Log in using a Seeq administrator role account and create a new data source.
- Power
@ -251,6 +246,12 @@ Log in using a Seeq administrator role account and create a new data source.
}
```
## Data Analysis
### Scenario Introduction
The example scenario is a power system where users collect electricity usage data from power station instruments daily and store it in the TDengine cluster. Now, users want to predict how power consumption will develop and purchase more equipment to support it. User power consumption varies with monthly orders, and considering seasonal changes, power consumption will differ. This city is located in the northern hemisphere, so more electricity is used in summer. We simulate data to reflect these assumptions.
### Using Seeq Workbench
Log in to the Seeq service page and create a new Seeq Workbench. By selecting data sources from search results and choosing different tools as needed, you can display data or make predictions. For detailed usage methods, refer to the [official knowledge base](https://support.seeq.com/space/KB/146440193/Seeq+Workbench).
@ -330,77 +331,7 @@ Program output results:
<Image img={imgStep03} alt=""/>
</figure>
## Configuring Seeq Data Source Connection to TDengine Cloud
Configuring a Seeq data source connection to TDengine Cloud is essentially no different from connecting to a local TDengine installation. Simply log in to TDengine Cloud, select "Programming - Java" and copy the JDBC string with a token to fill in as the DatabaseJdbcUrl value for the Seeq Data Source.
Note that when using TDengine Cloud, the database name needs to be specified in SQL commands.
### Configuration example using TDengine Cloud as a data source
```json
{
"QueryDefinitions": [
{
"Name": "CloudVoltage",
"Type": "SIGNAL",
"Sql": "SELECT ts, voltage FROM test.meters",
"Enabled": true,
"TestMode": false,
"TestQueriesDuringSync": true,
"InProgressCapsulesEnabled": false,
"Variables": null,
"Properties": [
{
"Name": "Name",
"Value": "Voltage",
"Sql": null,
"Uom": "string"
},
{
"Name": "Interpolation Method",
"Value": "linear",
"Sql": null,
"Uom": "string"
},
{
"Name": "Maximum Interpolation",
"Value": "2day",
"Sql": null,
"Uom": "string"
}
],
"CapsuleProperties": null
}
],
"Type": "GENERIC",
"Hostname": null,
"Port": 0,
"DatabaseName": null,
"Username": "root",
"Password": "taosdata",
"InitialSql": null,
"TimeZone": null,
"PrintRows": false,
"UseWindowsAuth": false,
"SqlFetchBatchSize": 100000,
"UseSSL": false,
"JdbcProperties": null,
"GenericDatabaseConfig": {
"DatabaseJdbcUrl": "jdbc:TAOS-RS://gw.cloud.tdengine.com?useSSL=true&token=41ac9d61d641b6b334e8b76f45f5a8XXXXXXXXXX",
"SqlDriverClassName": "com.taosdata.jdbc.rs.RestfulDriver",
"ResolutionInNanoseconds": 1000,
"ZonedColumnTypes": []
}
}
```
### Example of Seeq Workbench Interface with TDengine Cloud as Data Source
<figure>
<Image img={imgStep04} alt=""/>
</figure>
### Solution Summary
By integrating Seeq and TDengine, users can fully leverage the efficient storage and querying capabilities of TDengine, while also benefiting from the powerful data visualization and analysis features provided by Seeq.


@ -243,6 +243,8 @@ The effective value of charset is UTF-8.
| concurrentCheckpoint | |Supported, effective immediately | Internal parameter, whether to check checkpoints concurrently |
| maxStreamBackendCache | |Supported, effective immediately | Internal parameter, maximum cache used by stream computing |
| streamSinkDataRate | |Supported, effective after restart| Internal parameter, used to control the write speed of stream computing results |
| streamNotifyMessageSize | After 3.3.6.0 | Not supported | Internal parameter, controls the message size for event notifications, default value is 8192 |
| streamNotifyFrameSize | After 3.3.6.0 | Not supported | Internal parameter, controls the underlying frame size when sending event notification messages, default value is 256 |
### Log Related


@ -55,9 +55,9 @@ join_clause:
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col) [TRUE_FOR(true_for_duration)]
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition [TRUE_FOR(true_for_duration)]
| COUNT_WINDOW(count_val[, sliding_val])
interp_clause:


@ -2125,6 +2125,28 @@ UNIQUE(expr)
**Applicable to**: Tables and supertables.
### COLS
```sql
COLS(func(expr), output_expr1, [, output_expr2] ... )
```
**Function Description**: On the data row where the result of `func(expr)` is located, execute the expressions `output_expr1` [, `output_expr2`] ... and return their results; the result of `func(expr)` itself is not output.
**Return Data Type**: Returns multiple columns of data, and the data type of each column is the type of the result returned by the corresponding expression.
**Applicable Data Types**: All type fields.
**Applicable to**: Tables and supertables.
**Usage Instructions**:
- The func parameter must be a single-row selection function (a selection function whose output is a single row; for example, last is a single-row selection function, but top is a multi-row selection function).
- Mainly used to obtain the associated columns of multiple selection function results in a single SQL query. For example, `select cols(max(c0), ts), cols(max(c1), ts) from ...` can be used to get the ts values at which columns c0 and c1 reach their respective maximums.
- The result of the func parameter is not returned. If you need to output the result of func, add it as an additional output column, such as `select first(ts), cols(first(ts), c1) from ...`.
- When there is only one output column, you can set an alias for the function, for example `select cols(first(ts), c1) as c11 from ...`.
- When there are one or more output columns, you can set an alias for each output column of the function, for example `select cols(first(ts), c1 as c11, c2 as c22) from ...`.
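As an illustration, here is a hedged sketch against a smart meter demo supertable (assuming a supertable `meters` with columns `ts`, `current`, `voltage`), retrieving the timestamp and voltage at the row where each subtable's current reaches its maximum:

```sql
SELECT tbname, cols(max(current), ts AS max_ts, voltage AS volt_at_max)
FROM meters
PARTITION BY tbname;
```

The `max(current)` value itself is not output; only the associated `ts` and `voltage` from that row are returned, one row per subtable.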
## Time-Series Specific Functions
Time-Series specific functions are tailor-made by TDengine to meet the query scenarios of time-series data. In general databases, implementing similar functionalities usually requires complex query syntax and is inefficient. TDengine has built these functionalities into functions, greatly reducing the user's cost of use.


@ -53,9 +53,9 @@ The syntax for the window clause is as follows:
```sql
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col) [TRUE_FOR(true_for_duration)]
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [FILL(fill_mod_and_val)]
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition [TRUE_FOR(true_for_duration)]
| COUNT_WINDOW(count_val[, sliding_val])
}
```
@ -177,6 +177,12 @@ TDengine also supports using CASE expressions in state quantities, which can exp
SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END status FROM meters PARTITION BY tbname STATE_WINDOW(CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END);
```
The state window supports using the TRUE_FOR parameter to set its minimum duration. If the window's duration is less than the specified value, it will be discarded automatically and no result will be returned. For example, setting the minimum duration to 3 seconds:
```sql
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status) TRUE_FOR (3s);
```
### Session Window
The session window is determined based on the timestamp primary key values of the records. As shown in the diagram below, if the continuous interval of the timestamps is set to be less than or equal to 12 seconds, the following 6 records form 2 session windows, which are: [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30]. This is because the interval between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, exceeding the continuous interval (12 seconds).
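The 12-second tolerance described above can be expressed as follows (a sketch using the `temp_tb_1` table from the surrounding examples):

```sql
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```

Records whose timestamps are within 12 seconds of each other fall into the same session window; a gap larger than 12 seconds starts a new window.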
@ -212,6 +218,12 @@ select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c
<Image img={imgStep04} alt=""/>
</figure>
The event window supports using the TRUE_FOR parameter to set its minimum duration. If the window's duration is less than the specified value, it will be discarded automatically and no result will be returned. For example, setting the minimum duration to 3 seconds:
```sql
select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c2 < 10 true_for (3s);
```
### Count Window
Count windows divide data into windows based on a fixed number of data rows. By default, data is sorted by timestamp, then divided into multiple windows based on the value of count_val, and aggregate calculations are performed. count_val represents the maximum number of data rows in each count window; if the total number of data rows is not divisible by count_val, the last window will have fewer rows than count_val. sliding_val is a constant that represents the number of rows the window slides, similar to the SLIDING in interval.
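For example, assuming a table `t` with a timestamp primary key, the following groups every 10 rows into a window and slides by 5 rows, so consecutive windows overlap by 5 rows:

```sql
select _wstart, _wend, count(*) from t count_window(10, 5);
```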


@ -458,6 +458,12 @@ This document details the server error codes that may be encountered when using
| 0x80002665 | The _TAGS pseudocolumn can only be used for subtable and supertable queries | Illegal tag column query | Check and correct the SQL statement |
| 0x80002666 | Subquery does not output primary timestamp column | The subquery does not output the primary timestamp column | Check and correct the SQL statement |
| 0x80002667 | Invalid usage of expr: %s | Illegal expression | Check and correct the SQL statement |
| 0x80002687 | True_for duration cannot be negative | Use negative value as true_for duration | Check and correct the SQL statement |
| 0x80002688 | Cannot use 'year' or 'month' as true_for duration | Use year or month as true_for_duration | Check and correct the SQL statement |
| 0x80002689 | Invalid using cols function | Illegal use of the cols function | Check and correct the SQL statement |
| 0x8000268A | Cols function's first param must be a select function that output a single row | The first parameter of the cols function must be a single-row selection function | Check and correct the SQL statement |
| 0x8000268B | Invalid using cols function with multiple output columns | Illegal use of the cols function with multiple output columns | Check and correct the SQL statement |
| 0x8000268C | Invalid using alias for cols function | Illegal alias for the cols function | Check and correct the SQL statement |
| 0x800026FF | Parser internal error | Internal error in parser | Preserve the scene and logs, report issue on GitHub |
| 0x80002700 | Planner internal error | Internal error in planner | Preserve the scene and logs, report issue on GitHub |
| 0x80002701 | Expect ts equal | JOIN condition validation failed | Preserve the scene and logs, report issue on GitHub |


@ -39,13 +39,13 @@ The concept of a TDengine consumer is similar to that of Kafka; a consumer receives data by subscribing to topics
| `auto.offset.reset` | enum | Initial position of the consumer group subscription | <br />`earliest`: default (version < 3.2.0.0); subscribe from the beginning; <br/>`latest`: default (version >= 3.2.0.0); subscribe only from the latest data; <br/>`none`: cannot subscribe without a committed offset |
| `enable.auto.commit` | boolean | Whether to enable automatic commit of consumption offsets. true: commit automatically, the client application does not need to commit; false: the client application commits manually | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatically committing consumption offsets, in milliseconds | Default value is 5000 |
| `msg.with.table.name` | boolean | Whether to allow parsing the table name from the message; not applicable to column subscription (for column subscription, tbname can be written into the subquery statement as a column) (deprecated since version 3.2.0.0; always true) | Default is off |
| `enable.replay` | boolean | Whether to enable the data replay function | Default is off |
| `session.timeout.ms` | integer | Timeout after the consumer heartbeat is lost; after the timeout, rebalance logic is triggered, and after success the consumer is deleted (supported since version 3.3.3.0) | Default value is 12000; range [6000, 1800000] |
| `max.poll.interval.ms` | integer | Maximum interval between consumer polls for data; beyond this time, the consumer is considered offline, rebalance logic is triggered, and after success the consumer is deleted (supported since version 3.3.3.0) | Default value is 300000; range [1000, INT32_MAX] |
| `fetch.max.wait.ms` | integer | Maximum time for the server to return data in a single request (supported since version 3.3.6.0) | Default value is 1000; range [1, INT32_MAX] |
| `min.poll.rows` | integer | Minimum number of rows returned by the server in a single request (supported since version 3.3.6.0) | Default value is 4096; range [1, INT32_MAX] |
| `msg.consume.rawdata` | integer | When consuming data, pull data in binary form, which cannot be parsed; internal parameter, used only for taosX data migration (supported since version 3.3.6.0) | Default value is 0, meaning disabled; non-zero means enabled |
The following are the creation parameters for the connectors in each language:


@ -1,78 +1,79 @@
---
title: Integration with Power BI
sidebar_label: PowerBI
toc_max_heading_level: 4
---
Power BI is a business analytics tool provided by Microsoft. By configuring the ODBC connector, Power BI can quickly access data in TDengine. Users can import tag data, raw time-series data, or time-aggregated time-series data from TDengine into Power BI to create reports or dashboards, all without writing any code.
## Prerequisites
Prepare the following environment:
- A TDengine cluster of version 3.3.4.0 or above is deployed and running normally (both Enterprise and Community editions are available).
- taosAdapter is running normally; for details refer to the [taosAdapter Reference](../../../reference/components/taosadapter).
- Download the latest Windows OS X64 client driver from the TDengine official website and install it; for details refer to [Install ODBC Driver](../../../reference/connector/odbc/#安装).
- Power BI Desktop is installed and running (if not installed, please download the latest Windows OS 32/64-bit version from its official site).
## Configure Data Source
**Step 1**, Search for and open the [ODBC Data Source (64-bit)] management tool in the Start menu of the Windows operating system and configure it. For details refer to [Configure ODBC Data Source](../../../reference/connector/odbc/#配置数据源).
**Step 2**, Open Power BI and log in, then click [Home] -> [Get Data] -> [Other] -> [ODBC] -> [Connect] to add the data source.
**Step 3**, Select the data source name just created, such as [MyTDengine]. If you need to enter SQL, click the [Advanced options] tab and enter the SQL statement in the edit box of the expanded dialog. Click the [OK] button to connect to the configured data source.
**Step 4**, Enter the [Navigator], where you can browse the corresponding database's tables/views and load data.
## Data Analysis
### Instructions for Use
To fully leverage Power BI's advantages in analyzing data in TDengine, users need to first understand core concepts such as dimensions, measures, window split queries, data split queries, time-series, and correlation, and then import data through custom SQL.
- Dimensions: usually categorical (text) data describing category information such as device, measurement point, and model. In TDengine supertables, dimension information is stored in tag columns and can be obtained quickly with SQL such as `select distinct tbname, tag1, tag2 from supertable`.
- Measures: quantitative (numeric) fields that can be used for calculations; common calculations include sum, average, and minimum. If a measurement point is sampled once per second, more than 30 million records are generated in a year; importing all of this data into Power BI would severely affect its performance. In TDengine, users can use syntax such as data split queries and window split queries, combined with window-related pseudocolumns, to import downsampled data into Power BI; for the specific syntax, refer to the distinguished features sections of the TDengine official documentation.
- Window split query: for example, a temperature sensor collects data once per second, but the average temperature every 10 minutes is needed. In this scenario, the window clause can be used to obtain the required downsampled result, with SQL like `select tbname, _wstart date, avg(temperature) temp from table interval(10m)`, where `_wstart` is a pseudocolumn indicating the start time of the window, 10m is the duration of the window, and `avg(temperature)` is the aggregate value within the window.
- Data split query: if aggregate values of many temperature sensors need to be obtained at the same time, the data can be split and a series of calculations performed within the split data spaces, with SQL like `partition by part_list`. The most common use of the data split clause is to split subtable data by tags in supertable queries, separating each subtable's data into independent time series for statistical analysis in various time-series scenarios.
- Time-series: when drawing curves or aggregating data by time, a date table is usually needed. Date tables can be imported from Excel spreadsheets or obtained by executing SQL in TDengine, such as `select _wstart date, count(*) cnt from test.meters where ts between A and B interval(1d) fill(0)`, where the fill clause indicates the fill mode in case of missing data and the pseudocolumn `_wstart` is the date column to obtain.
- Correlation: tells how data is related; for example, measures and dimensions can be associated through the tbname column, and date tables and measures through the date column, combining to form visual reports.
### Smart Meter Example
TDengine employs a unique data model to optimize the storage and query performance of time-series data. This model uses supertables as templates, creating an independent table for each device. Each table is designed with high scalability in mind, supporting up to 4096 data columns and 128 tag columns. This design enables TDengine to efficiently handle large volumes of time-series data while maintaining flexibility and ease of use.
以智能电表为例,假设每块电表每秒产生一条记录,那么每天将产生 86400 条记录。对于 1000 块智能电表来说,每年产生的记录将占用大约 600GB 的存储空间。面对如此庞大的数据量Power BI 等商业智能工具在数据分析和可视化方面发挥着重要作用。
以智能电表为例假设每块电表每秒产生一条记录那么每天将产生86 400条记录。对于1000块智能电表来说每年产生的记录将占用大约600GB的存储空间。面对如此庞大的数据量Power BI等商业智能工具在数据分析和可视化方面发挥着重要作用
在 Power BI 中,用户可以将 TDengine 表中的标签列映射为维度列以便对数据进行分组和筛选。同时数据列的聚合结果可以导入为度量列用于计算关键指标和生成报表。通过这种方式Power BI 能够帮助决策者快速获取所需的信息,深入了解业务运营情况,从而制定更加明智的决策。
根据如下步骤,便可以体验通过 Power BI 生成时序数据报表的功能。
第1步使用TDengine的taosBenchMark快速生成1000块智能电表3天的数据采集频率为1s。
```shell
taosBenchmark -t 1000 -n 259200 -S 1000 -y
```
第2步导入维度数据。在Power BI中导入表的标签列取名为tags通过如下SQL获取超级表下所有智能电表的标签数据。
```sql
select distinct tbname device, groupId, location from test.meters
```
第3步导入度量数据。在Power BI中按照1小时的时间窗口导入每块智能电表的电流均值、电压均值、相位均值取名为dataSQL如下。
```sql
select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)
```
第4步导入日期数据。按照1天的时间窗口获得时序数据的时间范围及数据计数SQL如下。需要在Power Query编辑器中将date列的格式从“文本”转化为“日期”。
```sql
select _wstart date, count(*) from test.meters interval(1d) having count(*)>0
```
第5步建立维度和度量的关联关系。打开模型视图建立表tags和data的关联关系将tbname设置为关联数据列。
第6步建立日期和度量的关联关系。打开模型视图建立数据集date和data的关联关系关联的数据列为date和datatime。
第7步制作报告。在柱状图、饼图等控件中使用这些数据。
**第 1 步**,使用 TDengine 的 taosBenchmark 快速生成 1000 块智能电表 3 天的数据,采集频率为 1s。
```shell
taosBenchmark -t 1000 -n 259200 -S 1000 -y
```
**第 2 步**,导入维度数据。在 Power BI 中导入表的标签列,取名为 tags通过如下 SQL 获取超级表下所有智能电表的标签数据。
```sql
select distinct tbname device, groupId, location from test.meters
```
**第 3 步**,导入度量数据。在 Power BI 中,按照 1 小时的时间窗口,导入每块智能电表的电流均值、电压均值、相位均值,取名为 dataSQL如下。
```sql
select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)
```
**第 4 步**,导入日期数据。按照 1 天的时间窗口获得时序数据的时间范围及数据计数SQL 如下。需要在 Power Query 编辑器中将 date 列的格式从“文本”转化为“日期”。
```sql
select _wstart date, count(*) from test.meters interval(1d) having count(*)>0
```
**第 5 步**,建立维度和度量的关联关系。打开模型视图,建立表 tags 和 data 的关联关系,将 tbname 设置为关联数据列。
**第 6 步**,建立日期和度量的关联关系。打开模型视图,建立数据集 date 和 data 的关联关系,关联的数据列为 date 和 datatime。
**第 7 步**,制作报告。在柱状图、饼图等控件中使用这些数据。
得益于 TDengine 处理时序数据的超强性能,用户在数据导入及每日定期刷新数据时,都可以得到非常好的体验。更多有关 Power BI 视觉效果的构建方法,请参照 Power BI 的官方文档。

View File

@ -1,46 +1,55 @@
---
title: 与永洪 BI 集成
sidebar_label: 永洪 BI
toc_max_heading_level: 4
---
永洪 BI 是一个专为各种规模企业打造的全业务链大数据分析解决方案,旨在帮助用户轻松发掘大数据价值,获取深入的洞察力。该平台以其灵活性和易用性而广受好评,无论企业规模大小,都能从中受益。
为了实现与 TDengine 的高效集成,永洪 BI 提供了 JDBC 连接器。用户只须按照简单的步骤配置数据源,即可将 TDengine 作为数据源添加到永洪 BI 中。这一过程不仅快速便捷,还能确保数据的准确性和稳定性。
一旦数据源配置完成永洪BI便能直接从TDengine中读取数据并利用其强大的数据处理和分析功能为用户提供丰富的数据展示、分析和预测能力。这意味着用户无须编写复杂的代码或进行烦琐的数据转换工作即可轻松获取所需的业务洞察。
一旦数据源配置完成,永洪 BI 便能直接从 TDengine 中读取数据,并利用其强大的数据处理和分析功能,为用户提供丰富的数据展示、分析和预测能力。这意味着用户无须编写复杂的代码或进行烦琐的数据转换工作,即可轻松获取所需的业务洞察。
## 前置条件
准备以下环境:
- TDengine 3.3.2.0 以上版本集群已部署并正常运行(企业及社区版均可)。
- taosAdapter 能够正常运行,详细参考 [taosAdapter 参考手册](../../../reference/components/taosadapter)。
- 确保永洪 BI 已经安装并运行(如果未安装,请到永洪科技官方下载页面下载)。
- 安装 JDBC 驱动。从 maven.org 下载 TDengine JDBC 连接器文件 `taos-jdbcdriver-3.4.0-dist.jar` 及以上版本,并安装在运行永洪 BI 的机器上。
## 配置数据源
配置 JDBC 数据源的步骤如下:
第1步在打开的永洪BI中点击“添加数据源”按钮选择SQL数据源中的“GENERIC”类型。
第2步点击“选择自定义驱动”按钮在“驱动管理”对话框中点击“驱动列表”旁边的“+”输入名称“MyTDengine”。然后点击“上传文件”按钮上传刚刚下载的TDengine JDBC连接器文件“taos-jdbcdriver-3.2.7-dist.jar”并选择“com.taosdata.jdbc.
rs.RestfulDriver”驱动最后点击“确定”按钮完成驱动添加步骤。
第3步复制下面的内容到“URL”字段。
```text
jdbc:TAOS-RS://127.0.0.1:6041?user=root&password=taosdata
```
第4步在“认证方式”中点击“无身份认证”单选按钮。
第5步在数据源的高级设置中修改“Quote 符号”的值为反引号(`)。
第6步点击“测试连接”按钮弹出“测试成功”对话框。点击“保存”按钮输入“MyTDengine”来保存TDengine数据源。
**第 1 步**,在打开的永洪 BI 中点击【添加数据源】按钮,选择 SQL 数据源中的 “GENERIC” 类型。
**第 2 步**,点击【选择自定义驱动】按钮,在【驱动管理】对话框中点击【驱动列表】旁边的 “+”,输入名称 “MyTDengine”。然后点击【上传文件】按钮,上传刚刚下载的 TDengine JDBC 连接器文件 `taos-jdbcdriver-3.4.0-dist.jar`,并选择 `com.taosdata.jdbc.rs.RestfulDriver` 驱动,最后点击【确定】按钮,完成驱动添加步骤。
**第 3 步**复制下面的内容到【URL】字段。
第1步在永洪BI中点击“添加数据源”按钮展开刚刚创建的数据源并浏览TDengine中的超级表。
第2步可以将超级表的数据全部加载到永洪BI中也可以通过自定义SQL导入部分数据。
第3步当勾选“数据库内计算”复选框时永洪BI将不再缓存TDengine的时序数据并在处理查询时将SQL请求发送给TDengine直接处理。
```text
jdbc:TAOS-RS://127.0.0.1:6041?user=root&password=taosdata
```
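如需在建立连接时指定默认数据库,可以在 URL 的端口后追加数据库名,示意如下(假设数据库名为 test,实际格式请以 TDengine JDBC 连接器文档为准):

```text
jdbc:TAOS-RS://127.0.0.1:6041/test?user=root&password=taosdata
```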
当导入数据后永洪BI会自动将数值类型设置为“度量”列将文本类型设置为“维度”列。而在TDengine的超级表中由于将普通列作为数据的度量将标签列作为数据的维度因此用户可能需要在创建数据集时更改部分列的属性。TDengine在支持标准SQL的基础之上还提供了一系列满足时序业务场景需求的特色查询语法例如数据切分查询、窗口切分查询等具体操作步骤请参阅TDengine的官方文档。通过使用这些特色查询当永洪BI将SQL查询发送到TDengine时可以大大提高数据访问速度减少网络传输带宽。
**第 4 步**,在【认证方式】中点击【无身份认证】单选按钮。
**第 5 步**,在数据源的高级设置中修改 “Quote 符号” 的值为反引号(`)。
**第 6 步**,点击【测试连接】按钮,弹出【测试成功】对话框。点击【保存】按钮,输入 “MyTDengine” 来保存 TDengine 数据源。
**第 7 步**,在永洪 BI 中点击【添加数据源】按钮,展开刚刚创建的数据源,并浏览 TDengine 中的超级表。
**第 8 步**,可以将超级表的数据全部加载到永洪 BI 中,也可以通过自定义 SQL 导入部分数据。
**第 9 步**,当勾选【数据库内计算】复选框时,永洪 BI 将不再缓存 TDengine 的时序数据,并在处理查询时将 SQL 请求发送给 TDengine 直接处理。
## 数据分析
当导入数据后,永洪 BI 会自动将数值类型设置为 “度量” 列,将文本类型设置为 “维度” 列。而在 TDengine 的超级表中由于将普通列作为数据的度量将标签列作为数据的维度因此用户可能需要在创建数据集时更改部分列的属性。TDengine 在支持标准 SQL 的基础之上还提供了一系列满足时序业务场景需求的特色查询语法,例如数据切分查询、窗口切分查询等,具体操作步骤请参阅 TDengine 的官方文档。通过使用这些特色查询,当永洪 BI 将 SQL 查询发送到 TDengine 时,可以大大提高数据访问速度,减少网络传输带宽。
在永洪 BI 中,你可以创建 “参数” 并在 SQL 中使用,通过手动、定时的方式动态执行这些 SQL即可实现可视化报告的刷新效果。如下 SQL 可以从 TDengine 实时读取数据。
在永洪BI中你可以创建“参数”并在SQL中使用通过手动、定时的方式动态执行这些SQL即可实现可视化报告的刷新效果。如下SQL可以从TDengine实时读取数据。
```sql
select _wstart ws, count(*) cnt from supertable where tbname=?{metric} and ts >= ?{from} and ts < ?{to} interval(?{interval})
```
@ -49,17 +58,15 @@ select _wstart ws, count(*) cnt from supertable where tbname=?{metric} and ts =
1. `_wstart`:表示时间窗口起始时间。
2. `count(*)`:表示时间窗口内的聚合值。
3. `?{interval}`:表示在 SQL 语句中引入名称为 `interval` 的参数,当 BI 工具查询数据时,会给 `interval` 参数赋值,如果取值为 1m则表示按照 1 分钟的时间窗口降采样数据。
4. `?{metric}`:该参数用来指定查询的数据表名称,当在 BI 工具中把某个 “下拉参数组件” 的 ID 也设置为 metric 时,该 “下拉参数组件” 的被选择项将会和该参数绑定在一起,实现动态选择的效果。
5. `?{from}``{to}`:这两个参数用来表示查询数据集的时间范围,可以与 “文本参数组件” 绑定。
您可以在 BI 工具的【编辑参数】对话框中修改 “参数” 的数据类型、数据范围、默认取值,并在 “可视化报告” 中动态设置这些参数的值。
## 制作可视化报告
制作可视化报告的步骤如下。
1. 在永洪 BI 工具中点击【制作报告】,创建画布。
2. 拖动可视化组件到画布中,例如 “表格组件”。
3. 在【数据集】侧边栏中选择待绑定的数据集,将数据列中的 “维度” 和 “度量” 按需绑定到 “表格组件”。
4. 点击【保存】后,即可查看报告。
5. 更多有关永洪 BI 工具的信息,请查询永洪科技官方帮助文档。

View File

@ -1,48 +1,48 @@
---
sidebar_label: Seeq
title: 与 Seeq 集成
toc_max_heading_level: 4
---
Seeq 是制造业和工业互联网IIOT高级分析软件。Seeq 支持在工艺制造组织中使用机器学习创新的新功能。这些功能使组织能够将自己或第三方机器学习算法部署到前线流程工程师和主题专家使用的高级分析应用程序,从而使单个数据科学家的努力扩展到许多前线员工。
通过 `TDengine Java connector` Seeq 可以轻松支持查询 TDengine 提供的时序数据,并提供数据展现、分析、预测等功能。
## 前置条件
准备以下环境:
- TDengine 3.1.0.3 以上版本集群已部署并正常运行(企业及社区版均可)。
- taosAdapter 能够正常运行,详细参考 [taosAdapter 参考手册](../../../reference/components/taosadapter)。
- Seeq 已经安装。从 [Seeq 官网](https://www.seeq.com/customer-download)下载相关软件,例如 `Seeq Server``Seeq Data Lab` 等。`Seeq Data Lab` 需要安装在和 `Seeq Server` 不同的服务器上,并通过配置和 `Seeq Server` 互联。详细安装配置指令参见 [Seeq 知识库]( https://support.seeq.com/kb/latest/cloud/)。
- 安装 JDBC 驱动。从 `maven.org` 下载 `TDengine JDBC` 连接器文件 `taos-jdbcdriver-3.2.5-dist.jar` 及以上版本。
## 配置数据源
### 配置 JDBC 连接器
**第 1 步**查看 data 存储位置
```shell
sudo seeq config get Folders/Data
```
**第 2 步**,从 `maven.org` 下载 `TDengine Java connector` 包,并拷贝至 data 存储位置的 `plugins\lib` 中。
**第 3 步**重新启动 seeq server
```shell
sudo seeq restart
```
**第 4 步**输入 License
使用浏览器访问 ip:34216 并按照说明输入 license。
### 加载 TDengine 时序数据
本章节演示如何使用 Seeq 软件加载 TDengine 时序数据
**第 1 步**,在 TDengine 中创建表。
```sql
CREATE STABLE meters (ts TIMESTAMP, num INT, temperature FLOAT, goods INT) TAGS (device NCHAR(20));
@ -51,20 +51,16 @@ CREATE TABLE goods (ts1 TIMESTAMP, ts2 TIMESTAMP, goods FLOAT);
![Seeq demo schema](./seeq/seeq-demo-schema.webp)
**第 2 步**,在 TDengine 中构造数据。
```shell
python mockdata.py
taos -s "insert into power.goods select _wstart, _wstart + 10d, avg(goods) from power.meters interval(10d);"
```
源代码托管在 [GitHub 仓库](https://github.com/sangshuduo/td-forecasting)。
**第 3 步**,使用 Seeq 管理员角色的帐号登录,并新建数据源。
- Power
@ -244,9 +240,15 @@ taos -s "insert into power.goods select _wstart, _wstart + 10d, avg(goods) from
}
```
## 数据分析
### 场景介绍
示例场景为一个电力系统,用户每天从电站仪表收集用电量数据,并将其存储在 TDengine 集群中。现在用户想要预测电力消耗将会如何发展,并购买更多设备来支持它。用户电力消耗随着每月订单变化而不同,另外考虑到季节变化,电力消耗量会有所不同。这个城市位于北半球,所以在夏天会使用更多的电力。我们模拟数据来反映这些假定。
### 使用 Seeq Workbench
登录 Seeq 服务页面并新建 Seeq Workbench通过选择数据源搜索结果和根据需要选择不同的工具可以进行数据展现或预测详细使用方法参见 [官方知识库](https://support.seeq.com/space/KB/146440193/Seeq+Workbench)。
![Seeq Workbench](./seeq/seeq-demo-workbench.webp)
@ -319,78 +321,10 @@ plt.show()
![Seeq forecast result](./seeq/seeq-forecast-result.webp)
### 方案总结
通过集成 Seeq 和 TDengine可以充分利用 TDengine 高效的存储和查询性能,同时也可以受益于 Seeq 提供给用户的强大数据可视化和分析功能。
这种集成使用户能够充分利用 TDengine 的高性能时序数据存储和检索确保高效处理大量数据。同时Seeq 提供高级分析功能,如数据可视化、异常检测、相关性分析和预测建模,使用户能够获得有价值的洞察并基于数据进行决策。
综合来看Seeq 和 TDengine 共同为制造业、工业物联网和电力系统等各行各业的时序数据分析提供了综合解决方案。高效数据存储和先进的分析相结合,赋予用户充分发挥时序数据潜力的能力,推动运营改进,并支持预测和规划分析应用。

View File

@ -1067,6 +1067,26 @@ charset 的有效值是 UTF-8。
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### streamNotifyMessageSize
- 说明:用于控制事件通知的消息大小 `内部参数`
- 类型:整数
- 单位KB
- 默认值8192
- 最小值8
- 最大值1048576
- 动态修改:不支持
- 支持版本:从 v3.3.6.0 版本开始引入
#### streamNotifyFrameSize
- 说明:用于控制事件通知消息发送时底层的帧大小 `内部参数`
- 类型:整数
- 单位KB
- 默认值256
- 最小值8
- 最大值1048576
- 动态修改:不支持
- 支持版本:从 v3.3.6.0 版本开始引入
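如确需调整上述两个通知相关参数,可在服务端配置文件 taos.cfg 中设置并重启生效,示意片段如下(两者均为内部参数,一般保持默认值即可):

```text
# 事件通知消息大小上限,单位 KB
streamNotifyMessageSize 8192
# 事件通知底层帧大小,单位 KB
streamNotifyFrameSize   256
```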
### 日志相关
#### logDir
@ -1441,7 +1461,7 @@ charset 的有效值是 UTF-8。
- 取值范围float/double/none
- 默认值none表示关闭无损压缩
- 动态修改:不支持
- 支持版本从v3.1.0.0 版本引入v3.3.0.0 以后废弃
- 支持版本:从 v3.1.0.0 版本引入v3.3.0.0 以后废弃
#### ifAdtFse
- 说明:在启用 TSZ 有损压缩时,使用 FSE 算法替换 HUFFMAN 算法FSE 算法压缩速度更快,但解压稍慢,追求压缩速度可选用此算法
@ -1450,22 +1470,22 @@ charset 的有效值是 UTF-8。
- 最小值0
- 最大值1
- 动态修改:支持通过 SQL 修改,重启生效
- 支持版本从v3.1.0.0 版本引入v3.3.0.0 以后废弃
- 支持版本:从 v3.1.0.0 版本引入v3.3.0.0 以后废弃
#### maxRange
- 说明:用于有损压缩设置 `内部参数`
- 动态修改:支持通过 SQL 修改,重启生效
- 支持版本从v3.1.0.0 版本引入v3.3.0.0 以后废弃
- 支持版本:从 v3.1.0.0 版本引入v3.3.0.0 以后废弃
#### curRange
- 说明:用于有损压缩设置 `内部参数`
- 动态修改:支持通过 SQL 修改,重启生效
- 支持版本从v3.1.0.0 版本引入v3.3.0.0 以后废弃
- 支持版本:从 v3.1.0.0 版本引入v3.3.0.0 以后废弃
#### compressor
- 说明:用于有损压缩设置 `内部参数`
- 动态修改:支持通过 SQL 修改,重启生效
- 支持版本从v3.1.0.0 版本引入v3.3.0.0 以后废弃
- 支持版本:从 v3.1.0.0 版本引入v3.3.0.0 以后废弃
**补充说明**
1. 在 3.3.5.0 之后,所有配置参数都将被持久化到本地存储,重启数据库服务后,将默认使用持久化的配置参数列表;如果您希望继续使用 config 文件中配置的参数,需设置 forceReadConfig 为 1。

View File

@ -115,9 +115,9 @@ taosBenchmark -f <json file>
 
- **continue_if_fail** : 允许用户定义失败后行为。
"continue_if_fail":  "no", 失败 taosBenchmark 自动退出,默认行为。
"continue_if_fail": "yes", 失败 taosBenchmark 警告用户,并继续写入。
"continue_if_fail": "smart", 如果子表不存在失败taosBenchmark 会建立子表并继续写入。
#### 数据库相关
@ -125,7 +125,7 @@ taosBenchmark -f <json file>
- **name** : 数据库名。
- **drop** : 数据库已存在时是否删除,可选项为 "yes" 或 "no", 默认为 "yes"
#### 超级表相关
@ -208,7 +208,7 @@ taosBenchmark -f <json file>
- **scalingFactor** : 浮点数精度增强因子,仅当数据类型是 float/double 时生效,有效值范围为 1 至 1000000 的正整数。用于增强生成浮点数的精度,特别是在 min 或 max 值较小的情况下。此属性按 10 的幂次增强小数点后的精度scalingFactor 为 10 表示增强 1 位小数精度100 表示增强 2 位,依此类推。
- **fun** : 此列数据以函数填充,目前只支持 sin 和 cos 两函数,输入参数为时间戳换算成角度值,换算公式: 角度 x = 输入的时间列ts值 % 360。同时支持系数调节随机波动因子调节以固定格式的表达式展现如 fun="10\*sin(x)+100\*random(5)" , x 表示角度,取值 0 ~ 360度增长步长与时间列步长一致。10 表示乘的系数100 表示加或减的系数5 表示波动幅度在 5% 的随机范围内。目前支持的数据类型为 int, bigint, float, double 四种数据类型。注意:表达式为固定模式,不可前后颠倒。
- **values** : nchar/binary 列/标签的值域,将从值中随机选择。
@ -220,15 +220,15 @@ taosBenchmark -f <json file>
- **level**: 字符串类型,指定此列两级压缩中的第二级加密算法的压缩率高低,详细参见创建超级表。
- **gen**: 字符串类型,指定此列生成数据的方式,不指定为随机,若指定为 "order", 会按自然数顺序增长。
- **fillNull**: 字符串类型,指定此列是否随机插入 NULL 值,可指定为 "true" 或 "false", 只有当 generate_row_rule 为 2 时有效。
#### 写入行为相关
- **thread_count** : 插入数据的线程数量,默认为 8。
- **thread_bind_vgroup** : 写入时 vgroup 是否和写入线程绑定,绑定后可提升写入速度, 取值为 "yes" 或 "no",默认值为 "no", 设置为 "no" 后与原来行为一致。 当设为 "yes" 时,如果 thread_count 大于写入数据库 vgroups 数量, thread_count 自动调整为 vgroups 数量;如果 thread_count 小于 vgroups 数量,写入线程数量不做调整,一个线程写完一个 vgroup 数据后再写下一个,同时保持一个 vgroup 同时只能由一个线程写入的规则。
- **create_table_thread_count** : 建表的线程数量,默认为 8。
@ -248,7 +248,7 @@ taosBenchmark -f <json file>
- **prepare_rand** : 生成的随机数据中唯一值的数量。若为 1 则表示所有数据都相同。默认值为 10000 。
- **pre_load_tb_meta** :是否提前加载子表的 meta 数据,取值为 "yes" or "no"。当子表数量非常多时,打开此选项可提高写入速度。
### 查询配置参数
@ -265,7 +265,7 @@ interval 控制休眠时间,避免持续查询慢查询消耗 CPU ,单位为
查询指定表(可以指定超级表、子表或普通表)的配置参数在 `specified_table_query` 中设置。
- **mixed_query** : 查询模式
"yes" :`混合查询`
"no"(默认值) :`普通查询`
`普通查询``sqls` 中每个 sql 启动 `threads` 个线程查询此 sql, 执行完 `query_times` 次查询后退出,执行此 sql 的所有线程都完成后进入下一个 sql
`查询总次数` = `sqls` 个数 * `query_times` * `threads`
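结合上述说明,一个最小化的混合查询配置可示意如下(`host`、`databases` 等取值为示例假设,字段含义以本节说明为准):

```json
{
  "filetype": "query",
  "host": "127.0.0.1",
  "port": 6030,
  "databases": "test",
  "query_times": 10,
  "specified_table_query": {
    "threads": 4,
    "mixed_query": "yes",
    "sqls": [
      { "sql": "select count(*) from test.meters" },
      { "sql": "select last(ts) from test.meters" }
    ]
  }
}
```

上述配置中,2 条 SQL 将被混合分配到 4 个线程中执行。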

View File

@ -56,9 +56,9 @@ join_clause:
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col) [TRUE_FOR(true_for_duration)]
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition [TRUE_FOR(true_for_duration)]
| COUNT_WINDOW(count_val[, sliding_val])
interp_clause:

View File

@ -2049,6 +2049,27 @@ UNIQUE(expr)
**适用于**:表和超级表。
### COLS
```sql
COLS(func(expr), output_expr1, [, output_expr2] ... )
```
**功能说明**:在选择函数 func(expr) 执行结果所在数据行上,执行表达式 output_expr1, [, output_expr2]返回其结果func(expr)结果不输出。
**返回数据类型**:返回多列数据,每列数据类型为对应表达式返回结果的类型。
**适用数据类型**:全部类型字段。
**适用于**:表和超级表。
**使用说明:**
- func 函数类型:必须是单行选择函数(输出结果为一行的选择函数,例如 last 是单行选择函数, 但 top 是多行选择函数)。
- 主要用于一个 sql 中获取多个选择函数结果关联列的场景,例如: select cols(max(c0), ts), cols(max(c1), ts) from ...可用于获取 c0, c1 列最大值的不同 ts 值。
- 注意,函数 func 的结果并没有返回,如需输出 func 结果,可额外增加输出列,如: select first(ts), cols(first(ts), c1) from ...
- 输出只有一列时,可以对 cols 函数设置别名。例如: "select cols(first(ts), c1) as c11 from ..."
- 输出一列或者多列时,可以对 cols 函数的每个输出列设置命名。例如: "select cols(first(ts), c1 as c11, c2 as c22)"
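下面给出一个示意查询(假设超级表 meters 存在且含 current 列),在一条 SQL 中同时获取最大电流和最小电流各自对应的时间戳:

```sql
select cols(max(current), ts as max_current_ts), cols(min(current), ts as min_current_ts) from meters;
```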
## 时序数据特有函数

View File

@ -46,9 +46,9 @@ TDengine 支持按时间窗口切分方式进行聚合结果查询,比如温
```sql
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col) [TRUE_FOR(true_for_duration)]
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [FILL(fill_mod_and_val)]
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition [TRUE_FOR(true_for_duration)]
| COUNT_WINDOW(count_val[, sliding_val])
}
```
@ -165,6 +165,12 @@ TDengine 还支持将 CASE 表达式用在状态量,可以表达某个状态
SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END status FROM meters PARTITION BY tbname STATE_WINDOW(CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END);
```
状态窗口支持使用 TRUE_FOR 参数来设定窗口的最小持续时长。如果某个状态窗口的宽度低于该设定值,则会自动舍弃,不返回任何计算结果。例如,设置最短持续时长为 3s:
```sql
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status) TRUE_FOR (3s);
```
### 会话窗口
会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:102019-04-28 14:22:30]和[2019-04-28 14:23:102019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒超过了连续时间间隔12 秒)。
@ -196,6 +202,12 @@ select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c
![TDengine Database 事件窗口示意图](./event_window.webp)
事件窗口支持使用 TRUE_FOR 参数来设定窗口的最小持续时长。如果某个事件窗口的宽度低于该设定值,则会自动舍弃,不返回任何计算结果。例如,设置最短持续时长为 3s:
```sql
select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c2 < 10 true_for (3s);
```
### 计数窗口
计数窗口按固定的数据行数来划分窗口。默认将数据按时间戳排序,再按照 count_val 的值将数据划分为多个窗口,然后做聚合计算。count_val 表示每个 count window 包含的最大数据行数,总数据行数不能整除 count_val 时,最后一个窗口的行数会小于 count_val。sliding_val 是常量,表示窗口滑动的数量,类似于 interval 的 SLIDING。
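计数窗口的用法可以用如下 SQL 示意(假设表 t 存在),每 4 行划分一个窗口,窗口每次向后滑动 2 行:

```sql
select _wstart, _wend, count(*) from t count_window(4, 2);
```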

View File

@ -77,6 +77,7 @@ description: TDengine 保留关键字的详细列表
| CLIENT_VERSION | |
| CLUSTER | |
| COLON | |
| COLS | |
| COLUMN | |
| COMMA | |
| COMMENT | |
@ -427,6 +428,7 @@ description: TDengine 保留关键字的详细列表
| TRANSACTIONS | |
| TRIGGER | |
| TRIM | |
| TRUE_FOR | |
| TSDB_PAGESIZE | |
| TSERIES | |
| TSMA | |

View File

@ -60,8 +60,8 @@ TDengine 其他功能模块的报错,请参考 [错误码](../../../reference/
| GEOMETRY | byte[] |
**注意**JSON 类型仅在 tag 中支持。
GEOMETRY 类型是 little endian 字节序的二进制数据,符合 WKB 规范。详细信息请参考 [数据类型](../../taos-sql/data-type/#数据类型)。
WKB 规范请参考 [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)。
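下面是一段与连接器无关的示意代码(类名 WkbPoint 为示例自拟,并非连接器 API),演示如何按 WKB 规范以 little endian 字节序编码和解析一个二维 Point,便于理解 GEOMETRY 列中 byte[] 的内容:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// 按 WKB 规范以 little endian 字节序编码/解析二维 Point(示意代码)
public class WkbPoint {
    // 编码 Point:1 字节字节序标志(1 = little endian)
    // + 4 字节几何类型(1 = Point)+ 两个 8 字节 double 坐标
    public static byte[] encodePoint(double x, double y) {
        ByteBuffer buf = ByteBuffer.allocate(21).order(ByteOrder.LITTLE_ENDIAN);
        buf.put((byte) 1);   // 字节序标志:little endian
        buf.putInt(1);       // 几何类型:Point
        buf.putDouble(x);
        buf.putDouble(y);
        return buf.array();
    }

    // 解析 WKB Point,返回 [x, y]
    public static double[] parsePoint(byte[] wkb) {
        ByteBuffer buf = ByteBuffer.wrap(wkb);
        if (buf.get() != 1) {
            throw new IllegalArgumentException("本示例仅处理 little endian 编码");
        }
        buf.order(ByteOrder.LITTLE_ENDIAN);
        if (buf.getInt() != 1) {
            throw new IllegalArgumentException("本示例仅处理 Point 类型");
        }
        return new double[] { buf.getDouble(), buf.getDouble() };
    }

    public static void main(String[] args) {
        double[] p = parsePoint(encodePoint(120.5, -30.25));
        System.out.println(p[0] + "," + p[1]);
    }
}
```

实际读写 GEOMETRY 列时,这样的字节数组即对应连接器中映射的 byte[] 类型。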
## 示例程序汇总

View File

@ -389,20 +389,20 @@ description: TDengine 服务端的错误码列表和详细说明
| 错误码 | 错误描述 | 可能的出错场景或者可能的原因 | 建议用户采取的措施 |
| ---------- | ------------------------------------------------------------------------------------------------------ | --------------------------------------------- | ------------------------------------- |
| 0x80002600 | syntax error near | SQL语法错误 | 检查并修正 SQL 语句 |
| 0x80002601 | Incomplete SQL statement | 不完整的SQL语句 | 检查并修正 SQL 语句 |
| 0x80002602 | Invalid column name | 不合法或不存在的列名 | 检查并修正 SQL 语句 |
| 0x80002603 | Table does not exist | 表不存在 | 检查并确认SQL语句中的表是否存在 |
| 0x80002604 | Column ambiguously defined | 列名(别名)重复定义 | 检查并修正 SQL 语句 |
| 0x80002605 | Invalid value type | 常量值非法 | 检查并修正 SQL 语句 |
| 0x80002608 | There mustn't be aggregation | 聚合函数出现在非法子句中 | 检查并修正 SQL 语句 |
| 0x80002609 | ORDER BY item must be the number of a SELECT-list expression | Order by指定的位置不合法 | 检查并修正 SQL 语句 |
| 0x8000260A | Not a GROUP BY expression | 非法group by语句 | 检查并修正 SQL 语句 |
| 0x8000260B | Not SELECTed expression | 非法表达式 | 检查并修正 SQL 语句 |
| 0x8000260C | Not a single-group group function | 非法使用列与函数 | 检查并修正 SQL 语句 |
| 0x8000260D | Tags number not matched | tag列个数不匹配 | 检查并修正 SQL 语句 |
| 0x8000260E | Invalid tag name | 无效或不存在的tag名 | 检查并修正 SQL 语句 |
| 0x80002610 | Value is too long | 值长度超出限制 | 检查并修正 SQL 语句或API参数 |
| 0x80002611 | Password too short or empty | 密码为空或少于 8 个字符 | 使用合法的密码 |
| 0x80002612 | Port should be an integer that is less than 65535 and greater than 0 | 端口号非法 | 检查并修正端口号 |
| 0x80002613 | Endpoint should be in the format of 'fqdn:port' | 地址格式错误 | 检查并修正地址信息 |
@ -413,72 +413,78 @@ description: TDengine 服务端的错误码列表和详细说明
| 0x80002618 | Corresponding super table not in this db | 超级表不存在 | 检查库中是否存在对应的超级表 |
| 0x80002619 | Invalid database option | 数据库选项值非法 | 检查并修正数据库选项值 |
| 0x8000261A | Invalid table option | 表选项值非法 | 检查并修正数据表选项值 |
| 0x80002624 | GROUP BY and WINDOW-clause can't be used together | Group by和窗口不能同时使用 | 检查并修正 SQL 语句 |
| 0x80002627 | Aggregate functions do not support nesting | 函数不支持嵌套使用 | 检查并修正 SQL 语句 |
| 0x80002628 | Only support STATE_WINDOW on integer/bool/varchar column | 不支持的STATE_WINDOW数据类型 | 检查并修正 SQL 语句 |
| 0x80002629 | Not support STATE_WINDOW on tag column | 不支持TAG列的STATE_WINDOW | 检查并修正 SQL 语句 |
| 0x8000262A | STATE_WINDOW not support for super table query | 不支持超级表的STATE_WINDOW | 检查并修正 SQL 语句 |
| 0x8000262B | SESSION gap should be fixed time window, and greater than 0 | SESSION窗口值非法 | 检查并修正 SQL 语句 |
| 0x8000262C | Only support SESSION on primary timestamp column | SESSION窗口列非法 | 检查并修正 SQL 语句 |
| 0x8000262D | Interval offset cannot be negative | INTERVAL offset值非法 | 检查并修正 SQL 语句 |
| 0x8000262E | Cannot use 'year' as offset when interval is 'month' | INTERVAL offset单位非法 | 检查并修正 SQL 语句 |
| 0x8000262F | Interval offset should be shorter than interval | INTERVAL offset值非法 | 检查并修正 SQL 语句 |
| 0x80002630 | Does not support sliding when interval is natural month/year | sliding单位非法 | 检查并修正 SQL 语句 |
| 0x80002631 | sliding value no larger than the interval value | sliding值非法 | 检查并修正 SQL 语句 |
| 0x80002632 | sliding value can not less than 1%% of interval value | sliding值非法 | 检查并修正 SQL 语句 |
| 0x80002633 | Only one tag if there is a json tag | 只支持单个JSON TAG列 | 检查并修正 SQL 语句 |
| 0x80002634 | Query block has incorrect number of result columns | 列个数不匹配 | 检查并修正 SQL 语句 |
| 0x80002635 | Incorrect TIMESTAMP value | 主键时间戳列值非法 | 检查并修正 SQL 语句 |
| 0x80002637 | soffset/offset can not be less than 0 | soffset/offset值非法 | 检查并修正 SQL 语句 |
| 0x80002638 | slimit/soffset only available for PARTITION/GROUP BY query | slimit/soffset只支持PARTITION BY/GROUP BY语句 | 检查并修正 SQL 语句 |
| 0x80002639 | Invalid topic query | 不支持的TOPIC查询语句 | 检查并修正 SQL 语句 |
| 0x8000263A | Cannot drop super table in batch | 不支持批量删除超级表 | 检查并修正 SQL 语句 |
| 0x8000263B | Start(end) time of query range required or time range too large | 窗口个数超出限制 | 检查并修正 SQL 语句 |
| 0x8000263C | Duplicated column names | 列名称重复 | 检查并修正 SQL 语句 |
| 0x8000263D | Tags length exceeds max length | TAG值长度超出最大支持范围 | 检查并修正 SQL 语句 |
| 0x8000263E | Row length exceeds max length | 行长度检查并修正 SQL 语句 | 检查并修正 SQL 语句 |
| 0x8000263F | Illegal number of columns | 列个数错误 | 检查并修正 SQL 语句 |
| 0x80002640 | Too many columns | 列个数超出上限 | 检查并修正 SQL 语句 |
| 0x80002641 | First column must be timestamp | 第一列必须是主键时间戳列 | 检查并修正 SQL 语句 |
| 0x80002642 | Invalid binary/nchar column/tag length | binary/nchar长度错误 | 检查并修正 SQL 语句 |
| 0x80002643 | Invalid number of tag columns | TAG列个数错误 | 检查并修正 SQL 语句 |
| 0x80002644 | Permission denied | 权限错误 | 检查确认用户是否有相应操作权限 |
| 0x80002645 | Invalid stream query | 非法流语句 | 检查并修正 SQL 语句 |
| 0x80002646 | Invalid _c0 or _rowts expression | _c0或_rowts非法使用 | 检查并修正 SQL 语句 |
| 0x80002647 | Invalid timeline function | 函数依赖的主键时间戳不存在 | 检查并修正 SQL 语句 |
| 0x80002648 | Invalid password | 密码不符合规范 | 检查并修改密码 |
| 0x80002649 | Invalid alter table statement | 修改表语句不合法 | 检查并修正 SQL 语句 |
| 0x8000264A | Primary timestamp column cannot be dropped | 主键时间戳列不允许删除 | 检查并修正 SQL 语句 |
| 0x8000264B | Only binary/nchar column length could be modified, and the length can only be increased, not decreased | 非法列修改 | 检查并修正 SQL 语句 |
| 0x8000264C | Invalid tbname pseudo column | 非法使用tbname列 | 检查并修正 SQL 语句 |
| 0x8000264D | Invalid function name | 非法函数名 | 检查并修正函数名 |
| 0x8000264E | Comment too long | Comment length exceeds the limit | Check and correct the SQL statement |
| 0x8000264F | Function(s) only allowed in SELECT list, cannot mixed with non scalar functions or columns | Illegal mixing of functions | Check and correct the SQL statement |
| 0x80002650 | Window query not supported, since no valid timestamp column included in the result of subquery | The primary timestamp column that the window query depends on does not exist | Check and correct the SQL statement |
| 0x80002651 | No columns can be dropped | Required columns cannot be dropped | Check and correct the SQL statement |
| 0x80002652 | Only tag can be json type | Regular columns do not support the JSON type | Check and correct the SQL statement |
| 0x80002655 | The DELETE statement must have a definite time window range | Illegal WHERE condition in the DELETE statement | Check and correct the SQL statement |
| 0x80002656 | The REDISTRIBUTE VGROUP statement only support 1 to 3 dnodes | Illegal number of DNODEs specified in REDISTRIBUTE VGROUP | Check and correct the SQL statement |
| 0x80002657 | Fill now allowed | The function does not support FILL | Check and correct the SQL statement |
| 0x80002658 | Invalid windows pc | Illegal use of a window pseudo column | Check and correct the SQL statement |
| 0x80002659 | Window not allowed | The function cannot be used in a window | Check and correct the SQL statement |
| 0x8000265A | Stream not allowed | The function cannot be used in stream computing | Check and correct the SQL statement |
| 0x8000265B | Group by not allowd | The function cannot be used in GROUP BY | Check and correct the SQL statement |
| 0x8000265D | Invalid interp clause | Illegal INTERP or related clause | Check and correct the SQL statement |
| 0x8000265E | Not valid function ion window | Illegal window clause | Check and correct the SQL statement |
| 0x8000265F | Only support single table | The function is only supported in single-table queries | Check and correct the SQL statement |
| 0x80002660 | Invalid sma index | Illegal CREATE SMA statement | Check and correct the SQL statement |
| 0x80002661 | Invalid SELECTed expression | Invalid query statement | Check and correct the SQL statement |
| 0x80002662 | Fail to get table info | Failed to get table metadata | Preserve the environment and logs, and report an issue on GitHub |
| 0x80002663 | Not unique table/alias | Table name (alias) conflict | Check and correct the SQL statement |
| 0x80002664 | Join requires valid time series input | JOIN queries on subqueries that do not output the primary timestamp column are not supported | Check and correct the SQL statement |
| 0x80002665 | The _TAGS pseudo column can only be used for subtable and supertable queries | Illegal TAG column query | Check and correct the SQL statement |
| 0x80002666 | | The subquery does not output the primary timestamp column | Check and correct the SQL statement |
| 0x80002667 | Invalid usage of expr: %s | Illegal expression | Check and correct the SQL statement |
| 0x80002687 | True_for duration cannot be negative | The value of true_for cannot be negative | Check and correct the SQL statement |
| 0x80002688 | Cannot use 'year' or 'month' as true_for duration | n (month) and y (year) cannot be used as the time unit of true_for | Check and correct the SQL statement |
| 0x80002689 | Invalid using cols function | Incorrect use of the cols function | Check and correct the SQL statement |
| 0x8000268A | Cols function's first param must be a select function that output a single row | The first parameter of the cols function must be a selection function | Check and correct the SQL statement |
| 0x8000268B | Invalid using cols function with multiple output columns | Incorrect use of the cols function with multiple output columns | Check and correct the SQL statement |
| 0x8000268C | Invalid using alias for cols function | Incorrect renaming of the cols function's output columns | Check and correct the SQL statement |
| 0x800026FF | Parser internal error | Parser internal error | Preserve the environment and logs, and report an issue on GitHub |
| 0x80002700 | Planner internal error | Planner internal error | Preserve the environment and logs, and report an issue on GitHub |
| 0x80002701 | Expect ts equal | JOIN condition validation failed | Preserve the environment and logs, and report an issue on GitHub |
| 0x80002702 | Cross join not support | CROSS JOIN is not supported | Check and correct the SQL statement |
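As a hedged illustration of the two true_for errors above (0x80002687 and 0x80002688) — the table and column names here are hypothetical, and the exact TRUE_FOR syntax should be checked against the SQL reference of your TDengine version:

```sql
-- Hypothetical table: meters(ts TIMESTAMP, voltage INT)
-- Would trigger 0x80002688: 'n' (month) is not a valid true_for unit
SELECT _wstart, COUNT(*) FROM meters
  EVENT_WINDOW START WITH voltage > 220 END WITH voltage <= 220 TRUE_FOR(1n);

-- Corrected: use a fixed-length, non-negative duration such as seconds
SELECT _wstart, COUNT(*) FROM meters
  EVENT_WINDOW START WITH voltage > 220 END WITH voltage <= 220 TRUE_FOR(10s);
```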
## function


@ -61,6 +61,7 @@ typedef enum {
TSDB_GRANT_ACTIVE_ACTIVE,
TSDB_GRANT_DUAL_REPLICA_HA,
TSDB_GRANT_DB_ENCRYPTION,
TSDB_GRANT_TD_GPT,
} EGrantType;
int32_t checkAndGetCryptKey(const char *encryptCode, const char *machineId, char **key);


@ -38,7 +38,6 @@ typedef enum {
STREAM_QUEUE,
ARB_QUEUE,
STREAM_CTRL_QUEUE,
STREAM_LONG_EXEC_QUEUE,
QUEUE_MAX,
} EQueueType;


@ -183,7 +183,7 @@ void qCleanExecTaskBlockBuf(qTaskInfo_t tinfo);
*/
int32_t qAsyncKillTask(qTaskInfo_t tinfo, int32_t rspCode);
int32_t qKillTask(qTaskInfo_t tinfo, int32_t rspCode, int64_t waitDuration);
int32_t qKillTask(qTaskInfo_t tinfo, int32_t rspCode);
bool qTaskIsExecuting(qTaskInfo_t qinfo);


@ -276,12 +276,14 @@ typedef struct tExprNode {
int32_t num;
struct SFunctionNode *pFunctNode;
int32_t functionType;
int32_t bindExprID;
} _function;
struct {
struct SNode *pRootNode;
} _optrRoot;
};
int32_t relatedTo;
} tExprNode;
struct SScalarParam {


@ -155,6 +155,7 @@ typedef enum EFunctionType {
FUNCTION_TYPE_FORECAST_LOW,
FUNCTION_TYPE_FORECAST_HIGH,
FUNCTION_TYPE_FORECAST_ROWTS,
FUNCTION_TYPE_COLS,
FUNCTION_TYPE_IROWTS_ORIGIN,
// internal function
@ -208,6 +209,7 @@ typedef enum EFunctionType {
FUNCTION_TYPE_HYPERLOGLOG_STATE,
FUNCTION_TYPE_HYPERLOGLOG_STATE_MERGE,
// geometry functions
FUNCTION_TYPE_GEOM_FROM_TEXT = 4250,
FUNCTION_TYPE_AS_TEXT,
@ -295,6 +297,7 @@ bool fmisSelectGroupConstValueFunc(int32_t funcId);
bool fmIsElapsedFunc(int32_t funcId);
bool fmIsDBUsageFunc(int32_t funcId);
bool fmIsRowTsOriginFunc(int32_t funcId);
bool fmIsSelectColsFunc(int32_t funcId);
void getLastCacheDataType(SDataType* pType, int32_t pkBytes);
int32_t createFunction(const char* pName, SNodeList* pParameterList, SFunctionNode** pFunc);


@ -327,6 +327,7 @@ typedef struct SWindowLogicNode {
SNode* pStateExpr;
SNode* pStartCond;
SNode* pEndCond;
int64_t trueForLimit;
int8_t triggerType;
int64_t watermark;
int64_t deleteMark;
@ -724,6 +725,7 @@ typedef SSessionWinodwPhysiNode SStreamFinalSessionWinodwPhysiNode;
typedef struct SStateWinodwPhysiNode {
SWindowPhysiNode window;
SNode* pStateKey;
int64_t trueForLimit;
} SStateWinodwPhysiNode;
typedef SStateWinodwPhysiNode SStreamStateWinodwPhysiNode;
@ -732,6 +734,7 @@ typedef struct SEventWinodwPhysiNode {
SWindowPhysiNode window;
SNode* pStartCond;
SNode* pEndCond;
int64_t trueForLimit;
} SEventWinodwPhysiNode;
typedef SEventWinodwPhysiNode SStreamEventWinodwPhysiNode;


@ -16,6 +16,7 @@
#ifndef _TD_QUERY_NODES_H_
#define _TD_QUERY_NODES_H_
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
@ -61,6 +62,8 @@ typedef struct SExprNode {
bool asParam;
bool asPosition;
int32_t projIdx;
int32_t relatedTo;
int32_t bindExprID;
} SExprNode;
typedef enum EColumnType {
@ -322,6 +325,7 @@ typedef struct SStateWindowNode {
ENodeType type; // QUERY_NODE_STATE_WINDOW
SNode* pCol; // timestamp primary key
SNode* pExpr;
SNode* pTrueForLimit;
} SStateWindowNode;
typedef struct SSessionWindowNode {
@ -346,6 +350,7 @@ typedef struct SEventWindowNode {
SNode* pCol; // timestamp primary key
SNode* pStartCond;
SNode* pEndCond;
SNode* pTrueForLimit;
} SEventWindowNode;
typedef struct SCountWindowNode {
@ -428,6 +433,7 @@ typedef struct SSelectStmt {
ENodeType type; // QUERY_NODE_SELECT_STMT
bool isDistinct;
SNodeList* pProjectionList;
SNodeList* pProjectionBindList;
SNode* pFromTable;
SNode* pWhere;
SNodeList* pPartitionByList;
@ -697,6 +703,9 @@ char* getJoinSTypeString(EJoinSubType type);
char* getFullJoinTypeString(EJoinType type, EJoinSubType stype);
int32_t mergeJoinConds(SNode** ppDst, SNode** ppSrc);
void rewriteExprAliasName(SExprNode* pNode, int64_t num);
bool isRelatedToOtherExpr(SExprNode* pExpr);
#ifdef __cplusplus
}
#endif


@ -58,7 +58,6 @@ extern "C" {
#define STREAM_EXEC_T_STOP_ALL_TASKS (-5)
#define STREAM_EXEC_T_RESUME_TASK (-6)
#define STREAM_EXEC_T_ADD_FAILED_TASK (-7)
#define STREAM_EXEC_T_STOP_ONE_TASK (-8)
typedef struct SStreamTask SStreamTask;
typedef struct SStreamQueue SStreamQueue;
@ -769,19 +768,15 @@ void streamMetaCleanup();
int32_t streamMetaOpen(const char* path, void* ahandle, FTaskBuild expandFunc, FTaskExpand expandTaskFn, int32_t vgId,
int64_t stage, startComplete_fn_t fn, SStreamMeta** pMeta);
void streamMetaClose(SStreamMeta* streamMeta);
int32_t streamMetaSaveTaskInMeta(SStreamMeta* pMeta, SStreamTask* pTask); // save to stream meta store
int32_t streamMetaRemoveTaskInMeta(SStreamMeta* pMeta, STaskId* pKey);
int32_t streamMetaSaveTask(SStreamMeta* pMeta, SStreamTask* pTask); // save to stream meta store
int32_t streamMetaRemoveTask(SStreamMeta* pMeta, STaskId* pKey);
int32_t streamMetaRegisterTask(SStreamMeta* pMeta, int64_t ver, SStreamTask* pTask, bool* pAdded);
int32_t streamMetaUnregisterTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId);
int32_t streamMetaGetNumOfTasks(SStreamMeta* pMeta);
int32_t streamMetaAcquireTaskNoLock(SStreamMeta* pMeta, int64_t streamId, int32_t taskId, SStreamTask** pTask);
int32_t streamMetaAcquireTaskUnsafe(SStreamMeta* pMeta, STaskId* pId, SStreamTask** pTask);
int32_t streamMetaAcquireTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId, SStreamTask** pTask);
void streamMetaReleaseTask(SStreamMeta* pMeta, SStreamTask* pTask);
void streamMetaClear(SStreamMeta* pMeta);
void streamMetaInitBackend(SStreamMeta* pMeta);
int32_t streamMetaCommit(SStreamMeta* pMeta);
@ -815,7 +810,6 @@ void streamMetaLoadAllTasks(SStreamMeta* pMeta);
int32_t streamMetaStartAllTasks(SStreamMeta* pMeta);
int32_t streamMetaStopAllTasks(SStreamMeta* pMeta);
int32_t streamMetaStartOneTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId);
int32_t streamMetaStopOneTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId);
bool streamMetaAllTasksReady(const SStreamMeta* pMeta);
int32_t streamTaskSendNegotiateChkptIdMsg(SStreamTask* pTask);
int32_t streamTaskCheckIfReqConsenChkptId(SStreamTask* pTask, int64_t ts);


@ -693,6 +693,7 @@ int32_t taosGetErrSize();
#define TSDB_CODE_GRANT_OBJECT_STROAGE_EXPIRED TAOS_DEF_ERROR_CODE(0, 0x082B)
#define TSDB_CODE_GRANT_DUAL_REPLICA_HA_EXPIRED TAOS_DEF_ERROR_CODE(0, 0x082C)
#define TSDB_CODE_GRANT_DB_ENCRYPTION_EXPIRED TAOS_DEF_ERROR_CODE(0, 0x082D)
#define TSDB_CODE_GRANT_TD_GPT_EXPIRED TAOS_DEF_ERROR_CODE(0, 0x082E)
// sync
// #define TSDB_CODE_SYN_INVALID_CONFIG TAOS_DEF_ERROR_CODE(0, 0x0900) // 2.x
@ -908,6 +909,12 @@ int32_t taosGetErrSize();
#define TSDB_CODE_PAR_INVALID_ANOMALY_WIN_OPT TAOS_DEF_ERROR_CODE(0, 0x2684)
#define TSDB_CODE_PAR_INVALID_FORECAST_CLAUSE TAOS_DEF_ERROR_CODE(0, 0x2685)
#define TSDB_CODE_PAR_INVALID_VGID_LIST TAOS_DEF_ERROR_CODE(0, 0x2686)
#define TSDB_CODE_PAR_TRUE_FOR_NEGATIVE TAOS_DEF_ERROR_CODE(0, 0x2687)
#define TSDB_CODE_PAR_TRUE_FOR_UNIT TAOS_DEF_ERROR_CODE(0, 0x2688)
#define TSDB_CODE_PAR_INVALID_COLS_FUNCTION TAOS_DEF_ERROR_CODE(0, 0x2689)
#define TSDB_CODE_PAR_INVALID_COLS_SELECTFUNC TAOS_DEF_ERROR_CODE(0, 0x268A)
#define TSDB_CODE_INVALID_MULITI_COLS_FUNC TAOS_DEF_ERROR_CODE(0, 0x268B)
#define TSDB_CODE_INVALID_COLS_ALIAS TAOS_DEF_ERROR_CODE(0, 0x268C)
#define TSDB_CODE_PAR_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0x26FF)
//planner


@ -266,6 +266,7 @@ typedef enum ELogicConditionType {
#define TSDB_SUBSCRIBE_KEY_LEN (TSDB_CGROUP_LEN + TSDB_TOPIC_FNAME_LEN + 2)
#define TSDB_PARTITION_KEY_LEN (TSDB_SUBSCRIBE_KEY_LEN + 20)
#define TSDB_COL_NAME_LEN 65
#define TSDB_COL_NAME_EXLEN 8
#define TSDB_COL_FNAME_LEN (TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN + TSDB_NAME_DELIMITER_LEN)
#define TSDB_MAX_SAVED_SQL_LEN TSDB_MAX_COLUMNS * 64
#define TSDB_MAX_SQL_LEN TSDB_PAYLOAD_SIZE


@ -76,7 +76,7 @@ void tQWorkerFreeQueue(SQWorkerPool *pool, STaosQueue *queue);
int32_t tAutoQWorkerInit(SAutoQWorkerPool *pool);
void tAutoQWorkerCleanup(SAutoQWorkerPool *pool);
STaosQueue *tAutoQWorkerAllocQueue(SAutoQWorkerPool *pool, void *ahandle, FItem fp, int32_t minNum);
STaosQueue *tAutoQWorkerAllocQueue(SAutoQWorkerPool *pool, void *ahandle, FItem fp);
void tAutoQWorkerFreeQueue(SAutoQWorkerPool *pool, STaosQueue *queue);
int32_t tWWorkerInit(SWWorkerPool *pool);


@ -1510,7 +1510,7 @@ TEST(clientCase, sub_tb_mt_test) {
(void)taosThreadCreate(&qid[i], NULL, doConsumeData, NULL);
}
for (int32_t i = 0; i < 4; ++i) {
for (int32_t i = 0; i < 1; ++i) {
(void)taosThreadJoin(qid[i], NULL);
}
}


@ -188,7 +188,7 @@ char tsCheckpointBackupDir[PATH_MAX] = "/var/lib/taos/backup/checkpoint/";
// tmq
int32_t tmqMaxTopicNum = 20;
int32_t tmqRowSize = 4096;
int32_t tmqRowSize = 1000;
// query
int32_t tsQueryPolicy = 1;
bool tsQueryTbNotExistAsEmpty = false;


@ -32,7 +32,6 @@ typedef struct SVnodeMgmt {
const char *name;
SQueryAutoQWorkerPool queryPool;
SAutoQWorkerPool streamPool;
SAutoQWorkerPool streamLongExecPool;
SWWorkerPool streamCtrlPool;
SWWorkerPool fetchPool;
SSingleWorker mgmtWorker;
@ -76,7 +75,6 @@ typedef struct {
STaosQueue *pQueryQ;
STaosQueue *pStreamQ;
STaosQueue *pStreamCtrlQ;
STaosQueue *pStreamLongExecQ;
STaosQueue *pFetchQ;
STaosQueue *pMultiMgmQ;
} SVnodeObj;
@ -139,8 +137,6 @@ int32_t vmPutMsgToQueryQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);
int32_t vmPutMsgToFetchQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);
int32_t vmPutMsgToStreamQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);
int32_t vmPutMsgToStreamCtrlQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);
int32_t vmPutMsgToStreamLongExecQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);
int32_t vmPutMsgToMergeQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);
int32_t vmPutMsgToMgmtQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);
int32_t vmPutMsgToMultiMgmtQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg);


@ -1008,29 +1008,27 @@ SArray *vmGetMsgHandles() {
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_RUN, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_DISPATCH, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_DISPATCH_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_CHECK, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_CHECK_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_CHECKPOINT_READY, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_CHECKPOINT_READY_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_RETRIEVE_TRIGGER, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_RETRIEVE_TRIGGER_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_HEARTBEAT_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_REQ_CHKPT_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_REPORT_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_SCAN_HISTORY, vmPutMsgToStreamLongExecQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_GET_STREAM_PROGRESS, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_RETRIEVE, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_RETRIEVE_RSP, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_UPDATE_CHKPT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_CONSEN_CHKPT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_CHECK, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_CHECK_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_PAUSE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_RESUME, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_STOP, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_CHECK_POINT_SOURCE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_CHECKPOINT_READY, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_CHECKPOINT_READY_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_RETRIEVE_TRIGGER, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_RETRIEVE_TRIGGER_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_UPDATE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_RESET, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_HEARTBEAT_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_REQ_CHKPT_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_REPORT_RSP, vmPutMsgToStreamCtrlQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_GET_STREAM_PROGRESS, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_UPDATE_CHKPT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_CONSEN_CHKPT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_ALTER_REPLICA, vmPutMsgToMgmtQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_ALTER_CONFIG, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;


@ -398,14 +398,10 @@ void vmCloseVnode(SVnodeMgmt *pMgmt, SVnodeObj *pVnode, bool commitAndRemoveWal,
dInfo("vgId:%d, wait for vnode stream queue:%p is empty, %d remains", pVnode->vgId,
pVnode->pStreamQ, taosQueueItemSize(pVnode->pStreamQ));
while (!taosQueueEmpty(pVnode->pStreamQ)) taosMsleep(50);
while (!taosQueueEmpty(pVnode->pStreamQ)) taosMsleep(10);
dInfo("vgId:%d, wait for vnode stream ctrl queue:%p is empty", pVnode->vgId, pVnode->pStreamCtrlQ);
while (!taosQueueEmpty(pVnode->pStreamCtrlQ)) taosMsleep(50);
dInfo("vgId:%d, wait for vnode stream long-exec queue:%p is empty, %d remains", pVnode->vgId,
pVnode->pStreamLongExecQ, taosQueueItemSize(pVnode->pStreamLongExecQ));
while (!taosQueueEmpty(pVnode->pStreamLongExecQ)) taosMsleep(50);
while (!taosQueueEmpty(pVnode->pStreamCtrlQ)) taosMsleep(10);
dInfo("vgId:%d, all vnode queues is empty", pVnode->vgId);


@ -150,7 +150,7 @@ static void vmProcessStreamCtrlQueue(SQueueInfo *pInfo, STaosQall* pQall, int32_
SRpcMsg *pMsg = pItem;
const STraceId *trace = &pMsg->info.traceId;
dGTrace("vgId:%d, msg:%p get from vnode-stream-ctrl queue", pVnode->vgId, pMsg);
dGTrace("vgId:%d, msg:%p get from vnode-ctrl-stream queue", pVnode->vgId, pMsg);
code = vnodeProcessStreamCtrlMsg(pVnode->pImpl, pMsg, pInfo);
if (code != 0) {
terrno = code;
@ -165,26 +165,6 @@ static void vmProcessStreamCtrlQueue(SQueueInfo *pInfo, STaosQall* pQall, int32_
}
}
static void vmProcessStreamLongExecQueue(SQueueInfo *pInfo, SRpcMsg *pMsg) {
SVnodeObj *pVnode = pInfo->ahandle;
const STraceId *trace = &pMsg->info.traceId;
int32_t code = 0;
dGTrace("vgId:%d, msg:%p get from vnode-stream long-exec queue", pVnode->vgId, pMsg);
code = vnodeProcessStreamLongExecMsg(pVnode->pImpl, pMsg, pInfo);
if (code != 0) {
terrno = code;
dGError("vgId:%d, msg:%p failed to process stream msg %s since %s", pVnode->vgId, pMsg, TMSG_INFO(pMsg->msgType),
tstrerror(code));
vmSendRsp(pMsg, code);
}
dGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->vgId, pMsg, code);
rpcFreeCont(pMsg->pCont);
taosFreeQitem(pMsg);
}
static void vmProcessFetchQueue(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
SVnodeObj *pVnode = pInfo->ahandle;
SRpcMsg *pMsg = NULL;
@ -294,13 +274,9 @@ static int32_t vmPutMsgToQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg, EQueueType qtyp
code = taosWriteQitem(pVnode->pStreamQ, pMsg);
break;
case STREAM_CTRL_QUEUE:
dGTrace("vgId:%d, msg:%p put into vnode-stream-ctrl queue", pVnode->vgId, pMsg);
dGTrace("vgId:%d, msg:%p put into vnode-ctrl-stream queue", pVnode->vgId, pMsg);
code = taosWriteQitem(pVnode->pStreamCtrlQ, pMsg);
break;
case STREAM_LONG_EXEC_QUEUE:
dGTrace("vgId:%d, msg:%p put into vnode-stream-long-exec queue", pVnode->vgId, pMsg);
code = taosWriteQitem(pVnode->pStreamLongExecQ, pMsg);
break;
case FETCH_QUEUE:
dGTrace("vgId:%d, msg:%p put into vnode-fetch queue", pVnode->vgId, pMsg);
code = taosWriteQitem(pVnode->pFetchQ, pMsg);
@ -359,8 +335,6 @@ int32_t vmPutMsgToStreamQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) { return vmPutMs
int32_t vmPutMsgToStreamCtrlQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) { return vmPutMsgToQueue(pMgmt, pMsg, STREAM_CTRL_QUEUE); }
int32_t vmPutMsgToStreamLongExecQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) { return vmPutMsgToQueue(pMgmt, pMsg, STREAM_LONG_EXEC_QUEUE); }
int32_t vmPutMsgToMultiMgmtQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {
const STraceId *trace = &pMsg->info.traceId;
dGTrace("msg:%p, put into vnode-multi-mgmt queue", pMsg);
@ -435,10 +409,6 @@ int32_t vmGetQueueSize(SVnodeMgmt *pMgmt, int32_t vgId, EQueueType qtype) {
break;
case STREAM_CTRL_QUEUE:
size = taosQueueItemSize(pVnode->pStreamCtrlQ);
break;
case STREAM_LONG_EXEC_QUEUE:
size = taosQueueItemSize(pVnode->pStreamLongExecQ);
break;
default:
break;
}
@ -481,16 +451,13 @@ int32_t vmAllocQueue(SVnodeMgmt *pMgmt, SVnodeObj *pVnode) {
}
pVnode->pQueryQ = tQueryAutoQWorkerAllocQueue(&pMgmt->queryPool, pVnode, (FItem)vmProcessQueryQueue);
pVnode->pStreamQ = tAutoQWorkerAllocQueue(&pMgmt->streamPool, pVnode, (FItem)vmProcessStreamQueue);
pVnode->pFetchQ = tWWorkerAllocQueue(&pMgmt->fetchPool, pVnode, (FItems)vmProcessFetchQueue);
// init stream msg processing queue family
pVnode->pStreamQ = tAutoQWorkerAllocQueue(&pMgmt->streamPool, pVnode, (FItem)vmProcessStreamQueue, 2);
pVnode->pStreamCtrlQ = tWWorkerAllocQueue(&pMgmt->streamCtrlPool, pVnode, (FItems)vmProcessStreamCtrlQueue);
pVnode->pStreamLongExecQ = tAutoQWorkerAllocQueue(&pMgmt->streamLongExecPool, pVnode, (FItem)vmProcessStreamLongExecQueue, 1);
if (pVnode->pWriteW.queue == NULL || pVnode->pSyncW.queue == NULL || pVnode->pSyncRdW.queue == NULL ||
pVnode->pApplyW.queue == NULL || pVnode->pQueryQ == NULL || pVnode->pStreamQ == NULL || pVnode->pFetchQ == NULL
|| pVnode->pStreamCtrlQ == NULL || pVnode->pStreamLongExecQ == NULL) {
|| pVnode->pStreamCtrlQ == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
@ -506,7 +473,6 @@ int32_t vmAllocQueue(SVnodeMgmt *pMgmt, SVnodeObj *pVnode) {
dInfo("vgId:%d, fetch-queue:%p is alloced, thread:%08" PRId64, pVnode->vgId, pVnode->pFetchQ,
taosQueueGetThreadId(pVnode->pFetchQ));
dInfo("vgId:%d, stream-queue:%p is alloced", pVnode->vgId, pVnode->pStreamQ);
dInfo("vgId:%d, stream-long-exec-queue:%p is alloced", pVnode->vgId, pVnode->pStreamLongExecQ);
dInfo("vgId:%d, stream-ctrl-queue:%p is alloced, thread:%08" PRId64, pVnode->vgId, pVnode->pStreamCtrlQ,
taosQueueGetThreadId(pVnode->pStreamCtrlQ));
return 0;
@ -515,22 +481,17 @@ int32_t vmAllocQueue(SVnodeMgmt *pMgmt, SVnodeObj *pVnode) {
void vmFreeQueue(SVnodeMgmt *pMgmt, SVnodeObj *pVnode) {
tQueryAutoQWorkerFreeQueue(&pMgmt->queryPool, pVnode->pQueryQ);
tAutoQWorkerFreeQueue(&pMgmt->streamPool, pVnode->pStreamQ);
tAutoQWorkerFreeQueue(&pMgmt->streamLongExecPool, pVnode->pStreamLongExecQ);
tWWorkerFreeQueue(&pMgmt->streamCtrlPool, pVnode->pStreamCtrlQ);
tWWorkerFreeQueue(&pMgmt->fetchPool, pVnode->pFetchQ);
pVnode->pQueryQ = NULL;
pVnode->pFetchQ = NULL;
pVnode->pStreamQ = NULL;
pVnode->pStreamCtrlQ = NULL;
pVnode->pStreamLongExecQ = NULL;
pVnode->pFetchQ = NULL;
dDebug("vgId:%d, queue is freed", pVnode->vgId);
}
int32_t vmStartWorker(SVnodeMgmt *pMgmt) {
int32_t code = 0;
int32_t code = 0;
SQueryAutoQWorkerPool *pQPool = &pMgmt->queryPool;
pQPool->name = "vnode-query";
pQPool->min = tsNumOfVnodeQueryThreads;
@ -544,13 +505,8 @@ int32_t vmStartWorker(SVnodeMgmt *pMgmt) {
pStreamPool->ratio = tsRatioOfVnodeStreamThreads;
if ((code = tAutoQWorkerInit(pStreamPool)) != 0) return code;
SAutoQWorkerPool *pLongExecPool = &pMgmt->streamLongExecPool;
pLongExecPool->name = "vnode-stream-long-exec";
pLongExecPool->ratio = tsRatioOfVnodeStreamThreads/3;
if ((code = tAutoQWorkerInit(pLongExecPool)) != 0) return code;
SWWorkerPool *pStreamCtrlPool = &pMgmt->streamCtrlPool;
pStreamCtrlPool->name = "vnode-stream-ctrl";
pStreamCtrlPool->name = "vnode-ctrl-stream";
pStreamCtrlPool->max = 1;
if ((code = tWWorkerInit(pStreamCtrlPool)) != 0) return code;
@ -585,7 +541,6 @@ int32_t vmStartWorker(SVnodeMgmt *pMgmt) {
void vmStopWorker(SVnodeMgmt *pMgmt) {
tQueryAutoQWorkerCleanup(&pMgmt->queryPool);
tAutoQWorkerCleanup(&pMgmt->streamPool);
tAutoQWorkerCleanup(&pMgmt->streamLongExecPool);
tWWorkerCleanup(&pMgmt->streamCtrlPool);
tWWorkerCleanup(&pMgmt->fetchPool);
dDebug("vnode workers are closed");


@ -41,7 +41,7 @@ int32_t sendSyncReq(const SEpSet *pEpSet, SRpcMsg *pMsg) {
}
char *i642str(int64_t val) {
static char str[24] = {0};
static threadlocal char str[24] = {0};
(void)snprintf(str, sizeof(str), "%" PRId64, val);
return str;
}


@ -113,7 +113,6 @@ int32_t vnodeProcessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo);
int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo);
int32_t vnodeProcessStreamMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo);
int32_t vnodeProcessStreamCtrlMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo);
int32_t vnodeProcessStreamLongExecMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo);
void vnodeProposeWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs);
void vnodeApplyWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs);
void vnodeProposeCommitOnNeed(SVnode *pVnode, bool atExit);


@ -1302,7 +1302,7 @@ _checkpoint:
}
streamMetaWLock(pMeta);
if ((code = streamMetaSaveTaskInMeta(pMeta, pTask)) != 0) {
if ((code = streamMetaSaveTask(pMeta, pTask)) != 0) {
streamMetaWUnLock(pMeta);
taosHashCancelIterate(pInfoHash, infoHash);
TSDB_CHECK_CODE(code, lino, _exit);


@ -962,6 +962,7 @@ int32_t tqProcessTaskScanHistory(STQ* pTq, SRpcMsg* pMsg) {
int32_t code = TSDB_CODE_SUCCESS;
SStreamTask* pTask = NULL;
SStreamTask* pStreamTask = NULL;
char* pStatus = NULL;
code = streamMetaAcquireTask(pMeta, pReq->streamId, pReq->taskId, &pTask);
if (pTask == NULL) {
@ -972,7 +973,29 @@ int32_t tqProcessTaskScanHistory(STQ* pTq, SRpcMsg* pMsg) {
// do recovery step1
const char* id = pTask->id.idStr;
char* pStatus = streamTaskGetStatus(pTask).name;
streamMutexLock(&pTask->lock);
SStreamTaskState s = streamTaskGetStatus(pTask);
pStatus = s.name;
if ((s.state != TASK_STATUS__SCAN_HISTORY) || (pTask->status.downstreamReady == 0)) {
tqError("s-task:%s vgId:%d status:%s downstreamReady:%d not allowed/ready for scan-history data, quit", id,
pMeta->vgId, s.name, pTask->status.downstreamReady);
streamMutexUnlock(&pTask->lock);
streamMetaReleaseTask(pMeta, pTask);
return 0;
}
if (pTask->exec.pExecutor == NULL) {
tqError("s-task:%s vgId:%d executor is null, not executor scan history", id, pMeta->vgId);
streamMutexUnlock(&pTask->lock);
streamMetaReleaseTask(pMeta, pTask);
return 0;
}
streamMutexUnlock(&pTask->lock);
// avoid multi-thread exec
while (1) {


@ -1098,6 +1098,22 @@ int32_t tqRetrieveTaosxBlock(STqReader* pReader, SMqDataRsp* pRsp, SArray* block
*pSubmitTbDataRet = pSubmitTbData;
}
if (fetchMeta == ONLY_META) {
if (pSubmitTbData->pCreateTbReq != NULL) {
if (pRsp->createTableReq == NULL){
pRsp->createTableReq = taosArrayInit(0, POINTER_BYTES);
if (pRsp->createTableReq == NULL){
return terrno;
}
}
if (taosArrayPush(pRsp->createTableReq, &pSubmitTbData->pCreateTbReq) == NULL){
return terrno;
}
pSubmitTbData->pCreateTbReq = NULL;
}
return 0;
}
int32_t sversion = pSubmitTbData->sver;
int64_t uid = pSubmitTbData->uid;
pReader->lastBlkUid = uid;


@ -339,6 +339,9 @@ static void tqProcessSubData(STQ* pTq, STqHandle* pHandle, SMqDataRsp* pRsp, int
bool tmp = (pSubmitTbData->flags & pRequest->sourceExcluded) != 0;
TSDB_CHECK_CONDITION(!tmp, code, lino, END, TSDB_CODE_SUCCESS);
if (pHandle->fetchMeta == ONLY_META){
goto END;
}
int32_t blockNum = taosArrayGetSize(pBlocks) == 0 ? 1 : taosArrayGetSize(pBlocks);
if (pRsp->withTbName) {
@ -347,7 +350,6 @@ static void tqProcessSubData(STQ* pTq, STqHandle* pHandle, SMqDataRsp* pRsp, int
TSDB_CHECK_CODE(code, lino, END);
}
tmp = (pHandle->fetchMeta == ONLY_META && pSubmitTbData->pCreateTbReq == NULL);
TSDB_CHECK_CONDITION(!tmp, code, lino, END, TSDB_CODE_SUCCESS);
for (int32_t i = 0; i < blockNum; i++) {
if (taosArrayGetSize(pBlocks) == 0){
@ -403,14 +405,16 @@ static void preProcessSubmitMsg(STqHandle* pHandle, const SMqPollReq* pRequest,
}
int64_t uid = pSubmitTbData->uid;
if (taosHashGet(pRequest->uidHash, &uid, LONG_BYTES) != NULL) {
tqDebug("poll rawdata split,uid:%" PRId64 " is already exists", uid);
terrno = TSDB_CODE_TMQ_RAW_DATA_SPLIT;
return;
} else {
int32_t code = taosHashPut(pRequest->uidHash, &uid, LONG_BYTES, &uid, LONG_BYTES);
if (code != 0){
tqError("failed to add table uid to hash, code:%d, uid:%"PRId64, code, uid);
if (pRequest->rawData) {
if (taosHashGet(pRequest->uidHash, &uid, LONG_BYTES) != NULL) {
tqDebug("poll rawdata split,uid:%" PRId64 " is already exists", uid);
terrno = TSDB_CODE_TMQ_RAW_DATA_SPLIT;
return;
} else {
int32_t code = taosHashPut(pRequest->uidHash, &uid, LONG_BYTES, &uid, LONG_BYTES);
if (code != 0) {
tqError("failed to add table uid to hash, code:%d, uid:%" PRId64, code, uid);
}
}
}
@ -453,9 +457,7 @@ int32_t tqTaosxScanLog(STQ* pTq, STqHandle* pHandle, SPackedData submit, SMqData
}
code = tqReaderSetSubmitMsg(pReader, submit.msgStr, submit.msgLen, submit.ver, rawList);
TSDB_CHECK_CODE(code, lino, END);
if (pRequest->rawData) {
preProcessSubmitMsg(pHandle, pRequest, &rawList);
}
preProcessSubmitMsg(pHandle, pRequest, &rawList);
// data could not contains same uid data in rawdata mode
if (pRequest->rawData != 0 && terrno == TSDB_CODE_TMQ_RAW_DATA_SPLIT){
goto END;


@ -226,6 +226,82 @@ static void tDeleteCommon(void* parm) {}
taosxRsp.createTableNum > 0 ? TMQ_MSG_TYPE__POLL_DATA_META_RSP : \
(pRequest->rawData == 1 ? TMQ_MSG_TYPE__POLL_RAW_DATA_RSP : TMQ_MSG_TYPE__POLL_DATA_RSP)
static int32_t buildBatchMeta(SMqBatchMetaRsp *btMetaRsp, int16_t type, int32_t bodyLen, void* body){
int32_t code = 0;
if (!btMetaRsp->batchMetaReq) {
btMetaRsp->batchMetaReq = taosArrayInit(4, POINTER_BYTES);
TQ_NULL_GO_TO_END(btMetaRsp->batchMetaReq);
btMetaRsp->batchMetaLen = taosArrayInit(4, sizeof(int32_t));
TQ_NULL_GO_TO_END(btMetaRsp->batchMetaLen);
}
SMqMetaRsp tmpMetaRsp = {0};
tmpMetaRsp.resMsgType = type;
tmpMetaRsp.metaRspLen = bodyLen;
tmpMetaRsp.metaRsp = body;
uint32_t len = 0;
tEncodeSize(tEncodeMqMetaRsp, &tmpMetaRsp, len, code);
if (TSDB_CODE_SUCCESS != code) {
tqError("tmq extract meta from log, tEncodeMqMetaRsp error");
goto END;
}
int32_t tLen = sizeof(SMqRspHead) + len;
void* tBuf = taosMemoryCalloc(1, tLen);
TQ_NULL_GO_TO_END(tBuf);
void* metaBuff = POINTER_SHIFT(tBuf, sizeof(SMqRspHead));
SEncoder encoder = {0};
tEncoderInit(&encoder, metaBuff, len);
code = tEncodeMqMetaRsp(&encoder, &tmpMetaRsp);
tEncoderClear(&encoder);
if (code < 0) {
tqError("tmq extract meta from log, tEncodeMqMetaRsp error");
goto END;
}
TQ_NULL_GO_TO_END (taosArrayPush(btMetaRsp->batchMetaReq, &tBuf));
TQ_NULL_GO_TO_END (taosArrayPush(btMetaRsp->batchMetaLen, &tLen));
END:
return code;
}
static int32_t buildCreateTbBatchReqBinary(SMqDataRsp *taosxRsp, void** pBuf, int32_t *len){
int32_t code = 0;
SVCreateTbBatchReq pReq = {0};
pReq.nReqs = taosArrayGetSize(taosxRsp->createTableReq);
pReq.pArray = taosArrayInit(1, sizeof(struct SVCreateTbReq));
TQ_NULL_GO_TO_END(pReq.pArray);
for (int i = 0; i < taosArrayGetSize(taosxRsp->createTableReq); i++){
void *createTableReq = taosArrayGetP(taosxRsp->createTableReq, i);
TQ_NULL_GO_TO_END(taosArrayPush(pReq.pArray, createTableReq));
}
tEncodeSize(tEncodeSVCreateTbBatchReq, &pReq, *len, code);
if (code < 0) {
goto END;
}
*len += sizeof(SMsgHead);
*pBuf = taosMemoryMalloc(*len);
TQ_NULL_GO_TO_END(pBuf);
SEncoder coder = {0};
tEncoderInit(&coder, POINTER_SHIFT(*pBuf, sizeof(SMsgHead)), *len);
code = tEncodeSVCreateTbBatchReq(&coder, &pReq);
tEncoderClear(&coder);
END:
taosArrayDestroy(pReq.pArray);
return code;
}
#define SEND_BATCH_META_RSP \
tqOffsetResetToLog(&btMetaRsp.rspOffset, fetchVer);\
code = tqSendBatchMetaPollRsp(pHandle, pMsg, pRequest, &btMetaRsp, vgId);\
goto END;
#define SEND_DATA_RSP \
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);\
code = tqSendDataRsp(pHandle, pMsg, pRequest, &taosxRsp, POLL_RSP_TYPE(pRequest, taosxRsp), vgId);\
goto END;
static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle, const SMqPollReq* pRequest,
SRpcMsg* pMsg, STqOffsetVal* offset) {
int32_t vgId = TD_VID(pTq->pVnode);
@ -272,14 +348,9 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
if (tqFetchLog(pTq, pHandle, &fetchVer, pRequest->reqId) < 0) {
if (totalMetaRows > 0) {
tqOffsetResetToLog(&btMetaRsp.rspOffset, fetchVer);
code = tqSendBatchMetaPollRsp(pHandle, pMsg, pRequest, &btMetaRsp, vgId);
goto END;
SEND_BATCH_META_RSP
}
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);
code = tqSendDataRsp(pHandle, pMsg, pRequest, &taosxRsp,
POLL_RSP_TYPE(pRequest, taosxRsp), vgId);
goto END;
SEND_DATA_RSP
}
SWalCont* pHead = &pHandle->pWalReader->pHead->head;
@ -289,10 +360,7 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
// process meta
if (pHead->msgType != TDMT_VND_SUBMIT) {
if (totalRows > 0) {
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);
code = tqSendDataRsp(pHandle, pMsg, pRequest, &taosxRsp,
POLL_RSP_TYPE(pRequest, taosxRsp), vgId);
goto END;
SEND_DATA_RSP
}
if ((pRequest->sourceExcluded & TD_REQ_FROM_TAOX) != 0) {
@@ -318,53 +386,20 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
code = tqSendMetaPollRsp(pHandle, pMsg, pRequest, &metaRsp, vgId);
goto END;
}
if (!btMetaRsp.batchMetaReq) {
btMetaRsp.batchMetaReq = taosArrayInit(4, POINTER_BYTES);
TQ_NULL_GO_TO_END(btMetaRsp.batchMetaReq);
btMetaRsp.batchMetaLen = taosArrayInit(4, sizeof(int32_t));
TQ_NULL_GO_TO_END(btMetaRsp.batchMetaLen);
}
code = buildBatchMeta(&btMetaRsp, pHead->msgType, pHead->bodyLen, pHead->body);
fetchVer++;
SMqMetaRsp tmpMetaRsp = {0};
tmpMetaRsp.resMsgType = pHead->msgType;
tmpMetaRsp.metaRspLen = pHead->bodyLen;
tmpMetaRsp.metaRsp = pHead->body;
uint32_t len = 0;
tEncodeSize(tEncodeMqMetaRsp, &tmpMetaRsp, len, code);
if (TSDB_CODE_SUCCESS != code) {
tqError("tmq extract meta from log, tEncodeMqMetaRsp error");
continue;
if (code != 0){
goto END;
}
int32_t tLen = sizeof(SMqRspHead) + len;
void* tBuf = taosMemoryCalloc(1, tLen);
TQ_NULL_GO_TO_END(tBuf);
void* metaBuff = POINTER_SHIFT(tBuf, sizeof(SMqRspHead));
SEncoder encoder = {0};
tEncoderInit(&encoder, metaBuff, len);
code = tEncodeMqMetaRsp(&encoder, &tmpMetaRsp);
tEncoderClear(&encoder);
if (code < 0) {
tqError("tmq extract meta from log, tEncodeMqMetaRsp error");
continue;
}
TQ_NULL_GO_TO_END (taosArrayPush(btMetaRsp.batchMetaReq, &tBuf));
TQ_NULL_GO_TO_END (taosArrayPush(btMetaRsp.batchMetaLen, &tLen));
totalMetaRows++;
if ((taosArrayGetSize(btMetaRsp.batchMetaReq) >= tmqRowSize) || (taosGetTimestampMs() - st > pRequest->timeout)) {
tqOffsetResetToLog(&btMetaRsp.rspOffset, fetchVer);
code = tqSendBatchMetaPollRsp(pHandle, pMsg, pRequest, &btMetaRsp, vgId);
goto END;
SEND_BATCH_META_RSP
}
continue;
}
if (totalMetaRows > 0) {
tqOffsetResetToLog(&btMetaRsp.rspOffset, fetchVer);
code = tqSendBatchMetaPollRsp(pHandle, pMsg, pRequest, &btMetaRsp, vgId);
goto END;
if (totalMetaRows > 0 && pHandle->fetchMeta != ONLY_META) {
SEND_BATCH_META_RSP
}
// process data
@@ -376,17 +411,39 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
TQ_ERR_GO_TO_END(tqTaosxScanLog(pTq, pHandle, submit, &taosxRsp, &totalRows, pRequest));
if (pHandle->fetchMeta == ONLY_META && taosArrayGetSize(taosxRsp.createTableReq) > 0){
int32_t len = 0;
void *pBuf = NULL;
code = buildCreateTbBatchReqBinary(&taosxRsp, &pBuf, &len);
if (code == 0){
code = buildBatchMeta(&btMetaRsp, TDMT_VND_CREATE_TABLE, len, pBuf);
}
taosMemoryFree(pBuf);
taosArrayDestroyP(taosxRsp.createTableReq, NULL);
taosxRsp.createTableReq = NULL;
fetchVer++;
if (code != 0){
goto END;
}
totalMetaRows++;
if ((taosArrayGetSize(btMetaRsp.batchMetaReq) >= tmqRowSize) ||
(taosGetTimestampMs() - st > pRequest->timeout) ||
(!pRequest->enableBatchMeta && !pRequest->useSnapshot)) {
SEND_BATCH_META_RSP
}
continue;
}
if ((pRequest->rawData == 0 && totalRows >= pRequest->minPollRows) ||
(taosGetTimestampMs() - st > pRequest->timeout) ||
(pRequest->rawData != 0 && (taosArrayGetSize(taosxRsp.blockData) > pRequest->minPollRows ||
terrno == TSDB_CODE_TMQ_RAW_DATA_SPLIT))) {
// tqDebug("start to send rsp, block num:%d, totalRows:%d, createTableNum:%d, terrno:%d",
// (int)taosArrayGetSize(taosxRsp.blockData), totalRows, taosxRsp.createTableNum, terrno);
tqOffsetResetToLog(&taosxRsp.rspOffset, terrno == TSDB_CODE_TMQ_RAW_DATA_SPLIT ? fetchVer : fetchVer + 1);
code = tqSendDataRsp(pHandle, pMsg, pRequest, &taosxRsp,
POLL_RSP_TYPE(pRequest, taosxRsp), vgId);
if (terrno == TSDB_CODE_TMQ_RAW_DATA_SPLIT){terrno = 0;}
goto END;
if (terrno == TSDB_CODE_TMQ_RAW_DATA_SPLIT){
terrno = 0;
} else{
fetchVer++;
}
SEND_DATA_RSP
} else {
fetchVer++;
}


@@ -268,13 +268,13 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM
// stream do update the nodeEp info, write it into stream meta.
if (updated) {
tqDebug("s-task:%s vgId:%d save task after update epset, and stop task", idstr, vgId);
code = streamMetaSaveTaskInMeta(pMeta, pTask);
code = streamMetaSaveTask(pMeta, pTask);
if (code) {
tqError("s-task:%s vgId:%d failed to save task, code:%s", idstr, vgId, tstrerror(code));
}
if (pHTask != NULL) {
code = streamMetaSaveTaskInMeta(pMeta, pHTask);
code = streamMetaSaveTask(pMeta, pHTask);
if (code) {
tqError("s-task:%s vgId:%d failed to save related history task, code:%s", idstr, vgId, tstrerror(code));
}
@@ -751,8 +751,6 @@ int32_t tqStreamTaskProcessDropReq(SStreamMeta* pMeta, char* msg, int32_t msgLen
}
streamMetaWUnLock(pMeta);
tqDebug("vgId:%d process drop task:0x%x completed", vgId, pReq->taskId);
return 0; // always return success
}
@@ -867,9 +865,6 @@ int32_t tqStreamTaskProcessRunReq(SStreamMeta* pMeta, SRpcMsg* pMsg, bool isLead
} else if (type == STREAM_EXEC_T_ADD_FAILED_TASK) {
code = streamMetaAddFailedTask(pMeta, req.streamId, req.taskId);
return code;
} else if (type == STREAM_EXEC_T_STOP_ONE_TASK) {
code = streamMetaStopOneTask(pMeta, req.streamId, req.taskId);
return code;
} else if (type == STREAM_EXEC_T_RESUME_TASK) { // task resume to run after idle for a while
SStreamTask* pTask = NULL;
code = streamMetaAcquireTask(pMeta, req.streamId, req.taskId, &pTask);


@@ -934,7 +934,9 @@ int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
int32_t vnodeProcessStreamMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
vTrace("vgId:%d, msg:%p in stream queue is processing", pVnode->config.vgId, pMsg);
if (!syncIsReadyForRead(pVnode->sync)) {
if ((pMsg->msgType == TDMT_SCH_FETCH || pMsg->msgType == TDMT_VND_TABLE_META || pMsg->msgType == TDMT_VND_TABLE_CFG ||
pMsg->msgType == TDMT_VND_BATCH_META) &&
!syncIsReadyForRead(pVnode->sync)) {
vnodeRedirectRpcMsg(pVnode, pMsg, terrno);
return 0;
}
@@ -946,6 +948,8 @@ int32_t vnodeProcessStreamMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo)
return tqProcessTaskRetrieveReq(pVnode->pTq, pMsg);
case TDMT_STREAM_RETRIEVE_RSP:
return tqProcessTaskRetrieveRsp(pVnode->pTq, pMsg);
case TDMT_VND_STREAM_SCAN_HISTORY:
return tqProcessTaskScanHistory(pVnode->pTq, pMsg);
case TDMT_VND_GET_STREAM_PROGRESS:
return tqStreamProgressRetrieveReq(pVnode->pTq, pMsg);
default:
@@ -992,22 +996,6 @@ int32_t vnodeProcessStreamCtrlMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pIn
}
}
int32_t vnodeProcessStreamLongExecMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
vTrace("vgId:%d, msg:%p in stream long exec queue is processing", pVnode->config.vgId, pMsg);
if (!syncIsReadyForRead(pVnode->sync)) {
vnodeRedirectRpcMsg(pVnode, pMsg, terrno);
return 0;
}
switch (pMsg->msgType) {
case TDMT_VND_STREAM_SCAN_HISTORY:
return tqProcessTaskScanHistory(pVnode->pTq, pMsg);
default:
vError("unknown msg type:%d in stream long exec queue", pMsg->msgType);
return TSDB_CODE_APP_ERROR;
}
}
void smaHandleRes(void *pVnode, int64_t smaId, const SArray *data) {
int32_t code = tdProcessTSmaInsert(((SVnode *)pVnode)->pSma, smaId, (const char *)data);
if (code) {


@@ -42,6 +42,7 @@ typedef struct SGroupResInfo {
int32_t index; // rows consumed in func:doCopyToSDataBlockXX
int32_t iter; // relate to index-1, last consumed data's slot id in hash table
void* dataPos; // relate to index-1, last consumed data's position, in the nodelist of cur slot
int32_t delIndex; // rows consumed in func:doBuildDeleteDataBlock
SArray* pRows; // SArray<SResKeyPos>
char* pBuf;
bool freeItem;

View File

@@ -686,6 +686,54 @@ typedef struct SResultWindowInfo {
bool isOutput;
} SResultWindowInfo;
typedef struct SSessionAggOperatorInfo {
SOptrBasicInfo binfo;
SAggSupporter aggSup;
SExprSupp scalarSupp; // supporter for perform scalar function
SGroupResInfo groupResInfo;
SWindowRowsSup winSup;
bool reptScan; // next round scan
int64_t gap; // session window gap
int32_t tsSlotId; // primary timestamp slot id
STimeWindowAggSupp twAggSup;
struct SOperatorInfo* pOperator;
bool cleanGroupResInfo;
} SSessionAggOperatorInfo;
typedef struct SStateWindowOperatorInfo {
SOptrBasicInfo binfo;
SAggSupporter aggSup;
SExprSupp scalarSup;
SGroupResInfo groupResInfo;
SWindowRowsSup winSup;
SColumn stateCol; // start row index
bool hasKey;
SStateKeys stateKey;
int32_t tsSlotId; // primary timestamp column slot id
STimeWindowAggSupp twAggSup;
struct SOperatorInfo* pOperator;
bool cleanGroupResInfo;
int64_t trueForLimit;
} SStateWindowOperatorInfo;
typedef struct SEventWindowOperatorInfo {
SOptrBasicInfo binfo;
SAggSupporter aggSup;
SExprSupp scalarSup;
SWindowRowsSup winSup;
int32_t tsSlotId; // primary timestamp column slot id
STimeWindowAggSupp twAggSup;
uint64_t groupId; // current group id, used to identify the data block from different groups
SFilterInfo* pStartCondInfo;
SFilterInfo* pEndCondInfo;
bool inWindow;
SResultRow* pRow;
SSDataBlock* pPreDataBlock;
struct SOperatorInfo* pOperator;
int64_t trueForLimit;
} SEventWindowOperatorInfo;
typedef struct SStreamSessionAggOperatorInfo {
SOptrBasicInfo binfo;
SSteamOpBasicInfo basic;
@@ -746,6 +794,7 @@ typedef struct SStreamStateAggOperatorInfo {
SSHashObj* pPkDeleted;
bool destHasPrimaryKey;
struct SOperatorInfo* pOperator;
int64_t trueForLimit;
} SStreamStateAggOperatorInfo;
typedef struct SStreamEventAggOperatorInfo {
@@ -778,6 +827,7 @@ typedef struct SStreamEventAggOperatorInfo {
struct SOperatorInfo* pOperator;
SNodeList* pStartCondCols;
SNodeList* pEndCondCols;
int64_t trueForLimit;
} SStreamEventAggOperatorInfo;
typedef struct SStreamCountAggOperatorInfo {
@@ -1052,7 +1102,8 @@ int32_t saveSessionOutputBuf(SStreamAggSupporter* pAggSup, SResultWindowInfo* pW
int32_t saveResult(SResultWindowInfo winInfo, SSHashObj* pStUpdated);
int32_t saveDeleteRes(SSHashObj* pStDelete, SSessionKey key);
void removeSessionResult(SStreamAggSupporter* pAggSup, SSHashObj* pHashMap, SSHashObj* pResMap, SSessionKey* pKey);
void doBuildDeleteDataBlock(struct SOperatorInfo* pOp, SSHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite);
void doBuildDeleteDataBlock(struct SOperatorInfo* pOp, SSHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite,
SGroupResInfo* pGroupResInfo);
void doBuildSessionResult(struct SOperatorInfo* pOperator, void* pState, SGroupResInfo* pGroupResInfo,
SSDataBlock* pBlock, SArray* pSessionKeys);
int32_t getSessionWindowInfoByKey(SStreamAggSupporter* pAggSup, SSessionKey* pKey, SResultWindowInfo* pWinInfo);
@@ -1109,6 +1160,7 @@ int32_t getNextQualifiedWindow(SInterval* pInterval, STimeWindow* pNext, SDataBl
int32_t extractQualifiedTupleByFilterResult(SSDataBlock* pBlock, const SColumnInfoData* p, int32_t status);
bool getIgoreNullRes(SExprSupp* pExprSup);
bool checkNullRow(SExprSupp* pExprSup, SSDataBlock* pSrcBlock, int32_t index, bool ignoreNull);
int64_t getMinWindowSize(struct SOperatorInfo* pOperator);
#ifdef __cplusplus
}


@@ -15,6 +15,7 @@
#include "filter.h"
#include "function.h"
#include "nodes.h"
#include "os.h"
#include "querynodes.h"
#include "tfill.h"


@@ -24,22 +24,6 @@
#include "tdatablock.h"
#include "ttime.h"
typedef struct SEventWindowOperatorInfo {
SOptrBasicInfo binfo;
SAggSupporter aggSup;
SExprSupp scalarSup;
SWindowRowsSup winSup;
int32_t tsSlotId; // primary timestamp column slot id
STimeWindowAggSupp twAggSup;
uint64_t groupId; // current group id, used to identify the data block from different groups
SFilterInfo* pStartCondInfo;
SFilterInfo* pEndCondInfo;
bool inWindow;
SResultRow* pRow;
SSDataBlock* pPreDataBlock;
SOperatorInfo* pOperator;
} SEventWindowOperatorInfo;
static int32_t eventWindowAggregateNext(SOperatorInfo* pOperator, SSDataBlock** pRes);
static void destroyEWindowOperatorInfo(void* param);
static int32_t eventWindowAggImpl(SOperatorInfo* pOperator, SEventWindowOperatorInfo* pInfo, SSDataBlock* pBlock);
@@ -114,8 +98,9 @@ int32_t createEventwindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* phy
pInfo->tsSlotId = tsSlotId;
pInfo->pPreDataBlock = NULL;
pInfo->pOperator = pOperator;
pInfo->trueForLimit = pEventWindowNode->trueForLimit;
setOperatorInfo(pOperator, "EventWindowOperator", QUERY_NODE_PHYSICAL_PLAN_MERGE_STATE, true, OP_NOT_OPENED, pInfo,
setOperatorInfo(pOperator, "EventWindowOperator", QUERY_NODE_PHYSICAL_PLAN_MERGE_EVENT, true, OP_NOT_OPENED, pInfo,
pTaskInfo);
pOperator->fpSet = createOperatorFpSet(optrDummyOpenFn, eventWindowAggregateNext, NULL, destroyEWindowOperatorInfo,
optrDefaultBufFn, NULL, optrDefaultGetNextExtFn, NULL);
@@ -297,6 +282,7 @@ int32_t eventWindowAggImpl(SOperatorInfo* pOperator, SEventWindowOperatorInfo* p
TSKEY* tsList = (TSKEY*)pColInfoData->pData;
SWindowRowsSup* pRowSup = &pInfo->winSup;
int32_t rowIndex = 0;
int64_t minWindowSize = getMinWindowSize(pOperator);
pRowSup->numOfRows = 0;
if (pInfo->groupId == 0) {
@@ -341,18 +327,23 @@ int32_t eventWindowAggImpl(SOperatorInfo* pOperator, SEventWindowOperatorInfo* p
QUERY_CHECK_CODE(code, lino, _return);
doUpdateNumOfRows(pSup->pCtx, pInfo->pRow, pSup->numOfExprs, pSup->rowEntryInfoOffset);
// check buffer size
if (pRes->info.rows + pInfo->pRow->numOfRows >= pRes->info.capacity) {
int32_t newSize = pRes->info.rows + pInfo->pRow->numOfRows;
code = blockDataEnsureCapacity(pRes, newSize);
if (pRowSup->win.ekey - pRowSup->win.skey < minWindowSize) {
qDebug("skip small window, groupId: %" PRId64 ", windowSize: %" PRId64 ", minWindowSize: %" PRId64,
pInfo->groupId, pRowSup->win.ekey - pRowSup->win.skey, minWindowSize);
} else {
// check buffer size
if (pRes->info.rows + pInfo->pRow->numOfRows >= pRes->info.capacity) {
int32_t newSize = pRes->info.rows + pInfo->pRow->numOfRows;
code = blockDataEnsureCapacity(pRes, newSize);
QUERY_CHECK_CODE(code, lino, _return);
}
code = copyResultrowToDataBlock(pSup->pExprInfo, pSup->numOfExprs, pInfo->pRow, pSup->pCtx, pRes,
pSup->rowEntryInfoOffset, pTaskInfo);
QUERY_CHECK_CODE(code, lino, _return);
pRes->info.rows += pInfo->pRow->numOfRows;
}
code = copyResultrowToDataBlock(pSup->pExprInfo, pSup->numOfExprs, pInfo->pRow, pSup->pCtx, pRes,
pSup->rowEntryInfoOffset, pTaskInfo);
QUERY_CHECK_CODE(code, lino, _return);
pRes->info.rows += pInfo->pRow->numOfRows;
pInfo->pRow->numOfRows = 0;
pInfo->inWindow = false;


@@ -18,6 +18,8 @@
#include "index.h"
#include "os.h"
#include "query.h"
#include "querynodes.h"
#include "tarray.h"
#include "tdatablock.h"
#include "thash.h"
#include "tmsg.h"
@@ -212,6 +214,7 @@ void cleanupGroupResInfo(SGroupResInfo* pGroupResInfo) {
pGroupResInfo->pRows = NULL;
}
pGroupResInfo->index = 0;
pGroupResInfo->delIndex = 0;
}
int32_t resultrowComparAsc(const void* p1, const void* p2) {
@@ -303,6 +306,7 @@ void initMultiResInfoFromArrayList(SGroupResInfo* pGroupResInfo, SArray* pArrayL
pGroupResInfo->freeItem = true;
pGroupResInfo->pRows = pArrayList;
pGroupResInfo->index = 0;
pGroupResInfo->delIndex = 0;
}
bool hasRemainResults(SGroupResInfo* pGroupResInfo) {
@@ -1956,6 +1960,7 @@ int32_t createExprFromOneNode(SExprInfo* pExp, SNode* pNode, int16_t slotId) {
QUERY_CHECK_CODE(code, lino, _end);
}
}
pExp->pExpr->_function.bindExprID = ((SExprNode*)pNode)->bindExprID;
} else if (type == QUERY_NODE_OPERATOR) {
pExp->pExpr->nodeType = QUERY_NODE_OPERATOR;
SOperatorNode* pOpNode = (SOperatorNode*)pNode;
@@ -1993,7 +1998,7 @@ int32_t createExprFromOneNode(SExprInfo* pExp, SNode* pNode, int16_t slotId) {
code = TSDB_CODE_QRY_EXECUTOR_INTERNAL_ERROR;
QUERY_CHECK_CODE(code, lino, _end);
}
pExp->pExpr->relatedTo = ((SExprNode*)pNode)->relatedTo;
_end:
if (code != TSDB_CODE_SUCCESS) {
qError("%s failed at line %d since %s", __func__, lino, tstrerror(code));
@@ -2074,42 +2079,78 @@ int32_t createExprInfo(SNodeList* pNodeList, SNodeList* pGroupKeys, SExprInfo**
return code;
}
static void deleteSubsidiareCtx(void* pData) {
SSubsidiaryResInfo* pCtx = (SSubsidiaryResInfo*)pData;
if (pCtx->pCtx) {
taosMemoryFreeClear(pCtx->pCtx);
}
}
// set the output buffer for the selectivity + tag query
static int32_t setSelectValueColumnInfo(SqlFunctionCtx* pCtx, int32_t numOfOutput) {
int32_t num = 0;
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SqlFunctionCtx* p = NULL;
SqlFunctionCtx** pValCtx = taosMemoryCalloc(numOfOutput, POINTER_BYTES);
if (pValCtx == NULL) {
return terrno;
SArray* pValCtxArray = NULL;
for (int32_t i = numOfOutput - 1; i > 0; --i) { // select Func is at the end of the list
int32_t funcIdx = pCtx[i].pExpr->pExpr->_function.bindExprID;
if (funcIdx > 0) {
if (pValCtxArray == NULL) {
// the end of the list is the select function of biggest index
pValCtxArray = taosArrayInit_s(sizeof(SSubsidiaryResInfo*), funcIdx);
if (pValCtxArray == NULL) {
return terrno;
}
}
if (funcIdx > pValCtxArray->size) {
qError("funcIdx:%d is out of range", funcIdx);
taosArrayDestroyP(pValCtxArray, deleteSubsidiareCtx);
return TSDB_CODE_QRY_EXECUTOR_INTERNAL_ERROR;
}
SSubsidiaryResInfo* pSubsidiary = &pCtx[i].subsidiaries;
pSubsidiary->pCtx = taosMemoryCalloc(numOfOutput, POINTER_BYTES);
if (pSubsidiary->pCtx == NULL) {
taosArrayDestroyP(pValCtxArray, deleteSubsidiareCtx);
return terrno;
}
pSubsidiary->num = 0;
taosArraySet(pValCtxArray, funcIdx - 1, &pSubsidiary);
}
}
SHashObj* pSelectFuncs = taosHashInit(8, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_ENTRY_LOCK);
QUERY_CHECK_NULL(pSelectFuncs, code, lino, _end, terrno);
SqlFunctionCtx* p = NULL;
SqlFunctionCtx** pValCtx = NULL;
if (pValCtxArray == NULL) {
pValCtx = taosMemoryCalloc(numOfOutput, POINTER_BYTES);
if (pValCtx == NULL) {
QUERY_CHECK_CODE(terrno, lino, _end);
}
}
for (int32_t i = 0; i < numOfOutput; ++i) {
const char* pName = pCtx[i].pExpr->pExpr->_function.functionName;
if ((strcmp(pName, "_select_value") == 0) || (strcmp(pName, "_group_key") == 0) ||
(strcmp(pName, "_group_const_value") == 0)) {
pValCtx[num++] = &pCtx[i];
} else if (fmIsSelectFunc(pCtx[i].functionId)) {
void* data = taosHashGet(pSelectFuncs, pName, strlen(pName));
if (taosHashGetSize(pSelectFuncs) != 0 && data == NULL) {
p = NULL;
break;
if ((strcmp(pName, "_select_value") == 0)) {
if (pValCtxArray == NULL) {
pValCtx[num++] = &pCtx[i];
} else {
int32_t tempRes = taosHashPut(pSelectFuncs, pName, strlen(pName), &num, sizeof(num));
if (tempRes != TSDB_CODE_SUCCESS && tempRes != TSDB_CODE_DUP_KEY) {
code = tempRes;
QUERY_CHECK_CODE(code, lino, _end);
int32_t bindFuncIndex = pCtx[i].pExpr->pExpr->relatedTo; // start from index 1;
if (bindFuncIndex > 0) { // 0 is default index related to the select function
bindFuncIndex -= 1;
}
p = &pCtx[i];
SSubsidiaryResInfo** pSubsidiary = taosArrayGet(pValCtxArray, bindFuncIndex);
if(pSubsidiary == NULL) {
QUERY_CHECK_CODE(TSDB_CODE_QRY_EXECUTOR_INTERNAL_ERROR, lino, _end);
}
(*pSubsidiary)->pCtx[(*pSubsidiary)->num] = &pCtx[i];
(*pSubsidiary)->num++;
}
} else if (fmIsSelectFunc(pCtx[i].functionId)) {
if (pValCtxArray == NULL) {
p = &pCtx[i];
}
}
}
taosHashCleanup(pSelectFuncs);
if (p != NULL) {
p->subsidiaries.pCtx = pValCtx;
@@ -2120,9 +2161,11 @@ static int32_t setSelectValueColumnInfo(SqlFunctionCtx* pCtx, int32_t numOfOutpu
_end:
if (code != TSDB_CODE_SUCCESS) {
taosArrayDestroyP(pValCtxArray, deleteSubsidiareCtx);
taosMemoryFreeClear(pValCtx);
taosHashCleanup(pSelectFuncs);
qError("%s failed at line %d since %s", __func__, lino, tstrerror(code));
} else {
taosArrayDestroy(pValCtxArray);
}
return code;
}


@@ -995,43 +995,26 @@ int32_t qAsyncKillTask(qTaskInfo_t qinfo, int32_t rspCode) {
return TSDB_CODE_SUCCESS;
}
int32_t qKillTask(qTaskInfo_t tinfo, int32_t rspCode, int64_t waitDuration) {
int64_t st = taosGetTimestampMs();
int32_t qKillTask(qTaskInfo_t tinfo, int32_t rspCode) {
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
if (pTaskInfo == NULL) {
return TSDB_CODE_QRY_INVALID_QHANDLE;
}
if (waitDuration > 0) {
qDebug("%s sync killed execTask, and waiting for %.2fs", GET_TASKID(pTaskInfo), waitDuration/1000.0);
} else {
qDebug("%s async killed execTask", GET_TASKID(pTaskInfo));
}
qDebug("%s sync killed execTask", GET_TASKID(pTaskInfo));
setTaskKilled(pTaskInfo, TSDB_CODE_TSC_QUERY_KILLED);
if (waitDuration > 0) {
while (1) {
taosWLockLatch(&pTaskInfo->lock);
if (qTaskIsExecuting(pTaskInfo)) { // let's wait for 100 ms and try again
taosWUnLockLatch(&pTaskInfo->lock);
taosMsleep(200);
int64_t d = taosGetTimestampMs() - st;
if (d >= waitDuration && waitDuration >= 0) {
qWarn("%s waiting more than %.2fs, not wait anymore", GET_TASKID(pTaskInfo), waitDuration / 1000.0);
return TSDB_CODE_SUCCESS;
}
} else { // not running now
pTaskInfo->code = rspCode;
taosWUnLockLatch(&pTaskInfo->lock);
return TSDB_CODE_SUCCESS;
}
while (1) {
taosWLockLatch(&pTaskInfo->lock);
if (qTaskIsExecuting(pTaskInfo)) { // let's wait for 100 ms and try again
taosWUnLockLatch(&pTaskInfo->lock);
taosMsleep(100);
} else { // not running now
pTaskInfo->code = rspCode;
taosWUnLockLatch(&pTaskInfo->lock);
return TSDB_CODE_SUCCESS;
}
}
return TSDB_CODE_SUCCESS;
}
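The simplified `qKillTask` above drops the bounded-wait variant and simply polls the executing state every 100 ms until the task goes idle, then records the response code. The loop shape can be sketched as follows; everything here is a hypothetical stand-in (`fake_still_running` replaces the real lock-protected `qTaskIsExecuting` check, and the sleep mimics `taosMsleep(100)`):

```c
#define _POSIX_C_SOURCE 199309L /* for nanosleep */
#include <stdbool.h>
#include <time.h>

/* Pretend the task keeps running for two polls, then goes idle. */
static int fake_checks_left = 2;
static bool fake_still_running(void) { return fake_checks_left-- > 0; }

/* Poll-and-sleep until idle, then record the response code. */
static int kill_task_sketch(int rsp_code, int *task_code) {
  while (1) {
    if (fake_still_running()) {
      /* still executing: back off briefly and re-check */
      struct timespec ts = {0, 100 * 1000 * 1000}; /* ~100 ms */
      nanosleep(&ts, NULL);
    } else {
      *task_code = rsp_code; /* task is idle; safe to store the code */
      return 0;
    }
  }
}
```

The trade-off versus the removed `waitDuration` version is that this loop has no timeout: it assumes the task's execution always finishes, which the real code guards with its own lock around the state check.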
bool qTaskIsExecuting(qTaskInfo_t qinfo) {


@@ -80,7 +80,8 @@ static void doApplyScalarCalculation(SOperatorInfo* pOperator, SSDataBlock* p
static int32_t doSetInputDataBlock(SExprSupp* pExprSup, SSDataBlock* pBlock, int32_t order, int32_t scanFlag,
bool createDummyCol);
static void doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock, SExprSupp* pSup, SDiskbasedBuf* pBuf,
SGroupResInfo* pGroupResInfo, int32_t threshold, bool ignoreGroup);
SGroupResInfo* pGroupResInfo, int32_t threshold, bool ignoreGroup,
int64_t minWindowSize);
SResultRow* getNewResultRow(SDiskbasedBuf* pResultBuf, int32_t* currentPageId, int32_t interBufSize) {
SFilePage* pData = NULL;
@@ -846,7 +847,7 @@ _end:
}
void doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock, SExprSupp* pSup, SDiskbasedBuf* pBuf,
SGroupResInfo* pGroupResInfo, int32_t threshold, bool ignoreGroup) {
SGroupResInfo* pGroupResInfo, int32_t threshold, bool ignoreGroup, int64_t minWindowSize) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SExprInfo* pExprInfo = pSup->pExprInfo;
@@ -874,6 +875,14 @@ void doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock, SExprSupp
releaseBufPage(pBuf, page);
continue;
}
// skip the window which is less than the windowMinSize
if (pRow->win.ekey - pRow->win.skey < minWindowSize) {
qDebug("skip small window, groupId: %" PRId64 ", windowSize: %" PRId64 ", minWindowSize: %" PRId64, pPos->groupId,
pRow->win.ekey - pRow->win.skey, minWindowSize);
pGroupResInfo->index += 1;
releaseBufPage(pBuf, page);
continue;
}
if (!ignoreGroup) {
if (pBlock->info.id.groupId == 0) {
@@ -937,11 +946,11 @@ void doBuildResultDatablock(SOperatorInfo* pOperator, SOptrBasicInfo* pbInfo, SG
pBlock->info.id.groupId = 0;
if (!pbInfo->mergeResultBlock) {
doCopyToSDataBlock(pTaskInfo, pBlock, &pOperator->exprSupp, pBuf, pGroupResInfo, pOperator->resultInfo.threshold,
false);
false, getMinWindowSize(pOperator));
} else {
while (hasRemainResults(pGroupResInfo)) {
doCopyToSDataBlock(pTaskInfo, pBlock, &pOperator->exprSupp, pBuf, pGroupResInfo, pOperator->resultInfo.threshold,
true);
true, getMinWindowSize(pOperator));
if (pBlock->info.rows >= pOperator->resultInfo.threshold) {
break;
}


@@ -42,7 +42,9 @@ typedef struct SIndefOperatorInfo {
} SIndefOperatorInfo;
static int32_t doGenerateSourceData(SOperatorInfo* pOperator);
static SSDataBlock* doProjectOperation1(SOperatorInfo* pOperator);
static int32_t doProjectOperation(SOperatorInfo* pOperator, SSDataBlock** pResBlock);
static SSDataBlock* doApplyIndefinitFunction1(SOperatorInfo* pOperator);
static int32_t doApplyIndefinitFunction(SOperatorInfo* pOperator, SSDataBlock** pResBlock);
static int32_t setRowTsColumnOutputInfo(SqlFunctionCtx* pCtx, int32_t numOfCols, SArray** pResList);
static int32_t setFunctionResultOutput(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, SAggSupporter* pSup,
@@ -555,6 +557,12 @@ static void doHandleDataBlock(SOperatorInfo* pOperator, SSDataBlock* pBlock, SOp
}
}
SSDataBlock* doApplyIndefinitFunction1(SOperatorInfo* pOperator) {
SSDataBlock* pResBlock = NULL;
pOperator->pTaskInfo->code = doApplyIndefinitFunction(pOperator, &pResBlock);
return pResBlock;
}
int32_t doApplyIndefinitFunction(SOperatorInfo* pOperator, SSDataBlock** pResBlock) {
QRY_PARAM_CHECK(pResBlock);
SIndefOperatorInfo* pIndefInfo = pOperator->info;


@@ -395,7 +395,7 @@ static int32_t buildCountResult(SOperatorInfo* pOperator, SSDataBlock** ppRes) {
STaskNotifyEventStat* pNotifyEventStat = pTaskInfo->streamInfo.pNotifyEventStat;
bool addNotifyEvent = false;
addNotifyEvent = BIT_FLAG_TEST_MASK(pTaskInfo->streamInfo.eventTypes, SNOTIFY_EVENT_WINDOW_CLOSE);
doBuildDeleteDataBlock(pOperator, pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
doBuildDeleteDataBlock(pOperator, pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator, &pInfo->groupResInfo);
if (pInfo->pDelRes->info.rows > 0) {
printDataBlock(pInfo->pDelRes, getStreamOpName(pOperator->operatorType), GET_TASKID(pTaskInfo));
if (addNotifyEvent) {


@@ -616,8 +616,8 @@ static int32_t buildEventResult(SOperatorInfo* pOperator, SSDataBlock** ppRes) {
SStreamNotifyEventSupp* pNotifySup = &pInfo->basic.notifyEventSup;
STaskNotifyEventStat* pNotifyEventStat = pTaskInfo->streamInfo.pNotifyEventStat;
bool addNotifyEvent = false;
addNotifyEvent = BIT_FLAG_TEST_MASK(pTaskInfo->streamInfo.eventTypes, SNOTIFY_EVENT_WINDOW_CLOSE);
doBuildDeleteDataBlock(pOperator, pInfo->pSeDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
addNotifyEvent = BIT_FLAG_TEST_MASK(pTaskInfo->streamInfo.eventTypes, SNOTIFY_EVENT_WINDOW_CLOSE);
doBuildDeleteDataBlock(pOperator, pInfo->pSeDeleted, pInfo->pDelRes, &pInfo->pDelIterator, &pInfo->groupResInfo);
if (pInfo->pDelRes->info.rows > 0) {
printDataBlock(pInfo->pDelRes, getStreamOpName(pOperator->operatorType), GET_TASKID(pTaskInfo));
if (addNotifyEvent) {
@@ -1075,6 +1075,8 @@ int32_t createStreamEventAggOperatorInfo(SOperatorInfo* downstream, SPhysiNode*
code = nodesCollectColumnsFromNode((SNode*)pEventNode->pEndCond, NULL, COLLECT_COL_TYPE_ALL, &pInfo->pEndCondCols);
QUERY_CHECK_CODE(code, lino, _error);
pInfo->trueForLimit = pEventNode->trueForLimit;
*pOptrInfo = pOperator;
return TSDB_CODE_SUCCESS;


@@ -22,13 +22,13 @@
#define NOTIFY_EVENT_NAME_CACHE_LIMIT_MB 16
typedef struct SStreamNotifyEvent {
uint64_t gid;
int64_t eventType;
uint64_t gid;
int64_t eventType;
STimeWindow win;
cJSON* pJson;
cJSON* pJson;
} SStreamNotifyEvent;
#define NOTIFY_EVENT_KEY_SIZE \
#define NOTIFY_EVENT_KEY_SIZE \
((sizeof(((struct SStreamNotifyEvent*)0)->gid) + sizeof(((struct SStreamNotifyEvent*)0)->eventType)) + \
sizeof(((struct SStreamNotifyEvent*)0)->win.skey))
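The `NOTIFY_EVENT_KEY_SIZE` macro above relies on the `sizeof(((struct T*)0)->member)` idiom: `sizeof` is an unevaluated compile-time context, so the null pointer is never actually dereferenced and the member's size is read from the type alone. A self-contained sketch with an illustrative struct (not the real `SStreamNotifyEvent`):

```c
#include <stdint.h>

/* Illustrative struct, not the real SStreamNotifyEvent. */
struct demo_event {
  uint64_t gid;       /* 8 bytes */
  int64_t  eventType; /* 8 bytes */
  int64_t  skey;      /* 8 bytes */
};

/* Sum of selected member sizes, computed entirely at compile time;
 * no demo_event object ever needs to exist. */
#define DEMO_KEY_SIZE                            \
  (sizeof(((struct demo_event *)0)->gid) +       \
   sizeof(((struct demo_event *)0)->eventType) + \
   sizeof(((struct demo_event *)0)->skey))
```

With three 8-byte members the sum is 24; the idiom is useful precisely when no instance of the struct is in scope at the point where the size is needed.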


@@ -485,6 +485,7 @@ void clearGroupResInfo(SGroupResInfo* pGroupResInfo) {
taosArrayDestroy(pGroupResInfo->pRows);
pGroupResInfo->pRows = NULL;
pGroupResInfo->index = 0;
pGroupResInfo->delIndex = 0;
}
void destroyStreamFinalIntervalOperatorInfo(void* param) {
@@ -2887,14 +2888,86 @@ inline int32_t sessionKeyCompareAsc(const void* pKey1, const void* pKey2) {
return 0;
}
void doBuildDeleteDataBlock(SOperatorInfo* pOp, SSHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite) {
static int32_t appendToDeleteDataBlock(SOperatorInfo* pOp, SSDataBlock *pBlock, SSessionKey *pKey) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SStorageAPI* pAPI = &pOp->pTaskInfo->storageAPI;
SExecTaskInfo* pTaskInfo = pOp->pTaskInfo;
QUERY_CHECK_NULL(pBlock, code, lino, _end, TSDB_CODE_INVALID_PARA);
QUERY_CHECK_NULL(pKey, code, lino, _end, TSDB_CODE_INVALID_PARA);
SColumnInfoData* pStartTsCol = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX);
code = colDataSetVal(pStartTsCol, pBlock->info.rows, (const char*)&pKey->win.skey, false);
QUERY_CHECK_CODE(code, lino, _end);
SColumnInfoData* pEndTsCol = taosArrayGet(pBlock->pDataBlock, END_TS_COLUMN_INDEX);
code = colDataSetVal(pEndTsCol, pBlock->info.rows, (const char*)&pKey->win.skey, false);
QUERY_CHECK_CODE(code, lino, _end);
SColumnInfoData* pUidCol = taosArrayGet(pBlock->pDataBlock, UID_COLUMN_INDEX);
colDataSetNULL(pUidCol, pBlock->info.rows);
SColumnInfoData* pGpCol = taosArrayGet(pBlock->pDataBlock, GROUPID_COLUMN_INDEX);
code = colDataSetVal(pGpCol, pBlock->info.rows, (const char*)&pKey->groupId, false);
QUERY_CHECK_CODE(code, lino, _end);
SColumnInfoData* pCalStCol = taosArrayGet(pBlock->pDataBlock, CALCULATE_START_TS_COLUMN_INDEX);
colDataSetNULL(pCalStCol, pBlock->info.rows);
SColumnInfoData* pCalEdCol = taosArrayGet(pBlock->pDataBlock, CALCULATE_END_TS_COLUMN_INDEX);
colDataSetNULL(pCalEdCol, pBlock->info.rows);
SColumnInfoData* pTableCol = taosArrayGet(pBlock->pDataBlock, TABLE_NAME_COLUMN_INDEX);
if (!pTableCol) {
QUERY_CHECK_CODE(code, lino, _end);
}
void* tbname = NULL;
int32_t winCode = TSDB_CODE_SUCCESS;
SStorageAPI* pAPI = &pOp->pTaskInfo->storageAPI;
code =
pAPI->stateStore.streamStateGetParName(pOp->pTaskInfo->streamInfo.pState, pKey->groupId, &tbname, false, &winCode);
QUERY_CHECK_CODE(code, lino, _end);
if (winCode != TSDB_CODE_SUCCESS) {
colDataSetNULL(pTableCol, pBlock->info.rows);
} else {
char parTbName[VARSTR_HEADER_SIZE + TSDB_TABLE_NAME_LEN];
STR_WITH_MAXSIZE_TO_VARSTR(parTbName, tbname, sizeof(parTbName));
code = colDataSetVal(pTableCol, pBlock->info.rows, (const char*)parTbName, false);
QUERY_CHECK_CODE(code, lino, _end);
pAPI->stateStore.streamStateFreeVal(tbname);
}
pBlock->info.rows += 1;
_end:
if (code != TSDB_CODE_SUCCESS) {
qError("%s failed at line %d since %s. task:%s", __func__, lino, tstrerror(code), GET_TASKID(pTaskInfo));
}
return code;
}
void doBuildDeleteDataBlock(SOperatorInfo* pOp, SSHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite,
SGroupResInfo* pGroupResInfo) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SExecTaskInfo* pTaskInfo = pOp->pTaskInfo;
int64_t minWindowSize = getMinWindowSize(pOp);
int32_t numOfRows = getNumOfTotalRes(pGroupResInfo);
blockDataCleanup(pBlock);
int32_t size = tSimpleHashGetSize(pStDeleted);
if (minWindowSize > 0) {
// Add the number of windows that are below the minimum width limit.
for (int32_t i = pGroupResInfo->delIndex; i < numOfRows; ++i) {
SResultWindowInfo* pWinInfo = taosArrayGet(pGroupResInfo->pRows, i);
SRowBuffPos* pPos = pWinInfo->pStatePos;
SSessionKey* pKey = (SSessionKey*)pPos->pKey;
if (pKey->win.ekey - pKey->win.skey < minWindowSize) {
size++;
}
}
}
if (size == 0) {
return;
}
@@ -2907,48 +2980,21 @@ void doBuildDeleteDataBlock(SOperatorInfo* pOp, SSHashObj* pStDeleted, SSDataBlo
break;
}
SSessionKey* res = tSimpleHashGetKey(*Ite, NULL);
SColumnInfoData* pStartTsCol = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX);
code = colDataSetVal(pStartTsCol, pBlock->info.rows, (const char*)&res->win.skey, false);
code = appendToDeleteDataBlock(pOp, pBlock, res);
QUERY_CHECK_CODE(code, lino, _end);
}
SColumnInfoData* pEndTsCol = taosArrayGet(pBlock->pDataBlock, END_TS_COLUMN_INDEX);
code = colDataSetVal(pEndTsCol, pBlock->info.rows, (const char*)&res->win.skey, false);
QUERY_CHECK_CODE(code, lino, _end);
SColumnInfoData* pUidCol = taosArrayGet(pBlock->pDataBlock, UID_COLUMN_INDEX);
colDataSetNULL(pUidCol, pBlock->info.rows);
SColumnInfoData* pGpCol = taosArrayGet(pBlock->pDataBlock, GROUPID_COLUMN_INDEX);
code = colDataSetVal(pGpCol, pBlock->info.rows, (const char*)&res->groupId, false);
QUERY_CHECK_CODE(code, lino, _end);
SColumnInfoData* pCalStCol = taosArrayGet(pBlock->pDataBlock, CALCULATE_START_TS_COLUMN_INDEX);
colDataSetNULL(pCalStCol, pBlock->info.rows);
SColumnInfoData* pCalEdCol = taosArrayGet(pBlock->pDataBlock, CALCULATE_END_TS_COLUMN_INDEX);
colDataSetNULL(pCalEdCol, pBlock->info.rows);
SColumnInfoData* pTableCol = taosArrayGet(pBlock->pDataBlock, TABLE_NAME_COLUMN_INDEX);
if (!pTableCol) {
QUERY_CHECK_CODE(code, lino, _end);
if (minWindowSize > 0) {
for (int32_t i = pGroupResInfo->delIndex; i < numOfRows; ++i) {
SResultWindowInfo* pWinInfo = taosArrayGet(pGroupResInfo->pRows, i);
SRowBuffPos* pPos = pWinInfo->pStatePos;
SSessionKey* pKey = (SSessionKey*)pPos->pKey;
if (pKey->win.ekey - pKey->win.skey < minWindowSize) {
code = appendToDeleteDataBlock(pOp, pBlock, pKey);
QUERY_CHECK_CODE(code, lino, _end);
}
}
void* tbname = NULL;
int32_t winCode = TSDB_CODE_SUCCESS;
code = pAPI->stateStore.streamStateGetParName(pOp->pTaskInfo->streamInfo.pState, res->groupId, &tbname, false,
&winCode);
QUERY_CHECK_CODE(code, lino, _end);
if (winCode != TSDB_CODE_SUCCESS) {
colDataSetNULL(pTableCol, pBlock->info.rows);
} else {
char parTbName[VARSTR_HEADER_SIZE + TSDB_TABLE_NAME_LEN];
STR_WITH_MAXSIZE_TO_VARSTR(parTbName, tbname, sizeof(parTbName));
code = colDataSetVal(pTableCol, pBlock->info.rows, (const char*)parTbName, false);
QUERY_CHECK_CODE(code, lino, _end);
pAPI->stateStore.streamStateFreeVal(tbname);
}
pBlock->info.rows += 1;
pGroupResInfo->delIndex = numOfRows;
}
_end:
@ -3141,6 +3187,7 @@ void initGroupResInfoFromArrayList(SGroupResInfo* pGroupResInfo, SArray* pArrayL
pGroupResInfo->index = 0;
pGroupResInfo->pBuf = NULL;
pGroupResInfo->freeItem = false;
pGroupResInfo->delIndex = 0;
}
int32_t buildSessionResultDataBlock(SOperatorInfo* pOperator, void* pState, SSDataBlock* pBlock, SExprSupp* pSup,
@ -3153,6 +3200,7 @@ int32_t buildSessionResultDataBlock(SOperatorInfo* pOperator, void* pState, SSDa
int32_t numOfExprs = pSup->numOfExprs;
int32_t* rowEntryOffset = pSup->rowEntryInfoOffset;
SqlFunctionCtx* pCtx = pSup->pCtx;
int64_t minWindowSize = getMinWindowSize(pOperator);
int32_t numOfRows = getNumOfTotalRes(pGroupResInfo);
@ -3193,6 +3241,13 @@ int32_t buildSessionResultDataBlock(SOperatorInfo* pOperator, void* pState, SSDa
pGroupResInfo->index += 1;
continue;
}
// skip windows whose size is less than minWindowSize
if (pKey->win.ekey - pKey->win.skey < minWindowSize) {
qDebug("skip small window, groupId: %" PRId64 ", windowSize: %" PRId64 ", minWindowSize: %" PRId64, pKey->groupId,
pKey->win.ekey - pKey->win.skey, minWindowSize);
pGroupResInfo->index += 1;
continue;
}
if (pBlock->info.rows + pRow->numOfRows > pBlock->info.capacity) {
break;
@ -3286,7 +3341,7 @@ static int32_t buildSessionResult(SOperatorInfo* pOperator, SSDataBlock** ppRes)
bool addNotifyEvent = false;
addNotifyEvent = IS_NORMAL_SESSION_OP(pOperator) &&
BIT_FLAG_TEST_MASK(pTaskInfo->streamInfo.eventTypes, SNOTIFY_EVENT_WINDOW_CLOSE);
doBuildDeleteDataBlock(pOperator, pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
doBuildDeleteDataBlock(pOperator, pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator, &pInfo->groupResInfo);
if (pInfo->pDelRes->info.rows > 0) {
printDataBlock(pInfo->pDelRes, getStreamOpName(pOperator->operatorType), GET_TASKID(pTaskInfo));
if (addNotifyEvent) {
@ -4928,7 +4983,7 @@ static int32_t buildStateResult(SOperatorInfo* pOperator, SSDataBlock** ppRes) {
STaskNotifyEventStat* pNotifyEventStat = pTaskInfo->streamInfo.pNotifyEventStat;
bool addNotifyEvent = false;
addNotifyEvent = BIT_FLAG_TEST_MASK(pTaskInfo->streamInfo.eventTypes, SNOTIFY_EVENT_WINDOW_CLOSE);
doBuildDeleteDataBlock(pOperator, pInfo->pSeDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
doBuildDeleteDataBlock(pOperator, pInfo->pSeDeleted, pInfo->pDelRes, &pInfo->pDelIterator, &pInfo->groupResInfo);
if (pInfo->pDelRes->info.rows > 0) {
printDataBlock(pInfo->pDelRes, getStreamOpName(pOperator->operatorType), GET_TASKID(pTaskInfo));
if (addNotifyEvent) {
@ -5363,6 +5418,8 @@ int32_t createStreamStateAggOperatorInfo(SOperatorInfo* downstream, SPhysiNode*
code = appendDownstream(pOperator, &downstream, 1);
QUERY_CHECK_CODE(code, lino, _error);
pInfo->trueForLimit = pStateNode->trueForLimit;
*pOptrInfo = pOperator;
return TSDB_CODE_SUCCESS;

View File

@ -1390,3 +1390,22 @@ void destroyTimeSliceOperatorInfo(void* param) {
}
taosMemoryFreeClear(param);
}
int64_t getMinWindowSize(struct SOperatorInfo* pOperator) {
if (pOperator == NULL) {
return 0;
}
switch (pOperator->operatorType) {
case QUERY_NODE_PHYSICAL_PLAN_MERGE_STATE:
return ((SStateWindowOperatorInfo*)pOperator->info)->trueForLimit;
case QUERY_NODE_PHYSICAL_PLAN_STREAM_STATE:
return ((SStreamStateAggOperatorInfo*)pOperator->info)->trueForLimit;
case QUERY_NODE_PHYSICAL_PLAN_MERGE_EVENT:
return ((SEventWindowOperatorInfo*)pOperator->info)->trueForLimit;
case QUERY_NODE_PHYSICAL_PLAN_STREAM_EVENT:
return ((SStreamEventAggOperatorInfo*)pOperator->info)->trueForLimit;
default:
return 0;
}
}
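The new `getMinWindowSize` helper above feeds the `pKey->win.ekey - pKey->win.skey < minWindowSize` checks: windows shorter than the operator's TRUE_FOR limit are skipped or appended to the delete block. A minimal stand-alone sketch of that predicate (the `TimeWindow` type and threshold here are illustrative stand-ins, not TDengine's actual `SSessionKey` layout):

```c
#include <stdint.h>

/* Illustrative stand-in; not TDengine's actual SSessionKey layout. */
typedef struct { int64_t skey; int64_t ekey; } TimeWindow;

/* A window qualifies only when its span reaches the minimum size;
 * shorter windows are skipped, mirroring the
 * `pKey->win.ekey - pKey->win.skey < minWindowSize` checks above. */
static int windowQualifies(const TimeWindow *w, int64_t minWindowSize) {
  return (w->ekey - w->skey) >= minWindowSize;
}
```

Note that a window whose span exactly equals the limit still qualifies; only strictly smaller windows are filtered, matching the strict `<` comparison in the hunks.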

View File

@ -27,35 +27,6 @@
#include "tlog.h"
#include "ttime.h"
typedef struct SSessionAggOperatorInfo {
SOptrBasicInfo binfo;
SAggSupporter aggSup;
SExprSupp scalarSupp; // supporter for perform scalar function
SGroupResInfo groupResInfo;
SWindowRowsSup winSup;
bool reptScan; // next round scan
int64_t gap; // session window gap
int32_t tsSlotId; // primary timestamp slot id
STimeWindowAggSupp twAggSup;
SOperatorInfo* pOperator;
bool cleanGroupResInfo;
} SSessionAggOperatorInfo;
typedef struct SStateWindowOperatorInfo {
SOptrBasicInfo binfo;
SAggSupporter aggSup;
SExprSupp scalarSup;
SGroupResInfo groupResInfo;
SWindowRowsSup winSup;
SColumn stateCol; // start row index
bool hasKey;
SStateKeys stateKey;
int32_t tsSlotId; // primary timestamp column slot id
STimeWindowAggSupp twAggSup;
SOperatorInfo* pOperator;
bool cleanGroupResInfo;
} SStateWindowOperatorInfo;
typedef enum SResultTsInterpType {
RESULT_ROW_START_INTERP = 1,
RESULT_ROW_END_INTERP = 2,
@ -1743,6 +1714,7 @@ int32_t createStatewindowOperatorInfo(SOperatorInfo* downstream, SStateWinodwPhy
pInfo->tsSlotId = tsSlotId;
pInfo->pOperator = pOperator;
pInfo->cleanGroupResInfo = false;
pInfo->trueForLimit = pStateNode->trueForLimit;
setOperatorInfo(pOperator, "StateWindowOperator", QUERY_NODE_PHYSICAL_PLAN_MERGE_STATE, true, OP_NOT_OPENED, pInfo,
pTaskInfo);
pOperator->fpSet = createOperatorFpSet(openStateWindowAggOptr, doStateWindowAggNext, NULL, destroyStateWindowOperatorInfo,

View File

@ -59,6 +59,7 @@ extern "C" {
#define FUNC_MGT_COUNT_LIKE_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(30) // funcs that should also return 0 when no rows found
#define FUNC_MGT_PROCESS_BY_ROW FUNC_MGT_FUNC_CLASSIFICATION_MASK(31)
#define FUNC_MGT_FORECAST_PC_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(32)
#define FUNC_MGT_SELECT_COLS_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(33)
#define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)

View File

@ -1702,6 +1702,10 @@ static int32_t translateOutVarchar(SFunctionNode* pFunc, char* pErrBuf, int32_t
return TSDB_CODE_SUCCESS;
}
static int32_t invalidColsFunction(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
}
static int32_t translateHistogramImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
FUNC_ERR_RET(validateParam(pFunc, pErrBuf, len));
int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
@ -2737,7 +2741,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.translateFunc = translateOutFirstIn,
.dynDataRequiredFunc = firstDynDataReq,
.getEnvFunc = getFirstLastFuncEnv,
.initFunc = functionSetup,
.initFunc = firstLastFunctionSetup,
.processFunc = firstFunction,
.sprocessFunc = firstLastScalarFunction,
.finalizeFunc = firstLastFinalize,
@ -4232,7 +4236,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "_group_key",
.type = FUNCTION_TYPE_GROUP_KEY,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_SKIP_SCAN_CHECK_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_SKIP_SCAN_CHECK_FUNC,
.translateFunc = translateGroupKey,
.getEnvFunc = getGroupKeyFuncEnv,
.initFunc = functionSetup,
@ -4952,7 +4956,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "_group_const_value",
.type = FUNCTION_TYPE_GROUP_CONST_VALUE,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_KEEP_ORDER_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_KEEP_ORDER_FUNC,
.parameters = {.minParamNum = 0,
.maxParamNum = 0,
.paramInfoPattern = 0,
@ -5647,7 +5651,11 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.paramInfoPattern = 0,
.outputParaInfo = {.validDataType = FUNC_PARAM_SUPPORT_VARCHAR_TYPE}},
.translateFunc = translateOutVarchar,
}
},
{
.name = "cols",
.translateFunc = invalidColsFunction,
},
};
// clang-format on
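The builtins hunk above registers `cols` with only a `translateFunc` that returns `TSDB_CODE_PAR_INVALID_COLS_FUNCTION`, so any direct use that reaches translation fails; the function is only meaningful once the parser has rewritten it. A minimal table-driven sketch of that registry pattern (names and return codes here are illustrative, not the real builtins table):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative registry entry: a name plus a translate callback,
 * mirroring how funcMgtBuiltins maps "cols" to invalidColsFunction. */
typedef int (*translateFn)(void);
static int translateOk(void)      { return 0;  }
static int translateInvalid(void) { return -1; }  /* stand-in for an invalid-function error code */

typedef struct { const char *name; translateFn translate; } FuncDef;

static const FuncDef registry[] = {
  {"first", translateOk},
  {"cols",  translateInvalid},  /* direct use is rejected; the parser rewrites it instead */
};

/* Dispatch by name through the table, as the function manager does by id. */
static int translateByName(const char *name) {
  for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); ++i)
    if (strcmp(registry[i].name, name) == 0) return registry[i].translate();
  return -2;  /* unknown function */
}
```

The design choice is that registering an always-failing callback is cheaper and safer than special-casing `cols` everywhere translation can be reached.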

View File

@ -912,14 +912,13 @@ int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
if (pEntryInfo->numOfRes > 0) {
code = setSelectivityValue(pCtx, pBlock, &pRes->tuplePos, currentRow);
} else {
code = setSelectivityValue(pCtx, pBlock, &pRes->nullTuplePos, currentRow);
code = setNullSelectivityValue(pCtx, pBlock, currentRow);
}
}
return code;
}
#ifdef BUILD_NO_CALL
int32_t setNullSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, int32_t rowIndex) {
if (pCtx->subsidiaries.num <= 0) {
return TSDB_CODE_SUCCESS;
@ -930,12 +929,14 @@ int32_t setNullSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, int32
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
SColumnInfoData* pDstCol = taosArrayGet(pBlock->pDataBlock, dstSlotId);
if (NULL == pDstCol) {
return terrno;
}
colDataSetNULL(pDstCol, rowIndex);
}
return TSDB_CODE_SUCCESS;
}
#endif
int32_t setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuplePos* pTuplePos, int32_t rowIndex) {
if (pCtx->subsidiaries.num <= 0) {
@ -961,20 +962,14 @@ int32_t setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STu
SqlFunctionCtx* pc = pCtx->subsidiaries.pCtx[j];
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
// group_key function has its own process function
// do not process it here
if (fmIsGroupKeyFunc(pc->functionId)) {
continue;
}
SColumnInfoData* pDstCol = taosArrayGet(pBlock->pDataBlock, dstSlotId);
if (NULL == pDstCol) {
return TSDB_CODE_OUT_OF_RANGE;
return terrno;
}
if (nullList[j]) {
colDataSetNULL(pDstCol, rowIndex);
} else {
code = colDataSetVal(pDstCol, rowIndex, pStart, false);
code = colDataSetValOrCover(pDstCol, rowIndex, pStart, false);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
@ -2431,8 +2426,6 @@ int32_t firstLastFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResIn
}
SFirstLastRes* pRes = GET_ROWCELL_INTERBUF(pResInfo);
SInputColumnInfoData* pInput = &pCtx->input;
pRes->nullTupleSaved = false;
pRes->nullTuplePos.pageId = -1;
return TSDB_CODE_SUCCESS;
@ -2464,10 +2457,10 @@ static int32_t firstlastSaveTupleData(const SSDataBlock* pSrcBlock, int32_t rowI
}
if (!pInfo->hasResult) {
code = saveTupleData(pCtx, rowIndex, pSrcBlock, noElements ? &pInfo->nullTuplePos : &pInfo->pos);
} else {
code = saveTupleData(pCtx, rowIndex, pSrcBlock, &pInfo->pos);
} else if (!noElements) {
code = updateTupleData(pCtx, rowIndex, pSrcBlock, &pInfo->pos);
}
} else { } // do nothing
return code;
}
@ -2538,7 +2531,7 @@ int32_t firstFunction(SqlFunctionCtx* pCtx) {
if (pInput->colDataSMAIsSet && (pInput->pColumnDataAgg[0]->numOfNull == pInput->totalRows) &&
pInputCol->hasNull == true) {
// save selectivity value for column consisted of all null values
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, !pInfo->nullTupleSaved);
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, true);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@ -2633,7 +2626,7 @@ int32_t firstFunction(SqlFunctionCtx* pCtx) {
if (numOfElems == 0) {
// save selectivity value for column consisted of all null values
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, !pInfo->nullTupleSaved);
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, true);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@ -2654,11 +2647,11 @@ int32_t lastFunction(SqlFunctionCtx* pCtx) {
int32_t type = pInputCol->info.type;
int32_t bytes = pInputCol->info.bytes;
pInfo->bytes = bytes;
if (IS_NULL_TYPE(type)) {
return TSDB_CODE_SUCCESS;
}
pInfo->bytes = bytes;
SColumnInfoData* pkCol = pInput->pPrimaryKey;
pInfo->pkType = -1;
@ -2673,7 +2666,7 @@ int32_t lastFunction(SqlFunctionCtx* pCtx) {
if (pInput->colDataSMAIsSet && (pInput->pColumnDataAgg[0]->numOfNull == pInput->totalRows) &&
pInputCol->hasNull == true) {
// save selectivity value for column consisted of all null values
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, !pInfo->nullTupleSaved);
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, true);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@ -2768,7 +2761,7 @@ int32_t lastFunction(SqlFunctionCtx* pCtx) {
if (pResInfo->numOfRes == 0 || pInfo->ts < cts) {
char* data = colDataGetData(pInputCol, chosen);
int32_t code = doSaveCurrentVal(pCtx, i, cts, NULL, type, data);
int32_t code = doSaveCurrentVal(pCtx, chosen, cts, NULL, type, data);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@ -2816,7 +2809,7 @@ int32_t lastFunction(SqlFunctionCtx* pCtx) {
// save selectivity value for column consisted of all null values
if (numOfElems == 0) {
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, !pInfo->nullTupleSaved);
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, true);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@ -2865,7 +2858,7 @@ static bool firstLastTransferInfoImpl(SFirstLastRes* pInput, SFirstLastRes* pOut
static int32_t firstLastTransferInfo(SqlFunctionCtx* pCtx, SFirstLastRes* pInput, SFirstLastRes* pOutput, bool isFirst,
int32_t rowIndex) {
if (firstLastTransferInfoImpl(pInput, pOutput, isFirst)) {
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, rowIndex, pCtx, pOutput, pOutput->nullTupleSaved);
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, rowIndex, pCtx, pOutput, false);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
@ -2914,7 +2907,7 @@ static int32_t firstLastFunctionMergeImpl(SqlFunctionCtx* pCtx, bool isFirstQuer
}
if (numOfElems == 0) {
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, !pInfo->nullTupleSaved);
int32_t code = firstlastSaveTupleData(pCtx->pSrcBlock, pInput->startRowIndex, pCtx, pInfo, true);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@ -2944,7 +2937,7 @@ int32_t firstLastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
if (pResInfo->isNullRes) {
colDataSetNULL(pCol, pBlock->info.rows);
return setSelectivityValue(pCtx, pBlock, &pRes->nullTuplePos, pBlock->info.rows);
return setNullSelectivityValue(pCtx, pBlock, pBlock->info.rows);
}
code = colDataSetVal(pCol, pBlock->info.rows, pRes->buf, pRes->isNull || pResInfo->isNullRes);
if (TSDB_CODE_SUCCESS != code) {
@ -2983,7 +2976,7 @@ int32_t firstLastPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
if (pEntryInfo->numOfRes == 0) {
colDataSetNULL(pCol, pBlock->info.rows);
code = setSelectivityValue(pCtx, pBlock, &pRes->nullTuplePos, pBlock->info.rows);
code = setNullSelectivityValue(pCtx, pBlock, pBlock->info.rows);
} else {
code = colDataSetVal(pCol, pBlock->info.rows, res, false);
if (TSDB_CODE_SUCCESS != code) {

View File

@ -173,6 +173,8 @@ bool fmIsScalarFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC
bool fmIsVectorFunc(int32_t funcId) { return !fmIsScalarFunc(funcId) && !fmIsPseudoColumnFunc(funcId); }
bool fmIsSelectColsFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_SELECT_COLS_FUNC); }
bool fmIsSelectFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_SELECT_FUNC); }
bool fmIsTimelineFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_TIMELINE_FUNC); }
@ -441,6 +443,8 @@ int32_t createFunctionWithSrcFunc(const char* pName, const SFunctionNode* pSrcFu
return code;
}
resetOutputChangedFunc(*ppFunc, pSrcFunc);
(*ppFunc)->node.relatedTo = pSrcFunc->node.relatedTo;
(*ppFunc)->node.bindExprID = pSrcFunc->node.bindExprID;
return code;
}

View File

@ -106,6 +106,8 @@ static int32_t exprNodeCopy(const SExprNode* pSrc, SExprNode* pDst) {
COPY_SCALAR_FIELD(asParam);
COPY_SCALAR_FIELD(asPosition);
COPY_SCALAR_FIELD(projIdx);
COPY_SCALAR_FIELD(relatedTo);
COPY_SCALAR_FIELD(bindExprID);
return TSDB_CODE_SUCCESS;
}
@ -354,6 +356,7 @@ static int32_t limitNodeCopy(const SLimitNode* pSrc, SLimitNode* pDst) {
static int32_t stateWindowNodeCopy(const SStateWindowNode* pSrc, SStateWindowNode* pDst) {
CLONE_NODE_FIELD(pCol);
CLONE_NODE_FIELD(pExpr);
CLONE_NODE_FIELD(pTrueForLimit);
return TSDB_CODE_SUCCESS;
}
@ -361,6 +364,7 @@ static int32_t eventWindowNodeCopy(const SEventWindowNode* pSrc, SEventWindowNod
CLONE_NODE_FIELD(pCol);
CLONE_NODE_FIELD(pStartCond);
CLONE_NODE_FIELD(pEndCond);
CLONE_NODE_FIELD(pTrueForLimit);
return TSDB_CODE_SUCCESS;
}
@ -627,6 +631,7 @@ static int32_t logicWindowCopy(const SWindowLogicNode* pSrc, SWindowLogicNode* p
CLONE_NODE_FIELD(pStateExpr);
CLONE_NODE_FIELD(pStartCond);
CLONE_NODE_FIELD(pEndCond);
COPY_SCALAR_FIELD(trueForLimit);
COPY_SCALAR_FIELD(triggerType);
COPY_SCALAR_FIELD(watermark);
COPY_SCALAR_FIELD(deleteMark);

View File

@ -3127,6 +3127,7 @@ static int32_t jsonToPhysiSessionWindowNode(const SJson* pJson, void* pObj) {
}
static const char* jkStateWindowPhysiPlanStateKey = "StateKey";
static const char* jkStateWindowPhysiPlanTrueForLimit = "TrueForLimit";
static int32_t physiStateWindowNodeToJson(const void* pObj, SJson* pJson) {
const SStateWinodwPhysiNode* pNode = (const SStateWinodwPhysiNode*)pObj;
@ -3135,6 +3136,9 @@ static int32_t physiStateWindowNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkStateWindowPhysiPlanStateKey, nodeToJson, pNode->pStateKey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkStateWindowPhysiPlanTrueForLimit, pNode->trueForLimit);
}
return code;
}
@ -3146,12 +3150,16 @@ static int32_t jsonToPhysiStateWindowNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkStateWindowPhysiPlanStateKey, &pNode->pStateKey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkStateWindowPhysiPlanTrueForLimit, &pNode->trueForLimit);
}
return code;
}
static const char* jkEventWindowPhysiPlanStartCond = "StartCond";
static const char* jkEventWindowPhysiPlanEndCond = "EndCond";
static const char* jkEventWindowPhysiPlanTrueForLimit = "TrueForLimit";
static int32_t physiEventWindowNodeToJson(const void* pObj, SJson* pJson) {
const SEventWinodwPhysiNode* pNode = (const SEventWinodwPhysiNode*)pObj;
@ -3163,6 +3171,9 @@ static int32_t physiEventWindowNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkEventWindowPhysiPlanEndCond, nodeToJson, pNode->pEndCond);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkEventWindowPhysiPlanTrueForLimit, pNode->trueForLimit);
}
return code;
}
@ -3177,6 +3188,9 @@ static int32_t jsonToPhysiEventWindowNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkEventWindowPhysiPlanEndCond, &pNode->pEndCond);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkEventWindowPhysiPlanTrueForLimit, &pNode->trueForLimit);
}
return code;
}
@ -4960,6 +4974,7 @@ static int32_t jsonToLimitNode(const SJson* pJson, void* pObj) {
static const char* jkStateWindowCol = "StateWindowCol";
static const char* jkStateWindowExpr = "StateWindowExpr";
static const char* jkStateWindowTrueForLimit = "TrueForLimit";
static int32_t stateWindowNodeToJson(const void* pObj, SJson* pJson) {
const SStateWindowNode* pNode = (const SStateWindowNode*)pObj;
@ -4967,6 +4982,9 @@ static int32_t stateWindowNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkStateWindowExpr, nodeToJson, pNode->pExpr);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkStateWindowTrueForLimit, nodeToJson, pNode->pTrueForLimit);
}
return code;
}
@ -4977,6 +4995,9 @@ static int32_t jsonToStateWindowNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkStateWindowExpr, (SNode**)&pNode->pExpr);
}
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkStateWindowTrueForLimit, (SNode**)&pNode->pTrueForLimit);
}
return code;
}
@ -5006,6 +5027,7 @@ static int32_t jsonToSessionWindowNode(const SJson* pJson, void* pObj) {
static const char* jkEventWindowTsPrimaryKey = "TsPrimaryKey";
static const char* jkEventWindowStartCond = "StartCond";
static const char* jkEventWindowEndCond = "EndCond";
static const char* jkEventWindowTrueForLimit = "TrueForLimit";
static int32_t eventWindowNodeToJson(const void* pObj, SJson* pJson) {
const SEventWindowNode* pNode = (const SEventWindowNode*)pObj;
@ -5017,6 +5039,9 @@ static int32_t eventWindowNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkEventWindowEndCond, nodeToJson, pNode->pEndCond);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkEventWindowTrueForLimit, nodeToJson, pNode->pTrueForLimit);
}
return code;
}
@ -5030,6 +5055,9 @@ static int32_t jsonToEventWindowNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkEventWindowEndCond, &pNode->pEndCond);
}
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkEventWindowTrueForLimit, &pNode->pTrueForLimit);
}
return code;
}

View File

@ -13,6 +13,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "functionMgt.h"
#include "querynodes.h"
#define COMPARE_SCALAR_FIELD(fldname) \
@ -137,6 +138,15 @@ static bool functionNodeEqual(const SFunctionNode* a, const SFunctionNode* b) {
COMPARE_SCALAR_FIELD(funcId);
COMPARE_STRING_FIELD(functionName);
COMPARE_NODE_LIST_FIELD(pParameterList);
if (a->funcType == FUNCTION_TYPE_SELECT_VALUE) {
if ((a->node.relatedTo != b->node.relatedTo)) return false;
} else {
// select cols(cols(first(c0), ts), first(c0) from meters;
if ((a->node.bindExprID != b->node.bindExprID)) {
return false;
}
}
return true;
}

View File

@ -664,11 +664,18 @@ static int32_t msgToDataType(STlvDecoder* pDecoder, void* pObj) {
return code;
}
enum { EXPR_CODE_RES_TYPE = 1 };
enum { EXPR_CODE_RES_TYPE = 1, EXPR_CODE_BIND_TUPLE_FUNC_IDX, EXPR_CODE_TUPLE_FUNC_IDX };
static int32_t exprNodeToMsg(const void* pObj, STlvEncoder* pEncoder) {
const SExprNode* pNode = (const SExprNode*)pObj;
return tlvEncodeObj(pEncoder, EXPR_CODE_RES_TYPE, dataTypeToMsg, &pNode->resType);
int32_t code = tlvEncodeObj(pEncoder, EXPR_CODE_RES_TYPE, dataTypeToMsg, &pNode->resType);
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeI32(pEncoder, EXPR_CODE_BIND_TUPLE_FUNC_IDX, pNode->relatedTo);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeI32(pEncoder, EXPR_CODE_TUPLE_FUNC_IDX, pNode->bindExprID);
}
return code;
}
static int32_t msgToExprNode(STlvDecoder* pDecoder, void* pObj) {
@ -681,6 +688,12 @@ static int32_t msgToExprNode(STlvDecoder* pDecoder, void* pObj) {
case EXPR_CODE_RES_TYPE:
code = tlvDecodeObjFromTlv(pTlv, msgToDataType, &pNode->resType);
break;
case EXPR_CODE_BIND_TUPLE_FUNC_IDX:
code = tlvDecodeI32(pTlv, &pNode->relatedTo);
break;
case EXPR_CODE_TUPLE_FUNC_IDX:
code = tlvDecodeI32(pTlv, &pNode->bindExprID);
break;
default:
break;
}
@ -695,6 +708,12 @@ static int32_t columnNodeInlineToMsg(const void* pObj, STlvEncoder* pEncoder) {
const SColumnNode* pNode = (const SColumnNode*)pObj;
int32_t code = dataTypeInlineToMsg(&pNode->node.resType, pEncoder);
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeValueI32(pEncoder, pNode->node.relatedTo);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeValueI32(pEncoder, pNode->node.bindExprID);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeValueU64(pEncoder, pNode->tableId);
}
@ -745,6 +764,12 @@ static int32_t msgToColumnNodeInline(STlvDecoder* pDecoder, void* pObj) {
SColumnNode* pNode = (SColumnNode*)pObj;
int32_t code = msgToDataTypeInline(pDecoder, &pNode->node.resType);
if (TSDB_CODE_SUCCESS == code) {
code = tlvDecodeValueI32(pDecoder, &pNode->node.relatedTo);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvDecodeValueI32(pDecoder, &pNode->node.bindExprID);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvDecodeValueU64(pDecoder, &pNode->tableId);
}
@ -3443,7 +3468,7 @@ static int32_t msgToPhysiSessionWindowNode(STlvDecoder* pDecoder, void* pObj) {
return code;
}
enum { PHY_STATE_CODE_WINDOW = 1, PHY_STATE_CODE_KEY };
enum { PHY_STATE_CODE_WINDOW = 1, PHY_STATE_CODE_KEY, PHY_STATE_CODE_TRUE_FOR_LIMIT };
static int32_t physiStateWindowNodeToMsg(const void* pObj, STlvEncoder* pEncoder) {
const SStateWinodwPhysiNode* pNode = (const SStateWinodwPhysiNode*)pObj;
@ -3452,6 +3477,9 @@ static int32_t physiStateWindowNodeToMsg(const void* pObj, STlvEncoder* pEncoder
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeObj(pEncoder, PHY_STATE_CODE_KEY, nodeToMsg, pNode->pStateKey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeI64(pEncoder, PHY_STATE_CODE_TRUE_FOR_LIMIT, pNode->trueForLimit);
}
return code;
}
@ -3469,6 +3497,9 @@ static int32_t msgToPhysiStateWindowNode(STlvDecoder* pDecoder, void* pObj) {
case PHY_STATE_CODE_KEY:
code = msgToNodeFromTlv(pTlv, (void**)&pNode->pStateKey);
break;
case PHY_STATE_CODE_TRUE_FOR_LIMIT:
code = tlvDecodeI64(pTlv, &pNode->trueForLimit);
break;
default:
break;
}
@ -3477,7 +3508,7 @@ static int32_t msgToPhysiStateWindowNode(STlvDecoder* pDecoder, void* pObj) {
return code;
}
enum { PHY_EVENT_CODE_WINDOW = 1, PHY_EVENT_CODE_START_COND, PHY_EVENT_CODE_END_COND };
enum { PHY_EVENT_CODE_WINDOW = 1, PHY_EVENT_CODE_START_COND, PHY_EVENT_CODE_END_COND, PHY_EVENT_CODE_TRUE_FOR_LIMIT };
static int32_t physiEventWindowNodeToMsg(const void* pObj, STlvEncoder* pEncoder) {
const SEventWinodwPhysiNode* pNode = (const SEventWinodwPhysiNode*)pObj;
@ -3489,6 +3520,9 @@ static int32_t physiEventWindowNodeToMsg(const void* pObj, STlvEncoder* pEncoder
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeObj(pEncoder, PHY_EVENT_CODE_END_COND, nodeToMsg, pNode->pEndCond);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeI64(pEncoder, PHY_EVENT_CODE_TRUE_FOR_LIMIT, pNode->trueForLimit);
}
return code;
}
@ -3509,6 +3543,9 @@ static int32_t msgToPhysiEventWindowNode(STlvDecoder* pDecoder, void* pObj) {
case PHY_EVENT_CODE_END_COND:
code = msgToNodeFromTlv(pTlv, (void**)&pNode->pEndCond);
break;
case PHY_EVENT_CODE_TRUE_FOR_LIMIT:
code = tlvDecodeI64(pTlv, &pNode->trueForLimit);
break;
default:
break;
}
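The serialization hunks above extend a tag–length–value (TLV) scheme: each new field (`relatedTo`, `bindExprID`, `trueForLimit`) is appended under its own tag, and decoders fall through the `default:` branch for tags they do not recognize, which keeps old messages readable. A minimal sketch of that forward-compatible pattern (the buffer layout and helper names here are illustrative, not the actual `STlvEncoder`/`STlvDecoder` API):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative TLV record: 1-byte tag, 1-byte length, then the value. */
static size_t tlvPut(uint8_t *buf, uint8_t tag, const void *val, uint8_t len) {
  buf[0] = tag;
  buf[1] = len;
  memcpy(buf + 2, val, len);
  return 2u + len;
}

/* The decoder walks the buffer and skips tags it does not know,
 * which is why appending a new tag stays backward compatible. */
static int tlvFindI32(const uint8_t *buf, size_t n, uint8_t tag, int32_t *out) {
  size_t pos = 0;
  while (pos + 2 <= n) {
    uint8_t t = buf[pos], len = buf[pos + 1];
    if (t == tag && len == sizeof(int32_t)) {
      memcpy(out, buf + pos + 2, sizeof(int32_t));
      return 1;
    }
    pos += 2u + len;  /* unknown tag: skip over its value */
  }
  return 0;
}
```

Usage mirrors the hunks: encode `relatedTo` and `bindExprID` under distinct tags, then a decoder that only understands one of them still finds its field and ignores the rest.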

View File

@ -102,6 +102,9 @@ static EDealRes dispatchExpr(SNode* pNode, ETraversalOrder order, FNodeWalker wa
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = walkExpr(pState->pCol, order, walker, pContext);
}
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = walkExpr(pState->pTrueForLimit, order, walker, pContext);
}
break;
}
case QUERY_NODE_SESSION_WINDOW: {
@ -174,6 +177,9 @@ static EDealRes dispatchExpr(SNode* pNode, ETraversalOrder order, FNodeWalker wa
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = walkExpr(pEvent->pEndCond, order, walker, pContext);
}
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = walkExpr(pEvent->pTrueForLimit, order, walker, pContext);
}
break;
}
case QUERY_NODE_COUNT_WINDOW: {
@ -313,6 +319,9 @@ static EDealRes rewriteExpr(SNode** pRawNode, ETraversalOrder order, FNodeRewrit
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = rewriteExpr(&pState->pCol, order, rewriter, pContext);
}
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = rewriteExpr(&pState->pTrueForLimit, order, rewriter, pContext);
}
break;
}
case QUERY_NODE_SESSION_WINDOW: {
@ -385,6 +394,9 @@ static EDealRes rewriteExpr(SNode** pRawNode, ETraversalOrder order, FNodeRewrit
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = rewriteExpr(&pEvent->pEndCond, order, rewriter, pContext);
}
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = rewriteExpr(&pEvent->pTrueForLimit, order, rewriter, pContext);
}
break;
}
case QUERY_NODE_WINDOW_OFFSET: {
@ -475,6 +487,7 @@ void nodesWalkSelectStmtImpl(SSelectStmt* pSelect, ESqlClause clause, FNodeWalke
nodesWalkExprs(pSelect->pOrderByList, walker, pContext);
case SQL_CLAUSE_ORDER_BY:
nodesWalkExprs(pSelect->pProjectionList, walker, pContext);
nodesWalkExprs(pSelect->pProjectionBindList, walker, pContext);
default:
break;
}
@ -515,6 +528,7 @@ void nodesRewriteSelectStmt(SSelectStmt* pSelect, ESqlClause clause, FNodeRewrit
nodesRewriteExprs(pSelect->pOrderByList, rewriter, pContext);
case SQL_CLAUSE_ORDER_BY:
nodesRewriteExprs(pSelect->pProjectionList, rewriter, pContext);
nodesRewriteExprs(pSelect->pProjectionBindList, rewriter, pContext);
default:
break;
}

View File

@ -15,6 +15,7 @@
#include "cmdnodes.h"
#include "functionMgt.h"
#include "nodes.h"
#include "nodesUtil.h"
#include "plannodes.h"
#include "querynodes.h"
@ -1125,6 +1126,7 @@ void nodesDestroyNode(SNode* pNode) {
SStateWindowNode* pState = (SStateWindowNode*)pNode;
nodesDestroyNode(pState->pCol);
nodesDestroyNode(pState->pExpr);
nodesDestroyNode(pState->pTrueForLimit);
break;
}
case QUERY_NODE_SESSION_WINDOW: {
@ -1239,6 +1241,7 @@ void nodesDestroyNode(SNode* pNode) {
nodesDestroyNode(pEvent->pCol);
nodesDestroyNode(pEvent->pStartCond);
nodesDestroyNode(pEvent->pEndCond);
nodesDestroyNode(pEvent->pTrueForLimit);
break;
}
case QUERY_NODE_COUNT_WINDOW: {
@ -1292,6 +1295,7 @@ void nodesDestroyNode(SNode* pNode) {
case QUERY_NODE_SELECT_STMT: {
SSelectStmt* pStmt = (SSelectStmt*)pNode;
nodesDestroyList(pStmt->pProjectionList);
nodesDestroyList(pStmt->pProjectionBindList);
nodesDestroyNode(pStmt->pFromTable);
nodesDestroyNode(pStmt->pWhere);
nodesDestroyList(pStmt->pPartitionByList);
@ -3261,3 +3265,12 @@ int32_t nodesListDeduplicate(SNodeList** ppList) {
}
return code;
}
void rewriteExprAliasName(SExprNode* pNode, int64_t num) {
(void)tsnprintf(pNode->aliasName, TSDB_COL_NAME_LEN, "expr_%" PRIx64, num);
return;
}
bool isRelatedToOtherExpr(SExprNode* pExpr) {
return pExpr->relatedTo != 0;
}

View File

@ -91,6 +91,7 @@ TEST_F(NodesCloneTest, stateWindow) {
SStateWindowNode* pDstNode = (SStateWindowNode*)pDst;
ASSERT_EQ(nodeType(pSrcNode->pCol), nodeType(pDstNode->pCol));
ASSERT_EQ(nodeType(pSrcNode->pExpr), nodeType(pDstNode->pExpr));
ASSERT_EQ(nodeType(pSrcNode->pTrueForLimit), nodeType(pDstNode->pTrueForLimit));
});
std::unique_ptr<SNode, void (*)(SNode*)> srcNode(nullptr, nodesDestroyNode);
@ -102,6 +103,7 @@ TEST_F(NodesCloneTest, stateWindow) {
SStateWindowNode* pNode = (SStateWindowNode*)srcNode.get();
code = nodesMakeNode(QUERY_NODE_COLUMN, &pNode->pCol);
code = nodesMakeNode(QUERY_NODE_OPERATOR, &pNode->pExpr);
code = nodesMakeNode(QUERY_NODE_VALUE, &pNode->pTrueForLimit);
return srcNode.get();
}());
}

View File

@ -155,8 +155,8 @@ SNode* createViewNode(SAstCreateContext* pCxt, SToken* pDbName, SToken* pVie
SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset);
SNode* createOrderByExprNode(SAstCreateContext* pCxt, SNode* pExpr, EOrder order, ENullOrder nullOrder);
SNode* createSessionWindowNode(SAstCreateContext* pCxt, SNode* pCol, SNode* pGap);
SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr);
SNode* createEventWindowNode(SAstCreateContext* pCxt, SNode* pStartCond, SNode* pEndCond);
SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr, SNode* pTrueForLimit);
SNode* createEventWindowNode(SAstCreateContext* pCxt, SNode* pStartCond, SNode* pEndCond, SNode* pTrueForLimit);
SNode* createCountWindowNode(SAstCreateContext* pCxt, const SToken* pCountToken, const SToken* pSlidingToken);
SNode* createAnomalyWindowNode(SAstCreateContext* pCxt, SNode* pExpr, const SToken* pFuncOpt);
SNode* createIntervalWindowNode(SAstCreateContext* pCxt, SNode* pInterval, SNode* pOffset, SNode* pSliding,
@ -335,6 +335,7 @@ SNode* createDropTSMAStmt(SAstCreateContext* pCxt, bool ignoreNotExists, SNode*
SNode* createShowCreateTSMAStmt(SAstCreateContext* pCxt, SNode* pRealTable);
SNode* createShowTSMASStmt(SAstCreateContext* pCxt, SNode* dbName);
SNode* createShowDiskUsageStmt(SAstCreateContext* pCxt, SNode* dbName, ENodeType type);
SNodeList* createColsFuncParamNodeList(SAstCreateContext* pCxt, SNode* pFuncNode, SNodeList* pNodeList, SToken* pAlias);
#ifdef __cplusplus
}

View File

@ -1283,6 +1283,7 @@ pseudo_column(A) ::= IROWTS_ORIGIN(B).
function_expression(A) ::= function_name(B) NK_LP expression_list(C) NK_RP(D). { A = createRawExprNodeExt(pCxt, &B, &D, createFunctionNode(pCxt, &B, C)); }
function_expression(A) ::= star_func(B) NK_LP star_func_para_list(C) NK_RP(D). { A = createRawExprNodeExt(pCxt, &B, &D, createFunctionNode(pCxt, &B, C)); }
function_expression(A) ::= cols_func(B) NK_LP cols_func_para_list(C) NK_RP(D). { A = createRawExprNodeExt(pCxt, &B, &D, createFunctionNode(pCxt, &B, C)); }
function_expression(A) ::=
CAST(B) NK_LP expr_or_subquery(C) AS type_name(D) NK_RP(E). { A = createRawExprNodeExt(pCxt, &B, &E, createCastFunctionNode(pCxt, releaseRawExprNode(pCxt, C), D)); }
function_expression(A) ::=
@ -1345,6 +1346,23 @@ star_func(A) ::= FIRST(B).
star_func(A) ::= LAST(B). { A = B; }
star_func(A) ::= LAST_ROW(B). { A = B; }
%type cols_func { SToken }
%destructor cols_func { }
cols_func(A) ::= COLS(B). { A = B; }
%type cols_func_para_list { SNodeList* }
%destructor cols_func_para_list { nodesDestroyList($$); }
cols_func_para_list(A) ::= function_expression(B) NK_COMMA cols_func_expression_list(C). { A = createColsFuncParamNodeList(pCxt, B, C, NULL); }
cols_func_expression(A) ::= expr_or_subquery(B). { A = releaseRawExprNode(pCxt, B); }
cols_func_expression(A) ::= expr_or_subquery(B) column_alias(C). { A = setProjectionAlias(pCxt, releaseRawExprNode(pCxt, B), &C);}
cols_func_expression(A) ::= expr_or_subquery(B) AS column_alias(C). { A = setProjectionAlias(pCxt, releaseRawExprNode(pCxt, B), &C);}
%type cols_func_expression_list { SNodeList* }
%destructor cols_func_expression_list { nodesDestroyList($$); }
cols_func_expression_list(A) ::= cols_func_expression(B). { A = createNodeList(pCxt, B); }
cols_func_expression_list(A) ::= cols_func_expression_list(B) NK_COMMA cols_func_expression(C). { A = addNodeToList(pCxt, B, C); }
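For reference, the `cols_func` rules above admit queries of the following shape (table and column names are hypothetical; the first argument must be a function, the remaining arguments are expressions, each optionally aliased with or without AS):

```sql
-- Hypothetical schema: take ts and voltage from the row where current is max.
SELECT cols(max(current), ts, voltage) FROM meters;
-- Output expressions may carry aliases, with or without AS.
SELECT cols(first(ts), current c, voltage AS v) FROM d1001;
```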
%type star_func_para_list { SNodeList* }
%destructor star_func_para_list { nodesDestroyList($$); }
star_func_para_list(A) ::= NK_STAR(B). { A = createNodeList(pCxt, createColumnNode(pCxt, NULL, &B)); }
@ -1628,7 +1646,8 @@ partition_item(A) ::= expr_or_subquery(B) AS column_alias(C).
twindow_clause_opt(A) ::= . { A = NULL; }
twindow_clause_opt(A) ::= SESSION NK_LP column_reference(B) NK_COMMA
interval_sliding_duration_literal(C) NK_RP. { A = createSessionWindowNode(pCxt, releaseRawExprNode(pCxt, B), releaseRawExprNode(pCxt, C)); }
twindow_clause_opt(A) ::= STATE_WINDOW NK_LP expr_or_subquery(B) NK_RP. { A = createStateWindowNode(pCxt, releaseRawExprNode(pCxt, B)); }
twindow_clause_opt(A) ::=
STATE_WINDOW NK_LP expr_or_subquery(B) NK_RP true_for_opt(C). { A = createStateWindowNode(pCxt, releaseRawExprNode(pCxt, B), C); }
twindow_clause_opt(A) ::= INTERVAL NK_LP interval_sliding_duration_literal(B)
NK_RP sliding_opt(C) fill_opt(D). { A = createIntervalWindowNode(pCxt, releaseRawExprNode(pCxt, B), NULL, C, D); }
twindow_clause_opt(A) ::=
@ -1637,9 +1656,9 @@ twindow_clause_opt(A) ::=
sliding_opt(D) fill_opt(E). { A = createIntervalWindowNode(pCxt, releaseRawExprNode(pCxt, B), releaseRawExprNode(pCxt, C), D, E); }
twindow_clause_opt(A) ::=
INTERVAL NK_LP interval_sliding_duration_literal(B) NK_COMMA
AUTO(C) NK_RP sliding_opt(D) fill_opt(E). { A = createIntervalWindowNode(pCxt, releaseRawExprNode(pCxt, B), createDurationValueNode(pCxt, &C), D, E); }
twindow_clause_opt(A) ::=
EVENT_WINDOW START WITH search_condition(B) END WITH search_condition(C). { A = createEventWindowNode(pCxt, B, C); }
AUTO(C) NK_RP sliding_opt(D) fill_opt(E). { A = createIntervalWindowNode(pCxt, releaseRawExprNode(pCxt, B), createDurationValueNode(pCxt, &C), D, E); }
twindow_clause_opt(A) ::= EVENT_WINDOW START WITH search_condition(B)
END WITH search_condition(C) true_for_opt(D). { A = createEventWindowNode(pCxt, B, C, D); }
twindow_clause_opt(A) ::=
COUNT_WINDOW NK_LP NK_INTEGER(B) NK_RP. { A = createCountWindowNode(pCxt, &B, &B); }
twindow_clause_opt(A) ::=
@ -1717,6 +1736,9 @@ range_opt(A) ::=
every_opt(A) ::= . { A = NULL; }
every_opt(A) ::= EVERY NK_LP duration_literal(B) NK_RP. { A = releaseRawExprNode(pCxt, B); }
true_for_opt(A) ::= . { A = NULL; }
true_for_opt(A) ::= TRUE_FOR NK_LP interval_sliding_duration_literal(B) NK_RP. { A = releaseRawExprNode(pCxt, B); }
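The optional `true_for_opt` clause above attaches a minimum-duration filter to STATE_WINDOW and EVENT_WINDOW; hypothetical usage (schema names assumed):

```sql
-- Keep only state windows that last at least 10 seconds; calendar units
-- such as 1n or 1y are rejected during translation.
SELECT _wstart, COUNT(*) FROM meters STATE_WINDOW(status) TRUE_FOR(10s);

-- The same limit applied to an event window.
SELECT _wstart, AVG(current) FROM meters
EVENT_WINDOW START WITH current > 10 END WITH current < 5 TRUE_FOR(5m);
```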
/************************************************ query_expression ****************************************************/
query_expression(A) ::= query_simple(B)
order_by_clause_opt(C) slimit_clause_opt(D) limit_clause_opt(E). {

View File

@ -16,6 +16,7 @@
#include <regex.h>
#include <uv.h>
#include "nodes.h"
#include "parAst.h"
#include "parUtil.h"
#include "tglobal.h"
@ -356,6 +357,33 @@ SToken getTokenFromRawExprNode(SAstCreateContext* pCxt, SNode* pNode) {
return t;
}
SNodeList* createColsFuncParamNodeList(SAstCreateContext* pCxt, SNode* pNode, SNodeList* pNodeList, SToken* pAlias) {
SNode* pFuncNode = NULL;  // initialized up front so the _err path never frees an indeterminate pointer
CHECK_PARSER_STATUS(pCxt);
if (NULL == pNode || QUERY_NODE_RAW_EXPR != nodeType(pNode)) {
pCxt->errCode = TSDB_CODE_PAR_SYNTAX_ERROR;
}
CHECK_PARSER_STATUS(pCxt);
SRawExprNode* pRawExpr = (SRawExprNode*)pNode;
pFuncNode = pRawExpr->pNode;
if(pFuncNode->type != QUERY_NODE_FUNCTION) {
pCxt->errCode = TSDB_CODE_PAR_SYNTAX_ERROR;
}
CHECK_PARSER_STATUS(pCxt);
SNodeList* list = NULL;
pCxt->errCode = nodesMakeList(&list);
CHECK_MAKE_NODE(list);
pCxt->errCode = nodesListAppend(list, pFuncNode);
CHECK_PARSER_STATUS(pCxt);
pCxt->errCode = nodesListAppendList(list, pNodeList);
CHECK_PARSER_STATUS(pCxt);
return list;
_err:
nodesDestroyNode(pFuncNode);
nodesDestroyList(pNodeList);
return NULL;
}
SNodeList* createNodeList(SAstCreateContext* pCxt, SNode* pNode) {
CHECK_PARSER_STATUS(pCxt);
SNodeList* list = NULL;
@ -1332,7 +1360,7 @@ _err:
return NULL;
}
SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr) {
SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr, SNode* pTrueForLimit) {
SStateWindowNode* state = NULL;
CHECK_PARSER_STATUS(pCxt);
pCxt->errCode = nodesMakeNode(QUERY_NODE_STATE_WINDOW, (SNode**)&state);
@ -1340,14 +1368,16 @@ SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr) {
state->pCol = createPrimaryKeyCol(pCxt, NULL);
CHECK_MAKE_NODE(state->pCol);
state->pExpr = pExpr;
state->pTrueForLimit = pTrueForLimit;
return (SNode*)state;
_err:
nodesDestroyNode((SNode*)state);
nodesDestroyNode(pExpr);
nodesDestroyNode(pTrueForLimit);
return NULL;
}
SNode* createEventWindowNode(SAstCreateContext* pCxt, SNode* pStartCond, SNode* pEndCond) {
SNode* createEventWindowNode(SAstCreateContext* pCxt, SNode* pStartCond, SNode* pEndCond, SNode* pTrueForLimit) {
SEventWindowNode* pEvent = NULL;
CHECK_PARSER_STATUS(pCxt);
pCxt->errCode = nodesMakeNode(QUERY_NODE_EVENT_WINDOW, (SNode**)&pEvent);
@ -1356,11 +1386,13 @@ SNode* createEventWindowNode(SAstCreateContext* pCxt, SNode* pStartCond, SNode*
CHECK_MAKE_NODE(pEvent->pCol);
pEvent->pStartCond = pStartCond;
pEvent->pEndCond = pEndCond;
pEvent->pTrueForLimit = pTrueForLimit;
return (SNode*)pEvent;
_err:
nodesDestroyNode((SNode*)pEvent);
nodesDestroyNode(pStartCond);
nodesDestroyNode(pEndCond);
nodesDestroyNode(pTrueForLimit);
return NULL;
}

View File

@ -355,10 +355,12 @@ static SKeyword keywordTable[] = {
{"FORCE_WINDOW_CLOSE", TK_FORCE_WINDOW_CLOSE},
{"DISK_INFO", TK_DISK_INFO},
{"AUTO", TK_AUTO},
{"COLS", TK_COLS},
{"NOTIFY", TK_NOTIFY},
{"ON_FAILURE", TK_ON_FAILURE},
{"NOTIFY_HISTORY", TK_NOTIFY_HISTORY},
{"REGEXP", TK_REGEXP},
{"TRUE_FOR", TK_TRUE_FOR}
};
// clang-format on

View File

@ -13,8 +13,13 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "nodes.h"
#include "parInt.h"
#include "parTranslater.h"
#include <stdint.h>
#include "query.h"
#include "querynodes.h"
#include "taoserror.h"
#include "tdatablock.h"
#include "catalog.h"
@ -1099,6 +1104,15 @@ static bool isForecastPseudoColumnFunc(const SNode* pNode) {
return (QUERY_NODE_FUNCTION == nodeType(pNode) && fmIsForecastPseudoColumnFunc(((SFunctionNode*)pNode)->funcId));
}
static bool isColsFunctionResult(const SNode* pNode) {
return ((nodesIsExprNode(pNode)) && (isRelatedToOtherExpr((SExprNode*)pNode)));
}
static bool isInvalidColsBindFunction(const SFunctionNode* pFunc) {
return (pFunc->node.bindExprID != 0 && (!fmIsSelectFunc(pFunc->funcId) || fmIsMultiRowsFunc(pFunc->funcId) ||
fmIsIndefiniteRowsFunc(pFunc->funcId)));
}
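A minimal Python sketch of the predicate above, with the `fm*` classification calls modeled as booleans (assumption: a nonzero `bindExprID` marks a function bound to a cols() tuple):

```python
def is_invalid_cols_bind_function(bind_expr_id: int,
                                  is_select_func: bool,
                                  is_multi_rows_func: bool,
                                  is_indefinite_rows_func: bool) -> bool:
    """A function bound to a cols() tuple must be a select function and
    must not be a multi-rows or indefinite-rows function."""
    return bind_expr_id != 0 and (not is_select_func
                                  or is_multi_rows_func
                                  or is_indefinite_rows_func)
```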
#ifdef BUILD_NO_CALL
static bool isTimelineFunc(const SNode* pNode) {
return (QUERY_NODE_FUNCTION == nodeType(pNode) && fmIsTimelineFunc(((SFunctionNode*)pNode)->funcId));
@ -1125,6 +1139,10 @@ static bool isVectorFunc(const SNode* pNode) {
return (QUERY_NODE_FUNCTION == nodeType(pNode) && fmIsVectorFunc(((SFunctionNode*)pNode)->funcId));
}
static bool isColsFunc(const SNode* pNode) {
return (QUERY_NODE_FUNCTION == nodeType(pNode) && fmIsSelectColsFunc(((SFunctionNode*)pNode)->funcId));
}
static bool isDistinctOrderBy(STranslateContext* pCxt) {
return (SQL_CLAUSE_ORDER_BY == pCxt->currClause && isSelectStmt(pCxt->pCurrStmt) &&
((SSelectStmt*)pCxt->pCurrStmt)->isDistinct);
@ -1557,24 +1575,27 @@ static int32_t findAndSetColumn(STranslateContext* pCxt, SColumnNode** pColRef,
STempTableNode* pTempTable = (STempTableNode*)pTable;
SNodeList* pProjectList = getProjectList(pTempTable->pSubquery);
SNode* pNode;
SExprNode* pFoundExpr = NULL;
FOREACH(pNode, pProjectList) {
SExprNode* pExpr = (SExprNode*)pNode;
if (0 == strcmp(pCol->colName, pExpr->aliasName)) {
if (*pFound) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AMBIGUOUS_COLUMN, pCol->colName);
}
code = setColumnInfoByExpr(pTempTable, pExpr, pColRef);
if (TSDB_CODE_SUCCESS != code) {
break;
}
pFoundExpr = pExpr;
*pFound = true;
} else if (isPrimaryKeyImpl(pNode) && isInternalPrimaryKey(pCol)) {
code = setColumnInfoByExpr(pTempTable, pExpr, pColRef);
if (TSDB_CODE_SUCCESS != code) break;
pFoundExpr = pExpr;
pCol->isPrimTs = true;
*pFound = true;
}
}
if (pFoundExpr) {
code = setColumnInfoByExpr(pTempTable, pFoundExpr, pColRef);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
}
}
return code;
}
@ -2331,7 +2352,7 @@ static EDealRes translateOperator(STranslateContext* pCxt, SOperatorNode* pOp) {
static EDealRes haveVectorFunction(SNode* pNode, void* pContext) {
if (isAggFunc(pNode) || isIndefiniteRowsFunc(pNode) || isWindowPseudoColumnFunc(pNode) ||
isInterpPseudoColumnFunc(pNode) || isForecastPseudoColumnFunc(pNode)) {
isInterpPseudoColumnFunc(pNode) || isForecastPseudoColumnFunc(pNode) || isColsFunctionResult(pNode)) {
*((bool*)pContext) = true;
return DEAL_RES_END;
}
@ -2479,9 +2500,9 @@ static int32_t rewriteCountTbname(STranslateContext* pCxt, SFunctionNode* pCount
return code;
}
static bool hasInvalidFuncNesting(SNodeList* pParameterList) {
static bool hasInvalidFuncNesting(SFunctionNode* pFunc) {
bool hasInvalidFunc = false;
nodesWalkExprs(pParameterList, haveVectorFunction, &hasInvalidFunc);
nodesWalkExprs(pFunc->pParameterList, haveVectorFunction, &hasInvalidFunc);
return hasInvalidFunc;
}
@ -2503,7 +2524,7 @@ static int32_t translateAggFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
if (beforeHaving(pCxt->currClause)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_ILLEGAL_USE_AGG_FUNCTION);
}
if (hasInvalidFuncNesting(pFunc->pParameterList)) {
if (hasInvalidFuncNesting(pFunc)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AGG_FUNC_NESTING);
}
// The auto-generated COUNT function in the DELETE statement is legal
@ -2547,7 +2568,7 @@ static int32_t translateIndefiniteRowsFunc(STranslateContext* pCxt, SFunctionNod
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
"%s function is not supported in window query or group query", pFunc->functionName);
}
if (hasInvalidFuncNesting(pFunc->pParameterList)) {
if (hasInvalidFuncNesting(pFunc)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AGG_FUNC_NESTING);
}
return TSDB_CODE_SUCCESS;
@ -2562,7 +2583,7 @@ static int32_t translateMultiRowsFunc(STranslateContext* pCxt, SFunctionNode* pF
((SSelectStmt*)pCxt->pCurrStmt)->hasMultiRowsFunc) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
}
if (hasInvalidFuncNesting(pFunc->pParameterList)) {
if (hasInvalidFuncNesting(pFunc)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AGG_FUNC_NESTING);
}
return TSDB_CODE_SUCCESS;
@ -2593,7 +2614,7 @@ static int32_t translateInterpFunc(STranslateContext* pCxt, SFunctionNode* pFunc
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
"%s function is not supported in window query or group query", pFunc->functionName);
}
if (hasInvalidFuncNesting(pFunc->pParameterList)) {
if (hasInvalidFuncNesting(pFunc)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AGG_FUNC_NESTING);
}
return TSDB_CODE_SUCCESS;
@ -2659,7 +2680,7 @@ static int32_t translateForecastFunc(STranslateContext* pCxt, SFunctionNode* pFu
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
"%s function is not supported in window query or group query", pFunc->functionName);
}
if (hasInvalidFuncNesting(pFunc->pParameterList)) {
if (hasInvalidFuncNesting(pFunc)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AGG_FUNC_NESTING);
}
return TSDB_CODE_SUCCESS;
@ -2889,6 +2910,9 @@ static int32_t calcSelectFuncNum(SFunctionNode* pFunc, int32_t currSelectFuncNum
if (fmIsCumulativeFunc(pFunc->funcId)) {
return currSelectFuncNum > 0 ? currSelectFuncNum : 1;
}
if(fmIsSelectColsFunc(pFunc->funcId)) {
return currSelectFuncNum;
}
return currSelectFuncNum + ((fmIsMultiResFunc(pFunc->funcId) && !fmIsLastRowFunc(pFunc->funcId))
? getMultiResFuncNum(pFunc->pParameterList)
: 1);
@ -3314,6 +3338,10 @@ static EDealRes translateFunction(STranslateContext* pCxt, SFunctionNode** pFunc
pCxt->errCode = TSDB_CODE_PAR_ILLEGAL_USE_AGG_FUNCTION;
}
}
if (isInvalidColsBindFunction(*pFunc)) {
pCxt->errCode = TSDB_CODE_PAR_INVALID_COLS_SELECTFUNC;
return DEAL_RES_ERROR;
}
if (TSDB_CODE_SUCCESS == pCxt->errCode) {
pCxt->errCode = translateFunctionImpl(pCxt, pFunc);
}
@ -3555,6 +3583,9 @@ static EDealRes rewriteColToSelectValFunc(STranslateContext* pCxt, SNode** pNode
tstrncpy(pFunc->functionName, "_select_value", TSDB_FUNC_NAME_LEN);
tstrncpy(pFunc->node.aliasName, ((SExprNode*)*pNode)->aliasName, TSDB_COL_NAME_LEN);
tstrncpy(pFunc->node.userAlias, ((SExprNode*)*pNode)->userAlias, TSDB_COL_NAME_LEN);
pFunc->node.relatedTo = ((SExprNode*)*pNode)->relatedTo;
pFunc->node.bindExprID = ((SExprNode*)*pNode)->bindExprID;
pCxt->errCode = nodesListMakeAppend(&pFunc->pParameterList, *pNode);
if (TSDB_CODE_SUCCESS == pCxt->errCode) {
pCxt->errCode = getFuncInfo(pCxt, pFunc);
@ -3827,6 +3858,7 @@ static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
if (isVectorFunc(*pNode) && !isDistinctOrderBy(pCxt)) {
return DEAL_RES_IGNORE_CHILD;
}
bool isSingleTable = fromSingleTable(((SSelectStmt*)pCxt->pCurrStmt)->pFromTable);
SNode* pGroupNode = NULL;
FOREACH(pGroupNode, getGroupByList(pCxt)) {
SNode* pActualNode = getGroupByNode(pGroupNode);
@ -3836,10 +3868,13 @@ static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
if (IsEqualTbNameFuncNode(pSelect, pActualNode, *pNode)) {
return rewriteExprToGroupKeyFunc(pCxt, pNode);
}
if (isTbnameFuction(pActualNode) && QUERY_NODE_COLUMN == nodeType(*pNode) &&
if ((isTbnameFuction(pActualNode) || isSingleTable) && QUERY_NODE_COLUMN == nodeType(*pNode) &&
((SColumnNode*)*pNode)->colType == COLUMN_TYPE_TAG) {
return rewriteExprToSelectTagFunc(pCxt, pNode);
}
if(isSingleTable && isTbnameFuction(*pNode)) {
return rewriteExprToSelectTagFunc(pCxt, pNode);
}
}
SNode* pPartKey = NULL;
bool partionByTbname = hasTbnameFunction(pSelect->pPartitionByList);
@ -3863,17 +3898,29 @@ static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
}
}
if (isScanPseudoColumnFunc(*pNode) || QUERY_NODE_COLUMN == nodeType(*pNode)) {
if (pSelect->selectFuncNum > 1 || (isDistinctOrderBy(pCxt) && pCxt->currClause == SQL_CLAUSE_ORDER_BY)) {
if (isScanPseudoColumnFunc(*pNode)) {
if (((pSelect->selectFuncNum > 1 && pCxt->stableQuery) ||
(isDistinctOrderBy(pCxt) && pCxt->currClause == SQL_CLAUSE_ORDER_BY)) &&
!isRelatedToOtherExpr((SExprNode*)*pNode)) {
return generateDealNodeErrMsg(pCxt, getGroupByErrorCode(pCxt), ((SExprNode*)(*pNode))->userAlias);
}
}
if (QUERY_NODE_COLUMN == nodeType(*pNode)) {
if (((pSelect->selectFuncNum > 1) || (isDistinctOrderBy(pCxt) && pCxt->currClause == SQL_CLAUSE_ORDER_BY)) &&
!isRelatedToOtherExpr((SExprNode*)*pNode)) {
return generateDealNodeErrMsg(pCxt, getGroupByErrorCode(pCxt), ((SExprNode*)(*pNode))->userAlias);
}
}
if (isScanPseudoColumnFunc(*pNode) || QUERY_NODE_COLUMN == nodeType(*pNode)) {
if (isWindowJoinStmt(pSelect) &&
(isWindowJoinProbeTableCol(pSelect, *pNode) || isWindowJoinGroupCol(pSelect, *pNode) ||
(isWindowJoinSubTbname(pSelect, *pNode)) || isWindowJoinSubTbTag(pSelect, *pNode))) {
return rewriteExprToGroupKeyFunc(pCxt, pNode);
}
if (pSelect->hasOtherVectorFunc || !pSelect->hasSelectFunc) {
if ((pSelect->hasOtherVectorFunc || !pSelect->hasSelectFunc) && !isRelatedToOtherExpr((SExprNode*)*pNode)) {
return generateDealNodeErrMsg(pCxt, getGroupByErrorCode(pCxt), ((SExprNode*)(*pNode))->userAlias);
}
@ -3920,6 +3967,7 @@ static int32_t rewriteColsToSelectValFunc(STranslateContext* pCxt, SSelectStmt*
typedef struct CheckAggColCoexistCxt {
STranslateContext* pTranslateCxt;
bool existCol;
bool hasColFunc;
SNodeList* pColList;
} CheckAggColCoexistCxt;
@ -3928,6 +3976,10 @@ static EDealRes doCheckAggColCoexist(SNode** pNode, void* pContext) {
if (isVectorFunc(*pNode)) {
return DEAL_RES_IGNORE_CHILD;
}
if(isColsFunctionResult(*pNode)) {
pCxt->hasColFunc = true;
}
SNode* pPartKey = NULL;
bool partionByTbname = false;
if (fromSingleTable(((SSelectStmt*)pCxt->pTranslateCxt->pCurrStmt)->pFromTable) ||
@ -3947,7 +3999,8 @@ static EDealRes doCheckAggColCoexist(SNode** pNode, void* pContext) {
((QUERY_NODE_COLUMN == nodeType(*pNode) && ((SColumnNode*)*pNode)->colType == COLUMN_TYPE_TAG))) {
return rewriteExprToSelectTagFunc(pCxt->pTranslateCxt, pNode);
}
if (isScanPseudoColumnFunc(*pNode) || QUERY_NODE_COLUMN == nodeType(*pNode)) {
if ((isScanPseudoColumnFunc(*pNode) || QUERY_NODE_COLUMN == nodeType(*pNode)) &&
((!nodesIsExprNode(*pNode) || !isRelatedToOtherExpr((SExprNode*)*pNode)))) {
pCxt->existCol = true;
}
return DEAL_RES_CONTINUE;
@ -4006,7 +4059,7 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect)
if (!pSelect->onlyHasKeepOrderFunc) {
pSelect->timeLineResMode = TIME_LINE_NONE;
}
CheckAggColCoexistCxt cxt = {.pTranslateCxt = pCxt, .existCol = false};
CheckAggColCoexistCxt cxt = {.pTranslateCxt = pCxt, .existCol = false, .hasColFunc = false};
nodesRewriteExprs(pSelect->pProjectionList, doCheckAggColCoexist, &cxt);
if (!pSelect->isDistinct) {
nodesRewriteExprs(pSelect->pOrderByList, doCheckAggColCoexist, &cxt);
@ -4018,6 +4071,9 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect)
if (cxt.existCol) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_SINGLE_GROUP);
}
if (cxt.hasColFunc) {
return rewriteColsToSelectValFunc(pCxt, pSelect);
}
return TSDB_CODE_SUCCESS;
}
@ -4060,7 +4116,7 @@ static int32_t checkWinJoinAggColCoexist(STranslateContext* pCxt, SSelectStmt* p
}
static int32_t checkHavingGroupBy(STranslateContext* pCxt, SSelectStmt* pSelect) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t code = TSDB_CODE_SUCCESS;
if (NULL == getGroupByList(pCxt) && NULL == pSelect->pPartitionByList && NULL == pSelect->pWindow &&
!isWindowJoinStmt(pSelect)) {
return code;
@ -5425,9 +5481,38 @@ static int32_t translateClausePosition(STranslateContext* pCxt, SNodeList* pProj
return TSDB_CODE_SUCCESS;
}
static int32_t rewriteColsFunction(STranslateContext* pCxt, SNodeList** nodeList, SNodeList** selectFuncList);
static int32_t rewriteHavingColsNode(STranslateContext* pCxt, SNode** pNode, SNodeList** selectFuncList);
static int32_t prepareColumnExpansion(STranslateContext* pCxt, ESqlClause clause, SSelectStmt* pSelect) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t len = LIST_LENGTH(pSelect->pProjectionBindList);
if (clause == SQL_CLAUSE_SELECT) {
code = rewriteColsFunction(pCxt, &pSelect->pProjectionList, &pSelect->pProjectionBindList);
} else if (clause == SQL_CLAUSE_HAVING) {
code = rewriteHavingColsNode(pCxt, &pSelect->pHaving, &pSelect->pProjectionBindList);
} else if (clause == SQL_CLAUSE_ORDER_BY) {
code = rewriteColsFunction(pCxt, &pSelect->pOrderByList, &pSelect->pProjectionBindList);
} else {
code =
generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_WRONG_VALUE_TYPE, "Invalid clause for column expansion");
}
if (TSDB_CODE_SUCCESS == code && LIST_LENGTH(pSelect->pProjectionBindList) > len) {
code = translateExprList(pCxt, pSelect->pProjectionBindList);
}
if (pSelect->pProjectionBindList != NULL) {
pSelect->hasAggFuncs = true;
}
return code;
}
static int32_t translateOrderBy(STranslateContext* pCxt, SSelectStmt* pSelect) {
bool other;
int32_t code = translateClausePosition(pCxt, pSelect->pProjectionList, pSelect->pOrderByList, &other);
int32_t code = prepareColumnExpansion(pCxt, SQL_CLAUSE_ORDER_BY, pSelect);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
code = translateClausePosition(pCxt, pSelect->pProjectionList, pSelect->pOrderByList, &other);
if (TSDB_CODE_SUCCESS == code) {
if (0 == LIST_LENGTH(pSelect->pOrderByList)) {
NODES_DESTORY_LIST(pSelect->pOrderByList);
@ -5528,7 +5613,7 @@ static int32_t rewriteProjectAlias(SNodeList* pProjectionList) {
if ('\0' == pExpr->userAlias[0]) {
tstrncpy(pExpr->userAlias, pExpr->aliasName, TSDB_COL_NAME_LEN);
}
snprintf(pExpr->aliasName, TSDB_COL_NAME_LEN,"#expr_%d", no++);
rewriteExprAliasName(pExpr, no++);
}
return TSDB_CODE_SUCCESS;
}
@ -5557,14 +5642,14 @@ static int32_t checkProjectAlias(STranslateContext* pCxt, SNodeList* pProjection
}
static int32_t translateProjectionList(STranslateContext* pCxt, SSelectStmt* pSelect) {
SNode* pNode;
int32_t projIdx = 1;
FOREACH(pNode, pSelect->pProjectionList) { ((SExprNode*)pNode)->projIdx = projIdx++; }
if (!pSelect->isSubquery) {
return rewriteProjectAlias(pSelect->pProjectionList);
} else {
SNode* pNode;
int32_t projIdx = 1;
FOREACH(pNode, pSelect->pProjectionList) { ((SExprNode*)pNode)->projIdx = projIdx++; }
return TSDB_CODE_SUCCESS;
}
return TSDB_CODE_SUCCESS;
}
typedef struct SReplaceGroupByAliasCxt {
@ -5646,7 +5731,10 @@ static int32_t translatePartitionByList(STranslateContext* pCxt, SSelectStmt* pS
static int32_t translateSelectList(STranslateContext* pCxt, SSelectStmt* pSelect) {
pCxt->currClause = SQL_CLAUSE_SELECT;
int32_t code = translateExprList(pCxt, pSelect->pProjectionList);
int32_t code = prepareColumnExpansion(pCxt, SQL_CLAUSE_SELECT, pSelect);
if (TSDB_CODE_SUCCESS == code) {
code = translateExprList(pCxt, pSelect->pProjectionList);
}
if (TSDB_CODE_SUCCESS == code) {
code = translateStar(pCxt, pSelect);
}
@ -5670,10 +5758,18 @@ static int32_t translateSelectList(STranslateContext* pCxt, SSelectStmt* pSelect
}
static int32_t translateHaving(STranslateContext* pCxt, SSelectStmt* pSelect) {
int32_t code = TSDB_CODE_SUCCESS;
if (NULL == pSelect->pGroupByList && NULL == pSelect->pPartitionByList && NULL == pSelect->pWindow &&
!isWindowJoinStmt(pSelect) && NULL != pSelect->pHaving) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_GROUPBY_LACK_EXPRESSION);
}
pCxt->currClause = SQL_CLAUSE_HAVING;
if (NULL != pSelect->pHaving) {
code = prepareColumnExpansion(pCxt, SQL_CLAUSE_HAVING, pSelect);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
}
if (isWindowJoinStmt(pSelect)) {
if (NULL != pSelect->pHaving) {
bool hasFunc = false;
@ -5683,9 +5779,7 @@ static int32_t translateHaving(STranslateContext* pCxt, SSelectStmt* pSelect) {
}
}
}
pCxt->currClause = SQL_CLAUSE_HAVING;
int32_t code = translateExpr(pCxt, &pSelect->pHaving);
return code;
return translateExpr(pCxt, &pSelect->pHaving);
}
static int32_t translateGroupBy(STranslateContext* pCxt, SSelectStmt* pSelect) {
@ -6043,6 +6137,20 @@ static int32_t checkStateWindowForStream(STranslateContext* pCxt, SSelectStmt* p
return TSDB_CODE_SUCCESS;
}
static int32_t checkTrueForLimit(STranslateContext *pCxt, SNode *pNode) {
SValueNode *pTrueForLimit = (SValueNode *)pNode;
if (pTrueForLimit == NULL) {
return TSDB_CODE_SUCCESS;
}
if (pTrueForLimit->datum.i < 0) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_TRUE_FOR_NEGATIVE);
}
if (IS_CALENDAR_TIME_DURATION(pTrueForLimit->unit)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_TRUE_FOR_UNIT);
}
return TSDB_CODE_SUCCESS;
}
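A Python sketch of `checkTrueForLimit` above. Assumption: 'n' (months) and 'y' (years) are the calendar-based duration units that `IS_CALENDAR_TIME_DURATION` rejects, while fixed-length units such as 's' pass:

```python
# Stand-in for IS_CALENDAR_TIME_DURATION (assumed unit letters).
CALENDAR_UNITS = {"n", "y"}

def check_true_for_limit(value, unit):
    """None means no TRUE_FOR clause; negative limits and calendar units
    are rejected, mirroring the two error paths above."""
    if value is None:
        return "SUCCESS"
    if value < 0:
        return "PAR_TRUE_FOR_NEGATIVE"
    if unit in CALENDAR_UNITS:
        return "PAR_TRUE_FOR_UNIT"
    return "SUCCESS"
```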
static int32_t translateStateWindow(STranslateContext* pCxt, SSelectStmt* pSelect) {
if (QUERY_NODE_TEMP_TABLE == nodeType(pSelect->pFromTable) &&
!isGlobalTimeLineQuery(((STempTableNode*)pSelect->pFromTable)->pSubquery)) {
@ -6055,6 +6163,9 @@ static int32_t translateStateWindow(STranslateContext* pCxt, SSelectStmt* pSelec
if (TSDB_CODE_SUCCESS == code) {
code = checkStateWindowForStream(pCxt, pSelect);
}
if (TSDB_CODE_SUCCESS == code) {
code = checkTrueForLimit(pCxt, pState->pTrueForLimit);
}
return code;
}
@ -6081,7 +6192,7 @@ static int32_t translateEventWindow(STranslateContext* pCxt, SSelectStmt* pSelec
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_TIMELINE_QUERY,
"EVENT_WINDOW requires valid time series input");
}
return TSDB_CODE_SUCCESS;
return checkTrueForLimit(pCxt, ((SEventWindowNode*)pSelect->pWindow)->pTrueForLimit);
}
static int32_t translateCountWindow(STranslateContext* pCxt, SSelectStmt* pSelect) {
@ -7310,6 +7421,307 @@ static int32_t translateSelectWithoutFrom(STranslateContext* pCxt, SSelectStmt*
pCxt->dual = true;
return translateExprList(pCxt, pSelect->pProjectionList);
}
typedef struct SCheckColsFuncCxt {
bool hasColsFunc;
SNodeList** selectFuncList;
int32_t status;
} SCheckColsFuncCxt;
static bool isColsFuncByName(SFunctionNode* pFunc) {
return 0 == strcasecmp(pFunc->functionName, "cols");
}
static bool isMultiColsFuncNode(SNode* pNode) {
if (QUERY_NODE_FUNCTION == nodeType(pNode)) {
SFunctionNode* pFunc = (SFunctionNode*)pNode;
return isColsFuncByName(pFunc) && pFunc->pParameterList->length > 2;
}
return false;
}
typedef struct SBindTupleFuncCxt {
SNode* root;
int32_t bindExprID;
} SBindTupleFuncCxt;
static EDealRes pushDownBindSelectFunc(SNode** pNode, void* pContext) {
SBindTupleFuncCxt* pCxt = pContext;
if (nodesIsExprNode(*pNode)) {
SExprNode* pExpr = (SExprNode*)*pNode;
pExpr->relatedTo = pCxt->bindExprID;
if (nodeType(*pNode) != QUERY_NODE_COLUMN) {
return DEAL_RES_CONTINUE;
}
if (*pNode != pCxt->root) {
int len = strlen(pExpr->aliasName);
if (len + TSDB_COL_NAME_EXLEN >= TSDB_COL_NAME_LEN) {
char buffer[TSDB_COL_NAME_EXLEN + TSDB_COL_NAME_LEN + 1] = {0};
(void)tsnprintf(buffer, sizeof(buffer), "%s.%d", pExpr->aliasName, pExpr->relatedTo);
uint64_t hashVal = MurmurHash3_64(buffer, TSDB_COL_NAME_EXLEN + TSDB_COL_NAME_LEN + 1);
(void)tsnprintf(pExpr->aliasName, TSDB_COL_NAME_EXLEN, "%" PRIu64, hashVal);
} else {
(void)tsnprintf(pExpr->aliasName + len, TSDB_COL_NAME_EXLEN, ".%d", pExpr->relatedTo);
}
}
}
return DEAL_RES_CONTINUE;
}
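A sketch of the alias rewriting in `pushDownBindSelectFunc`. The buffer sizes and hash are stand-ins: `TSDB_COL_NAME_LEN`, `TSDB_COL_NAME_EXLEN` and `MurmurHash3_64` come from the TDengine headers; Python's `hash()` is used here only for illustration:

```python
COL_NAME_LEN = 65    # assumed aliasName buffer size
COL_NAME_EXLEN = 16  # assumed room reserved for the ".<id>" suffix

def bind_alias(alias: str, bind_expr_id: int) -> str:
    """Append ".<bind_expr_id>" to a column alias so that columns bound to
    different cols() tuples stay distinct; fall back to a hash of the
    suffixed name when it would overflow the alias buffer."""
    if len(alias) + COL_NAME_EXLEN >= COL_NAME_LEN:
        return str(hash(f"{alias}.{bind_expr_id}") & 0xFFFFFFFFFFFFFFFF)
    return f"{alias}.{bind_expr_id}"
```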
static int32_t getSelectFuncIndex(SNodeList* FuncNodeList, SNode* pSelectFunc) {
SNode* pNode = NULL;
int32_t selectFuncIndex = 0;
FOREACH(pNode, FuncNodeList) {
++selectFuncIndex;
if (nodesEqualNode(pNode, pSelectFunc)) {
return selectFuncIndex;
}
}
return 0;
}
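The lookup above returns a 1-based position so that callers can treat 0 as "not found" and append a new entry; a Python sketch (list equality stands in for `nodesEqualNode()`):

```python
def get_select_func_index(func_list, select_func):
    """Return the 1-based position of select_func in func_list,
    or 0 when it is absent."""
    for i, node in enumerate(func_list, start=1):
        if node == select_func:
            return i
    return 0
```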
static EDealRes checkHasColsFunc(SNode** pNode, void* pContext){
if (QUERY_NODE_FUNCTION == nodeType(*pNode)) {
SFunctionNode* pFunc = (SFunctionNode*)*pNode;
if (isColsFuncByName(pFunc)) {
*(bool*)pContext = true;
return DEAL_RES_END;
}
}
return DEAL_RES_CONTINUE;
}
static int32_t checkMultColsFuncParam(SNodeList* pParameterList) {
if (!pParameterList || pParameterList->length < 2) {
return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
}
int32_t index = 0;
SNode* pNode = NULL;
FOREACH(pNode, pParameterList) {
if (index == 0) { // the first parameter must be the select function
if (QUERY_NODE_FUNCTION != nodeType(pNode) || isColsFuncByName((SFunctionNode*)pNode)) {
return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
}
SFunctionNode* pFunc = (SFunctionNode*)pNode;
// pFunc->funcId has not been resolved at this point, so the fmIsSelectFunc() check is deferred to a later translation step
// if(!fmIsSelectFunc(pFunc->funcId)) {
// return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
// }
SNode* pTmpNode = NULL;
FOREACH(pTmpNode, pFunc->pParameterList) {
bool hasColsFunc = false;
nodesRewriteExpr(&pTmpNode, checkHasColsFunc, (void*)&hasColsFunc);
if (hasColsFunc) {
return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
}
}
} else {
bool hasColsFunc = false;
nodesRewriteExpr(&pNode, checkHasColsFunc, &hasColsFunc);
if (hasColsFunc) {
return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
}
}
++index;
}
return TSDB_CODE_SUCCESS;
}
static EDealRes rewriteSingleColsFunc(SNode** pNode, void* pContext) {
int32_t code = TSDB_CODE_SUCCESS;
if (QUERY_NODE_FUNCTION != nodeType(*pNode)) {
return DEAL_RES_CONTINUE;
}
SCheckColsFuncCxt* pCxt = pContext;
SFunctionNode* pFunc = (SFunctionNode*)*pNode;
if (isColsFuncByName(pFunc)) {
if(pFunc->pParameterList->length > 2) {
pCxt->status = TSDB_CODE_PAR_INVALID_COLS_SELECTFUNC;
return DEAL_RES_ERROR;
}
SNode* pSelectFunc = nodesListGetNode(pFunc->pParameterList, 0);
SNode* pExpr = nodesListGetNode(pFunc->pParameterList, 1);
if (nodeType(pSelectFunc) != QUERY_NODE_FUNCTION || isColsFuncByName((SFunctionNode*)pSelectFunc)) {
pCxt->status = TSDB_CODE_PAR_INVALID_COLS_SELECTFUNC;
parserError("%s Invalid cols function, the first parameter must be a select function", __func__);
return DEAL_RES_ERROR;
}
if (pFunc->node.asAlias) {
if (((SExprNode*)pExpr)->asAlias) {
pCxt->status = TSDB_CODE_INVALID_COLS_ALIAS;
parserError("%s Invalid use of alias for cols function", __func__);
return DEAL_RES_ERROR;
} else {
((SExprNode*)pExpr)->asAlias = true;
tstrncpy(((SExprNode*)pExpr)->userAlias, pFunc->node.userAlias, TSDB_COL_NAME_LEN);
}
}
if(*pCxt->selectFuncList == NULL) {
code = nodesMakeList(pCxt->selectFuncList);
if (NULL == *pCxt->selectFuncList) {
pCxt->status = code;
return DEAL_RES_ERROR;
}
}
int32_t selectFuncCount = (*pCxt->selectFuncList)->length;
int32_t selectFuncIndex = getSelectFuncIndex(*pCxt->selectFuncList, pSelectFunc);
if (selectFuncIndex == 0) {
++selectFuncCount;
selectFuncIndex = selectFuncCount;
SNode* pNewNode = NULL;
code = nodesCloneNode(pSelectFunc, &pNewNode);
if(code) goto _end;
((SExprNode*)pNewNode)->bindExprID = selectFuncIndex;
code = nodesListMakeStrictAppend(pCxt->selectFuncList, pNewNode);
if(code) goto _end;
}
SNode* pNewNode = NULL;
code = nodesCloneNode(pExpr, &pNewNode);
if(code) goto _end;
if (nodesIsExprNode(pNewNode)) {
SBindTupleFuncCxt pCxt = {pNewNode, selectFuncIndex};
nodesRewriteExpr(&pNewNode, pushDownBindSelectFunc, &pCxt);
} else {
pCxt->status = TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
parserError("%s Invalid cols function, the output parameter must be an expression", __func__);
return DEAL_RES_ERROR;
}
nodesDestroyNode(*pNode);
*pNode = pNewNode;
}
return DEAL_RES_CONTINUE;
_end:
pCxt->status = code;
return DEAL_RES_ERROR;
}
static int32_t rewriteHavingColsNode(STranslateContext* pCxt, SNode** pNode, SNodeList** selectFuncList) {
int32_t code = TSDB_CODE_SUCCESS;
if(!pNode || *pNode == NULL) return code;
if (isMultiColsFuncNode(*pNode)) {
parserWarn("%s Invalid use of multi-output cols function in HAVING clause.", __func__);
return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
} else {
SCheckColsFuncCxt pSelectFuncCxt = {false, selectFuncList, TSDB_CODE_SUCCESS};
nodesRewriteExpr(pNode, rewriteSingleColsFunc, &pSelectFuncCxt);
if (pSelectFuncCxt.status != TSDB_CODE_SUCCESS) {
return pSelectFuncCxt.status;
}
}
return code;
}
static int32_t rewriteColsFunction(STranslateContext* pCxt, SNodeList** nodeList, SNodeList** selectFuncList) {
int32_t code = TSDB_CODE_SUCCESS;
bool needRewrite = false;
SNode** pNode = NULL;
FOREACH_FOR_REWRITE(pNode, *nodeList) {
if (isMultiColsFuncNode(*pNode)) {
code = checkMultColsFuncParam(((SFunctionNode*)*pNode)->pParameterList);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
needRewrite = true;
} else {
SCheckColsFuncCxt pSelectFuncCxt = {false, selectFuncList, TSDB_CODE_SUCCESS};
nodesRewriteExpr(pNode, rewriteSingleColsFunc, &pSelectFuncCxt);
if (pSelectFuncCxt.status != TSDB_CODE_SUCCESS) {
return pSelectFuncCxt.status;
}
}
}
SNodeList* pNewNodeList = NULL;
SNode* pNewNode = NULL;
if (needRewrite) {
if (pCxt->createStream) {
return TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
}
code = nodesMakeList(&pNewNodeList);
if (NULL == pNewNodeList) {
return code;
}
if (*selectFuncList == NULL) {
code = nodesMakeList(selectFuncList);
if (NULL == *selectFuncList) {
nodesDestroyList(pNewNodeList);
return code;
}
}
int32_t nums = 0;
int32_t selectFuncCount = (*selectFuncList)->length;
SNode* pTmpNode = NULL;
FOREACH(pTmpNode, *nodeList) {
if (isMultiColsFuncNode(pTmpNode)) {
SFunctionNode* pFunc = (SFunctionNode*)pTmpNode;
if (pFunc->node.asAlias) {
code = TSDB_CODE_INVALID_COLS_ALIAS;
parserError("%s Invalid use of alias for cols function", __func__);
goto _end;
}
SNode* pSelectFunc = nodesListGetNode(pFunc->pParameterList, 0);
if (nodeType(pSelectFunc) != QUERY_NODE_FUNCTION) {
code = TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
parserError("%s Invalid cols function, the first parameter must be a select function", __func__);
goto _end;
}
int32_t selectFuncIndex = getSelectFuncIndex(*selectFuncList, pSelectFunc);
if (selectFuncIndex == 0) {
++selectFuncCount;
selectFuncIndex = selectFuncCount;
code = nodesCloneNode(pSelectFunc, &pNewNode);
if (TSDB_CODE_SUCCESS != code) goto _end;
((SExprNode*)pNewNode)->bindExprID = selectFuncIndex;
code = nodesListMakeStrictAppend(selectFuncList, pNewNode);
if (TSDB_CODE_SUCCESS != code) goto _end;
}
// Start from index 1, because the first parameter is the select function, which need not be output.
for (int i = 1; i < pFunc->pParameterList->length; ++i) {
SNode* pExpr = nodesListGetNode(pFunc->pParameterList, i);
code = nodesCloneNode(pExpr, &pNewNode);
if (TSDB_CODE_SUCCESS != code) goto _end;
if (nodesIsExprNode(pNewNode)) {
SBindTupleFuncCxt pCxt = {pNewNode, selectFuncIndex};
nodesRewriteExpr(&pNewNode, pushDownBindSelectFunc, &pCxt);
} else {
code = TSDB_CODE_PAR_INVALID_COLS_FUNCTION;
parserError("%s Invalid cols function, the first parameter must be a select function", __func__);
goto _end;
}
if (TSDB_CODE_SUCCESS != code) goto _end;
code = nodesListMakeStrictAppend(&pNewNodeList, pNewNode);
if (TSDB_CODE_SUCCESS != code) goto _end;
}
continue;
}
code = nodesCloneNode(pTmpNode, &pNewNode);
if (TSDB_CODE_SUCCESS != code) goto _end;
code = nodesListMakeStrictAppend(&pNewNodeList, pNewNode);
if (TSDB_CODE_SUCCESS != code) goto _end;
}
nodesDestroyList(*nodeList);
*nodeList = pNewNodeList;
return TSDB_CODE_SUCCESS;
}
_end:
if (TSDB_CODE_SUCCESS != code) {
nodesDestroyNode(pNewNode);
nodesDestroyList(pNewNodeList);
}
return code;
}
static int32_t translateSelectFrom(STranslateContext* pCxt, SSelectStmt* pSelect) {
pCxt->pCurrStmt = (SNode*)pSelect;


@@ -227,6 +227,10 @@ static char* getSyntaxErrFormat(int32_t errCode) {
return "Some functions cannot appear in the select list at the same time";
case TSDB_CODE_PAR_REGULAR_EXPRESSION_ERROR:
return "Syntax error in regular expression";
case TSDB_CODE_PAR_TRUE_FOR_NEGATIVE:
return "True_for duration cannot be negative";
case TSDB_CODE_PAR_TRUE_FOR_UNIT:
return "Cannot use 'year' or 'month' as true_for duration";
default:
return "Unknown error";
}


@@ -123,6 +123,7 @@ static EDealRes doRewriteExpr(SNode** pNode, void* pContext) {
tstrncpy(pCol->node.userAlias, ((SExprNode*)pExpr)->userAlias, TSDB_COL_NAME_LEN);
tstrncpy(pCol->colName, ((SExprNode*)pExpr)->aliasName, TSDB_COL_NAME_LEN);
pCol->node.projIdx = ((SExprNode*)(*pNode))->projIdx;
pCol->node.relatedTo = ((SExprNode*)(*pNode))->relatedTo;
if (QUERY_NODE_FUNCTION == nodeType(pExpr)) {
setColumnInfo((SFunctionNode*)pExpr, pCol, pCxt->isPartitionBy);
}
@@ -150,7 +151,7 @@ static EDealRes doNameExpr(SNode* pNode, void* pContext) {
case QUERY_NODE_LOGIC_CONDITION:
case QUERY_NODE_FUNCTION: {
if ('\0' == ((SExprNode*)pNode)->aliasName[0]) {
snprintf(((SExprNode*)pNode)->aliasName, TSDB_COL_NAME_LEN, "#expr_%p", pNode);
rewriteExprAliasName((SExprNode*)pNode, (int64_t)pNode);
}
return DEAL_RES_IGNORE_CHILD;
}
@@ -757,6 +758,7 @@ static SColumnNode* createColumnByExpr(const char* pStmtName, SExprNode* pExpr)
if (NULL != pStmtName) {
snprintf(pCol->tableAlias, sizeof(pCol->tableAlias), "%s", pStmtName);
}
pCol->node.relatedTo = pExpr->relatedTo;
return pCol;
}
@@ -1159,6 +1161,9 @@ static int32_t createWindowLogicNodeByState(SLogicPlanContext* pCxt, SStateWindo
nodesDestroyNode((SNode*)pWindow);
return code;
}
if (pState->pTrueForLimit) {
pWindow->trueForLimit = ((SValueNode*)pState->pTrueForLimit)->datum.i;
}
// rewrite the expression in subsequent clauses
code = rewriteExprForSelect(pWindow->pStateExpr, pSelect, SQL_CLAUSE_WINDOW);
if (TSDB_CODE_SUCCESS == code) {
@@ -1272,6 +1277,9 @@ static int32_t createWindowLogicNodeByEvent(SLogicPlanContext* pCxt, SEventWindo
nodesDestroyNode((SNode*)pWindow);
return TSDB_CODE_OUT_OF_MEMORY;
}
if (pEvent->pTrueForLimit) {
pWindow->trueForLimit = ((SValueNode*)pEvent->pTrueForLimit)->datum.i;
}
return createWindowLogicNodeFinalize(pCxt, pSelect, pWindow, pLogicNode);
}


@@ -3050,7 +3050,7 @@ static int32_t smaIndexOptCreateSmaCols(SNodeList* pFuncs, uint64_t tableId, SNo
}
SExprNode exprNode;
exprNode.resType = ((SExprNode*)pWsNode)->resType;
snprintf(exprNode.aliasName, TSDB_COL_NAME_LEN, "#expr_%d", index + 1);
rewriteExprAliasName(&exprNode, index + 1);
SColumnNode* pkNode = NULL;
code = smaIndexOptCreateSmaCol((SNode*)&exprNode, tableId, PRIMARYKEY_TIMESTAMP_COL_ID, &pkNode);
if (TSDB_CODE_SUCCESS != code) {


@@ -13,6 +13,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "nodes.h"
#include "planInt.h"
#include "catalog.h"
@@ -30,98 +31,82 @@ typedef struct SSlotIndex {
SArray* pSlotIdsInfo; // duplicate name slot
} SSlotIndex;
enum {
SLOT_KEY_TYPE_ALL = 1,
SLOT_KEY_TYPE_COLNAME = 2,
};
static int32_t getSlotKeyHelper(SNode* pNode, const char* pPreName, const char* name, char** ppKey, int32_t callocLen,
int32_t* pLen, uint16_t extraBufLen, int8_t slotKeyType) {
int32_t code = 0;
*ppKey = taosMemoryCalloc(1, callocLen);
if (!*ppKey) {
return terrno;
}
if (slotKeyType == SLOT_KEY_TYPE_ALL) {
TAOS_STRNCAT(*ppKey, pPreName, TSDB_TABLE_NAME_LEN);
TAOS_STRNCAT(*ppKey, ".", 2);
TAOS_STRNCAT(*ppKey, name, TSDB_COL_NAME_LEN);
*pLen = taosHashBinary(*ppKey, strlen(*ppKey));
} else {
TAOS_STRNCAT(*ppKey, name, TSDB_COL_NAME_LEN);
*pLen = strlen(*ppKey);
}
return code;
}
static int32_t getSlotKey(SNode* pNode, const char* pStmtName, char** ppKey, int32_t* pLen, uint16_t extraBufLen) {
int32_t code = 0;
int32_t callocLen = 0;
if (QUERY_NODE_COLUMN == nodeType(pNode)) {
SColumnNode* pCol = (SColumnNode*)pNode;
if (NULL != pStmtName) {
if ('\0' != pStmtName[0]) {
*ppKey = taosMemoryCalloc(1, TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, pStmtName, TSDB_TABLE_NAME_LEN);
TAOS_STRNCAT(*ppKey, ".", 2);
TAOS_STRNCAT(*ppKey, pCol->node.aliasName, TSDB_COL_NAME_LEN);
*pLen = taosHashBinary(*ppKey, strlen(*ppKey));
return code;
callocLen = TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pStmtName, pCol->node.aliasName, ppKey, callocLen, pLen, extraBufLen,
SLOT_KEY_TYPE_ALL);
} else {
*ppKey = taosMemoryCalloc(1, TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, pCol->node.aliasName, TSDB_COL_NAME_LEN);
*pLen = strlen(*ppKey);
return code;
callocLen = TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pStmtName, pCol->node.aliasName, ppKey, callocLen, pLen, extraBufLen,
SLOT_KEY_TYPE_COLNAME);
}
}
if ('\0' == pCol->tableAlias[0]) {
*ppKey = taosMemoryCalloc(1, TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, pCol->colName, TSDB_COL_NAME_LEN);
*pLen = strlen(*ppKey);
return code;
callocLen = TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pStmtName, pCol->colName, ppKey, callocLen, pLen, extraBufLen,
SLOT_KEY_TYPE_COLNAME);
}
*ppKey = taosMemoryCalloc(1, TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, pCol->tableAlias, TSDB_TABLE_NAME_LEN);
TAOS_STRNCAT(*ppKey, ".", 2);
TAOS_STRNCAT(*ppKey, pCol->colName, TSDB_COL_NAME_LEN);
*pLen = taosHashBinary(*ppKey, strlen(*ppKey));
return code;
callocLen = TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pCol->tableAlias, pCol->colName, ppKey, callocLen, pLen, extraBufLen,
SLOT_KEY_TYPE_ALL);
} else if (QUERY_NODE_FUNCTION == nodeType(pNode)) {
SFunctionNode* pFunc = (SFunctionNode*)pNode;
if (FUNCTION_TYPE_TBNAME == pFunc->funcType) {
SValueNode* pVal = (SValueNode*)nodesListGetNode(pFunc->pParameterList, 0);
if (pVal) {
if (NULL != pStmtName && '\0' != pStmtName[0]) {
*ppKey = taosMemoryCalloc(1, TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, pStmtName, TSDB_TABLE_NAME_LEN);
TAOS_STRNCAT(*ppKey, ".", 2);
TAOS_STRNCAT(*ppKey, ((SExprNode*)pNode)->aliasName, TSDB_COL_NAME_LEN);
*pLen = taosHashBinary(*ppKey, strlen(*ppKey));
return code;
callocLen = TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pStmtName, ((SExprNode*)pNode)->aliasName, ppKey, callocLen, pLen, extraBufLen,
SLOT_KEY_TYPE_ALL);
}
int32_t literalLen = strlen(pVal->literal);
*ppKey = taosMemoryCalloc(1, literalLen + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, pVal->literal, literalLen);
TAOS_STRNCAT(*ppKey, ".", 2);
TAOS_STRNCAT(*ppKey, ((SExprNode*)pNode)->aliasName, TSDB_COL_NAME_LEN);
*pLen = taosHashBinary(*ppKey, strlen(*ppKey));
return code;
callocLen = literalLen + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pVal->literal, ((SExprNode*)pNode)->aliasName, ppKey, callocLen, pLen,
extraBufLen, SLOT_KEY_TYPE_ALL);
}
}
}
if (NULL != pStmtName && '\0' != pStmtName[0]) {
*ppKey = taosMemoryCalloc(1, TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, pStmtName, TSDB_TABLE_NAME_LEN);
TAOS_STRNCAT(*ppKey, ".", 2);
TAOS_STRNCAT(*ppKey, ((SExprNode*)pNode)->aliasName, TSDB_COL_NAME_LEN);
*pLen = taosHashBinary(*ppKey, strlen(*ppKey));
return code;
callocLen = TSDB_TABLE_NAME_LEN + 1 + TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pStmtName, ((SExprNode*)pNode)->aliasName, ppKey, callocLen, pLen, extraBufLen,
SLOT_KEY_TYPE_ALL);
}
*ppKey = taosMemoryCalloc(1, TSDB_COL_NAME_LEN + 1 + extraBufLen);
if (!*ppKey) {
return terrno;
}
TAOS_STRNCAT(*ppKey, ((SExprNode*)pNode)->aliasName, TSDB_COL_NAME_LEN);
*pLen = strlen(*ppKey);
callocLen = TSDB_COL_NAME_LEN + 1 + extraBufLen;
return getSlotKeyHelper(pNode, pStmtName, ((SExprNode*)pNode)->aliasName, ppKey, callocLen, pLen, extraBufLen,
SLOT_KEY_TYPE_COLNAME);
return code;
}
@@ -2328,6 +2313,8 @@ static int32_t createStateWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pC
// }
}
pState->trueForLimit = pWindowLogicNode->trueForLimit;
if (TSDB_CODE_SUCCESS == code) {
code = createWindowPhysiNodeFinalize(pCxt, pChildren, &pState->window, pWindowLogicNode);
}
@@ -2358,6 +2345,7 @@ static int32_t createEventWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pC
if (TSDB_CODE_SUCCESS == code) {
code = setNodeSlotId(pCxt, pChildTupe->dataBlockId, -1, pWindowLogicNode->pEndCond, &pEvent->pEndCond);
}
pEvent->trueForLimit = pWindowLogicNode->trueForLimit;
if (TSDB_CODE_SUCCESS == code) {
code = createWindowPhysiNodeFinalize(pCxt, pChildren, &pEvent->window, pWindowLogicNode);
}


@@ -79,6 +79,7 @@ static EDealRes doCreateColumn(SNode* pNode, void* pContext) {
}
}
}
pCol->node.relatedTo = pExpr->relatedTo;
return (TSDB_CODE_SUCCESS == nodesListStrictAppend(pCxt->pList, (SNode*)pCol) ? DEAL_RES_IGNORE_CHILD
: DEAL_RES_ERROR);
}


@@ -4402,3 +4402,4 @@ int32_t uniqueScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarPara
int32_t modeScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
return selectScalarFunction(pInput, inputNum, pOutput);
}


@@ -595,68 +595,71 @@ void streamTaskClearCheckInfo(SStreamTask* pTask, bool clearChkpReadyMsg) {
pTask->id.idStr, pInfo->failedId, pTask->chkInfo.checkpointId);
}
int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SVUpdateCheckpointInfoReq* pReq) {
// The checkpointInfo can be updated in the following three cases:
// 1. follower tasks; 2. leader task with status of TASK_STATUS__CK; 3. restore not completed
static int32_t doUpdateCheckpointInfoCheck(SStreamTask* pTask, bool restored, SVUpdateCheckpointInfoReq* pReq,
bool* pContinue) {
SStreamMeta* pMeta = pTask->pMeta;
int32_t vgId = pMeta->vgId;
int32_t code = 0;
const char* id = pTask->id.idStr;
SCheckpointInfo* pInfo = &pTask->chkInfo;
streamMutexLock(&pTask->lock);
*pContinue = true;
// do not update the checkpoint info if the checkpointId is less than the failed checkpointId
if (pReq->checkpointId < pInfo->pActiveInfo->failedId) {
stWarn("s-task:%s vgId:%d not update the checkpoint-info, since update checkpointId:%" PRId64
" is less than the failed checkpointId:%" PRId64 ", discard the update info",
" is less than the failed checkpointId:%" PRId64 ", discard",
id, vgId, pReq->checkpointId, pInfo->pActiveInfo->failedId);
streamMutexUnlock(&pTask->lock);
// always return success
*pContinue = false;
return TSDB_CODE_SUCCESS;
}
// it's an expired checkpointInfo update msg; still try to drop the related fill-history task if required.
if (pReq->checkpointId <= pInfo->checkpointId) {
stDebug("s-task:%s vgId:%d latest checkpointId:%" PRId64 " Ver:%" PRId64
" no need to update checkpoint info, updated checkpointId:%" PRId64 " Ver:%" PRId64 " transId:%d ignored",
id, vgId, pInfo->checkpointId, pInfo->checkpointVer, pReq->checkpointId, pReq->checkpointVer,
pReq->transId);
streamMutexUnlock(&pTask->lock);
{ // destroy the related fill-history tasks
// the task drop must not happen inside the meta lock; drop the related fill-history task now
if (pReq->dropRelHTask) {
code = streamMetaUnregisterTask(pMeta, pReq->hStreamId, pReq->hTaskId);
int32_t numOfTasks = streamMetaGetNumOfTasks(pMeta);
stDebug("s-task:%s vgId:%d related fill-history task:0x%x dropped in update checkpointInfo, remain tasks:%d",
id, vgId, pReq->taskId, numOfTasks);
}
{ // destroy the related fill-history tasks
if (pReq->dropRelHTask) {
code = streamMetaUnregisterTask(pMeta, pReq->hStreamId, pReq->hTaskId);
if (pReq->dropRelHTask) {
code = streamMetaCommit(pMeta);
}
}
int32_t numOfTasks = streamMetaGetNumOfTasks(pMeta);
stDebug("s-task:%s vgId:%d related fill-history task:0x%x dropped in update checkpointInfo, remain tasks:%d",
id, vgId, pReq->taskId, numOfTasks);
// TODO: the task may not exist; commit anyway, optimize this later
code = streamMetaCommit(pMeta);
}
}
*pContinue = false;
// always return success
return TSDB_CODE_SUCCESS;
}
SStreamTaskState pStatus = streamTaskGetStatus(pTask);
SStreamTaskState status = streamTaskGetStatus(pTask);
if (!restored) { // during restore procedure, do update checkpoint-info
stDebug("s-task:%s vgId:%d status:%s update the checkpoint-info during restore, checkpointId:%" PRId64 "->%" PRId64
" checkpointVer:%" PRId64 "->%" PRId64 " checkpointTs:%" PRId64 "->%" PRId64,
id, vgId, pStatus.name, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer, pReq->checkpointVer,
id, vgId, status.name, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer, pReq->checkpointVer,
pInfo->checkpointTime, pReq->checkpointTs);
} else { // not in restore status, must be in checkpoint status
if ((pStatus.state == TASK_STATUS__CK) || (pMeta->role == NODE_ROLE_FOLLOWER)) {
stDebug("s-task:%s vgId:%d status:%s role:%d start to update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64
" checkpointVer:%" PRId64 "->%" PRId64 " checkpointTs:%" PRId64 "->%" PRId64,
id, vgId, pStatus.name, pMeta->role, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
if (((status.state == TASK_STATUS__CK) && (pMeta->role == NODE_ROLE_LEADER)) ||
(pMeta->role == NODE_ROLE_FOLLOWER)) {
stDebug("s-task:%s vgId:%d status:%s role:%d start to update the checkpoint-info, checkpointId:%" PRId64
"->%" PRId64 " checkpointVer:%" PRId64 "->%" PRId64 " checkpointTs:%" PRId64 "->%" PRId64,
id, vgId, status.name, pMeta->role, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
pReq->checkpointVer, pInfo->checkpointTime, pReq->checkpointTs);
} else {
stDebug("s-task:%s vgId:%d status:%s NOT update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64
" checkpointVer:%" PRId64 "->%" PRId64,
id, vgId, pStatus.name, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
id, vgId, status.name, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
pReq->checkpointVer);
}
}
@@ -665,14 +668,48 @@ int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SV
pInfo->processedVer <= pReq->checkpointVer);
if (!valid) {
stFatal("s-task:%s invalid checkpointId update info recv, current checkpointId:%" PRId64 " checkpointVer:%" PRId64
" processedVer:%" PRId64 " req checkpointId:%" PRId64 " checkpointVer:%" PRId64 " discard it",
id, pInfo->checkpointId, pInfo->checkpointVer, pInfo->processedVer, pReq->checkpointId,
pReq->checkpointVer);
streamMutexUnlock(&pTask->lock);
return TSDB_CODE_STREAM_INTERNAL_ERROR;
// invalid update checkpoint info for leader, since the processedVer is greater than the checkpointVer
// It is possible for follower tasks that the processedVer is greater than the checkpointVer, and the processed info
// in follower tasks will be discarded, since the leader/follower switch happens before the checkpoint of the
// processedVer being generated.
if (pMeta->role == NODE_ROLE_LEADER) {
stFatal("s-task:%s checkpointId update info recv, current checkpointId:%" PRId64 " checkpointVer:%" PRId64
" processedVer:%" PRId64 " req checkpointId:%" PRId64 " checkpointVer:%" PRId64 " discard it",
id, pInfo->checkpointId, pInfo->checkpointVer, pInfo->processedVer, pReq->checkpointId,
pReq->checkpointVer);
*pContinue = false;
return TSDB_CODE_STREAM_INTERNAL_ERROR;
} else {
stInfo("s-task:%s vgId:%d follower recv checkpointId update info, current checkpointId:%" PRId64
" checkpointVer:%" PRId64 " processedVer:%" PRId64 " req checkpointId:%" PRId64 " checkpointVer:%" PRId64,
id, pMeta->vgId, pInfo->checkpointId, pInfo->checkpointVer, pInfo->processedVer, pReq->checkpointId,
pReq->checkpointVer);
}
}
return TSDB_CODE_SUCCESS;
}
int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SVUpdateCheckpointInfoReq* pReq) {
SStreamMeta* pMeta = pTask->pMeta;
int32_t vgId = pMeta->vgId;
int32_t code = 0;
const char* id = pTask->id.idStr;
SCheckpointInfo* pInfo = &pTask->chkInfo;
bool continueUpdate = true;
streamMutexLock(&pTask->lock);
code = doUpdateCheckpointInfoCheck(pTask, restored, pReq, &continueUpdate);
if (!continueUpdate) {
streamMutexUnlock(&pTask->lock);
return code;
}
SStreamTaskState pStatus = streamTaskGetStatus(pTask);
// update only if it is in checkpoint status, or during the restore procedure.
if ((pStatus.state == TASK_STATUS__CK) || (!restored) || (pMeta->role == NODE_ROLE_FOLLOWER)) {
pInfo->checkpointId = pReq->checkpointId;
@@ -697,7 +734,7 @@ int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SV
pTask->status.taskStatus = TASK_STATUS__READY;
code = streamMetaSaveTaskInMeta(pMeta, pTask);
code = streamMetaSaveTask(pMeta, pTask);
streamMutexUnlock(&pTask->lock);
if (code != TSDB_CODE_SUCCESS) {
@@ -1537,14 +1574,6 @@ int32_t deleteCheckpointFile(const char* id, const char* name) {
int32_t streamTaskSendNegotiateChkptIdMsg(SStreamTask* pTask) {
streamMutexLock(&pTask->lock);
ETaskStatus p = streamTaskGetStatus(pTask).state;
// if (pInfo->alreadySendChkptId == true) {
// stDebug("s-task:%s already start to consensus-checkpointId, not start again before it completed", id);
// streamMutexUnlock(&pTask->lock);
// return TSDB_CODE_SUCCESS;
// } else {
// pInfo->alreadySendChkptId = true;
// }
//
streamTaskSetReqConsenChkptId(pTask, taosGetTimestampMs());
streamMutexUnlock(&pTask->lock);


@@ -875,7 +875,7 @@ static int32_t doStreamExecTask(SStreamTask* pTask) {
}
double el = (taosGetTimestampMs() - st) / 1000.0;
if (el > 2.0) { // elapsed more than 5 sec, not occupy the CPU anymore
if (el > 5.0) { // elapsed more than 5 sec, not occupy the CPU anymore
stDebug("s-task:%s occupy more than 5.0s, release the exec threads and idle for 500ms", id);
streamTaskSetIdleInfo(pTask, 500);
return code;


@@ -633,7 +633,7 @@ void streamMetaCloseImpl(void* arg) {
}
// todo let's check the status for each task
int32_t streamMetaSaveTaskInMeta(SStreamMeta* pMeta, SStreamTask* pTask) {
int32_t streamMetaSaveTask(SStreamMeta* pMeta, SStreamTask* pTask) {
int32_t vgId = pTask->pMeta->vgId;
void* buf = NULL;
int32_t len;
@@ -683,7 +683,7 @@ int32_t streamMetaSaveTaskInMeta(SStreamMeta* pMeta, SStreamTask* pTask) {
return code;
}
int32_t streamMetaRemoveTaskInMeta(SStreamMeta* pMeta, STaskId* pTaskId) {
int32_t streamMetaRemoveTask(SStreamMeta* pMeta, STaskId* pTaskId) {
int64_t key[2] = {pTaskId->streamId, pTaskId->taskId};
int32_t code = tdbTbDelete(pMeta->pTaskDb, key, STREAM_TASK_KEY_LEN, pMeta->txn);
if (code != 0) {
@@ -706,7 +706,7 @@ int32_t streamMetaRegisterTask(SStreamMeta* pMeta, int64_t ver, SStreamTask* pTa
void* p = taosHashGet(pMeta->pTasksMap, &id, sizeof(id));
if (p != NULL) {
stDebug("s-task:0x%" PRIx64 " already exist in meta, no need to register", id.taskId);
stDebug("s-task:%" PRIx64 " already exist in meta, no need to register", id.taskId);
tFreeStreamTask(pTask);
return code;
}
@@ -736,7 +736,7 @@ int32_t streamMetaRegisterTask(SStreamMeta* pMeta, int64_t ver, SStreamTask* pTa
return code;
}
if ((code = streamMetaSaveTaskInMeta(pMeta, pTask)) != 0) {
if ((code = streamMetaSaveTask(pMeta, pTask)) != 0) {
int32_t unused = taosHashRemove(pMeta->pTasksMap, &id, sizeof(id));
void* pUnused = taosArrayPop(pMeta->pTaskList);
@@ -886,8 +886,6 @@ static void doRemoveIdFromList(SArray* pTaskList, int32_t num, SStreamTaskId* id
static int32_t streamTaskSendTransSuccessMsg(SStreamTask* pTask, void* param) {
int32_t code = 0;
int32_t waitingDuration = 5000;
if (pTask->info.taskLevel == TASK_LEVEL__SOURCE) {
code = streamTaskSendCheckpointSourceRsp(pTask);
if (code) {
@@ -898,7 +896,7 @@ static int32_t streamTaskSendTransSuccessMsg(SStreamTask* pTask, void* param) {
// let's kill the query procedure within stream, to end it ASAP.
if (pTask->info.taskLevel != TASK_LEVEL__SINK && pTask->exec.pExecutor != NULL) {
code = qKillTask(pTask->exec.pExecutor, TSDB_CODE_SUCCESS, -1);
code = qKillTask(pTask->exec.pExecutor, TSDB_CODE_SUCCESS);
if (code != TSDB_CODE_SUCCESS) {
stError("s-task:%s failed to kill task related query handle, code:%s", pTask->id.idStr, tstrerror(code));
}
@@ -935,7 +933,7 @@ int32_t streamMetaUnregisterTask(SStreamMeta* pMeta, int64_t streamId, int32_t t
code = taosHashRemove(pMeta->pTasksMap, &id, sizeof(id));
doRemoveIdFromList(pMeta->pTaskList, (int32_t)taosArrayGetSize(pMeta->pTaskList), &pTask->id);
code = streamMetaRemoveTaskInMeta(pMeta, &id);
code = streamMetaRemoveTask(pMeta, &id);
if (code) {
stError("vgId:%d failed to remove task:0x%" PRIx64 ", code:%s", pMeta->vgId, id.taskId, tstrerror(code));
}
@@ -966,32 +964,6 @@ int32_t streamMetaUnregisterTask(SStreamMeta* pMeta, int64_t streamId, int32_t t
return 0;
}
int32_t streamMetaStopOneTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId) {
SStreamTask* pTask = NULL;
int32_t code = 0;
int32_t vgId = pMeta->vgId;
int32_t numOfTasks = 0;
streamMetaWLock(pMeta);
// code = streamMetaUnregisterTask(pMeta, streamId, taskId);
// numOfTasks = streamMetaGetNumOfTasks(pMeta);
// if (code) {
// stError("vgId:%d failed to drop task:0x%x, code:%s", vgId, taskId, tstrerror(code));
// }
//
// code = streamMetaCommit(pMeta);
// if (code) {
// stError("vgId:%d failed to commit after drop task:0x%x, code:%s", vgId, taskId, tstrerror(code));
// } else {
// stDebug("s-task:0x%"PRIx64"-0x%x vgId:%d dropped, remain tasks:%d", streamId, taskId, pMeta->vgId, numOfTasks);
// }
streamMetaWUnLock(pMeta);
return code;
}
int32_t streamMetaBegin(SStreamMeta* pMeta) {
streamMetaWLock(pMeta);
int32_t code = tdbBegin(pMeta->db, &pMeta->txn, tdbDefaultMalloc, tdbDefaultFree, NULL,
@@ -1215,7 +1187,7 @@ void streamMetaLoadAllTasks(SStreamMeta* pMeta) {
if (taosArrayGetSize(pRecycleList) > 0) {
for (int32_t i = 0; i < taosArrayGetSize(pRecycleList); ++i) {
STaskId* pId = taosArrayGet(pRecycleList, i);
code = streamMetaRemoveTaskInMeta(pMeta, pId);
code = streamMetaRemoveTask(pMeta, pId);
if (code) {
stError("s-task:0x%" PRIx64 " failed to remove task, code:%s", pId->taskId, tstrerror(code));
}


@@ -76,7 +76,7 @@ int32_t streamStartScanHistoryAsync(SStreamTask* pTask, int8_t igUntreated) {
memcpy(serializedReq, &req, len);
SRpcMsg rpcMsg = {.contLen = len, .pCont = serializedReq, .msgType = TDMT_VND_STREAM_SCAN_HISTORY};
return tmsgPutToQueue(pTask->pMsgCb, STREAM_LONG_EXEC_QUEUE, &rpcMsg);
return tmsgPutToQueue(pTask->pMsgCb, STREAM_QUEUE, &rpcMsg);
}
void streamExecScanHistoryInFuture(SStreamTask* pTask, int32_t idleDuration) {


@@ -45,6 +45,10 @@ int32_t streamMetaStartAllTasks(SStreamMeta* pMeta) {
if (numOfTasks == 0) {
stInfo("vgId:%d no tasks exist, quit from consensus checkpointId", pMeta->vgId);
streamMetaWLock(pMeta);
streamMetaResetStartInfo(&pMeta->startInfo, vgId);
streamMetaWUnLock(pMeta);
return TSDB_CODE_SUCCESS;
}
@@ -447,6 +451,7 @@ int32_t streamMetaStopAllTasks(SStreamMeta* pMeta) {
continue;
}
int64_t refId = pTask->id.refId;
int32_t ret = streamTaskStop(pTask);
if (ret) {
stError("s-task:0x%x failed to stop task, code:%s", pTaskId->taskId, tstrerror(ret));


@@ -710,7 +710,7 @@ int32_t streamTaskStop(SStreamTask* pTask) {
}
if (pTask->info.taskLevel != TASK_LEVEL__SINK && pTask->exec.pExecutor != NULL) {
code = qKillTask(pTask->exec.pExecutor, TSDB_CODE_SUCCESS, 5000);
code = qKillTask(pTask->exec.pExecutor, TSDB_CODE_SUCCESS);
if (code != TSDB_CODE_SUCCESS) {
stError("s-task:%s failed to kill task related query handle, code:%s", id, tstrerror(code));
}
@@ -869,7 +869,7 @@ int32_t streamTaskClearHTaskAttr(SStreamTask* pTask, int32_t resetRelHalt) {
pStreamTask->status.taskStatus = TASK_STATUS__READY;
}
code = streamMetaSaveTaskInMeta(pMeta, pStreamTask);
code = streamMetaSaveTask(pMeta, pStreamTask);
streamMutexUnlock(&(pStreamTask->lock));
streamMetaReleaseTask(pMeta, pStreamTask);
@@ -1034,7 +1034,7 @@ static int32_t taskPauseCallback(SStreamTask* pTask, void* param) {
// in case of fill-history task, stop the tsdb file scan operation.
if (pTask->info.fillHistory == 1) {
void* pExecutor = pTask->exec.pExecutor;
code = qKillTask(pExecutor, TSDB_CODE_SUCCESS, 10000);
code = qKillTask(pExecutor, TSDB_CODE_SUCCESS);
}
stDebug("vgId:%d s-task:%s set pause flag and pause task", pMeta->vgId, pTask->id.idStr);
@@ -1296,8 +1296,6 @@ const char* streamTaskGetExecType(int32_t type) {
return "resume-task-from-idle";
case STREAM_EXEC_T_ADD_FAILED_TASK:
return "record-start-failed-task";
case STREAM_EXEC_T_STOP_ONE_TASK:
return "stop-one-task";
case 0:
return "exec-all-tasks";
default:


@@ -564,6 +564,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_MULTI_STORAGE_EXPIRED, "License expired for m
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_OBJECT_STROAGE_EXPIRED, "License expired for object storage function")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_DUAL_REPLICA_HA_EXPIRED,"License expired for dual-replica HA function")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_DB_ENCRYPTION_EXPIRED, "License expired for database encryption function")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_TD_GPT_EXPIRED, "License expired for TDgpt function")
// sync
TAOS_DEFINE_ERROR(TSDB_CODE_SYN_TIMEOUT, "Sync timeout")
@@ -750,8 +751,15 @@ TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_ANOMALY_WIN_TYPE, "ANOMALY_WINDOW only
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_ANOMALY_WIN_COL, "ANOMALY_WINDOW not support on tag column")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_ANOMALY_WIN_OPT, "ANOMALY_WINDOW option should include algo field")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_FORECAST_CLAUSE, "Invalid forecast clause")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_REGULAR_EXPRESSION_ERROR, "Syntax error in regular expression")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_REGULAR_EXPRESSION_ERROR, "Syntax error in regular expression")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_VGID_LIST, "Invalid vgid list")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_TRUE_FOR_NEGATIVE, "True_for duration cannot be negative")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_TRUE_FOR_UNIT, "Cannot use 'year' or 'month' as true_for duration")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_COLS_FUNCTION, "Invalid cols function")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INVALID_COLS_SELECTFUNC, "cols function's first param must be a select function that outputs a single row")
TAOS_DEFINE_ERROR(TSDB_CODE_INVALID_MULITI_COLS_FUNC, "Improper use of cols function with multiple output columns")
TAOS_DEFINE_ERROR(TSDB_CODE_INVALID_COLS_ALIAS, "Invalid use of alias for cols function")
TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTERNAL_ERROR, "Parser internal error")
//planner


@@ -256,7 +256,7 @@ static void *tAutoQWorkerThreadFp(SQueueWorker *worker) {
return NULL;
}
STaosQueue *tAutoQWorkerAllocQueue(SAutoQWorkerPool *pool, void *ahandle, FItem fp, int32_t minNum) {
STaosQueue *tAutoQWorkerAllocQueue(SAutoQWorkerPool *pool, void *ahandle, FItem fp) {
int32_t code;
STaosQueue *queue;
@@ -280,10 +280,7 @@ STaosQueue *tAutoQWorkerAllocQueue(SAutoQWorkerPool *pool, void *ahandle, FItem
int32_t queueNum = taosGetQueueNumber(pool->qset);
int32_t curWorkerNum = taosArrayGetSize(pool->workers);
int32_t dstWorkerNum = ceilf(queueNum * pool->ratio);
if (dstWorkerNum < minNum) {
dstWorkerNum = minNum;
}
if (dstWorkerNum < 2) dstWorkerNum = 2;
// spawn a thread to process queue
while (curWorkerNum < dstWorkerNum) {


@@ -400,6 +400,7 @@ TSDB_CODE_GRANT_MULTI_STORAGE_EXPIRED = 0x8000082A
TSDB_CODE_GRANT_OBJECT_STROAGE_EXPIRED = 0x8000082B
TSDB_CODE_GRANT_DUAL_REPLICA_HA_EXPIRED = 0x8000082C
TSDB_CODE_GRANT_DB_ENCRYPTION_EXPIRED = 0x8000082D
TSDB_CODE_GRANT_TD_GPT_EXPIRED = 0x8000082E
TSDB_CODE_SYN_TIMEOUT = 0x80000903
TSDB_CODE_SYN_MISMATCHED_SIGNATURE = 0x80000907
TSDB_CODE_SYN_NOT_LEADER = 0x8000090C
@@ -554,6 +555,8 @@ TSDB_CODE_PAR_COL_PK_TYPE = 0x80002679
TSDB_CODE_PAR_INVALID_PK_OP = 0x8000267A
TSDB_CODE_PAR_PRIMARY_KEY_IS_NULL = 0x8000267B
TSDB_CODE_PAR_PRIMARY_KEY_IS_NONE = 0x8000267C
TSDB_CODE_PAR_TRUE_FOR_NEGATIVE = 0x80002687
TSDB_CODE_PAR_TRUE_FOR_UNIT = 0x80002688
TSDB_CODE_PAR_INTERNAL_ERROR = 0x800026FF
TSDB_CODE_PLAN_INTERNAL_ERROR = 0x80002700
TSDB_CODE_PLAN_EXPECTED_TS_EQUAL = 0x80002701


@@ -50,7 +50,7 @@ ignoreCodes = [
'0x80000734', '0x80000735', '0x80000736', '0x80000737', '0x80000738', '0x8000080E', '0x8000080F', '0x80000810', '0x80000811', '0x80000812',
'0x80000813', '0x80000814', '0x80000815', '0x80000816', '0x80000817', '0x80000818', '0x80000819', '0x80000820', '0x80000821', '0x80000822',
'0x80000823', '0x80000824', '0x80000825', '0x80000826', '0x80000827', '0x80000828', '0x80000829', '0x8000082A', '0x8000082B', '0x8000082C',
'0x8000082D', '0x80000907', '0x80000919', '0x8000091A', '0x8000091B', '0x8000091C', '0x8000091D', '0x8000091E', '0x8000091F', '0x80000A00',
'0x8000082D', '0x8000082E', '0x80000907', '0x80000919', '0x8000091A', '0x8000091B', '0x8000091C', '0x8000091D', '0x8000091E', '0x8000091F', '0x80000A00',
'0x80000A01', '0x80000A03', '0x80000A06', '0x80000A07', '0x80000A08', '0x80000A09', '0x80000A0A', '0x80000A0B', '0x80000A0E', '0x80002206',
'0x80002207', '0x80002406', '0x80002407', '0x80002503', '0x80002506', '0x80002507', '0x8000261B', '0x80002653', '0x80002668', '0x80002669',
'0x8000266A', '0x8000266B', '0x8000266C', '0x8000266D', '0x8000266E', '0x8000266F', '0x80002670', '0x80002671', '0x80002672', '0x80002673',


@@ -79,7 +79,7 @@
(void)streamMetaAddFailedTask
(void)streamMetaAddTaskLaunchResult
(void)streamMetaCommit
(void)streamMetaRemoveTaskInMeta
(void)streamMetaRemoveTask
(void)streamMetaSendHbHelper
(void)streamMetaStartAllTasks
(void)streamMetaStartOneTask

View File

@@ -470,8 +470,6 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_taosx.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts5466.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_td33504.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts-5473.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts-5776.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts5906.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/td-32187.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/td-33225.py
@@ -1260,6 +1258,10 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/odbc.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/fill_with_group.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/state_window.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cols_function.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cols_function.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cols_function.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cols_function.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 99-TDcase/TD-21561.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 99-TDcase/TD-20582.py
,,n,system-test,python3 ./test.py -f eco-system/meta/database/keep_time_offset.py
@@ -1268,6 +1270,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/operator.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/operator.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f eco-system/manager/schema_change.py -N 3 -M 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/test_window_true_for.py
#tsim test
,,y,script,./test.sh -f tsim/query/timeline.sim

View File

@@ -298,7 +298,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/dataFromTsdbNWal-multiCtb.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_taosx.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts5466.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts-5473.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_c_test.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/td-32187.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/td-33225.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts4563.py

View File

@@ -671,6 +671,15 @@ class TDSql:
        caller = inspect.getframeinfo(inspect.stack()[1][0])
        args = (caller.filename, caller.lineno, self.sql, col_name_list, expect_col_name_list)
        tdLog.exit("%s(%d) failed: sql:%s, col_name_list:%s != expect_col_name_list:%s" % args)

    def checkResColNameList(self, expect_col_name_list):
        col_name_list = []
        col_type_list = []
        for query_col in self.cursor.description:
            col_name_list.append(query_col[0])
            col_type_list.append(query_col[1])
        self.checkColNameList(col_name_list, expect_col_name_list)

    def __check_equal(self, elm, expect_elm):
        if elm == expect_elm:

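The `checkResColNameList` helper added in the hunk above reads the result-set column names and type codes from the cursor's DB-API `description` attribute, then delegates the comparison to the existing `checkColNameList`. A minimal standalone sketch of the same idea follows; `FakeCursor` is a hypothetical stand-in for a real `taos` connection cursor and is not part of TDengine:

```python
# Sketch of deriving column names/types from a DB-API cursor description.
# FakeCursor is a hypothetical stand-in; a real taos cursor populates
# description after a query is executed.

class FakeCursor:
    # DB-API 2.0 description entries are 7-tuples; the helper only
    # reads the first two fields (name, type_code).
    description = [("ts", 9), ("current", 6), ("voltage", 4)]

def result_col_names(cursor):
    # Mirrors the loop in checkResColNameList: collect names and types.
    col_name_list = []
    col_type_list = []
    for query_col in cursor.description:
        col_name_list.append(query_col[0])
        col_type_list.append(query_col[1])
    return col_name_list, col_type_list

names, types = result_col_names(FakeCursor())
print(names)  # -> ['ts', 'current', 'voltage']
```

In the test framework the collected names are then asserted against the expected list, so a schema change in a query result fails the test at the call site.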
View File

@@ -15,3 +15,5 @@ TSDB_CODE_UDF_FUNC_EXEC_FAILURE = (TAOS_DEF_ERROR_CODE | 0x290A)
TSDB_CODE_TSC_INTERNAL_ERROR = (TAOS_DEF_ERROR_CODE | 0x02FF)
TSDB_CODE_PAR_SYNTAX_ERROR = (TAOS_DEF_ERROR_CODE | 0x2600)
TSDB_CODE_PAR_INVALID_COLS_FUNCTION = (TAOS_DEF_ERROR_CODE | 0x2689)

View File

@@ -1294,7 +1294,10 @@ sql_error select avg(f1), spread(f1), spread(f2), spread(tb1.f1) from tb1 group
sql_error select avg(f1), spread(f1), spread(f2), spread(tb1.f1) from tb1 group by f1 having spread(f1) > id1 and sum(f1);
sql_error select avg(f1), spread(f1), spread(f2), spread(tb1.f1) from tb1 group by f1 having spread(f1) > id1 and sum(f1) > 1;
sql select avg(f1), spread(f1), spread(f2), spread(tb1.f1) from tb1 group by f1 having spread(f1) > id1 and sum(f1) > 1;
if $rows != 0 then
return -1
endi
sql select avg(f1), spread(f1), spread(f2), spread(tb1.f1) from tb1 group by f1 having spread(f1) > 2 and sum(f1) > 1 order by f1;
if $rows != 0 then

View File

@@ -809,7 +809,6 @@ sql create stable st(ts timestamp, a int, b int , c int) tags(ta int,tb int,tc i
sql create table ts1 using st tags(1,1,1);
sql create stream streams5 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamt5 as select count(*), _wstart, _wend, max(a) from ts1 interval(10s) ;
sql create stream streams6 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamt6 as select count(*), _wstart, _wend, max(a), _wstart as ts from ts1 interval(10s) ;
run tsim/stream/checkTaskStatus.sim
sql_error create stream streams7 trigger at_once into streamt7 as select _wstart, count(*), _wstart, _wend, max(a) from ts1 interval(10s) ;
@@ -833,14 +832,14 @@ if $loop_count == 10 then
endi
if $rows != 1 then
print =====rows=$rows
print ===== streamt5: rows=$rows
goto loop170
endi
sql select * from streamt6;
if $rows != 1 then
print =====rows=$rows
print ===== streamt6: rows=$rows
goto loop170
endi

View File

@@ -1,3 +1,4 @@
## \brief Test create view and drop view functions
sql connect
sql use testa;

View File

@@ -237,7 +237,7 @@ class TDTestCase:
tdLog.info(f"expireTime: {expireTime}, serviceTime: {serviceTime}")
tdSql.checkEqual(True, abs(expireTime - serviceTime - 864000) < 15)
tdSql.query(f'show grants full;')
nGrantItems = 31
nGrantItems = 32
tdSql.checkEqual(len(tdSql.queryResult), nGrantItems)
tdSql.checkEqual(tdSql.queryResult[0][2], serviceTimeStr)
for i in range(1, nGrantItems):

Some files were not shown because too many files have changed in this diff