other: merge 3.0
|
@ -378,7 +378,7 @@ The configuration parameters in properties are as follows.
|
||||||
- TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION: Whether to disable SSL certificate validation. It only takes effect when using WebSocket connections. true: validation is disabled, false: validation is enabled. The default is false.
|
- TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION: Whether to disable SSL certificate validation. It only takes effect when using WebSocket connections. true: validation is disabled, false: validation is enabled. The default is false.
|
||||||
|
|
||||||
|
|
||||||
For JDBC native connections, you can specify other parameters, such as the log level and SQL length, via the URL and Properties. For more detailed configuration, please refer to [Client Configuration](../../reference/config/#configuration-file-on-client-side).
|
For JDBC native connections, you can specify other parameters, such as the log level and SQL length, via the URL and Properties. For more detailed configuration, please refer to [Client Configuration](../../reference/config/).
|
||||||
|
|
||||||
### Priority of configuration parameters
|
### Priority of configuration parameters
|
||||||
|
|
||||||
|
|
|
@ -444,7 +444,7 @@ FROM temp_ctable t1 LEFT ASOF JOIN temp_stable t2
|
||||||
ON t1.ts = t2.ts AND t1.deviceid = t2.deviceid;
|
ON t1.ts = t2.ts AND t1.deviceid = t2.deviceid;
|
||||||
```
|
```
|
||||||
|
|
||||||
For more information about JOIN operations, please refer to the page [TDengine Join] (../join).
|
For more information about JOIN operations, please refer to the page [TDengine Join](../join).
|
||||||
|
|
||||||
## Nested Query
|
## Nested Query
|
||||||
|
|
||||||
|
|
|
@ -1209,27 +1209,40 @@ ignore_negative: {
|
||||||
### DIFF
|
### DIFF
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DIFF(expr [, ignore_negative])
|
DIFF(expr [, ignore_option])
|
||||||
|
|
||||||
ignore_negative: {
|
ignore_option: {
|
||||||
0
|
0
|
||||||
| 1
|
| 1
|
||||||
|
| 2
|
||||||
|
| 3
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Description**: The different of each row with its previous row for a specific column. `ignore_negative` can be specified as 0 or 1, the default value is 1 if it's not specified. `1` means negative values are ignored. For tables with composite primary key, the data with the smallest primary key value is used to calculate the difference.
|
**Description**: The difference between each row and its previous row for a specific column. `ignore_option` takes a value of 0, 1, 2, or 3; the default value is 0 if it's not specified.
|
||||||
|
- `0` means that negative values (diff results) are not ignored and null values are not ignored
|
||||||
|
- `1` means that negative values (diff results) are treated as null values
|
||||||
|
- `2` means that negative values (diff results) are not ignored but null values are ignored
|
||||||
|
- `3` means that negative values (diff results) are ignored and null values are ignored
|
||||||
|
- For tables with composite primary key, the data with the smallest primary key value is used to calculate the difference.
|
||||||
|
|
||||||
**Return value type**:Same as the data type of the column being operated upon
|
**Return value type**: `bool`, `timestamp`, and integer types all return `int64`; `float` types return `double`. If the diff result overflows, the overflowed value is returned.
|
||||||
|
|
||||||
**Applicable data types**: Numeric
|
**Applicable data types**: Numeric type, timestamp and bool type.
|
||||||
|
|
||||||
**Applicable table types**: standard tables and supertables
|
**Applicable table types**: standard tables and supertables
|
||||||
|
|
||||||
**More explanation**:
|
**More explanation**:
|
||||||
|
|
||||||
- The number of result rows is the number of rows subtracted by one, no output for the first row
|
- diff calculates the difference between a specific column in the current row and the **first valid value before the row**. The **first valid value before the row** refers to the nearest non-null value in the same column with a smaller timestamp.
|
||||||
- It can be used together with a selected column. For example: select \_rowts, DIFF() from.
|
- The diff result of numeric types is the corresponding arithmetic difference; timestamps are calculated based on the timestamp precision of the database; when calculating diff, `true` is treated as 1 and `false` is treated as 0.
|
||||||
|
- If the value in the current row is NULL, or if no **first valid value before the current row** can be found, the diff result is NULL.
|
||||||
|
- When negative values are ignored (`ignore_option` set to 1 or 3), a negative diff result is set to null and then filtered according to the null value filtering rule.
|
||||||
|
- When the diff result overflows, whether the result counts as a negative value to be ignored depends on whether the logical result is positive or negative. For example, 9223372036854775800 - (-9223372036854775806) exceeds the range of BIGINT, so the diff result displays the overflow value -10, but it is not ignored as a negative value.
|
||||||
|
- One or more diffs can be used in a single statement, and each diff may specify the same or a different `ignore_option`. When a statement contains multiple diffs, a row is removed from the result set if and only if all diff results of that row are NULL and every diff's `ignore_option` is set to ignore NULL values.
|
||||||
|
- Can be used together with other selected columns. For example: `select _rowts, DIFF()`.
|
||||||
|
- When there is no composite primary key, if the same timestamp appears across different subtables, the error "Duplicate timestamps not allowed" is reported.
|
||||||
|
- When used with a composite primary key, the same combination of timestamp and complete primary key may appear across subtables; which row is used depends on which row is found first, so running diff() multiple times may return different results in such a case (see the example below).
|
||||||
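As an illustrative sketch (assuming the `power.meters` supertable used in other examples of these docs), the following query computes the per-device change in current, with `ignore_option` set to 3 so that both negative results and null values are dropped:

```sql
-- per-device current change; ignore_option = 3 drops negative diffs and nulls
select _rowts, tbname, diff(current, 3)
from power.meters
partition by tbname;
```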
|
|
||||||
### IRATE
|
### IRATE
|
||||||
|
|
||||||
|
|
|
@ -22,9 +22,7 @@ Record these values:
|
||||||
- TDengine REST API url: `http://tdengine.local:6041`.
|
- TDengine REST API url: `http://tdengine.local:6041`.
|
||||||
- TDengine cluster authorization, with user + password.
|
- TDengine cluster authorization, with user + password.
|
||||||
|
|
||||||
## Configuring Grafana
|
## Install Grafana Plugin and Configure Data Source
|
||||||
|
|
||||||
### Install Grafana Plugin and Configure Data Source
|
|
||||||
|
|
||||||
<Tabs defaultValue="script">
|
<Tabs defaultValue="script">
|
||||||
<TabItem value="gui" label="With GUI">
|
<TabItem value="gui" label="With GUI">
|
||||||
|
@ -34,8 +32,6 @@ Under Grafana 8, plugin catalog allows you to [browse and manage plugins within
|
||||||
Installation may take a few minutes; you can **Create a TDengine data source** when the installation is finished.
|
Installation may take a few minutes; you can **Create a TDengine data source** when the installation is finished.
|
||||||
Then you can add a TDengine data source by filling in the configuration options.
|
Then you can add a TDengine data source by filling in the configuration options.
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
- Host: the IP address of the server in the TDengine cluster that provides the REST service, together with the REST service port number (6041); by default use `http://localhost:6041`.
|
- Host: the IP address of the server in the TDengine cluster that provides the REST service, together with the REST service port number (6041); by default use `http://localhost:6041`.
|
||||||
- User: TDengine user name.
|
- User: TDengine user name.
|
||||||
- Password: TDengine user password.
|
- Password: TDengine user password.
|
||||||
|
@ -99,8 +95,6 @@ Now users can log in to the Grafana server (username/password: admin/admin) dire
|
||||||
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it.
|
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it.
|
||||||
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
|
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
- Host: the IP address of the server in the TDengine cluster that provides the REST service, together with the REST service port number (6041); by default use `http://localhost:6041`.
|
- Host: the IP address of the server in the TDengine cluster that provides the REST service, together with the REST service port number (6041); by default use `http://localhost:6041`.
|
||||||
- User: TDengine user name.
|
- User: TDengine user name.
|
||||||
- Password: TDengine user password.
|
- Password: TDengine user password.
|
||||||
|
@ -177,37 +171,112 @@ Open Grafana (http://localhost:3000), and you can add dashboard with TDengine no
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
### Create Dashboard
|
:::info
|
||||||
|
|
||||||
Go back to the main interface to create a dashboard and click Add Query to enter the panel query page:
|
In the following introduction, we take Grafana v11.0.0 as an example; other versions may differ in features. Please refer to [Grafana's official website](https://grafana.com/docs/grafana/latest/).
|
||||||
|
|
||||||
|
:::
|
||||||
|
|
||||||
|
## Built-in Variables and Custom Variables
|
||||||
|
The Variable feature in Grafana is very powerful. It can be used in queries, panel titles, labels, etc., to create more dynamic and interactive Dashboards, improving user experience and efficiency.
|
||||||
|
|
||||||
|
The main functions and characteristics of variables include:
|
||||||
|
|
||||||
|
- Dynamic data query: Variables can be used in query statements, allowing users to dynamically change query conditions by selecting different variable values, thus viewing different data views. This is very useful for scenarios that need to dynamically display data based on user input.
|
||||||
|
|
||||||
|
- Improved reusability: By defining variables, the same configuration or query logic can be reused in multiple places without the need to rewrite the same code. This makes the maintenance and updating of Dashboards simpler and more efficient.
|
||||||
|
|
||||||
|
- Flexible configuration options: Variables offer a variety of configuration options, such as predefined static value lists, dynamic value querying from data sources, regular expression filtering, etc., making the application of variables more flexible and powerful.
|
||||||
|
|
||||||
|
|
||||||
|
Grafana provides both built-in variables and custom variables, which can be referenced in SQL writing. We can use `$variableName` to reference the variable, where `variableName` is the name of the variable. For detailed reference, please refer to [Variable reference](https://grafana.com/docs/grafana/latest/dashboards/variables/variable-syntax/).
|
||||||
|
|
||||||
|
### Built-in Variables
|
||||||
|
Grafana has built-in variables such as `from`, `to`, and `interval`, all derived from Grafana plugin panels. Their meanings are as follows:
|
||||||
|
- `from` is the start time of the query range
|
||||||
|
- `to` is the end time of the query range
|
||||||
|
- `interval` represents the time span of the window split
|
||||||
|
|
||||||
|
It is recommended to set the start and end times of the query range for each query, which can effectively reduce the amount of data scanned by the TDengine server during query execution. `interval` is the size of the window split; in Grafana 11, it is calculated from the time range and the number of returned points.
|
||||||
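For example, a typical panel query that uses all three variables might look like the following sketch (`power.meters` is the example supertable used later on this page):

```sql
-- $from/$to bound the scan range; $interval sizes each window
select _wstart as ts, avg(current) as current
from power.meters
where ts > $from and ts < $to
interval($interval)
fill(null)
```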
|
In addition to the above three common variables, Grafana also provides variables such as `__timezone`, `__org`, `__user`, etc. For details, please refer to [Built-in Variables](https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#global-variables).
|
||||||
|
|
||||||
|
### Custom Variables
|
||||||
|
We can add custom variables in the Dashboard. The usage of custom variables is no different from that of built-in variables; they are referenced in SQL with `$variableName`.
|
||||||
|
Custom variables support multiple types, such as `Query`, `Constant`, `Interval`, `Data source`, etc.
|
||||||
|
Custom variables can reference other custom variables, for example, one variable represents a region, and another variable can reference the value of the region to query devices in that region.
|
||||||
|
|
||||||
|
#### Adding Query Type Variables
|
||||||
|
In the Dashboard configuration, select `Variables`, then click `New variable`:
|
||||||
|
1. In the `Name` field, enter your variable name; here we set it to `selected_groups`.
|
||||||
|
2. In the `Select variable type` dropdown menu, select `Query`.
|
||||||
|
Depending on the selected variable type, configure the corresponding options. For example, if you choose `Query`, you need to specify the data source and the query statement for obtaining variable values. Here, taking smart meters as an example, we set the query type, select the data source, and configure the SQL as `select distinct(groupid) from power.meters where groupid < 3 and ts > $from and ts < $to;`
|
||||||
|
3. After clicking `Run Query` at the bottom, you can see the variable values generated based on your configuration in the `Preview of values` section.
|
||||||
|
4. Other configurations are not detailed here. After completing the configuration, click the `Apply` button at the bottom of the page, then click `Save dashboard` in the top right corner to save.
|
||||||
|
|
||||||
|
After completing the above steps, we have successfully added a new custom variable `$selected_groups` to the Dashboard. We can later reference this variable in the Dashboard's queries with `$selected_groups`.
|
||||||
|
|
||||||
|
We can also add another custom variable that references this `selected_groups` variable. For example, we add a query variable named `tbname_max_current`, with the SQL `select tbname from power.meters where groupid = $selected_groups and ts > $from and ts < $to;`
|
||||||
|
|
||||||
|
#### Adding Interval Type Variables
|
||||||
|
We can customize the time window interval to better fit business needs.
|
||||||
|
1. In the `Name` field, enter the variable name as `interval`.
|
||||||
|
2. In the `Select variable type` dropdown menu, select `Interval`.
|
||||||
|
3. In `Interval options`, enter `1s,2s,5s,10s,15s,30s,1m`.
|
||||||
|
4. Other configurations are not detailed here. After completing the configuration, click the `Apply` button at the bottom of the page, then click `Save dashboard` in the top right corner to save.
|
||||||
|
|
||||||
|
After completing the above steps, we have successfully added a new custom variable `$interval` to the Dashboard. We can later reference this variable in the Dashboard's queries with `$interval`.
|
||||||
|
|
||||||
|
## TDengine Time Series Query Support
|
||||||
|
On top of supporting standard SQL, TDengine also provides a series of special query syntaxes that meet the needs of time series business scenarios, bringing great convenience to the development of applications in time series scenarios.
|
||||||
|
- `partition by`: this clause splits data along certain dimensions and then performs a series of calculations within each partition; it can replace `group by` in most cases.
|
||||||
|
- `interval`: this clause generates time windows of equal duration.
|
||||||
|
- `fill`: this clause specifies how to fill missing data in any window.
|
||||||
|
- Window pseudocolumns: to output the time window corresponding to each aggregation result, use window pseudocolumns in the SELECT clause, such as the window start time (`_wstart`) and the window end time (`_wend`); see the example after this list.
|
||||||
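A sketch combining these clauses (the time range is illustrative; `power.meters` follows the smart-meter example used on this page):

```sql
select
  _wstart,                 -- window start (pseudocolumn)
  _wend,                   -- window end (pseudocolumn)
  groupid,
  avg(current) as current
from power.meters
where ts >= '2024-01-01 00:00:00' and ts < '2024-01-02 00:00:00'
partition by groupid       -- calculate within each group
interval(1m)               -- equal 1-minute windows
fill(null)                 -- fill empty windows with null
```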
|
|
||||||
|
For a detailed introduction to these features, please refer to [Time-Series Extensions](../../taos-sql/distinguished/).
|
||||||
|
|
||||||
|
|
||||||
|
## Create Dashboard
|
||||||
|
|
||||||
|
Return to the main interface to create a Dashboard, click Add Query to enter the panel query page:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
As shown above, select the `TDengine` data source in the `Query` panel and enter the corresponding SQL in the query box below. We will continue to use power meters as an example; to demonstrate smooth curves, **virtual data** is used here.
|
As shown above, select the `TDengine` data source in the `Query` panel and enter the corresponding SQL in the query box below. We will continue to use power meters as an example; to demonstrate smooth curves, **virtual data** is used here.
|
||||||
|
|
||||||
- INPUT SQL: Enter the desired query (the results being two columns and multiple rows), such as `select _wstart as ts, avg(current) as current from power.meters where ts > $from and ts < $to interval($interval) fill(null)`. In this statement, `$from`, `$to`, and `$interval` are variables that Grafana replaces with the query time range and interval. In addition to the built-in variables, custom template variables are also supported.
|
## Time Series Data Display
|
||||||
- ALIAS BY: This allows you to set the current query alias.
|
Suppose we want to query the average current over a period of time, with the time window split by `$interval` and null filled in for any time window that is missing data.
|
||||||
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
|
- INPUT SQL: Enter the statement to be queried (the result set of this SQL statement should be two columns and multiple rows); here enter: `select _wstart as ts, avg(current) as current from power.meters where groupid in ($selected_groups) and ts > $from and ts < $to interval($interval) fill(null)`, where from, to, and interval are built-in Grafana variables and selected_groups is a custom variable.
|
||||||
- Group by column(s): `group by` or `partition by` columns name split by comma. By setting `Group by column(s)`, it can show multi-dimension data if Sql is `group by` or `partition by`. Such as, it can show data by `groupid` if sql is `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)` and `Group by column(s)` is `groupid`.
|
- ALIAS BY: You can set the current query alias.
|
||||||
- Group By Format: format legend for `group by` or `partition by`. For example, in the above Input SQL, set `Group By Format` to `groupid-{{groupid}}`, and display the legend name as the formatted group name.
|
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final execution statement.
|
||||||
|
|
||||||
|
In the custom variables at the top, if the value of `selected_groups` is set to 1, the query shows the change in the average current of all devices in the `meters` supertable whose `groupid` is 1, as in the following figure:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
Since the REST connection because is stateless. Grafana plugin can use <db_name>.<table_name> in the SQL command to specify the database name.
|
Since the REST interface is stateless, it is not possible to use the `use db` statement to switch databases. In the SQL statement in the Grafana plugin, you can use \<db_name>.\<table_name> to specify the database.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
Query the average current changes of all devices in the `meters` stable as shown in the following figure:
|
## Time Series Data Group Display
|
||||||
|
Suppose we want to query the average current over a period of time, displayed grouped by `groupid`; we can modify the previous SQL to `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)`
|
||||||
|
|
||||||

|
- Group by column(s): `group by` or `partition by` column names, separated by **half-width (ASCII)** commas. For a `group by` or `partition by` query statement, setting the `Group by` column displays multidimensional data. Here, set the Group by column name to `groupid` to display data grouped by `groupid`.
|
||||||
|
- Group By Format: the legend format for multidimensional data in `group by` or `partition by` scenarios. For example, with the above INPUT SQL, setting `Group By Format` to `groupid-{{groupid}}` displays the legend name as the formatted group name.
|
||||||
|
|
||||||
Query the average current value of all devices in the 'meters' stable and display it in groups according to the `groupid` as shown in the following figure:
|
After completing the settings, the data is displayed grouped by `groupid` as shown in the following figure:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
|
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
|
||||||
|
|
||||||
### Importing the Dashboard
|
## Performance Suggestions
|
||||||
|
- **Include a time range in every query.** In time-series databases, a query without a time range scans whole tables and performs poorly. A common SQL pattern is `select column_name from db.table where ts > $from and ts < $to;`
|
||||||
|
- For latest-status queries, we generally recommend **enabling cache when creating the database** (`CACHEMODEL` set to last_row or both); a common SQL pattern is `select last(column_name) from db.table where ts > $from and ts < $to;` (see the sketch below)
|
||||||
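A sketch of the second suggestion (the database name `power` is illustrative): enable caching when creating the database, then serve latest-status queries from the cache:

```sql
-- enable last-row/last-value caching at creation time (illustrative name)
CREATE DATABASE power CACHEMODEL 'both';

-- latest-status query answered from the cache
select last(current) from power.meters where ts > $from and ts < $to;
```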
|
|
||||||
|
## Importing the Dashboard
|
||||||
|
|
||||||
You can install the TDinsight dashboard from the data source configuration page (for example `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for the TDengine cluster. Ensure that you use TDinsight for 3.x; note that TDinsight for 3.x requires taosKeeper to be configured and running correctly.
|
You can install the TDinsight dashboard from the data source configuration page (for example `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for the TDengine cluster. Ensure that you use TDinsight for 3.x; note that TDinsight for 3.x requires taosKeeper to be configured and running correctly.
|
||||||
|
|
||||||
|
@ -221,3 +290,137 @@ For more dashboards using TDengine data source, [search here in Grafana](https:/
|
||||||
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine alert demo.
|
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine alert demo.
|
||||||
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight.
|
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight.
|
||||||
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf node metrics dashboard using TDengine data source.
|
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf node metrics dashboard using TDengine data source.
|
||||||
|
|
||||||
|
|
||||||
|
## Alert Configuration Introduction
|
||||||
|
### Alert Configuration Steps
|
||||||
|
The TDengine Grafana plugin supports alerts. To configure alerts, the following steps are required:
|
||||||
|
1. Configure Contact Points: Set up notification channels, including DingDing, Email, Slack, WebHook, Prometheus Alertmanager, etc.
|
||||||
|
2. Configure Notification Policies: Set up routing for which channel to send alerts to, as well as the timing and frequency of notifications.
|
||||||
|
3. Configure "Alert rules": Set up detailed alert rules.
|
||||||
|
3.1 Configure alert name.
|
||||||
|
3.2 Configure query and alert trigger conditions.
|
||||||
|
3.3 Configure evaluation behavior.
|
||||||
|
3.4 Configure labels and notifications.
|
||||||
|
3.5 Configure annotations.
|
||||||
|
|
||||||
|
### Alert Configuration Web UI
|
||||||
|
In Grafana 11, the alert Web UI has 6 tabs: "Alert rules", "Contact points", "Notification policies", "Silences", "Groups", and "Settings".
|
||||||
|
- "Alert rules" displays and configures alert rules.
|
||||||
|
- "Contact points" support notification channels such as DingDing, Email, Slack, WebHook, Prometheus Alertmanager, etc.
|
||||||
|
- "Notification policies" sets up routing for which channel to send alerts to, as well as the timing and frequency of notifications.
|
||||||
|
- "Silences" configures silent periods for alerts.
|
||||||
|
- "Groups" displays grouped alerts after they are triggered.
|
||||||
|
- "Admin" allows modifying alert configurations through JSON.
|
||||||
|
|
||||||
|
## Configuring Email Contact Point
|
||||||
|
### Modifying Grafana Server Configuration File
|
||||||
|
Add SMTP/Emailing and Alerting modules to the Grafana service configuration file. For Linux systems, the configuration file is usually located at `/etc/grafana/grafana.ini`.
|
||||||
|
Add the following content to the configuration file:
|
||||||
|
|
||||||
|
```ini
|
||||||
|
#################################### SMTP / Emailing ##########################
|
||||||
|
[smtp]
|
||||||
|
enabled = true
|
||||||
|
host = smtp.qq.com:465 #Email service used
|
||||||
|
user = receiver@foxmail.com
|
||||||
|
password = *********** #Use mail authorization code
|
||||||
|
skip_verify = true
|
||||||
|
from_address = sender@foxmail.com
|
||||||
|
```
|
||||||
|
|
||||||
|
Then restart the Grafana service. For example, on a Linux system, execute `systemctl restart grafana-server.service`
|
||||||
|
|
||||||
|
### Grafana Configuration for Email Contact Point
|
||||||
|
|
||||||
|
Find "Home" -> "Alerting" -> "Contact points" on the Grafana page to create a new contact point
|
||||||
|
"Name": Email Contact Point
|
||||||
|
"Integration": Select the contact type, here choose Email, fill in the email receiving address, and save the contact point after completion
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
## Configuring Feishu Contact Point
|
||||||
|
|
||||||
|
### Feishu Robot Configuration
|
||||||
|
1. "Feishu Workspace" -> "Get Apps" -> "Search for Feishu Robot Assistant" -> "Create Command"
|
||||||
|
2. Choose Trigger: Grafana
|
||||||
|
3. Choose Action: Send a message through the official robot, fill in the recipient and message content
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Grafana Configuration for Feishu Contact Point
|
||||||
|
|
||||||
|
Find "Home" -> "Alerting" -> "Contact points" on the Grafana page to create a new contact point
|
||||||
|
"Name": Feishu Contact Point
|
||||||
|
"Integration": Select the contact type, here choose Webhook, and fill in the URL (the Grafana trigger Webhook address in Feishu Robot Assistant), then save the contact point
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
## Notification Policy
|
||||||
|
After configuring the contact points, you can see that a Default Policy already exists.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Click on the "..." on the right -> "Edit", then edit the default notification policy, a configuration window pops up:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Configure the parameters as shown in the screenshot above.
|
||||||
|
|
||||||
|
## Configuring Alert Rules
|
||||||
|
|
||||||
|
### Define Query and Alert Conditions
|
||||||
|
|
||||||
|
Select "Edit" -> "Alert" -> "New alert rule" in the panel where you want to configure the alert.
|
||||||
|
|
||||||
|
1. "Enter alert rule name": Here, enter `power meters alert` as an example for smart meters.
|
||||||
|
2. "Define query and alert condition":
|
||||||
|
2.1 Choose data source: `TDengine Datasource`
|
||||||
|
2.2 Query statement:
|
||||||
|
```sql
|
||||||
|
select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)
|
||||||
|
```
|
||||||
|
2.3 Set "Expression": `Threshold is above 100`
|
||||||
|
2.4 Click "Set as alert condition"
|
||||||
|
2.5 "Preview": View the results of the set rules
|
||||||
|
|
||||||
|
After completing the settings, you can see the image displayed below:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Configuring Expressions and Calculation Rules
|
||||||
|
|
||||||
|
Grafana's "Expression" supports various operations and calculations on data, which are divided into:
|
||||||
|
1. "Reduce": Aggregates the values of a time series within the selected time range into a single value
|
||||||
|
1.1 "Function" is used to set the aggregation method, supporting Min, Max, Last, Mean, Sum, and Count.
|
||||||
|
1.2 "Mode" supports the following three:
|
||||||
|
- "Strict": If no data is queried, the data will be assigned NaN.
|
||||||
|
- "Drop Non-numeric Value": Remove illegal data results.
|
||||||
|
- "Replace Non-numeric Value": If it is illegal data, replace it with a constant value.
|
||||||
|
2. "Threshold": Checks whether the time series data meets the threshold judgment condition. Returns 0 when the condition is false, and 1 when true. Supports the following methods:
|
||||||
|
- Is above (x > y)
|
||||||
|
- Is below (x < y)
|
||||||
|
- Is within range (x > y1 AND x < y2)
|
||||||
|
- Is outside range (x < y1 OR x > y2)
|
||||||
|
3. "Math": Performs mathematical operations on the data of the time series.
|
||||||
|
4. "Resample": Changes the timestamps in each time series to have a consistent interval, so that mathematical operations can be performed between them.
|
||||||
|
5. "Classic condition (legacy)": Multiple logical conditions can be configured to determine whether to trigger an alert.
|
||||||
|
|
||||||
|
As shown in the screenshot above, here we set the alert to trigger when the maximum value exceeds 100.
|
||||||
|
|
||||||
|
### Configuring Evaluation behavior
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Configure the parameters as shown in the screenshot above.
|
||||||
|
|
||||||
|
### Configuring Labels and Notifications
|
||||||
|

|
||||||
|
|
||||||
|
Configure the parameters as shown in the screenshot above.
|
||||||
|
|
||||||
|
### Configuring annotations
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
After setting "Summary" and "Description", you will receive an alert notification if the alert is triggered.
|
||||||
|
|
|
@ -94,7 +94,7 @@ The output as bellow:
|
||||||
|
|
||||||
The role of the TDengine Sink Connector is to synchronize the data of the specified topic to TDengine. Users do not need to create databases and super tables in advance. The name of the target database can be specified manually (see the configuration parameter connection.database), or it can be generated according to specific rules (see the configuration parameter connection.database.prefix).
|
The role of the TDengine Sink Connector is to synchronize the data of the specified topic to TDengine. Users do not need to create databases and super tables in advance. The name of the target database can be specified manually (see the configuration parameter connection.database), or it can be generated according to specific rules (see the configuration parameter connection.database.prefix).
|
||||||
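A minimal sink configuration sketch using the parameters mentioned above (the topic, credentials, and database names are illustrative; consult the full example below for the authoritative settings):

```ini
name=TDengineSinkConnector
connector.class=com.taosdata.kafka.connect.sink.TDengineSinkConnector
tasks.max=1
topics=meters
connection.url=jdbc:TAOS://127.0.0.1:6030
connection.user=root
connection.password=taosdata
# target database; created automatically if absent (illustrative name)
connection.database=power
db.schemaless=line
```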
|
|
||||||
TDengine Sink Connector internally uses TDengine [modeless write interface](../../client-libraries/cpp#modeless write-api) to write data to TDengine, currently supports data in three formats: [InfluxDB line protocol format](../../develop/insert-data/influxdb-line), [OpenTSDB Telnet protocol format](../../develop/insert-data/opentsdb-telnet), and [OpenTSDB JSON protocol format](../../develop/insert-data/opentsdb-json).
|
TDengine Sink Connector internally uses the TDengine [schemaless write interface](../../client-libraries/cpp/#schemaless-writing-api) to write data to TDengine, and currently supports data in three formats: [InfluxDB line protocol format](../../develop/insert-data/influxdb-line), [OpenTSDB Telnet protocol format](../../develop/insert-data/opentsdb-telnet), and [OpenTSDB JSON protocol format](../../develop/insert-data/opentsdb-json).
|
||||||
|
|
||||||
The following example synchronizes the data of the topic meters to the target database power. The data format is the InfluxDB Line protocol format.
|
The following example synchronizes the data of the topic meters to the target database power. The data format is the InfluxDB Line protocol format.
|
||||||
|
|
||||||
|
|
|
@ -25,7 +25,7 @@ description: Use PowerBI and TDengine to analyze time series data
|
||||||
|
|
||||||
  [DSN]:       Data Source Name, required field, such as "MyTDengine"
|
  [DSN]:       Data Source Name, required field, such as "MyTDengine"
|
||||||
|
|
||||||
Depending on your TDengine server version, download the appropriate TDengine client package from the TDengine website [Download Link](../../get-started/package/), or from TDengine Explorer if you are using a local TDengine cluster. Install the TDengine client package on the same Windows machine where Power BI is running.
|
Depending on your TDengine server version, download the appropriate TDengine client package from the TDengine website [Download Link](https://docs.tdengine.com/get-started/package/), or from TDengine Explorer if you are using a local TDengine cluster. Install the TDengine client package on the same Windows machine where Power BI is running.
|
||||||
|
|
||||||
|
|
||||||
  [URL]:        taos://localhost:6041
|
  [URL]:        taos://localhost:6041
|
||||||
|
|
After Width: | Height: | Size: 83 KiB |
After Width: | Height: | Size: 39 KiB |
After Width: | Height: | Size: 105 KiB |
After Width: | Height: | Size: 41 KiB |
After Width: | Height: | Size: 36 KiB |
After Width: | Height: | Size: 116 KiB |
After Width: | Height: | Size: 37 KiB |
After Width: | Height: | Size: 98 KiB |
After Width: | Height: | Size: 204 KiB |
Before Width: | Height: | Size: 124 KiB After Width: | Height: | Size: 154 KiB |
Before Width: | Height: | Size: 122 KiB After Width: | Height: | Size: 157 KiB |
|
@ -379,7 +379,7 @@ properties 中的配置参数如下:
|
||||||
- TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION:是否关闭 SSL 证书验证。仅在使用 WebSocket 连接时生效。true:关闭证书验证,false:不关闭。默认为 false。
|
- TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION:是否关闭 SSL 证书验证。仅在使用 WebSocket 连接时生效。true:关闭证书验证,false:不关闭。默认为 false。
|
||||||
|
|
||||||
|
|
||||||
此外对 JDBC 原生连接,通过指定 URL 和 Properties 还可以指定其他参数,比如日志级别、SQL 长度等。更多详细配置请参考[客户端配置](../../reference/config/#仅客户端适用)。
|
此外对 JDBC 原生连接,通过指定 URL 和 Properties 还可以指定其他参数,比如日志级别、SQL 长度等。更多详细配置请参考[客户端配置](../../reference/config/)。
|
||||||
|
|
||||||
### 配置参数的优先级
|
### 配置参数的优先级
|
||||||
|
|
||||||
|
|
|
@ -1200,27 +1200,40 @@ ignore_negative: {
|
||||||
### DIFF
|
### DIFF
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DIFF(expr [, ignore_negative])
|
DIFF(expr [, ignore_option])
|
||||||
|
|
||||||
ignore_negative: {
|
ignore_option: {
|
||||||
0
|
0
|
||||||
| 1
|
| 1
|
||||||
|
| 2
|
||||||
|
| 3
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:统计表中某列的值与前一行对应值的差。 ignore_negative 取值为 0|1 , 可以不填,默认值为 0. 不忽略负值。ignore_negative 为 1 时表示忽略负数。对于你存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
**功能说明**:统计表中特定列与之前行同列有效值之差。ignore_option 取值为 0|1|2|3,可以不填,默认值为 0。
|
||||||
|
- `0` 表示不忽略(diff 结果)负值,且不忽略 null 值
|
||||||
|
- `1` 表示将(diff 结果)负值视为 null 值
|
||||||
|
- `2` 表示不忽略(diff 结果)负值,但忽略 null 值
|
||||||
|
- `3` 表示忽略(diff 结果)负值,且忽略 null 值
|
||||||
|
- 对于存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
||||||
|
|
||||||
**返回数据类型**:同应用字段。
|
**返回数据类型**:bool、时间戳及整型数值类型均返回 int64,浮点类型返回 double,若 diff 结果溢出则返回溢出后的值。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型、时间戳和 bool 类型。
|
||||||
|
|
||||||
**适用于**:表和超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 输出结果行数是范围内总行数减一,第一行没有结果输出。
|
- diff 是计算本行特定列与同列的前一个有效数据的差值;同列的前一个有效数据,指的是同一列中时间戳较小的最临近的非空值。
|
||||||
- 可以与选择相关联的列一起使用。 例如: select \_rowts, DIFF() from。
|
- 数值类型 diff 结果为对应的算术差值;时间戳类型根据数据库的时间戳精度进行差值计算;bool 类型计算差值时 true 视为 1,false 视为 0。
|
||||||
|
- 如当前行数据为 null 或者没有找到同列前一个有效数据时,diff 结果为 null
|
||||||
|
- 忽略负值时(ignore_option 设置为 1 或 3),如果 diff 结果为负值,则结果设置为 null,然后根据 null 值过滤规则进行过滤
|
||||||
|
- 当 diff 结果发生溢出时,结果是否是`应该忽略的负值`取决于逻辑运算结果是正数还是负数,例如 9223372036854775800 - (-9223372036854775806) 的值超出 BIGINT 的范围,diff 结果会显示溢出值 -10,但并不会被作为负值忽略
|
||||||
|
- 单个语句中可以使用单个或者多个 diff,并且每个 diff 可以指定相同或不同的 ignore_option。当单个语句中存在多个 diff 时,当且仅当某行所有 diff 的结果都为 null,并且 ignore_option 都设置为忽略 null 值时,该行才会从结果集中剔除
|
||||||
|
- 可以与相关联的选择列一起使用。例如:select _rowts, DIFF() from。
|
||||||
|
- 当没有复合主键时,如果不同的子表有相同时间戳的数据,会提示 "Duplicate timestamps not allowed"
|
||||||
|
- 当使用复合主键时,不同子表的时间戳和主键组合可能相同,使用哪一行取决于先找到哪一行,这意味着在这种情况下多次运行 diff() 的结果可能会不同(示例见下文)。
|
||||||
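下面给出一个示意性示例(假设使用本文其他示例中的 power.meters 超级表),按设备计算电流差值,ignore_option 设为 3,同时忽略负值和 null 值:

```sql
-- 按设备计算电流差值;ignore_option = 3 表示忽略负值且忽略 null 值
select _rowts, tbname, diff(current, 3)
from power.meters
partition by tbname;
```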
|
|
||||||
### IRATE
|
### IRATE
|
||||||
|
|
||||||
|
|
|
@ -13,7 +13,7 @@ TDengine 能够与开源数据可视化系统 [Grafana](https://www.grafana.com/
|
||||||
|
|
||||||
要让 Grafana 能正常添加 TDengine 数据源,需要以下几方面的准备工作。
|
要让 Grafana 能正常添加 TDengine 数据源,需要以下几方面的准备工作。
|
||||||
|
|
||||||
- Grafana 服务已经部署并正常运行。目前 TDengine 支持 Grafana 7.5 以上的版本。用户可以根据当前的操作系统,到 Grafana 官网下载安装包,并执行安装。下载地址如下:\<https://grafana.com/grafana/download>。
|
- Grafana 服务已经部署并正常运行。目前 TDengine 支持 Grafana 7.5 以上的版本。用户可以根据当前的操作系统,到 Grafana 官网下载安装包,并执行安装。下载地址如下:[https://grafana.com/grafana/download](https://grafana.com/grafana/download) 。
|
||||||
- TDengine 集群已经部署并正常运行。
|
- TDengine 集群已经部署并正常运行。
|
||||||
- taosAdapter 已经安装并正常运行。具体细节请参考 [taosAdapter 的使用手册](../../reference/taosadapter)
|
- taosAdapter 已经安装并正常运行。具体细节请参考 [taosAdapter 的使用手册](../../reference/taosadapter)
|
||||||
|
|
||||||
|
@ -22,26 +22,19 @@ TDengine 能够与开源数据可视化系统 [Grafana](https://www.grafana.com/
|
||||||
- TDengine 集群 REST API 地址,如:`http://tdengine.local:6041`。
|
- TDengine 集群 REST API 地址,如:`http://tdengine.local:6041`。
|
||||||
- TDengine 集群认证信息,可使用用户名及密码。
|
- TDengine 集群认证信息,可使用用户名及密码。
|
||||||
|
|
||||||
## 配置 Grafana
|
## 安装 Grafana Plugin 并配置数据源
|
||||||
|
|
||||||
### 安装 Grafana Plugin 并配置数据源
|
|
||||||
|
|
||||||
<Tabs defaultValue="script">
|
<Tabs defaultValue="script">
|
||||||
<TabItem value="gui" label="图形化界面安装">
|
<TabItem value="gui" label="图形化界面安装">
|
||||||
|
|
||||||
使用 Grafana 最新版本(8.5+),您可以在 Grafana 中[浏览和管理插件](https://grafana.com/docs/grafana/next/administration/plugin-management/#plugin-catalog)(对于 7.x 版本,请采用 **安装脚本** 或 **手动安装** 方式)。在 Grafana 管理界面中的 **Configurations > Plugins** 页面直接搜索 `TDengine` 并按照提示安装。
|
使用 Grafana 最新版本(8.5+),您可以在 Grafana 中[浏览和管理插件](https://grafana.com/docs/grafana/next/administration/plugin-management/#plugin-catalog)(对于 7.x 版本,请采用 **安装脚本** 或 **手动安装** 方式)。在 Grafana 管理界面中的 **Configurations > Plugins** 页面直接搜索 `TDengine` 并按照提示安装。
|
||||||
|
|
||||||
安装完毕后,按照指示 **Create a TDengine data source** 添加数据源。
|
安装完毕后,按照指示 **Create a TDengine data source** 添加数据源,输入 TDengine 相关配置:
|
||||||
输入 TDengine 相关配置,如下图所示:
|
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 `http://localhost:6041`
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 \<http://localhost:6041>。
|
|
||||||
- User:TDengine 用户名。
|
- User:TDengine 用户名。
|
||||||
- Password:TDengine 用户密码。
|
- Password:TDengine 用户密码。
|
||||||
|
|
||||||
点击 `Save & Test` 进行测试,成功会提示:`TDengine Data source is working`。
|
点击 `Save & Test` 进行测试,成功会提示:`TDengine Data source is working`。
|
||||||
配置完毕,现在可以使用 TDengine 创建 Dashboard 了。
|
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
<TabItem value="script" label="安装脚本">
|
<TabItem value="script" label="安装脚本">
|
||||||
|
@ -93,13 +86,11 @@ sudo unzip tdengine-datasource-$GF_VERSION.zip -d /var/lib/grafana/plugins/
|
||||||
GF_INSTALL_PLUGINS=tdengine-datasource
|
GF_INSTALL_PLUGINS=tdengine-datasource
|
||||||
```
|
```
|
||||||
|
|
||||||
之后,用户可以直接通过 \<http://localhost:3000> 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,
|
之后,用户可以直接通过 http://localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,
|
||||||
|
|
||||||
点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine, 然后点击 `select` 选择添加后会进入数据源配置页面,按照默认提示修改相应配置即可:
|
点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine, 然后点击 `select` 选择添加后会进入数据源配置页面,按照默认提示修改相应配置:
|
||||||
|
|
||||||

|
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 `http://localhost:6041`
|
||||||
|
|
||||||
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 \<http://localhost:6041>。
|
|
||||||
- User:TDengine 用户名。
|
- User:TDengine 用户名。
|
||||||
- Password:TDengine 用户密码。
|
- Password:TDengine 用户密码。
|
||||||
|
|
||||||
|
@ -170,25 +161,92 @@ docker run -d \
|
||||||
|
|
||||||
3. 使用 docker-compose 命令启动 TDengine + Grafana :`docker-compose up -d`。
|
3. 使用 docker-compose 命令启动 TDengine + Grafana :`docker-compose up -d`。
|
||||||
|
|
||||||
打开 Grafana \<http://localhost:3000>,现在可以添加 Dashboard 了。
|
打开 Grafana [http://localhost:3000](http://localhost:3000),现在可以添加 Dashboard 了。
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
### 创建 Dashboard
|
|
||||||
|
|
||||||
回到主界面创建 Dashboard,点击 Add Query 进入面板查询页面:
|
:::info
|
||||||
|
|
||||||
|
下文介绍中,都以 Grafana v11.0.0 版本为例,其他版本功能可能有差异,请参考 [Grafana 官网](https://grafana.com/docs/grafana/latest/)。
|
||||||
|
|
||||||
|
:::
|
||||||
|
|
||||||
|
## 内置变量和自定义变量
|
||||||
|
Grafana 中的 Variable(变量)功能非常强大,可以在 Dashboard 的查询、面板标题、标签等地方使用,用来创建更加动态和交互式的 Dashboard,提高用户体验和效率。
|
||||||
|
|
||||||
|
变量的主要作用和特点包括:
|
||||||
|
|
||||||
|
- 动态数据查询:变量可以用于查询语句中,使得用户可以通过选择不同的变量值来动态更改查询条件,从而查看不同的数据视图。这对于需要根据用户输入动态展示数据的场景非常有用。
|
||||||
|
|
||||||
|
- 提高可重用性:通过定义变量,可以在多个地方重用相同的配置或查询逻辑,而不需要重复编写相同的代码。这使得 Dashboard 的维护和更新变得更加简单高效。
|
||||||
|
|
||||||
|
- 灵活的配置选项:变量提供了多种配置选项,如预定义的静态值列表、从数据源动态查询值、正则表达式过滤等,使得变量的应用更加灵活和强大。
|
||||||
|
|
||||||
|
|
||||||
|
Grafana 提供了内置变量和自定义变量,它们都可以在编写 SQL 时引用,引用的方式是 `$variableName`,`variableName` 是变量的名字,其他引用方式请参考 [引用方式](https://grafana.com/docs/grafana/latest/dashboards/variables/variable-syntax/)。
|
||||||
|
|
||||||
|
### 内置变量
|
||||||
|
Grafana 内置了 `from`、`to` 和 `interval` 等变量,都取自于 Grafana 插件面板。其含义如下:
|
||||||
|
- `from` 查询范围的起始时间
|
||||||
|
- `to` 查询范围的结束时间
|
||||||
|
- `interval` 窗口切分间隔
|
||||||
|
|
||||||
|
对于每个查询都建议设置查询范围的起始时间和结束时间,可以有效地减少 TDengine 服务端执行查询扫描的数据量。`interval` 是窗口切分的大小,在 Grafana 11 版本中,其大小由时间范围和返回点数计算而得。
|
||||||
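例如,一个同时使用这三个内置变量的典型面板查询示意如下(power.meters 为本页后文使用的示例超级表):

```sql
-- $from/$to 限定扫描的时间范围;$interval 决定窗口大小
select _wstart as ts, avg(current) as current
from power.meters
where ts > $from and ts < $to
interval($interval)
fill(null)
```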
|
除了上述三个常用变量,Grafana 还提供了如 `__timezone`, `__org`, `__user` 等变量,详情请参考 [内置变量](https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#global-variables)。
|
||||||
|
|
||||||
|
### 自定义变量
|
||||||
|
我们可以在 Dashboard 中增加自定义变量。自定义变量和内置变量的使用方式没有区别,都是在 SQL 中用 `$variableName` 进行引用。
|
||||||
|
自定义变量支持多种类型,常见的类型包括 `Query`(查询)、`Constant`(常量)、`Interval`(间隔)、`Data source`(数据源)等。
|
||||||
|
自定义变量可以引用其他自定义变量,比如一个变量表示区域,另一个变量可以引用区域的值,来查询这个区域的设备。
|
||||||
|
#### 添加查询类型变量
|
||||||
|
在 Dashboard 的配置中,选择【Variables】,然后点击【New variable】:
|
||||||
|
1. 在 “Name” 字段中,输入你的变量名,此处我们设置变量名为 `selected_groups`。
|
||||||
|
2. 在【Select variable type】下拉菜单中,选择 “Query”(查询)。
|
||||||
|
根据选择的变量类型,配置相应的选项。例如,如果选择了 “Query” 类型,你需要指定数据源和用于获取变量值的查询语句。此处我们还以智能电表为例,设置查询类型,选择数据源后,配置 SQL 为 `select distinct(groupid) from power.meters where groupid < 3 and ts > $from and ts < $to;`
|
||||||
|
3. 点击底部的【Run Query】后,可以在 “Preview of values”(值预览)部分,查看到根据你的配置生成的变量值。
|
||||||
|
4. 还有其他配置不再赘述,完成配置后,点击页面底部的【Apply】(应用)按钮,然后点击右上角的【Save dashboard】保存。
|
||||||
|
|
||||||
|
完成以上步骤后,我们就成功在 Dashboard 中添加了一个新的自定义变量 `$selected_groups`。我们可以后面在 Dashboard 的查询中通过 `$selected_groups` 的方式引用这个变量。
|
||||||
|
|
||||||
|
我们还可以再新增自定义变量来引用这个 `selected_groups` 变量,比如我们新增一个名为 `tbname_max_current` 的查询变量,其 SQL 为 `select tbname from power.meters where groupid = $selected_groups and ts > $from and ts < $to;`
|
||||||
|
|
||||||
|
#### 添加间隔类型变量
|
||||||
|
我们可以自定义时间窗口间隔,可以更加贴合业务需求。
|
||||||
|
1. 在 “Name” 字段中,输入变量名为 `interval`。
|
||||||
|
2. 在 【Select variable type】下拉菜单中,选择 “Interval”(间隔)。
|
||||||
|
3. 在 【Interval options】选项中输入 `1s,2s,5s,10s,15s,30s,1m`。
|
||||||
|
4. 还有其他配置不再赘述,完成配置后,点击页面底部的【Apply】(应用)按钮,然后点击右上角的【Save dashboard】保存。
|
||||||
|
|
||||||
|
完成以上步骤后,我们就成功在 Dashboard 中添加了一个新的自定义变量 `$interval`。我们可以后面在 Dashboard 的查询中通过 `$interval` 的方式引用这个变量。
|
||||||
|
|
||||||
|
## TDengine 时间序列查询支持
|
||||||
|
TDengine 在支持标准 SQL 的基础之上,还提供了一系列满足时序业务场景需求的特色查询语法,这些语法能够为时序场景的应用的开发带来极大的便利。
|
||||||
|
- `partition by` 子句可以按一定的维度对数据进行切分,然后在切分出的数据空间内再进行一系列的计算。绝大多数情况可以替代 `group by`。
|
||||||
|
- `interval` 子句用于产生相等时间周期的窗口
|
||||||
|
- `fill` 语句指定某一窗口区间数据缺失的情况下的填充模式
|
||||||
|
- `时间戳伪列`:如果需要在结果中输出聚合结果所对应的时间窗口信息,需要在 SELECT 子句中使用时间戳相关的伪列,如时间窗口起始时间(_wstart)、时间窗口结束时间(_wend)等,示例见下文。
|
||||||
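下面给出一个组合使用上述子句的示意查询(时间范围仅作示例,power.meters 沿用本页的智能电表示例):

```sql
select
  _wstart,                 -- 时间窗口起始时间(伪列)
  _wend,                   -- 时间窗口结束时间(伪列)
  groupid,
  avg(current) as current
from power.meters
where ts >= '2024-01-01 00:00:00' and ts < '2024-01-02 00:00:00'
partition by groupid       -- 按 groupid 切分后再计算
interval(1m)               -- 等长的 1 分钟时间窗口
fill(null)                 -- 缺失窗口填充 null
```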
|
|
||||||
|
上述特性详细介绍可以参考 [特色查询](../../taos-sql/distinguished/)。
|
||||||
|
|
||||||
|
## 创建 Dashboard
|
||||||
|
|
||||||
|
回到主界面创建 Dashboard,点击【Add Query】进入面板查询页面:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
如上图所示,在 Query 中选中 `TDengine` 数据源,在下方查询框可输入相应 SQL 进行查询。 我们继续用智能电表来举例,为了展示曲线美观,此处**用了虚拟数据**。
|
如上图所示,在 ”Query“ 中选中 `TDengine` 数据源,在下方查询框可输入相应 SQL 进行查询。 我们继续用智能电表来举例,为了展示曲线美观,此处**用了虚拟数据**。
|
||||||
具体说明如下:
|
|
||||||
|
|
||||||
- INPUT SQL:输入要查询的语句(该 SQL 语句的结果集应为两列多行),例如:`select _wstart as ts, avg(current) as current from power.meters where ts > $from and ts < $to interval($interval) fill(null)` ,其中,from、to 和 interval 为 TDengine 插件的内置变量,表示从 Grafana 插件面板获取的时间查询范围和窗口切分间隔。除了内置变量外,也支持使用自定义模板变量。
|
## 时间序列数据展示
|
||||||
- ALIAS BY:可设置当前查询别名。
|
假设我们想查询一段时间内的平均电流大小,时间窗口按 `$interval` 切分,若某一时间窗口区间数据缺失,填充 null。
|
||||||
- GENERATE SQL: 点击该按钮会自动替换相应变量,并生成最终执行的语句。
|
- “INPUT SQL”:输入要查询的语句(该 SQL 语句的结果集应为两列多行),此处输入:`select _wstart as ts, avg(current) as current from power.meters where groupid in ($selected_groups) and ts > $from and ts < $to interval($interval) fill(null)`,其中,from、to 和 interval 为 Grafana 内置变量,selected_groups 为自定义变量。
|
||||||
- Group by column(s): **半角**逗号分隔的 `group by` 或 `partition by` 列名。如果是 `group by` or `partition by` 查询语句,设置 `Group by` 列,可以展示多维数据。例如:INPUT SQL 为 `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)`,设置 Group by 列名为 `groupid`,可以按 `groupid` 展示数据。
|
- “ALIAS BY”:可设置当前查询别名。
|
||||||
- Group By Format: Group by 或 Partition by 场景下多维数据 legend 格式化格式。例如上述 INPUT SQL,将 `Group By Format` 设置为 `groupid-{{groupid}}`,展示的 legend 名字为格式化的分组名。
|
- “GENERATE SQL”:点击该按钮会自动替换相应变量,并生成最终执行的语句。
|
||||||
|
|
||||||
|
在顶部的自定义变量中,若选择 `selected_groups` 的值为 1,则查询 `meters` 超级表中 `groupid` 为 1 的所有设备电流平均值变化如下图:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
|
@ -196,17 +254,24 @@ docker run -d \
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
查询 `meters` 超级表所有设备电流平均值变化如下图:
|
## 时间序列数据分组展示
|
||||||
|
假设我们想查询一段时间内的平均电流大小,按 `groupid` 分组展示,我们可以修改之前的 SQL 为 `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)`
|
||||||
|
|
||||||

|
- “Group by column(s)”:**半角**逗号分隔的 `group by` 或 `partition by` 列名。如果是 `group by` 或 `partition by` 查询语句,设置 “Group by” 列,可以展示多维数据。此处设置 “Group by” 列名为 `groupid`,可以按 `groupid` 分组展示数据。
|
||||||
|
- “Group By Format”:`Group by` 或 `Partition by` 场景下多维数据 legend 格式化格式。例如上述 INPUT SQL,将 “Group By Format” 设置为 `groupid-{{groupid}}`,展示的 legend 名字为格式化的分组名。
|
||||||
|
|
||||||
查询 `meters` 超级表所有设备电流平均值,并按照 `groupid` 分组展示如下图:
|
完成设置后,按照 `groupid` 分组展示如下图:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
> 关于如何使用 Grafana 创建相应的监测界面以及更多有关使用 Grafana 的信息,请参考 Grafana 官方的[文档](https://grafana.com/docs/)。
|
> 关于如何使用 Grafana 创建相应的监测界面以及更多有关使用 Grafana 的信息,请参考 Grafana 官方的[文档](https://grafana.com/docs/)。
|
||||||
|
|
||||||
### 导入 Dashboard
|
## 性能建议
|
||||||
|
- **所有查询加上时间范围**,在时序数据库中,如果不加查询的时间范围,会扫表导致性能低下。常见的 SQL 写法是 `select column_name from db.table where ts > $from and ts < $to;`
|
||||||
|
- 对于最新状态类型的查询,我们一般建议在**创建数据库的时候打开缓存**(`CACHEMODEL` 设置为 last_row 或者 both),常见的 SQL 写法是 `select last(column_name) from db.table where ts > $from and ts < $to;`
|
||||||
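针对第二条建议的示意写法(数据库名 power 仅作示例):建库时打开缓存,然后用 last 查询最新状态:

```sql
-- 建库时打开 last_row/last_value 缓存(库名仅作示例)
CREATE DATABASE power CACHEMODEL 'both';

-- 命中缓存的最新状态查询
select last(current) from power.meters where ts > $from and ts < $to;
```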
|
|
||||||
|
|
||||||
|
## 导入 Dashboard
|
||||||
|
|
||||||
在数据源配置页面,您可以为该数据源导入 TDinsight 面板,作为 TDengine 集群的监控可视化工具。如果 TDengine 服务端为 3.0 版本请选择 `TDinsight for 3.x` 导入。注意 TDinsight for 3.x 需要运行和配置 taoskeeper。
|
在数据源配置页面,您可以为该数据源导入 TDinsight 面板,作为 TDengine 集群的监控可视化工具。如果 TDengine 服务端为 3.0 版本请选择 `TDinsight for 3.x` 导入。注意 TDinsight for 3.x 需要运行和配置 taoskeeper。
|
||||||
|
|
||||||
|
@ -216,7 +281,148 @@ docker run -d \
|
||||||
|
|
||||||
使用 TDengine 作为数据源的其他面板,可以[在此搜索](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource)。以下是一份不完全列表:
|
使用 TDengine 作为数据源的其他面板,可以[在此搜索](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource)。以下是一份不完全列表:
|
||||||
|
|
||||||
- [15146](https://grafana.com/grafana/dashboards/15146): 监控多个 TDengine 集群
|
- [15146](https://grafana.com/grafana/dashboards/15146): 监控多个 TDengine 集群
|
||||||
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine 告警示例
|
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine 告警示例
|
||||||
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight
|
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight
|
||||||
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf 采集节点信息的数据展示
|
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf 采集节点信息的数据展示
|
||||||
|
|
||||||
|
## 告警配置简介
|
||||||
|
### 告警配置流程
|
||||||
|
TDengine Grafana 插件支持告警,如果要配置告警,需要以下几个步骤:
|
||||||
|
1. 配置联络点(“Contact points“):配置通知渠道,包括 DingDing、Email、Slack、WebHook、Prometheus Alertmanager 等
|
||||||
|
2. 配置告警通知策略(“Notification policies“):配置告警发送到哪个通道的路由,以及发送通知的时间和重复频率
|
||||||
|
3. 配置 “Alert rules“:配置详细的告警规则
|
||||||
|
3.1 配置告警名称
|
||||||
|
3.2 配置查询及告警触发条件
|
||||||
|
3.3 配置规则评估策略
|
||||||
|
3.4 配置标签和告警通道
|
||||||
|
3.5 配置通知文案
|
||||||
|
|
||||||
|
### 告警配置界面
|
||||||
|
在 Grafana 11 告警界面中一共有 6 个 Tab,分别是 “Alert rules”、“Contact points”、“Notification policies”、“Silences”、“Groups” 和 “Settings”。
|
||||||
|
- “Alert rules“ 告警规则列表,用于展示和配置告警规则
|
||||||
|
- “Contact points“ 通知渠道,包括 DingDing、Email、Slack、WebHook、Prometheus Alertmanager 等
|
||||||
|
- “Notification policies“ 配置告警发送到哪个通道的路由,以及发送通知的时间和重复频率
|
||||||
|
- “Silences“ 配置告警静默时间段
|
||||||
|
- “Groups“ 告警组,配置的告警触发后会在这里分组显示
|
||||||
|
- “Settings“ 提供通过 JSON 方式修改告警配置
|
||||||
|
|
||||||
|
## 配置邮件联络点
|
||||||
|
### Grafana Server 配置文件修改
|
||||||
|
在 Grafana 服务的配置文件中添加 SMTP/Emailing 和 Alerting 模块,以 Linux 系统为例,其配置文件一般位于 `/etc/grafana/grafana.ini`
|
||||||
|
在配置文件中增加下面内容:
|
||||||
|
|
||||||
|
```ini
|
||||||
|
#################################### SMTP / Emailing ##########################
|
||||||
|
[smtp]
|
||||||
|
enabled = true
|
||||||
|
host = smtp.qq.com:465 #使用的邮箱
|
||||||
|
user = receiver@foxmail.com
|
||||||
|
password = *********** #使用mail授权码
|
||||||
|
skip_verify = true
|
||||||
|
from_address = sender@foxmail.com
|
||||||
|
```
|
||||||
|
|
||||||
|
然后重启 Grafana 服务即可, 以 Linux 系统为例,执行 `systemctl restart grafana-server.service`
|
||||||
|
|
||||||
|
### Grafana 页面创建新联络点
|
||||||
|
|
||||||
|
在 Grafana 页面找到 “Home“ -> “Alerting“ -> “Contact points“,创建新联络点
|
||||||
|
“Name”:Email Contact Point
|
||||||
|
“Integration“:选择联络类型,这里选择 Email,填写邮件接收地址,完成后保存联络点
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
## 配置飞书联络点
|
||||||
|
|
||||||
|
### 飞书机器人配置
|
||||||
|
1. “飞书工作台“ -> “获取应用“ -> “搜索飞书机器人助手“ -> “新建指令“
|
||||||
|
2. 选择触发器:Grafana
|
||||||
|
3. 选择操作:通过官方机器人发送消息,填写发送对象和发送内容
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Grafana 配置飞书联络点
|
||||||
|
|
||||||
|
在 Grafana 页面找到 “Home“ -> “Alerting“ -> “Contact points“ 创建新联络点
|
||||||
|
“Name“:Feishu Contact Point
|
||||||
|
“Integration“:选择联络类型,这里选择 Webhook,并填写 URL (在飞书机器人助手的 Grafana 触发器 Webhook 地址),完成后保存联络点
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
## 通知策略
|
||||||
|
配置好联络点后,可以看到已有一个 Default Policy。
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
点击右侧 “...” -> “Edit”,然后编辑默认通知策略,弹出配置窗口:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
然后配置下列参数:
|
||||||
|
- “Group wait“: 发送首次告警之前的等待时间。
|
||||||
|
- “Group interval“: 发送第一个告警后,为该组发送下一批新告警的等待时间。
|
||||||
|
- “Repeat interval“: 成功发送告警后再次重复发送告警的等待时间。
|
||||||
|
|
||||||
|
## 配置告警规则
|
||||||
|
|
||||||
|
### 配置查询和告警触发条件
|
||||||
|
|
||||||
|
在需要配置告警的面板中选择 “Edit“ -> “Alert“ -> “New alert rule“。
|
||||||
|
|
||||||
|
1. “Enter alert rule name“ (输入告警规则名称):此处以智能电表为例输入 `power meters alert`
|
||||||
|
2. “Define query and alert condition“ (定义查询和告警触发条件)
|
||||||
|
2.1 选择数据源:`TDengine Datasource`
|
||||||
|
2.2 查询语句:
|
||||||
|
```sql
|
||||||
|
select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)
|
||||||
|
```
|
||||||
|
2.3 设置 “Expression”(表达式):`Threshold is above 100`
|
||||||
|
2.4 点击【Set as alert condition】
|
||||||
|
2.5 “Preview“:查看设置的规则的结果
|
||||||
|
|
||||||
|
完成设置后可以看到下面图片展示:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### 配置表达式和计算规则
|
||||||
|
|
||||||
|
Grafana 的 “Expression“(表达式)支持对数据做各种操作和计算,其类型分为:
|
||||||
|
1. “Reduce“:将所选时间范围内的时间序列值聚合为单个值
|
||||||
|
1.1 “Function“ 用来设置聚合方法,支持 Min、Max、Last、Mean、Sum 和 Count。
|
||||||
|
1.2 “Mode“ 支持下面三种:
|
||||||
|
- “Strict“:如果查询不到数据,数据会赋值为 NaN。
|
||||||
|
- “Drop Non-numeric Value“:去掉非法数据结果。
|
||||||
|
- “Replace Non-numeric Value“:如果是非法数据,使用固定值进行替换。
|
||||||
|
2. “Threshold”:检查时间序列数据是否符合阈值判断条件。当条件为假时返回 0,为真则返回 1。支持下列方式:
|
||||||
|
- Is above (x > y)
|
||||||
|
- Is below (x < y)
|
||||||
|
- Is within range (x > y1 AND x < y2)
|
||||||
|
- Is outside range (x < y1 OR x > y2)
|
||||||
|
3. “Math“:对时间序列的数据进行数学运算。
|
||||||
|
4. “Resample“:更改每个时间序列中的时间戳使其具有一致的时间间隔,以便在它们之间执行数学运算。
|
||||||
|
5. “Classic condition (legacy)“: 可配置多个逻辑条件,判断是否触发告警。
|
||||||
|
|
||||||
|
如上节截图显示,此处我们设置最大值超过 100 触发告警。
|
||||||
|
|
||||||
|
### 配置评估策略
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
完成下面配置:
|
||||||
|
- “Folder“:设置告警规则所属目录。
|
||||||
|
- “Evaluation group“:设置告警规则评估组。“Evaluation group“ 可以选择已有组或者新建组,新建组可以设置组名和评估时间间隔。
|
||||||
|
- “Pending period“:在告警规则的阈值被触发后,异常值持续多长时间可以触发告警,合理设置可以避免误报。
|
||||||
|
|
||||||
|
### 配置标签和告警通道
|
||||||
|

|
||||||
|
|
||||||
|
完成下面配置:
|
||||||
|
- “Labels“ 将标签添加到规则中,以便进行搜索、静默或路由到通知策略。
|
||||||
|
- “Contact point“ 选择联络点,当告警发生时通过设置的联络点进行通知。
|
||||||
|
|
||||||
|
### 配置通知文案
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
设置 “Summary” 和 ”Description” 后,若告警触发,将会收到告警通知。
|
||||||
|
|
|
@ -93,7 +93,7 @@ curl http://localhost:8083/connectors
|
||||||
|
|
||||||
TDengine Sink Connector 的作用是同步指定 topic 的数据到 TDengine。用户无需提前创建数据库和超级表。可手动指定目标数据库的名字(见配置参数 connection.database), 也可按一定规则生成(见配置参数 connection.database.prefix)。
|
TDengine Sink Connector 的作用是同步指定 topic 的数据到 TDengine。用户无需提前创建数据库和超级表。可手动指定目标数据库的名字(见配置参数 connection.database), 也可按一定规则生成(见配置参数 connection.database.prefix)。
|
||||||
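一个使用上述配置参数的最小化示意配置(topic、账号和数据库名仅作示例,权威配置请参考下文完整示例):

```ini
name=TDengineSinkConnector
connector.class=com.taosdata.kafka.connect.sink.TDengineSinkConnector
tasks.max=1
topics=meters
connection.url=jdbc:TAOS://127.0.0.1:6030
connection.user=root
connection.password=taosdata
# 目标数据库,不存在时自动创建(库名仅作示例)
connection.database=power
db.schemaless=line
```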
|
|
||||||
TDengine Sink Connector 内部使用 TDengine [无模式写入接口](../../connector/cpp#无模式写入-api)写数据到 TDengine,目前支持三种格式的数据:[InfluxDB 行协议格式](../../develop/insert-data/influxdb-line)、 [OpenTSDB Telnet 协议格式](../../develop/insert-data/opentsdb-telnet) 和 [OpenTSDB JSON 协议格式](../../develop/insert-data/opentsdb-json)。
|
TDengine Sink Connector 内部使用 TDengine [无模式写入接口](../../connector/cpp/#无模式schemaless写入-api)写数据到 TDengine,目前支持三种格式的数据:[InfluxDB 行协议格式](../../develop/insert-data/influxdb-line)、 [OpenTSDB Telnet 协议格式](../../develop/insert-data/opentsdb-telnet) 和 [OpenTSDB JSON 协议格式](../../develop/insert-data/opentsdb-json)。
|
||||||
|
|
||||||
下面的示例将主题 meters 的数据,同步到目标数据库 power。数据格式为 InfluxDB Line 协议格式。
|
下面的示例将主题 meters 的数据,同步到目标数据库 power。数据格式为 InfluxDB Line 协议格式。
|
||||||
|
|
||||||
|
|
After Width: | Height: | Size: 83 KiB |
After Width: | Height: | Size: 39 KiB |
After Width: | Height: | Size: 105 KiB |
After Width: | Height: | Size: 41 KiB |
After Width: | Height: | Size: 36 KiB |
After Width: | Height: | Size: 116 KiB |
After Width: | Height: | Size: 37 KiB |
After Width: | Height: | Size: 98 KiB |
After Width: | Height: | Size: 204 KiB |
Before Width: | Height: | Size: 124 KiB After Width: | Height: | Size: 154 KiB |
Before Width: | Height: | Size: 122 KiB After Width: | Height: | Size: 157 KiB |

@ -3,9 +3,51 @@ title: 3.3.2.0 版本说明

sidebar_label: 3.3.2.0
description: 3.3.2.0 版本说明
---

### New Features

1. ALTER TABLE ADD COLUMN supports ENCODE/COMPRESS
2. Improved the impact of compact on reads and writes when stt_trigger=1

### Optimizations

1. Adjusted the default value of SupportVnodes to 5 + 2 * CPU cores
2. Removed the lossyColumns parameter
3. Fixed ALTER TABLE applying only one of multiple modified parameters
4. SupportVnodes supports hot updates
5. Support for CentOS Stream

### New Features/Optimizations (Enterprise Edition)

1. Support balancing vgroup leaders for a specified database
2. Added the disable_create_new_file configuration option for multi-tier storage
3. Added rate limiting for cross-tier data migration in multi-tier storage
4. Enabling and disabling the IP allowlist supports hot updates
5. Removed the database-creation privilege from regular users
6. Improved key configuration for database encryption
7. Support for TDengine 2.0/3.0 data compression
8. Support for Oracle data sources
9. Support for Microsoft SQL Server data sources
10. OPC tasks can dynamically pick up newly added points
11. PI backfill supports resuming from a breakpoint
12. PI backfill tasks support Transformer
13. Performance optimization for PI data ingestion
14. taos-explorer supports the GEOMETRY/VARBINARY data types
15. taos-explorer supports importing and exporting user and privilege information
16. PI data sources support synchronizing newly added data points and data-element attributes to TDengine
17. The taosX write side supports native connections
18. Kafka supports GSSAPI
19. MQTT tasks can pull sample data from the data source
20. Support for data of the Object array type
21. Support for parsing data with custom scripts
22. Support for dynamically filtering data via plugins

### Fixed Issues

1. Added the missing description of the commands for modifying TTL and COMMENT on subtables
2. Queries combining first/last + interval + fill caused taosd to exit abnormally
3. Deleting a consumer group on topicA caused the same consumer group's consumption of topicB to fail
4. An out-of-range column index in parameter binding caused taosd to exit abnormally
5. Queries using the CAST function caused taosd to exit abnormally
6. taosdlog disappeared after multiple resetlog operations
7. INSERT from a SELECT subquery containing constant fields failed
8. event_window queries caused taosd to exit abnormally
9. Queries combining interp + partition by column + fill caused taosd to exit abnormally
10. LAST queries returned values that did not match expectations
11. The HAVING filter did not take effect together with event_window
12. taosX synchronization hitting a null value in the first column caused taosd to exit abnormally (Enterprise Edition only)
13. After upgrading to 3.3.0.0 with cachemodel enabled, last + group by queries returned an incorrect number of rows
14. The taos-explorer navigation bar did not display all supertable names (Enterprise Edition only)
15. Queries on composite primary keys with VARCHAR longer than 125 caused taosd to exit abnormally
16. taos CLI and taosAdapter consumed excessively high CPU
@ -127,6 +127,9 @@ int32_t TEST_char2ts(const char* format, int64_t* ts, int32_t precision, const c

 /// @return 0 success, other fail
 int32_t offsetOfTimezone(char* tzStr, int64_t* offset);

+bool checkRecursiveTsmaInterval(int64_t baseInterval, int8_t baseUnit, int64_t interval, int8_t unit, int8_t precision,
+                                bool checkEq);
+
 #ifdef __cplusplus
 }
 #endif
@ -41,6 +41,7 @@ typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx);

 typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock *pBlock);
 typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
 typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx);
+typedef int32_t (*processFuncByRow)(SArray* pCtx);  // array of SqlFunctionCtx

 typedef struct SScalarFuncExecFuncs {
   FExecGetEnv getEnv;

@ -48,11 +49,12 @@ typedef struct SScalarFuncExecFuncs {

 } SScalarFuncExecFuncs;

 typedef struct SFuncExecFuncs {
   FExecGetEnv   getEnv;
   FExecInit     init;
   FExecProcess  process;
   FExecFinalize finalize;
   FExecCombine  combine;
+  processFuncByRow processFuncByRow;
 } SFuncExecFuncs;

 #define MAX_INTERVAL_TIME_WINDOW 10000000  // maximum allowed time windows in final results

@ -253,6 +255,7 @@ typedef struct SqlFunctionCtx {

   bool hasPrimaryKey;
   SFuncInputRowIter rowIter;
   bool bInputFinished;
+  bool hasWindowOrGroup;  // denote that the function is used with time window or group
 } SqlFunctionCtx;

 typedef struct tExprNode {
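The new processFuncByRow slot extends the exec-function dispatch table; functions that do not process by row simply leave it NULL. A hedged sketch of the shape of that design (stand-in types, not the TDengine headers):

```c
#include <stdio.h>

typedef int (*exec_fn)(void);
typedef int (*by_row_fn)(void);

typedef struct {
  exec_fn   process;           // per-block processing, always present
  by_row_fn processFuncByRow;  // optional per-row processing, may be NULL
} FuncExecFuncs;

static int blockProcess(void) { return 0; }
static int rowProcess(void)   { return 0; }

int main(void) {
  FuncExecFuncs plain = {blockProcess, NULL};
  FuncExecFuncs byRow = {blockProcess, rowProcess};
  // Callers test the slot before dispatching, mirroring fmIsProcessByRowFunc().
  printf("plain has by-row hook: %s\n", plain.processFuncByRow ? "yes" : "no");
  printf("byRow has by-row hook: %s\n", byRow.processFuncByRow ? "yes" : "no");
  return 0;
}
```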
@ -255,6 +255,7 @@ bool fmIsIgnoreNullFunc(int32_t funcId);

 bool fmIsConstantResFunc(SFunctionNode* pFunc);
 bool fmIsSkipScanCheckFunc(int32_t funcId);
 bool fmIsPrimaryKeyFunc(int32_t funcId);
+bool fmIsProcessByRowFunc(int32_t funcId);

 void getLastCacheDataType(SDataType* pType, int32_t pkBytes);
 SFunctionNode* createFunction(const char* pName, SNodeList* pParameterList);
@ -640,6 +640,7 @@ typedef struct SCreateTSMAStmt {

   STSMAOptions*   pOptions;
   SNode*          pPrevQuery;
   SMCreateSmaReq* pReq;
+  uint8_t         precision;
 } SCreateTSMAStmt;

 typedef struct SDropTSMAStmt {
@ -415,6 +415,7 @@ typedef struct SSelectStmt {

   int32_t       returnRows;  // EFuncReturnRows
   ETimeLineMode timeLineCurMode;
   ETimeLineMode timeLineResMode;
+  int32_t       lastProcessByRowFuncId;
   bool          timeLineFromOrderBy;
   bool          isEmptyResult;
   bool          isSubquery;
@ -138,6 +138,7 @@ int32_t taosGetErrSize();

 #define TSDB_CODE_TIMEOUT_ERROR       TAOS_DEF_ERROR_CODE(0, 0x012C)
 #define TSDB_CODE_MSG_ENCODE_ERROR    TAOS_DEF_ERROR_CODE(0, 0x012D)
 #define TSDB_CODE_NO_ENOUGH_DISKSPACE TAOS_DEF_ERROR_CODE(0, 0x012E)
+#define TSDB_CODE_THIRDPARTY_ERROR    TAOS_DEF_ERROR_CODE(0, 0x012F)

 #define TSDB_CODE_APP_IS_STARTING     TAOS_DEF_ERROR_CODE(0, 0x0130)
 #define TSDB_CODE_APP_IS_STOPPING     TAOS_DEF_ERROR_CODE(0, 0x0131)

@ -834,6 +835,7 @@ int32_t taosGetErrSize();

 #define TSDB_CODE_PAR_TBNAME_ERROR        TAOS_DEF_ERROR_CODE(0, 0x267D)
 #define TSDB_CODE_PAR_TBNAME_DUPLICATED   TAOS_DEF_ERROR_CODE(0, 0x267E)
 #define TSDB_CODE_PAR_TAG_NAME_DUPLICATED TAOS_DEF_ERROR_CODE(0, 0x267F)
+#define TSDB_CODE_PAR_NOT_ALLOWED_DIFFERENT_BY_ROW_FUNC TAOS_DEF_ERROR_CODE(0, 0x2680)
 #define TSDB_CODE_PAR_INTERNAL_ERROR      TAOS_DEF_ERROR_CODE(0, 0x26FF)

 //planner
@ -455,6 +455,8 @@ enum {

 void sqlReqLog(int64_t rid, bool killed, int32_t code, int8_t type);

+void tmqMgmtClose(void);
+
 #ifdef __cplusplus
 }
 #endif
@ -301,7 +301,7 @@ int32_t execLocalCmd(SRequestObj* pRequest, SQuery* pQuery) {

   int8_t  biMode = atomic_load_8(&pRequest->pTscObj->biMode);
   int32_t code = qExecCommand(&pRequest->pTscObj->id, pRequest->pTscObj->sysInfo, pQuery->pRoot, &pRsp, biMode);
   if (TSDB_CODE_SUCCESS == code && NULL != pRsp) {
-    code = setQueryResultFromRsp(&pRequest->body.resInfo, pRsp, false);
+    code = setQueryResultFromRsp(&pRequest->body.resInfo, pRsp, pRequest->body.resInfo.convertUcs4);
   }

   return code;

@ -340,7 +340,7 @@ void asyncExecLocalCmd(SRequestObj* pRequest, SQuery* pQuery) {

   int32_t code = qExecCommand(&pRequest->pTscObj->id, pRequest->pTscObj->sysInfo, pQuery->pRoot, &pRsp,
                               atomic_load_8(&pRequest->pTscObj->biMode));
   if (TSDB_CODE_SUCCESS == code && NULL != pRsp) {
-    code = setQueryResultFromRsp(&pRequest->body.resInfo, pRsp, false);
+    code = setQueryResultFromRsp(&pRequest->body.resInfo, pRsp, pRequest->body.resInfo.convertUcs4);
   }

   SReqResultInfo* pResultInfo = &pRequest->body.resInfo;
@ -57,8 +57,6 @@ void taos_cleanup(void) {

   }

   monitorClose();
-  taosHashCleanup(appInfo.pInstMap);
-  taosHashCleanup(appInfo.pInstMapByClusterId);
   tscStopCrashReport();

   hbMgrCleanUp();

@ -85,6 +83,8 @@ void taos_cleanup(void) {

   taosConvDestroy();

+  tmqMgmtClose();
+
   tscInfo("all local resources released");
   taosCleanupCfg();
   taosCloseLog();
@ -682,7 +682,7 @@ static void* monitorThreadFunc(void *param){

   tscDebug("monitorThreadFunc start");
   int64_t quitTime = 0;
   while (1) {
-    if (slowLogFlag > 0) {
+    if (atomic_load_32(&slowLogFlag) > 0) {
       if(quitCnt == 0){
         monitorSendAllSlowLogAtQuit();
         if(quitCnt == 0){

@ -727,10 +727,7 @@ static void* monitorThreadFunc(void *param){

     }
     tsem2_timewait(&monitorSem, 100);
   }
-  taosCloseQueue(monitorQueue);
-  tsem2_destroy(&monitorSem);
-  slowLogFlag = -2;
+  atomic_store_32(&slowLogFlag, -2);
   return NULL;
 }
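The change above replaces plain reads and writes of slowLogFlag with atomic operations and moves the exit announcement after per-thread work finishes. A minimal self-contained sketch of that handshake, assuming C11 atomics (author's illustration, not the taosd source):

```c
#include <stdatomic.h>
#include <stdio.h>

static atomic_int slow_log_flag = 0;

static void worker_loop(void) {
  for (;;) {
    if (atomic_load(&slow_log_flag) > 0) break;  // shutdown requested
    /* ... drain the queue, wait on the semaphore ... */
  }
  atomic_store(&slow_log_flag, -2);  // announce "thread has exited"
}

int main(void) {
  atomic_store(&slow_log_flag, 1);   // controller requests shutdown
  worker_loop();
  printf("flag after exit: %d\n", atomic_load(&slow_log_flag));
  return 0;
}
```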
@ -826,10 +823,16 @@ void monitorClose() {

   taosHashCleanup(monitorCounterHash);
   taosHashCleanup(monitorSlowLogHash);
   taosTmrCleanUp(monitorTimer);
+  taosCloseQueue(monitorQueue);
+  tsem2_destroy(&monitorSem);
   taosWUnLockLatch(&monitorLock);
 }

 int32_t monitorPutData2MonitorQueue(MonitorSlowLogData data){
+  if (atomic_load_32(&slowLogFlag) == -2) {
+    tscError("[monitor] slow log thread is exiting");
+    return -1;
+  }
   MonitorSlowLogData* slowLogData = taosAllocateQitem(sizeof(MonitorSlowLogData), DEF_QITEM, 0);
   if (slowLogData == NULL) {
     tscError("[monitor] failed to allocate slow log data");
@ -27,9 +27,9 @@

 #define EMPTY_BLOCK_POLL_IDLE_DURATION 10
 #define DEFAULT_AUTO_COMMIT_INTERVAL   5000
 #define DEFAULT_HEARTBEAT_INTERVAL     3000
+#define DEFAULT_ASKEP_INTERVAL         1000

 struct SMqMgmt {
-  int8_t  inited;
   tmr_h   timer;
   int32_t rsetId;
 };
@ -737,35 +737,33 @@ static void generateTimedTask(int64_t refId, int32_t type) {

   if (tmq == NULL) return;

   int8_t* pTaskType = taosAllocateQitem(sizeof(int8_t), DEF_QITEM, 0);
-  if (pTaskType == NULL) return;
-
-  *pTaskType = type;
-  taosWriteQitem(tmq->delayedTask, pTaskType);
-  tsem2_post(&tmq->rspSem);
+  if (pTaskType != NULL){
+    *pTaskType = type;
+    if (taosWriteQitem(tmq->delayedTask, pTaskType) == 0){
+      tsem2_post(&tmq->rspSem);
+    }
+  }
   taosReleaseRef(tmqMgmt.rsetId, refId);
 }

 void tmqAssignAskEpTask(void* param, void* tmrId) {
-  int64_t refId = *(int64_t*)param;
+  int64_t refId = (int64_t)param;
   generateTimedTask(refId, TMQ_DELAYED_TASK__ASK_EP);
-  taosMemoryFree(param);
 }

 void tmqReplayTask(void* param, void* tmrId) {
-  int64_t refId = *(int64_t*)param;
+  int64_t refId = (int64_t)param;
   tmq_t*  tmq = taosAcquireRef(tmqMgmt.rsetId, refId);
-  if (tmq == NULL) goto END;
+  if (tmq == NULL) return;

   tsem2_post(&tmq->rspSem);
   taosReleaseRef(tmqMgmt.rsetId, refId);
-END:
-  taosMemoryFree(param);
 }

 void tmqAssignDelayedCommitTask(void* param, void* tmrId) {
-  int64_t refId = *(int64_t*)param;
+  int64_t refId = (int64_t)param;
   generateTimedTask(refId, TMQ_DELAYED_TASK__COMMIT);
-  taosMemoryFree(param);
 }

 int32_t tmqHbCb(void* param, SDataBuf* pMsg, int32_t code) {
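The refId handling above switches from heap-allocating an int64_t per timer to packing the id directly into the callback's void* parameter, which removes the matching frees and the leak on early return. A hedged sketch of the pattern, assuming a platform where a pointer can carry the 64-bit id (the patch relies on this; intptr_t is used here for portability):

```c
#include <stdint.h>
#include <stdio.h>

typedef void (*timer_cb)(void* param);

static void on_timer(void* param) {
  int64_t refId = (int64_t)(intptr_t)param;  // unpack: no dereference, no free
  printf("timer fired for refId %lld\n", (long long)refId);
}

int main(void) {
  int64_t  refId = 42;
  timer_cb cb = on_timer;
  cb((void*)(intptr_t)refId);                // pack at the call site
  return 0;
}
```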
@ -802,11 +800,10 @@ int32_t tmqHbCb(void* param, SDataBuf* pMsg, int32_t code) {

 }

 void tmqSendHbReq(void* param, void* tmrId) {
-  int64_t refId = *(int64_t*)param;
+  int64_t refId = (int64_t)param;

   tmq_t* tmq = taosAcquireRef(tmqMgmt.rsetId, refId);
   if (tmq == NULL) {
-    taosMemoryFree(param);
     return;
   }

@ -880,7 +877,9 @@ void tmqSendHbReq(void* param, void* tmrId) {

 OVER:
   tDestroySMqHbReq(&req);
-  taosTmrReset(tmqSendHbReq, DEFAULT_HEARTBEAT_INTERVAL, param, tmqMgmt.timer, &tmq->hbLiveTimer);
+  if(tmrId != NULL){
+    taosTmrReset(tmqSendHbReq, DEFAULT_HEARTBEAT_INTERVAL, param, tmqMgmt.timer, &tmq->hbLiveTimer);
+  }
   taosReleaseRef(tmqMgmt.rsetId, refId);
 }
@ -908,21 +907,14 @@ int32_t tmqHandleAllDelayedTask(tmq_t* pTmq) {

     if (*pTaskType == TMQ_DELAYED_TASK__ASK_EP) {
       askEp(pTmq, NULL, false, false);

-      int64_t* pRefId = taosMemoryMalloc(sizeof(int64_t));
-      *pRefId = pTmq->refId;
-
       tscDebug("consumer:0x%" PRIx64 " retrieve ep from mnode in 1s", pTmq->consumerId);
-      taosTmrReset(tmqAssignAskEpTask, 1000, pRefId, tmqMgmt.timer, &pTmq->epTimer);
+      taosTmrReset(tmqAssignAskEpTask, DEFAULT_ASKEP_INTERVAL, (void*)(pTmq->refId), tmqMgmt.timer, &pTmq->epTimer);
     } else if (*pTaskType == TMQ_DELAYED_TASK__COMMIT) {
       tmq_commit_cb* pCallbackFn = pTmq->commitCb ? pTmq->commitCb : defaultCommitCbFn;

       asyncCommitAllOffsets(pTmq, pCallbackFn, pTmq->commitCbUserParam);
-      int64_t* pRefId = taosMemoryMalloc(sizeof(int64_t));
-      *pRefId = pTmq->refId;
-
       tscDebug("consumer:0x%" PRIx64 " next commit to vnode(s) in %.2fs", pTmq->consumerId,
                pTmq->autoCommitInterval / 1000.0);
-      taosTmrReset(tmqAssignDelayedCommitTask, pTmq->autoCommitInterval, pRefId, tmqMgmt.timer, &pTmq->commitTimer);
+      taosTmrReset(tmqAssignDelayedCommitTask, pTmq->autoCommitInterval, (void*)(pTmq->refId), tmqMgmt.timer, &pTmq->commitTimer);
     } else {
       tscError("consumer:0x%" PRIx64 " invalid task type:%d", pTmq->consumerId, *pTaskType);
     }
@ -1064,6 +1056,16 @@ void tmqFreeImpl(void* handle) {

   taosArrayDestroyEx(tmq->clientTopics, freeClientVgImpl);
   taos_close_internal(tmq->pTscObj);

+  if(tmq->commitTimer) {
+    taosTmrStopA(&tmq->commitTimer);
+  }
+  if(tmq->epTimer) {
+    taosTmrStopA(&tmq->epTimer);
+  }
+  if(tmq->hbLiveTimer) {
+    taosTmrStopA(&tmq->hbLiveTimer);
+  }
   taosMemoryFree(tmq);

   tscDebug("consumer:0x%" PRIx64 " closed", id);
@ -1083,6 +1085,18 @@ static void tmqMgmtInit(void) {

   }
 }

+void tmqMgmtClose(void) {
+  if (tmqMgmt.timer) {
+    taosTmrCleanUp(tmqMgmt.timer);
+    tmqMgmt.timer = NULL;
+  }
+
+  if (tmqMgmt.rsetId >= 0) {
+    taosCloseRef(tmqMgmt.rsetId);
+    tmqMgmt.rsetId = -1;
+  }
+}
+
 #define SET_ERROR_MSG_TMQ(MSG) \
   if (errstr != NULL) snprintf(errstr, errstrLen, MSG);
@ -1171,9 +1185,7 @@ tmq_t* tmq_consumer_new(tmq_conf_t* conf, char* errstr, int32_t errstrLen) {

     goto _failed;
   }

-  int64_t* pRefId = taosMemoryMalloc(sizeof(int64_t));
-  *pRefId = pTmq->refId;
-  pTmq->hbLiveTimer = taosTmrStart(tmqSendHbReq, DEFAULT_HEARTBEAT_INTERVAL, pRefId, tmqMgmt.timer);
+  pTmq->hbLiveTimer = taosTmrStart(tmqSendHbReq, DEFAULT_HEARTBEAT_INTERVAL, (void*)pTmq->refId, tmqMgmt.timer);

   char         buf[TSDB_OFFSET_LEN] = {0};
   STqOffsetVal offset = {.type = pTmq->resetOffsetCfg};
@ -1301,18 +1313,9 @@ int32_t tmq_subscribe(tmq_t* tmq, const tmq_list_t* topic_list) {

   }

   // init ep timer
-  if (tmq->epTimer == NULL) {
-    int64_t* pRefId1 = taosMemoryMalloc(sizeof(int64_t));
-    *pRefId1 = tmq->refId;
-    tmq->epTimer = taosTmrStart(tmqAssignAskEpTask, 1000, pRefId1, tmqMgmt.timer);
-  }
+  tmq->epTimer = taosTmrStart(tmqAssignAskEpTask, DEFAULT_ASKEP_INTERVAL, (void*)(tmq->refId), tmqMgmt.timer);

   // init auto commit timer
-  if (tmq->autoCommit && tmq->commitTimer == NULL) {
-    int64_t* pRefId2 = taosMemoryMalloc(sizeof(int64_t));
-    *pRefId2 = tmq->refId;
-    tmq->commitTimer = taosTmrStart(tmqAssignDelayedCommitTask, tmq->autoCommitInterval, pRefId2, tmqMgmt.timer);
-  }
+  tmq->commitTimer = taosTmrStart(tmqAssignDelayedCommitTask, tmq->autoCommitInterval, (void*)(tmq->refId), tmqMgmt.timer);

 FAIL:
   taosArrayDestroyP(req.topicNames, taosMemoryFree);
@ -2015,9 +2018,7 @@ static void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout) {

         pVg->blockReceiveTs = taosGetTimestampMs();
         pVg->blockSleepForReplay = pRsp->rsp.sleepTime;
         if (pVg->blockSleepForReplay > 0) {
-          int64_t* pRefId1 = taosMemoryMalloc(sizeof(int64_t));
-          *pRefId1 = tmq->refId;
-          taosTmrStart(tmqReplayTask, pVg->blockSleepForReplay, pRefId1, tmqMgmt.timer);
+          taosTmrStart(tmqReplayTask, pVg->blockSleepForReplay, (void*)(tmq->refId), tmqMgmt.timer);
         }
       }
       tscDebug("consumer:0x%" PRIx64 " process poll rsp, vgId:%d, offset:%s, blocks:%d, rows:%" PRId64
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
taosSsleep(2); // sleep 2s for hb to send offset and rows to server
|
tmqSendHbReq((void*)(tmq->refId), NULL);
|
||||||
|
|
||||||
tmq_list_t* lst = tmq_list_new();
|
tmq_list_t* lst = tmq_list_new();
|
||||||
int32_t code = tmq_subscribe(tmq, lst);
|
int32_t code = tmq_subscribe(tmq, lst);
|
||||||
|
|
|
@ -1344,7 +1344,10 @@ int32_t taosReadDataFolder(const char *cfgDir, const char **envCmd, const char *

     return -1;
   }

-  tstrncpy(tsDataDir, cfgGetItem(pCfg, "dataDir")->str, PATH_MAX);
+  if (taosSetTfsCfg(pCfg) != 0) {
+    cfgCleanup(pCfg);
+    return -1;
+  }
   dDebugFlag = cfgGetItem(pCfg, "dDebugFlag")->i32;

   cfgCleanup(pCfg);
@ -1969,3 +1969,98 @@ int32_t TEST_char2ts(const char* format, int64_t* ts, int32_t precision, const c

   taosArrayDestroy(formats);
   return code;
 }
+
+static int8_t UNIT_INDEX[26] = {/*a*/ 2,  0, -1,  6, -1, -1, -1,
+                                /*h*/ 5, -1, -1, -1, -1,  4,  8,
+                                /*o*/ -1, -1, -1, -1,  3, -1,
+                                /*u*/ 1, -1,  7, -1,  9, -1};
+
+#define GET_UNIT_INDEX(idx) UNIT_INDEX[(idx) - 97]
+
+static int64_t UNIT_MATRIX[10][11] = {/*        ns,   us,   ms,    s,  min,  h,  d,  w, month, y*/
+                                      /*ns*/  {   1, 1000,    0},
+                                      /*us*/  {1000,    1, 1000,    0},
+                                      /*ms*/  {   0, 1000,    1, 1000,    0},
+                                      /*s*/   {   0,    0, 1000,    1,   60,  0},
+                                      /*min*/ {   0,    0,    0,   60,    1, 60,  0},
+                                      /*h*/   {   0,    0,    0,    0,   60,  1,  1,  0},
+                                      /*d*/   {   0,    0,    0,    0,    0, 24,  1,  7,  1,  0},
+                                      /*w*/   {   0,    0,    0,    0,    0,  0,  7,  1, -1,  0},
+                                      /*mon*/ {   0,    0,    0,    0,    0,  0,  0,  0,  1, 12,  0},
+                                      /*y*/   {   0,    0,    0,    0,    0,  0,  0,  0, 12,  1,  0}};
+
+static bool recursiveTsmaCheckRecursive(int64_t baseInterval, int8_t baseIdx, int64_t interval, int8_t idx, bool checkEq) {
+  if (UNIT_MATRIX[baseIdx][idx] == -1) return false;
+  if (baseIdx == idx) {
+    if (interval < baseInterval) return false;
+    if (checkEq && interval == baseInterval) return false;
+    return interval % baseInterval == 0;
+  }
+  int8_t  next = baseIdx + 1;
+  int64_t val = UNIT_MATRIX[baseIdx][next];
+  while (val != 0 && next <= idx) {
+    if (val == -1) {
+      next++;
+      val = UNIT_MATRIX[baseIdx][next];
+      continue;
+    }
+    if (val % baseInterval == 0 || baseInterval % val == 0) {
+      int8_t extra = baseInterval >= val ? 0 : 1;
+      bool   needCheckEq = baseInterval >= val && !(baseIdx < next && val == 1);
+      if (!recursiveTsmaCheckRecursive(baseInterval / val + extra, next, interval, idx, needCheckEq && checkEq)) {
+        next++;
+        val = UNIT_MATRIX[baseIdx][next];
+        continue;
+      } else {
+        return true;
+      }
+    } else {
+      return false;
+    }
+  }
+  return false;
+}
+
+static bool recursiveTsmaCheckRecursiveReverse(int64_t baseInterval, int8_t baseIdx, int64_t interval, int8_t idx, bool checkEq) {
+  if (UNIT_MATRIX[baseIdx][idx] == -1) return false;
+
+  if (baseIdx == idx) {
+    if (interval < baseInterval) return false;
+    if (checkEq && interval == baseInterval) return false;
+    return interval % baseInterval == 0;
+  }
+
+  int8_t  next = baseIdx - 1;
+  int64_t val = UNIT_MATRIX[baseIdx][next];
+  while (val != 0 && next >= 0) {
+    return recursiveTsmaCheckRecursiveReverse(baseInterval * val, next, interval, idx, checkEq);
+  }
+  return false;
+}
+
+/*
+ * @brief check if a tsma with param [interval], [unit] can be created based on a base tsma with baseInterval and baseUnit
+ * @param baseInterval, baseUnit interval/unit of the base tsma
+ * @param interval the interval of the tsma going to be created. Note that if unit is not a calendar unit, the interval
+ *                 has already been translated into TICKS of [precision]
+ * @param unit the unit of the tsma going to be created
+ * @param precision the precision of this db
+ * @param checkEq pass true if the same interval is not acceptable, false if it is acceptable
+ * @return true if the tsma can be created, else false
+ * */
+bool checkRecursiveTsmaInterval(int64_t baseInterval, int8_t baseUnit, int64_t interval, int8_t unit, int8_t precision, bool checkEq) {
+  bool baseIsCalendarDuration = IS_CALENDAR_TIME_DURATION(baseUnit);
+  if (!baseIsCalendarDuration) baseInterval = convertTimeFromPrecisionToUnit(baseInterval, precision, baseUnit);
+  bool isCalendarDuration = IS_CALENDAR_TIME_DURATION(unit);
+  if (!isCalendarDuration) interval = convertTimeFromPrecisionToUnit(interval, precision, unit);
+
+  bool needCheckEq = baseIsCalendarDuration == isCalendarDuration && checkEq;
+
+  int8_t baseIdx = GET_UNIT_INDEX(baseUnit), idx = GET_UNIT_INDEX(unit);
+  if (baseIdx <= idx) {
+    return recursiveTsmaCheckRecursive(baseInterval, baseIdx, interval, idx, needCheckEq);
+  } else {
+    return recursiveTsmaCheckRecursiveReverse(baseInterval, baseIdx, interval, idx, checkEq);
+  }
+  return true;
+}
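A hedged usage sketch of the new checkRecursiveTsmaInterval API above; the 0 passed for precision is assumed to denote millisecond precision, and the implementation above must be linked in for this to run:

```c
#include <stdbool.h>
#include <stdint.h>

// Declaration from the patch above; the definition must be linked in.
bool checkRecursiveTsmaInterval(int64_t baseInterval, int8_t baseUnit, int64_t interval, int8_t unit,
                                int8_t precision, bool checkEq);

/* Non-calendar intervals are passed in ticks of the db precision, so a
 * 1-hour base TSMA is 3600000 ms and a 1-day target is 86400000 ms.
 * 24 hours divide a day evenly, so this call is expected to return true. */
bool example(void) {
  return checkRecursiveTsmaInterval(3600000LL, 'h', 86400000LL, 'd', /*precision=*/0, /*checkEq=*/true);
}
```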
@ -369,7 +369,7 @@ int mainWindows(int argc, char **argv) {

   if(global.generateCode) {
     bool toLogFile = false;
     if(taosReadDataFolder(configDir, global.envCmd, global.envFile, global.apolloUrl, global.pArgs) != 0){
-      encryptError("failed to generate encrypt code since taosd is running, please stop it first");
+      encryptError("failed to generate encrypt code since dataDir can not be set from cfg file");
       return -1;
     };
@ -2004,8 +2004,13 @@ static int32_t mndRetrieveTSMA(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlo

     // interval
     char    interval[64 + VARSTR_HEADER_SIZE] = {0};
-    int32_t len = snprintf(interval + VARSTR_HEADER_SIZE, 64, "%" PRId64 "%c", pSma->interval,
-                           getPrecisionUnit(pSrcDb->cfg.precision));
+    int32_t len = 0;
+    if (!IS_CALENDAR_TIME_DURATION(pSma->intervalUnit)) {
+      len = snprintf(interval + VARSTR_HEADER_SIZE, 64, "%" PRId64 "%c", pSma->interval,
+                     getPrecisionUnit(pSrcDb->cfg.precision));
+    } else {
+      len = snprintf(interval + VARSTR_HEADER_SIZE, 64, "%" PRId64 "%c", pSma->interval, pSma->intervalUnit);
+    }
     varDataSetLen(interval, len);
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
     colDataSetVal(pColInfo, numOfRows, interval, false);
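The branch added above distinguishes fixed-time units, which are stored in ticks of the database precision, from calendar units, which keep their own unit character. A small illustration under assumed unit letters ('n'/'y' as the calendar units, 'a' as the millisecond suffix):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* is_calendar() and the unit letters are assumptions for illustration. */
static bool is_calendar(char unit) { return unit == 'n' || unit == 'y'; }

static void fmt_interval(char *buf, size_t n, int64_t interval, char unit, char precisionUnit) {
  if (!is_calendar(unit)) {
    snprintf(buf, n, "%lld%c", (long long)interval, precisionUnit);  // ticks + db precision unit
  } else {
    snprintf(buf, n, "%lld%c", (long long)interval, unit);           // calendar unit kept verbatim
  }
}

int main(void) {
  char buf[64];
  fmt_interval(buf, sizeof buf, 3600000, 'h', 'a');  // 1h stored as ms ticks -> "3600000a"
  printf("%s\n", buf);
  fmt_interval(buf, sizeof buf, 1, 'n', 'a');        // 1 month stays "1n"
  printf("%s\n", buf);
  return 0;
}
```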
@ -1241,6 +1241,7 @@ static void mndTransResetActions(SMnode *pMnode, STrans *pTrans, SArray *pArray)

   }
 }

+// execute at bottom half
 static int32_t mndTransWriteSingleLog(SMnode *pMnode, STrans *pTrans, STransAction *pAction, bool topHalf) {
   if (pAction->rawWritten) return 0;
   if (topHalf) {

@ -1267,6 +1268,7 @@ static int32_t mndTransWriteSingleLog(SMnode *pMnode, STrans *pTrans, STransActi

   return code;
 }

+// execute at top half
 static int32_t mndTransSendSingleMsg(SMnode *pMnode, STrans *pTrans, STransAction *pAction, bool topHalf) {
   if (pAction->msgSent) return 0;
   if (mndCannotExecuteTransAction(pMnode, topHalf)) {

@ -1644,6 +1646,11 @@ static bool mndTransPerformCommitActionStage(SMnode *pMnode, STrans *pTrans, boo

     pTrans->stage = TRN_STAGE_FINISH;  // TRN_STAGE_PRE_FINISH is not necessary
     mInfo("trans:%d, stage from commitAction to finished", pTrans->id);
     continueExec = true;
+  } else if (code == TSDB_CODE_MND_TRANS_CTX_SWITCH && topHalf) {
+    pTrans->code = 0;
+    pTrans->stage = TRN_STAGE_COMMIT;
+    mInfo("trans:%d, back to commit stage", pTrans->id);
+    continueExec = true;
   } else {
     pTrans->code = terrno;
     pTrans->failedTimes++;
@ -1783,11 +1790,13 @@ void mndTransExecuteImp(SMnode *pMnode, STrans *pTrans, bool topHalf) {

     mndTransSendRpcRsp(pMnode, pTrans);
   }

+// start trans, pullup, receive rsp, kill
 void mndTransExecute(SMnode *pMnode, STrans *pTrans) {
   bool topHalf = true;
   return mndTransExecuteImp(pMnode, pTrans, topHalf);
 }

+// update trans
 void mndTransRefresh(SMnode *pMnode, STrans *pTrans) {
   bool topHalf = false;
   return mndTransExecuteImp(pMnode, pTrans, topHalf);
@ -51,10 +51,9 @@ int32_t streamStateSnapReaderOpen(STQ* pTq, int64_t sver, int64_t ever, SStreamS

   SStreamSnapReader* pSnapReader = NULL;

-  if (streamSnapReaderOpen(meta, sver, chkpId, meta->path, &pSnapReader) == 0) {
+  if ((code = streamSnapReaderOpen(meta, sver, chkpId, meta->path, &pSnapReader)) == 0) {
     pReader->complete = 1;
   } else {
-    code = -1;
     taosMemoryFree(pReader);
     goto _err;
   }

@ -75,7 +74,7 @@ _err:

 int32_t streamStateSnapReaderClose(SStreamStateReader* pReader) {
   int32_t code = 0;
   tqDebug("vgId:%d, vnode %s snapshot reader closed", TD_VID(pReader->pTq->pVnode), STREAM_STATE_TRANSFER);
-  streamSnapReaderClose(pReader->pReaderImpl);
+  code = streamSnapReaderClose(pReader->pReaderImpl);
   taosMemoryFree(pReader);
   return code;
 }

@ -138,32 +137,36 @@ int32_t streamStateSnapWriterOpen(STQ* pTq, int64_t sver, int64_t ever, SStreamS

   pWriter->sver = sver;
   pWriter->ever = ever;

-  taosMkDir(pTq->pStreamMeta->path);
-
-  SStreamSnapWriter* pSnapWriter = NULL;
-  if (streamSnapWriterOpen(pTq, sver, ever, pTq->pStreamMeta->path, &pSnapWriter) < 0) {
+  if (taosMkDir(pTq->pStreamMeta->path) != 0) {
+    code = TAOS_SYSTEM_ERROR(errno);
+    tqError("vgId:%d, vnode %s snapshot writer failed to create directory %s since %s", TD_VID(pTq->pVnode),
+            STREAM_STATE_TRANSFER, pTq->pStreamMeta->path, tstrerror(code));
     goto _err;
   }

-  tqDebug("vgId:%d, vnode %s snapshot writer opened, path:%s", TD_VID(pTq->pVnode), STREAM_STATE_TRANSFER, pTq->pStreamMeta->path);
+  SStreamSnapWriter* pSnapWriter = NULL;
+  if ((code = streamSnapWriterOpen(pTq, sver, ever, pTq->pStreamMeta->path, &pSnapWriter)) < 0) {
+    goto _err;
+  }
+
+  tqDebug("vgId:%d, vnode %s snapshot writer opened, path:%s", TD_VID(pTq->pVnode), STREAM_STATE_TRANSFER,
+          pTq->pStreamMeta->path);
   pWriter->pWriterImpl = pSnapWriter;

   *ppWriter = pWriter;
-  return code;
+  return 0;

 _err:
   tqError("vgId:%d, vnode %s snapshot writer failed to open since %s", TD_VID(pTq->pVnode), STREAM_STATE_TRANSFER,
           tstrerror(code));
   taosMemoryFree(pWriter);
   *ppWriter = NULL;
-  return -1;
+  return code;
 }

 int32_t streamStateSnapWriterClose(SStreamStateWriter* pWriter, int8_t rollback) {
-  int32_t code = 0;
   tqDebug("vgId:%d, vnode %s snapshot writer closed", TD_VID(pWriter->pTq->pVnode), STREAM_STATE_TRANSFER);
-  code = streamSnapWriterClose(pWriter->pWriterImpl, rollback);
-
-  return code;
+  return streamSnapWriterClose(pWriter->pWriterImpl, rollback);
 }

 int32_t streamStateSnapWrite(SStreamStateWriter* pWriter, uint8_t* pData, uint32_t nData) {
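The snapshot reader/writer rework above converges on one error-propagation idiom: capture the callee's code in the same expression that tests it, and funnel every failure through a single cleanup label so the caller receives the real error instead of a blanket -1. A minimal sketch under those assumptions (stand-in names, not the tq source):

```c
#include <stdio.h>

static int open_impl(int fail) { return fail ? 0x123 : 0; }  // stand-in callee

static int snap_open(int fail) {
  int code = 0;
  if ((code = open_impl(fail)) != 0) goto _err;  // propagate, don't overwrite
  return 0;
_err:
  fprintf(stderr, "open failed, code=0x%x\n", code);
  return code;                                   // was: return -1
}

int main(void) { return snap_open(1) ? 1 : 0; }
```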
@ -1072,7 +1072,7 @@ static int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, SArray

   rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch;
   for (int i = 0; i < num_keys; ++i) {
     SIdxKey        *idxKey = &((SIdxKey *)TARRAY_DATA(remainCols))[i];
-    SLastUpdateCtx *updCtx = (SLastUpdateCtx *)taosArrayGet(updCtxArray, i);
+    SLastUpdateCtx *updCtx = (SLastUpdateCtx *)taosArrayGet(updCtxArray, idxKey->idx);
     SRowKey        *pRowKey = &updCtx->tsdbRowKey.key;
     SColVal        *pColVal = &updCtx->colVal;
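The one-line fix above is a classic parallel-array bug: remainCols is a filtered subset, so its loop index cannot be reused against the unfiltered update-context array; each subset entry must carry its own source index. A small self-contained illustration:

```c
#include <stdio.h>

typedef struct { int idx; } IdxKey;  // stand-in for SIdxKey

int main(void) {
  int    ctx[] = {100, 200, 300, 400};   // full update-context array
  IdxKey remain[] = {{1}, {3}};          // only columns 1 and 3 survived filtering

  for (int i = 0; i < 2; i++) {
    // wrong: ctx[i] would read 100 and 200
    // right: follow the recorded source index
    printf("col %d -> %d\n", remain[i].idx, ctx[remain[i].idx]);
  }
  return 0;
}
```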
@ -128,6 +128,7 @@ typedef struct SExprSupp {

   SqlFunctionCtx* pCtx;
   int32_t*        rowEntryInfoOffset;  // offset value for each row result cell info
   SFilterInfo*    pFilterInfo;
+  bool            hasWindowOrGroup;
 } SExprSupp;

 typedef enum {
@ -77,6 +77,8 @@ SOperatorInfo* createAggregateOperatorInfo(SOperatorInfo* downstream, SAggPhysiN

     goto _error;
   }

+  pOperator->exprSupp.hasWindowOrGroup = false;
+
   SSDataBlock* pResBlock = createDataBlockFromDescNode(pAggNode->node.pOutputDataBlockDesc);
   initBasicInfo(&pInfo->binfo, pResBlock);

@ -519,6 +521,7 @@ int32_t initAggSup(SExprSupp* pSup, SAggSupporter* pAggSup, SExprInfo* pExprInfo

   }

   for (int32_t i = 0; i < numOfCols; ++i) {
+    pSup->pCtx[i].hasWindowOrGroup = pSup->hasWindowOrGroup;
     if (pState) {
       pSup->pCtx[i].saveHandle.pBuf = NULL;
       pSup->pCtx[i].saveHandle.pState = pState;
@ -207,6 +207,8 @@ SOperatorInfo* createCountwindowOperatorInfo(SOperatorInfo* downstream, SPhysiNo

     goto _error;
   }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
   int32_t                code = TSDB_CODE_SUCCESS;
   SCountWinodwPhysiNode* pCountWindowNode = (SCountWinodwPhysiNode*)physiNode;
@ -66,6 +66,8 @@ SOperatorInfo* createEventwindowOperatorInfo(SOperatorInfo* downstream, SPhysiNo

     goto _error;
   }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
   SEventWinodwPhysiNode* pEventWindowNode = (SEventWinodwPhysiNode*)physiNode;

   int32_t tsSlotId = ((SColumnNode*)pEventWindowNode->window.pTspk)->slotId;
@ -1716,6 +1716,7 @@ SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput,

     pCtx->param = pFunct->pParam;
     pCtx->saveHandle.currentPage = -1;
     pCtx->pStore = pStore;
+    pCtx->hasWindowOrGroup = false;
   }

   for (int32_t i = 1; i < numOfOutput; ++i) {

@ -2187,7 +2188,7 @@ int32_t buildGroupIdMapForAllTables(STableListInfo* pTableListInfo, SReadHandle*

   for (int i = 0; i < numOfTables; i++) {
     STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
-    info->groupId = info->uid;
+    info->groupId = groupByTbname ? info->uid : 0;

     taosHashPut(pTableListInfo->remainGroups, &(info->groupId), sizeof(info->groupId), &(info->uid),
                 sizeof(info->uid));
@ -909,12 +909,21 @@ void initBasicInfo(SOptrBasicInfo* pInfo, SSDataBlock* pBlock) {

   initResultRowInfo(&pInfo->resultRowInfo);
 }

-static void* destroySqlFunctionCtx(SqlFunctionCtx* pCtx, int32_t numOfOutput) {
+static void* destroySqlFunctionCtx(SqlFunctionCtx* pCtx, SExprInfo* pExpr, int32_t numOfOutput) {
   if (pCtx == NULL) {
     return NULL;
   }

   for (int32_t i = 0; i < numOfOutput; ++i) {
+    if (pExpr != NULL) {
+      SExprInfo* pExprInfo = &pExpr[i];
+      for (int32_t j = 0; j < pExprInfo->base.numOfParams; ++j) {
+        if (pExprInfo->base.pParam[j].type == FUNC_PARAM_TYPE_VALUE) {
+          taosMemoryFree(pCtx[i].input.pData[j]);
+          taosMemoryFree(pCtx[i].input.pColumnDataAgg[j]);
+        }
+      }
+    }
     for (int32_t j = 0; j < pCtx[i].numOfParams; ++j) {
       taosVariantDestroy(&pCtx[i].param[j].param);
     }

@ -947,7 +956,7 @@ int32_t initExprSupp(SExprSupp* pSup, SExprInfo* pExprInfo, int32_t numOfExpr, S

 }

 void cleanupExprSupp(SExprSupp* pSupp) {
-  destroySqlFunctionCtx(pSupp->pCtx, pSupp->numOfExprs);
+  destroySqlFunctionCtx(pSupp->pCtx, pSupp->pExprInfo, pSupp->numOfExprs);
   if (pSupp->pExprInfo != NULL) {
     destroyExprInfo(pSupp->pExprInfo, pSupp->numOfExprs);
     taosMemoryFreeClear(pSupp->pExprInfo);
@ -502,6 +502,8 @@ SOperatorInfo* createGroupOperatorInfo(SOperatorInfo* downstream, SAggPhysiNode*

     goto _error;
   }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
   SSDataBlock* pResBlock = createDataBlockFromDescNode(pAggNode->node.pOutputDataBlockDesc);
   initBasicInfo(&pInfo->binfo, pResBlock);
@ -99,6 +99,7 @@ SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhys

     goto _error;
   }

+  pOperator->exprSupp.hasWindowOrGroup = false;
   pOperator->pTaskInfo = pTaskInfo;

   int32_t numOfCols = 0;

@ -409,6 +410,7 @@ SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhy

   pOperator->pTaskInfo = pTaskInfo;

   SExprSupp* pSup = &pOperator->exprSupp;
+  pSup->hasWindowOrGroup = false;

   SIndefRowsFuncPhysiNode* pPhyNode = (SIndefRowsFuncPhysiNode*)pNode;
@ -724,9 +726,12 @@ static void setPseudoOutputColInfo(SSDataBlock* pResult, SqlFunctionCtx* pCtx, S

 int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBlock* pSrcBlock, SqlFunctionCtx* pCtx,
                               int32_t numOfOutput, SArray* pPseudoList) {
+  int32_t code = TSDB_CODE_SUCCESS;
   setPseudoOutputColInfo(pResult, pCtx, pPseudoList);
   pResult->info.dataLoad = 1;

+  SArray* processByRowFunctionCtx = NULL;
+
   if (pSrcBlock == NULL) {
     for (int32_t k = 0; k < numOfOutput; ++k) {
       int32_t outputSlotId = pExpr[k].base.resSchema.slotId;

@ -743,7 +748,7 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc

     }

     pResult->info.rows = 1;
-    return TSDB_CODE_SUCCESS;
+    goto _exit;
   }

   if (pResult != pSrcBlock) {

@ -816,10 +821,10 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc

       SColumnInfoData idata = {.info = pResColData->info, .hasNull = true};

       SScalarParam dest = {.columnData = &idata};
-      int32_t      code = scalarCalculate(pExpr[k].pExpr->_optrRoot.pRootNode, pBlockList, &dest);
+      code = scalarCalculate(pExpr[k].pExpr->_optrRoot.pRootNode, pBlockList, &dest);
       if (code != TSDB_CODE_SUCCESS) {
         taosArrayDestroy(pBlockList);
-        return code;
+        goto _exit;
       }

       int32_t startOffset = createNewColModel ? 0 : pResult->info.rows;

@ -852,11 +857,21 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc

         pfCtx->pDstBlock = pResult;
       }

-      int32_t code = pfCtx->fpSet.process(pfCtx);
+      code = pfCtx->fpSet.process(pfCtx);
       if (code != TSDB_CODE_SUCCESS) {
-        return code;
+        goto _exit;
       }
       numOfRows = pResInfo->numOfRes;
+      if (fmIsProcessByRowFunc(pfCtx->functionId)) {
+        if (NULL == processByRowFunctionCtx) {
+          processByRowFunctionCtx = taosArrayInit(1, sizeof(SqlFunctionCtx*));
+          if (!processByRowFunctionCtx) {
+            code = terrno;
+            goto _exit;
+          }
+        }
+        taosArrayPush(processByRowFunctionCtx, &pfCtx);
+      }
     } else if (fmIsAggFunc(pfCtx->functionId)) {
       // selective value output should be set during corresponding function execution
       if (fmIsSelectValueFunc(pfCtx->functionId)) {

@ -886,10 +901,10 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc

       SColumnInfoData idata = {.info = pResColData->info, .hasNull = true};

       SScalarParam dest = {.columnData = &idata};
-      int32_t      code = scalarCalculate((SNode*)pExpr[k].pExpr->_function.pFunctNode, pBlockList, &dest);
+      code = scalarCalculate((SNode*)pExpr[k].pExpr->_function.pFunctNode, pBlockList, &dest);
       if (code != TSDB_CODE_SUCCESS) {
         taosArrayDestroy(pBlockList);
-        return code;
+        goto _exit;
       }

       int32_t startOffset = createNewColModel ? 0 : pResult->info.rows;

@ -905,9 +920,21 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc

     }
   }

+  if (processByRowFunctionCtx && taosArrayGetSize(processByRowFunctionCtx) > 0){
+    SqlFunctionCtx** pfCtx = taosArrayGet(processByRowFunctionCtx, 0);
+    code = (*pfCtx)->fpSet.processFuncByRow(processByRowFunctionCtx);
+    if (code != TSDB_CODE_SUCCESS) {
+      goto _exit;
+    }
+    numOfRows = (*pfCtx)->resultInfo->numOfRes;
+  }
   if (!createNewColModel) {
     pResult->info.rows += numOfRows;
   }

-  return TSDB_CODE_SUCCESS;
+_exit:
+  if(processByRowFunctionCtx) {
+    taosArrayDestroy(processByRowFunctionCtx);
+    processByRowFunctionCtx = NULL;
+  }
+  return code;
 }
|
||||||
goto _error;
|
goto _error;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pOperator->exprSupp.hasWindowOrGroup = true;
|
||||||
pOperator->pTaskInfo = pTaskInfo;
|
pOperator->pTaskInfo = pTaskInfo;
|
||||||
SStorageAPI* pAPI = &pTaskInfo->storageAPI;
|
SStorageAPI* pAPI = &pTaskInfo->storageAPI;
|
||||||
|
|
||||||
|
@ -3128,6 +3129,7 @@ _error:
|
||||||
static void clearStreamSessionOperator(SStreamSessionAggOperatorInfo* pInfo) {
|
static void clearStreamSessionOperator(SStreamSessionAggOperatorInfo* pInfo) {
|
||||||
tSimpleHashClear(pInfo->streamAggSup.pResultRows);
|
tSimpleHashClear(pInfo->streamAggSup.pResultRows);
|
||||||
pInfo->streamAggSup.stateStore.streamStateSessionClear(pInfo->streamAggSup.pState);
|
pInfo->streamAggSup.stateStore.streamStateSessionClear(pInfo->streamAggSup.pState);
|
||||||
|
pInfo->clearState = false;
|
||||||
}
|
}
|
||||||
|
|
||||||
void deleteSessionWinState(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock, SSHashObj* pMapUpdate,
|
void deleteSessionWinState(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock, SSHashObj* pMapUpdate,
|
||||||
|
@ -3177,7 +3179,6 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
|
||||||
// semi session operator clear disk buffer
|
// semi session operator clear disk buffer
|
||||||
clearStreamSessionOperator(pInfo);
|
clearStreamSessionOperator(pInfo);
|
||||||
setStreamOperatorCompleted(pOperator);
|
setStreamOperatorCompleted(pOperator);
|
||||||
pInfo->clearState = false;
|
|
||||||
return NULL;
|
return NULL;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -4216,6 +4217,8 @@ SOperatorInfo* createStreamIntervalOperatorInfo(SOperatorInfo* downstream, SPhys
|
||||||
pInfo->ignoreExpiredDataSaved = false;
|
pInfo->ignoreExpiredDataSaved = false;
|
||||||
|
|
||||||
SExprSupp* pSup = &pOperator->exprSupp;
|
SExprSupp* pSup = &pOperator->exprSupp;
|
||||||
|
pSup->hasWindowOrGroup = true;
|
||||||
|
|
||||||
initBasicInfo(&pInfo->binfo, pResBlock);
|
initBasicInfo(&pInfo->binfo, pResBlock);
|
||||||
initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window);
|
initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window);
|
||||||
|
|
||||||
|
|
|
@ -1213,6 +1213,8 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SIntervalPh

   initBasicInfo(&pInfo->binfo, pResBlock);

   SExprSupp* pSup = &pOperator->exprSupp;
+  pSup->hasWindowOrGroup = true;

   pInfo->primaryTsIndex = ((SColumnNode*)pPhyNode->window.pTspk)->slotId;

   size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;

@ -1461,6 +1463,7 @@ SOperatorInfo* createStatewindowOperatorInfo(SOperatorInfo* downstream, SStateWi

     goto _error;
   }

+  pOperator->exprSupp.hasWindowOrGroup = true;
   int32_t      tsSlotId = ((SColumnNode*)pStateNode->window.pTspk)->slotId;
   SColumnNode* pColNode = (SColumnNode*)(pStateNode->pStateKey);

@ -1558,6 +1561,8 @@ SOperatorInfo* createSessionAggOperatorInfo(SOperatorInfo* downstream, SSessionW

     goto _error;
   }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
   size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
   initResultSizeInfo(&pOperator->resultInfo, 4096);

@ -1845,6 +1850,7 @@ SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream,

   SIntervalAggOperatorInfo* iaInfo = miaInfo->intervalAggOperatorInfo;
   SExprSupp*                pSup = &pOperator->exprSupp;
+  pSup->hasWindowOrGroup = true;

   int32_t code = filterInitFromNode((SNode*)pNode->window.node.pConditions, &pOperator->exprSupp.pFilterInfo, 0);
   if (code != TSDB_CODE_SUCCESS) {

@ -2147,6 +2153,7 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SMerge

   pIntervalInfo->binfo.outputTsOrder = pIntervalPhyNode->window.node.outputTsOrder;

   SExprSupp* pExprSupp = &pOperator->exprSupp;
+  pExprSupp->hasWindowOrGroup = true;

   size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
   initResultSizeInfo(&pOperator->resultInfo, 4096);
@ -50,6 +50,7 @@ typedef struct SBuiltinFuncDefinition {

   const char*                pStateFunc;
   FCreateMergeFuncParameters createMergeParaFuc;
   FEstimateReturnRows        estimateReturnRowsFunc;
+  processFuncByRow           processFuncByRow;
 } SBuiltinFuncDefinition;

 extern const SBuiltinFuncDefinition funcMgtBuiltins[];

@ -133,6 +133,7 @@ int32_t getApercentileMaxSize();

 bool    getDiffFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
 bool    diffFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo);
 int32_t diffFunction(SqlFunctionCtx* pCtx);
+int32_t diffFunctionByRow(SArray* pCtx);

 bool getDerivativeFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
 bool derivativeFuncSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo);
@@ -57,6 +57,7 @@ extern "C" {
 #define FUNC_MGT_PRIMARY_KEY_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(28)
 #define FUNC_MGT_TSMA_FUNC        FUNC_MGT_FUNC_CLASSIFICATION_MASK(29)
 #define FUNC_MGT_COUNT_LIKE_FUNC  FUNC_MGT_FUNC_CLASSIFICATION_MASK(30)  // funcs that should also return 0 when no rows found
+#define FUNC_MGT_PROCESS_BY_ROW   FUNC_MGT_FUNC_CLASSIFICATION_MASK(31)

 #define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)
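The new FUNC_MGT_PROCESS_BY_ROW bit is consulted the same way as the existing classification bits, via FUNC_MGT_TEST_MASK. A minimal standalone sketch of that test; the mask macro is assumed here to be a plain bit shift (the engine's real definition may include an offset):

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed simplification of the engine's classification-mask macro. */
#define FUNC_MGT_FUNC_CLASSIFICATION_MASK(n) (1ULL << (n))
#define FUNC_MGT_SELECT_FUNC    FUNC_MGT_FUNC_CLASSIFICATION_MASK(3) /* bit position illustrative */
#define FUNC_MGT_PROCESS_BY_ROW FUNC_MGT_FUNC_CLASSIFICATION_MASK(31)
#define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)

int main(void) {
  /* A function whose classification combines several bits, like diff below. */
  uint64_t classification = FUNC_MGT_SELECT_FUNC | FUNC_MGT_PROCESS_BY_ROW;
  printf("process-by-row? %d\n", FUNC_MGT_TEST_MASK(classification, FUNC_MGT_PROCESS_BY_ROW)); /* 1 */
  printf("select func?    %d\n", FUNC_MGT_TEST_MASK(classification, FUNC_MGT_SELECT_FUNC));    /* 1 */
  return 0;
}
```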
@@ -67,7 +67,7 @@ typedef struct tMemBucket {
   SHashObj *groupPagesMap;  // disk page map for different groups;
 } tMemBucket;

-tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval);
+tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval, bool hasWindowOrGroup);

 void tMemBucketDestroy(tMemBucket *pBucket);
|
||||||
}
|
}
|
||||||
|
|
||||||
SValueNode* pValue = (SValueNode*)pParamNode1;
|
SValueNode* pValue = (SValueNode*)pParamNode1;
|
||||||
if (pValue->datum.i != 0 && pValue->datum.i != 1) {
|
if (pValue->datum.i < 0 || pValue->datum.i > 3) {
|
||||||
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
|
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
|
||||||
"Second parameter of DIFF function should be only 0 or 1");
|
"Second parameter of DIFF function should be a number between 0 and 3.");
|
||||||
}
|
}
|
||||||
|
|
||||||
pValue->notReserved = true;
|
pValue->notReserved = true;
|
||||||
|
@ -1977,7 +1977,7 @@ static int32_t translateDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
|
||||||
if (IS_SIGNED_NUMERIC_TYPE(colType) || IS_TIMESTAMP_TYPE(colType) || TSDB_DATA_TYPE_BOOL == colType) {
|
if (IS_SIGNED_NUMERIC_TYPE(colType) || IS_TIMESTAMP_TYPE(colType) || TSDB_DATA_TYPE_BOOL == colType) {
|
||||||
resType = TSDB_DATA_TYPE_BIGINT;
|
resType = TSDB_DATA_TYPE_BIGINT;
|
||||||
} else if (IS_UNSIGNED_NUMERIC_TYPE(colType)) {
|
} else if (IS_UNSIGNED_NUMERIC_TYPE(colType)) {
|
||||||
resType = TSDB_DATA_TYPE_UBIGINT;
|
resType = TSDB_DATA_TYPE_BIGINT;
|
||||||
} else {
|
} else {
|
||||||
resType = TSDB_DATA_TYPE_DOUBLE;
|
resType = TSDB_DATA_TYPE_DOUBLE;
|
||||||
}
|
}
|
||||||
|
@@ -1989,7 +1989,7 @@ static EFuncReturnRows diffEstReturnRows(SFunctionNode* pFunc) {
   if (1 == LIST_LENGTH(pFunc->pParameterList)) {
     return FUNC_RETURN_ROWS_N_MINUS_1;
   }
-  return 1 == ((SValueNode*)nodesListGetNode(pFunc->pParameterList, 1))->datum.i ? FUNC_RETURN_ROWS_INDEFINITE
+  return 1 < ((SValueNode*)nodesListGetNode(pFunc->pParameterList, 1))->datum.i ? FUNC_RETURN_ROWS_INDEFINITE
                                                                                  : FUNC_RETURN_ROWS_N_MINUS_1;
 }
@@ -3206,7 +3206,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
   {
     .name = "diff",
     .type = FUNCTION_TYPE_DIFF,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC |
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC | FUNC_MGT_PROCESS_BY_ROW |
                       FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_CUMULATIVE_FUNC | FUNC_MGT_FORBID_SYSTABLE_FUNC | FUNC_MGT_PRIMARY_KEY_FUNC,
     .translateFunc = translateDiff,
     .getEnvFunc = getDiffFuncEnv,

@@ -3215,6 +3215,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .sprocessFunc = diffScalarFunction,
     .finalizeFunc = functionFinalize,
     .estimateReturnRowsFunc = diffEstReturnRows,
+    .processFuncByRow = diffFunctionByRow,
   },
   {
     .name = "statecount",
@@ -110,10 +110,9 @@ typedef enum {
 } EAPerctAlgoType;

 typedef struct SDiffInfo {
   bool hasPrev;
-  bool includeNull;
-  bool ignoreNegative;  // replace the ignore with case when
-  bool firstOutput;
+  bool   isFirstRow;
+  int8_t ignoreOption;  // replace the ignore with case when
   union {
     int64_t i64;
     double  d64;

@@ -122,6 +121,12 @@ typedef struct SDiffInfo {
   int64_t prevTs;
 } SDiffInfo;

+bool ignoreNegative(int8_t ignoreOption){
+  return (ignoreOption & 0x1) == 0x1;
+}
+bool ignoreNull(int8_t ignoreOption){
+  return (ignoreOption & 0x2) == 0x2;
+}
 typedef struct SSpreadInfo {
   double result;
   bool   hasResult;
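These two helpers treat the second DIFF argument as a two-bit flag set: bit 0 suppresses negative results, bit 1 skips null input rows, so option 3 does both. A quick standalone check of that decoding (not part of the diff):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same bit tests as the helpers added above. */
static bool ignoreNegative(int8_t ignoreOption) { return (ignoreOption & 0x1) == 0x1; }
static bool ignoreNull(int8_t ignoreOption)     { return (ignoreOption & 0x2) == 0x2; }

int main(void) {
  for (int8_t opt = 0; opt <= 3; ++opt) {
    printf("ignore_option=%d -> negative diffs %s, null rows %s\n", opt,
           ignoreNegative(opt) ? "suppressed" : "kept",
           ignoreNull(opt) ? "skipped" : "kept");
  }
  return 0;
}
```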
@@ -1897,7 +1902,7 @@ int32_t percentileFunction(SqlFunctionCtx* pCtx) {
       pResInfo->complete = true;
       return TSDB_CODE_SUCCESS;
     } else {
-      pInfo->pMemBucket = tMemBucketCreate(pCol->info.bytes, type, pInfo->minval, pInfo->maxval);
+      pInfo->pMemBucket = tMemBucketCreate(pCol->info.bytes, type, pInfo->minval, pInfo->maxval, pCtx->hasWindowOrGroup);
     }
   }
@@ -3100,15 +3105,14 @@ bool diffFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo) {

   SDiffInfo* pDiffInfo = GET_ROWCELL_INTERBUF(pResInfo);
   pDiffInfo->hasPrev = false;
+  pDiffInfo->isFirstRow = true;
   pDiffInfo->prev.i64 = 0;
   pDiffInfo->prevTs = -1;
   if (pCtx->numOfParams > 1) {
-    pDiffInfo->ignoreNegative = pCtx->param[1].param.i;  // TODO set correct param
+    pDiffInfo->ignoreOption = pCtx->param[1].param.i;  // TODO set correct param
   } else {
-    pDiffInfo->ignoreNegative = false;
+    pDiffInfo->ignoreOption = 0;
   }
-  pDiffInfo->includeNull = true;
-  pDiffInfo->firstOutput = false;
   return true;
 }
@ -3144,91 +3148,153 @@ static int32_t doSetPrevVal(SDiffInfo* pDiffInfo, int32_t type, const char* pv,
|
||||||
return TSDB_CODE_FUNC_FUNTION_PARA_TYPE;
|
return TSDB_CODE_FUNC_FUNTION_PARA_TYPE;
|
||||||
}
|
}
|
||||||
pDiffInfo->prevTs = ts;
|
pDiffInfo->prevTs = ts;
|
||||||
|
pDiffInfo->hasPrev = true;
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static int32_t diffIsNegtive(SDiffInfo* pDiffInfo, int32_t type, const char* pv) {
|
||||||
|
switch (type) {
|
||||||
|
case TSDB_DATA_TYPE_UINT: {
|
||||||
|
int64_t v = *(uint32_t*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_INT: {
|
||||||
|
int64_t v = *(int32_t*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_BOOL: {
|
||||||
|
int64_t v = *(bool*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_UTINYINT: {
|
||||||
|
int64_t v = *(uint8_t*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_TINYINT: {
|
||||||
|
int64_t v = *(int8_t*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_USMALLINT: {
|
||||||
|
int64_t v = *(uint16_t*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_SMALLINT: {
|
||||||
|
int64_t v = *(int16_t*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_UBIGINT:{
|
||||||
|
uint64_t v = *(uint64_t*)pv;
|
||||||
|
return v < (uint64_t)pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_TIMESTAMP:
|
||||||
|
case TSDB_DATA_TYPE_BIGINT: {
|
||||||
|
int64_t v = *(int64_t*)pv;
|
||||||
|
return v < pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_FLOAT: {
|
||||||
|
float v = *(float*)pv;
|
||||||
|
return v < pDiffInfo->prev.d64;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_DOUBLE: {
|
||||||
|
double v = *(double*)pv;
|
||||||
|
return v < pDiffInfo->prev.d64;
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void tryToSetInt64(SDiffInfo* pDiffInfo, int32_t type, SColumnInfoData* pOutput, int64_t v, int32_t pos) {
|
||||||
|
bool isNegative = v < pDiffInfo->prev.i64;
|
||||||
|
if(type == TSDB_DATA_TYPE_UBIGINT){
|
||||||
|
isNegative = (uint64_t)v < (uint64_t)pDiffInfo->prev.i64;
|
||||||
|
}
|
||||||
|
int64_t delta = v - pDiffInfo->prev.i64;
|
||||||
|
if (isNegative && ignoreNegative(pDiffInfo->ignoreOption)) {
|
||||||
|
colDataSetNull_f_s(pOutput, pos);
|
||||||
|
pOutput->hasNull = true;
|
||||||
|
} else {
|
||||||
|
colDataSetInt64(pOutput, pos, &delta);
|
||||||
|
}
|
||||||
|
pDiffInfo->prev.i64 = v;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void tryToSetDouble(SDiffInfo* pDiffInfo, SColumnInfoData* pOutput, double v, int32_t pos) {
|
||||||
|
double delta = v - pDiffInfo->prev.d64;
|
||||||
|
if (delta < 0 && ignoreNegative(pDiffInfo->ignoreOption)) {
|
||||||
|
colDataSetNull_f_s(pOutput, pos);
|
||||||
|
} else {
|
||||||
|
colDataSetDouble(pOutput, pos, &delta);
|
||||||
|
}
|
||||||
|
pDiffInfo->prev.d64 = v;
|
||||||
|
}
|
||||||
|
|
||||||
static int32_t doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SColumnInfoData* pOutput, int32_t pos,
|
static int32_t doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SColumnInfoData* pOutput, int32_t pos,
|
||||||
int64_t ts) {
|
int64_t ts) {
|
||||||
|
if (!pDiffInfo->hasPrev) {
|
||||||
|
colDataSetNull_f_s(pOutput, pos);
|
||||||
|
return doSetPrevVal(pDiffInfo, type, pv, ts);
|
||||||
|
}
|
||||||
pDiffInfo->prevTs = ts;
|
pDiffInfo->prevTs = ts;
|
||||||
switch (type) {
|
switch (type) {
|
||||||
case TSDB_DATA_TYPE_UINT:
|
case TSDB_DATA_TYPE_UINT: {
|
||||||
|
int64_t v = *(uint32_t*)pv;
|
||||||
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
|
break;
|
||||||
|
}
|
||||||
case TSDB_DATA_TYPE_INT: {
|
case TSDB_DATA_TYPE_INT: {
|
||||||
int32_t v = *(int32_t*)pv;
|
int64_t v = *(int32_t*)pv;
|
||||||
int64_t delta = v - pDiffInfo->prev.i64; // direct previous may be null
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
if (delta < 0 && pDiffInfo->ignoreNegative) {
|
break;
|
||||||
colDataSetNull_f_s(pOutput, pos);
|
}
|
||||||
} else {
|
case TSDB_DATA_TYPE_BOOL: {
|
||||||
colDataSetInt64(pOutput, pos, &delta);
|
int64_t v = *(bool*)pv;
|
||||||
}
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
pDiffInfo->prev.i64 = v;
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_UTINYINT: {
|
||||||
|
int64_t v = *(uint8_t*)pv;
|
||||||
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
case TSDB_DATA_TYPE_BOOL:
|
|
||||||
case TSDB_DATA_TYPE_UTINYINT:
|
|
||||||
case TSDB_DATA_TYPE_TINYINT: {
|
case TSDB_DATA_TYPE_TINYINT: {
|
||||||
int8_t v = *(int8_t*)pv;
|
int64_t v = *(int8_t*)pv;
|
||||||
int64_t delta = v - pDiffInfo->prev.i64; // direct previous may be null
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
if (delta < 0 && pDiffInfo->ignoreNegative) {
|
break;
|
||||||
colDataSetNull_f_s(pOutput, pos);
|
}
|
||||||
} else {
|
case TSDB_DATA_TYPE_USMALLINT:{
|
||||||
colDataSetInt64(pOutput, pos, &delta);
|
int64_t v = *(uint16_t*)pv;
|
||||||
}
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
pDiffInfo->prev.i64 = v;
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
case TSDB_DATA_TYPE_USMALLINT:
|
|
||||||
case TSDB_DATA_TYPE_SMALLINT: {
|
case TSDB_DATA_TYPE_SMALLINT: {
|
||||||
int16_t v = *(int16_t*)pv;
|
int64_t v = *(int16_t*)pv;
|
||||||
int64_t delta = v - pDiffInfo->prev.i64; // direct previous may be null
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
if (delta < 0 && pDiffInfo->ignoreNegative) {
|
|
||||||
colDataSetNull_f_s(pOutput, pos);
|
|
||||||
} else {
|
|
||||||
colDataSetInt64(pOutput, pos, &delta);
|
|
||||||
}
|
|
||||||
pDiffInfo->prev.i64 = v;
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
case TSDB_DATA_TYPE_TIMESTAMP:
|
case TSDB_DATA_TYPE_TIMESTAMP:
|
||||||
case TSDB_DATA_TYPE_UBIGINT:
|
case TSDB_DATA_TYPE_UBIGINT:
|
||||||
case TSDB_DATA_TYPE_BIGINT: {
|
case TSDB_DATA_TYPE_BIGINT: {
|
||||||
int64_t v = *(int64_t*)pv;
|
int64_t v = *(int64_t*)pv;
|
||||||
int64_t delta = v - pDiffInfo->prev.i64; // direct previous may be null
|
tryToSetInt64(pDiffInfo, type, pOutput, v, pos);
|
||||||
if (delta < 0 && pDiffInfo->ignoreNegative) {
|
|
||||||
colDataSetNull_f_s(pOutput, pos);
|
|
||||||
} else {
|
|
||||||
colDataSetInt64(pOutput, pos, &delta);
|
|
||||||
}
|
|
||||||
pDiffInfo->prev.i64 = v;
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
case TSDB_DATA_TYPE_FLOAT: {
|
case TSDB_DATA_TYPE_FLOAT: {
|
||||||
float v = *(float*)pv;
|
double v = *(float*)pv;
|
||||||
double delta = v - pDiffInfo->prev.d64; // direct previous may be null
|
tryToSetDouble(pDiffInfo, pOutput, v, pos);
|
||||||
if ((delta < 0 && pDiffInfo->ignoreNegative) || isinf(delta) || isnan(delta)) { // check for overflow
|
|
||||||
colDataSetNull_f_s(pOutput, pos);
|
|
||||||
} else {
|
|
||||||
colDataSetDouble(pOutput, pos, &delta);
|
|
||||||
}
|
|
||||||
pDiffInfo->prev.d64 = v;
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
case TSDB_DATA_TYPE_DOUBLE: {
|
case TSDB_DATA_TYPE_DOUBLE: {
|
||||||
double v = *(double*)pv;
|
double v = *(double*)pv;
|
||||||
double delta = v - pDiffInfo->prev.d64; // direct previous may be null
|
tryToSetDouble(pDiffInfo, pOutput, v, pos);
|
||||||
if ((delta < 0 && pDiffInfo->ignoreNegative) || isinf(delta) || isnan(delta)) { // check for overflow
|
|
||||||
colDataSetNull_f_s(pOutput, pos);
|
|
||||||
} else {
|
|
||||||
colDataSetDouble(pOutput, pos, &delta);
|
|
||||||
}
|
|
||||||
pDiffInfo->prev.d64 = v;
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
default:
|
default:
|
||||||
return TSDB_CODE_FUNC_FUNTION_PARA_TYPE;
|
return TSDB_CODE_FUNC_FUNTION_PARA_TYPE;
|
||||||
}
|
}
|
||||||
|
pDiffInfo->hasPrev = true;
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
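The TSDB_DATA_TYPE_UBIGINT special case in tryToSetInt64 matters because a plain signed comparison misclassifies large unsigned values stored in the union's i64 slot. A small self-contained illustration, with arbitrarily chosen values:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint64_t prevU = 1ULL << 63;      /* previous UBIGINT sample: 2^63 */
  uint64_t currU = 1;               /* current sample: the series dropped sharply */
  int64_t  prev  = (int64_t)prevU;  /* how the value sits in SDiffInfo's i64 slot */
  int64_t  curr  = (int64_t)currU;

  printf("signed   curr < prev: %d\n", curr < prev);                      /* 0: wrong   */
  printf("unsigned curr < prev: %d\n", (uint64_t)curr < (uint64_t)prev);  /* 1: correct */
  return 0;
}
```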
@ -3271,71 +3337,155 @@ bool funcInputGetNextRowIndex(SInputColumnInfoData* pInput, int32_t from, bool f
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t diffFunction(SqlFunctionCtx* pCtx) {
|
int32_t diffResultIsNull(SqlFunctionCtx* pCtx, SFuncInputRow* pRow){
|
||||||
|
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||||
|
SDiffInfo* pDiffInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||||
|
|
||||||
|
if (pRow->isDataNull || !pDiffInfo->hasPrev ) {
|
||||||
|
return true;
|
||||||
|
} else if (ignoreNegative(pDiffInfo->ignoreOption)){
|
||||||
|
return diffIsNegtive(pDiffInfo, pCtx->input.pData[0]->info.type, pRow->pData);
|
||||||
|
}
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
bool isFirstRow(SqlFunctionCtx* pCtx, SFuncInputRow* pRow) {
|
||||||
|
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||||
|
SDiffInfo* pDiffInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||||
|
return pDiffInfo->isFirstRow;
|
||||||
|
}
|
||||||
|
|
||||||
|
int32_t trySetPreVal(SqlFunctionCtx* pCtx, SFuncInputRow* pRow) {
|
||||||
|
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||||
|
SDiffInfo* pDiffInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||||
|
pDiffInfo->isFirstRow = false;
|
||||||
|
if (pRow->isDataNull) {
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
SInputColumnInfoData* pInput = &pCtx->input;
|
||||||
|
SColumnInfoData* pInputCol = pInput->pData[0];
|
||||||
|
int8_t inputType = pInputCol->info.type;
|
||||||
|
|
||||||
|
char* pv = pRow->pData;
|
||||||
|
return doSetPrevVal(pDiffInfo, inputType, pv, pRow->ts);
|
||||||
|
}
|
||||||
|
|
||||||
|
int32_t setDoDiffResult(SqlFunctionCtx* pCtx, SFuncInputRow* pRow, int32_t pos) {
|
||||||
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||||
SDiffInfo* pDiffInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
SDiffInfo* pDiffInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||||
|
|
||||||
SInputColumnInfoData* pInput = &pCtx->input;
|
SInputColumnInfoData* pInput = &pCtx->input;
|
||||||
SColumnInfoData* pInputCol = pInput->pData[0];
|
SColumnInfoData* pInputCol = pInput->pData[0];
|
||||||
int8_t inputType = pInputCol->info.type;
|
int8_t inputType = pInputCol->info.type;
|
||||||
|
SColumnInfoData* pOutput = (SColumnInfoData*)pCtx->pOutput;
|
||||||
|
|
||||||
TSKEY* tsList = (int64_t*)pInput->pPTS->pData;
|
if (pRow->isDataNull) {
|
||||||
|
colDataSetNull_f_s(pOutput, pos);
|
||||||
|
pOutput->hasNull = true;
|
||||||
|
|
||||||
int32_t numOfElems = 0;
|
// handle selectivity
|
||||||
int32_t startOffset = pCtx->offset;
|
if (pCtx->subsidiaries.num > 0) {
|
||||||
|
appendSelectivityCols(pCtx, pRow->block, pRow->rowIndex, pos);
|
||||||
SColumnInfoData* pOutput = (SColumnInfoData*)pCtx->pOutput;
|
|
||||||
|
|
||||||
funcInputUpdate(pCtx);
|
|
||||||
|
|
||||||
SFuncInputRow row = {0};
|
|
||||||
while (funcInputGetNextRow(pCtx, &row)) {
|
|
||||||
int32_t pos = startOffset + numOfElems;
|
|
||||||
|
|
||||||
if (row.isDataNull) {
|
|
||||||
if (pDiffInfo->includeNull) {
|
|
||||||
colDataSetNull_f_s(pOutput, pos);
|
|
||||||
|
|
||||||
// handle selectivity
|
|
||||||
if (pCtx->subsidiaries.num > 0) {
|
|
||||||
appendSelectivityCols(pCtx, row.block, row.rowIndex, pos);
|
|
||||||
}
|
|
||||||
|
|
||||||
numOfElems += 1;
|
|
||||||
}
|
|
||||||
continue;
|
|
||||||
}
|
}
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
char* pv = row.pData;
|
}
|
||||||
|
|
||||||
if (pDiffInfo->hasPrev) {
|
char* pv = pRow->pData;
|
||||||
if (row.ts == pDiffInfo->prevTs) {
|
|
||||||
return TSDB_CODE_FUNC_DUP_TIMESTAMP;
|
if (pRow->ts == pDiffInfo->prevTs) {
|
||||||
}
|
return TSDB_CODE_FUNC_DUP_TIMESTAMP;
|
||||||
int32_t code = doHandleDiff(pDiffInfo, inputType, pv, pOutput, pos, row.ts);
|
}
|
||||||
if (code != TSDB_CODE_SUCCESS) {
|
int32_t code = doHandleDiff(pDiffInfo, inputType, pv, pOutput, pos, pRow->ts);
|
||||||
return code;
|
if (code != TSDB_CODE_SUCCESS) {
|
||||||
}
|
return code;
|
||||||
// handle selectivity
|
}
|
||||||
if (pCtx->subsidiaries.num > 0) {
|
// handle selectivity
|
||||||
appendSelectivityCols(pCtx, row.block, row.rowIndex, pos);
|
if (pCtx->subsidiaries.num > 0) {
|
||||||
}
|
appendSelectivityCols(pCtx, pRow->block, pRow->rowIndex, pos);
|
||||||
|
|
||||||
numOfElems++;
|
|
||||||
} else {
|
|
||||||
int32_t code = doSetPrevVal(pDiffInfo, inputType, pv, row.ts);
|
|
||||||
if (code != TSDB_CODE_SUCCESS) {
|
|
||||||
return code;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pDiffInfo->hasPrev = true;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
pResInfo->numOfRes = numOfElems;
|
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
int32_t diffFunction(SqlFunctionCtx* pCtx) {
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
int32_t diffFunctionByRow(SArray* pCtxArray) {
|
||||||
|
int32_t code = TSDB_CODE_SUCCESS;
|
||||||
|
int diffColNum = pCtxArray->size;
|
||||||
|
if(diffColNum == 0) {
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
int32_t numOfElems = 0;
|
||||||
|
|
||||||
|
SArray* pRows = taosArrayInit_s(sizeof(SFuncInputRow), diffColNum);
|
||||||
|
|
||||||
|
bool keepNull = false;
|
||||||
|
for (int i = 0; i < diffColNum; ++i) {
|
||||||
|
SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i);
|
||||||
|
funcInputUpdate(pCtx);
|
||||||
|
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||||
|
SDiffInfo* pDiffInfo = GET_ROWCELL_INTERBUF(pResInfo);
|
||||||
|
if (!ignoreNull(pDiffInfo->ignoreOption)) {
|
||||||
|
keepNull = true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
SqlFunctionCtx* pCtx0 = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, 0);
|
||||||
|
SFuncInputRow* pRow0 = (SFuncInputRow*)taosArrayGet(pRows, 0);
|
||||||
|
int32_t startOffset = pCtx0->offset;
|
||||||
|
while (funcInputGetNextRow(pCtx0, pRow0)) {
|
||||||
|
bool hasNotNullValue = !diffResultIsNull(pCtx0, pRow0);
|
||||||
|
for (int i = 1; i < diffColNum; ++i) {
|
||||||
|
SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i);
|
||||||
|
SFuncInputRow* pRow = (SFuncInputRow*)taosArrayGet(pRows, i);
|
||||||
|
if(!funcInputGetNextRow(pCtx, pRow)) {
|
||||||
|
// rows are not equal
|
||||||
|
code = TSDB_CODE_QRY_EXECUTOR_INTERNAL_ERROR;
|
||||||
|
goto _exit;
|
||||||
|
}
|
||||||
|
if (!diffResultIsNull(pCtx, pRow)) {
|
||||||
|
hasNotNullValue = true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
int32_t pos = startOffset + numOfElems;
|
||||||
|
|
||||||
|
bool newRow = false;
|
||||||
|
for (int i = 0; i < diffColNum; ++i) {
|
||||||
|
SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i);
|
||||||
|
SFuncInputRow* pRow = (SFuncInputRow*)taosArrayGet(pRows, i);
|
||||||
|
if ((keepNull || hasNotNullValue) && !isFirstRow(pCtx, pRow)){
|
||||||
|
code = setDoDiffResult(pCtx, pRow, pos);
|
||||||
|
if (code != TSDB_CODE_SUCCESS) {
|
||||||
|
goto _exit;
|
||||||
|
}
|
||||||
|
newRow = true;
|
||||||
|
} else {
|
||||||
|
code = trySetPreVal(pCtx, pRow);
|
||||||
|
if (code != TSDB_CODE_SUCCESS) {
|
||||||
|
goto _exit;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if (newRow) ++numOfElems;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (int i = 0; i < diffColNum; ++i) {
|
||||||
|
SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i);
|
||||||
|
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||||
|
pResInfo->numOfRes = numOfElems;
|
||||||
|
}
|
||||||
|
|
||||||
|
_exit:
|
||||||
|
if (pRows) {
|
||||||
|
taosArrayDestroy(pRows);
|
||||||
|
pRows = NULL;
|
||||||
|
}
|
||||||
|
return code;
|
||||||
|
}
|
||||||
|
|
||||||
int32_t getTopBotInfoSize(int64_t numOfItems) { return sizeof(STopBotRes) + numOfItems * sizeof(STopBotResItem); }
|
int32_t getTopBotInfoSize(int64_t numOfItems) { return sizeof(STopBotRes) + numOfItems * sizeof(STopBotResItem); }
|
||||||
|
|
||||||
bool getTopBotFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
|
bool getTopBotFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
|
||||||
|
|
|
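diffFunctionByRow walks all DIFF columns of a query in lockstep: a result row is emitted only when at least one column would produce a non-null diff, or when some column keeps nulls, so the output columns stay aligned at the same position. A stripped-down model of that gating, with plain arrays standing in for SqlFunctionCtx/SFuncInputRow:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model: two diff columns over 4 rows; isNull[c][r] says whether the diff
 * result for column c at row r would be null (first row, null input, or a
 * suppressed negative). keepNull mirrors the case where some column uses
 * ignore_option 0/1 (nulls kept in the output). */
int main(void) {
  bool isNull[2][4] = {{true, false, true,  true},   /* column 0 */
                       {true, true,  false, true}};  /* column 1 */
  bool keepNull = false;  /* all columns use ignore_option 2 or 3 */
  int  numOfElems = 0;

  for (int r = 0; r < 4; ++r) {
    bool hasNotNullValue = false;
    for (int c = 0; c < 2; ++c) {
      if (!isNull[c][r]) hasNotNullValue = true;
    }
    if (r > 0 && (keepNull || hasNotNullValue)) {  /* r == 0 plays isFirstRow */
      printf("emit row %d at output pos %d\n", r, numOfElems);
      ++numOfElems;  /* every diff column writes at this same position */
    }
  }
  /* rows 1 and 2 are emitted; row 3 (null in both columns) is skipped */
  return 0;
}
```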
@@ -141,6 +141,7 @@ int32_t fmGetFuncExecFuncs(int32_t funcId, SFuncExecFuncs* pFpSet) {
   pFpSet->process = funcMgtBuiltins[funcId].processFunc;
   pFpSet->finalize = funcMgtBuiltins[funcId].finalizeFunc;
   pFpSet->combine = funcMgtBuiltins[funcId].combineFunc;
+  pFpSet->processFuncByRow = funcMgtBuiltins[funcId].processFuncByRow;
   return TSDB_CODE_SUCCESS;
 }

@@ -274,6 +275,8 @@ bool fmIsBlockDistFunc(int32_t funcId) {
   return FUNCTION_TYPE_BLOCK_DIST == funcMgtBuiltins[funcId].type;
 }

+bool fmIsProcessByRowFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_PROCESS_BY_ROW); }
+
 bool fmIsIgnoreNullFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_IGNORE_NULL_FUNC); }

 void fmFuncMgtDestroy() {
@@ -238,14 +238,20 @@ static void resetSlotInfo(tMemBucket *pBucket) {
   }
 }

-tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval) {
+tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval, bool hasWindowOrGroup) {
   tMemBucket *pBucket = (tMemBucket *)taosMemoryCalloc(1, sizeof(tMemBucket));
   if (pBucket == NULL) {
     return NULL;
   }

-  pBucket->numOfSlots = DEFAULT_NUM_OF_SLOT;
-  pBucket->bufPageSize = 16384 * 4;  // 16k per page
+  if (hasWindowOrGroup) {
+    // With window or group by, we need to shrink page size and reduce page num to save memory.
+    pBucket->numOfSlots = DEFAULT_NUM_OF_SLOT / 8;  // 128 bucket
+    pBucket->bufPageSize = 4096;                    // 4k per page
+  } else {
+    pBucket->numOfSlots = DEFAULT_NUM_OF_SLOT;
+    pBucket->bufPageSize = 16384 * 4;  // 16k per page
+  }

   pBucket->type = dataType;
   pBucket->bytes = nElemSize;
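A rough sketch of the buffer footprint implied by the two branches above, taking the 16384 * 4 literal at face value (64 KiB per page, despite the "16k" comment) and assuming DEFAULT_NUM_OF_SLOT is 1024, which matches the "/ 8 -> 128 bucket" comment; one page per slot is assumed for the floor estimate:

```c
#include <stdio.h>

#define DEFAULT_NUM_OF_SLOT 1024 /* assumed, consistent with the 128-bucket comment */

int main(void) {
  long plain  = (long)DEFAULT_NUM_OF_SLOT * (16384 * 4); /* no window/group       */
  long shrunk = (long)(DEFAULT_NUM_OF_SLOT / 8) * 4096;  /* window/group present  */
  printf("plain  : %ld KiB\n", plain / 1024);  /* 65536 KiB (64 MiB) */
  printf("shrunk : %ld KiB\n", shrunk / 1024); /* 512 KiB            */
  return 0;
}
```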
@@ -883,6 +883,8 @@ static uint8_t getPrecisionFromCurrStmt(SNode* pCurrStmt, uint8_t defaultVal) {
   if (isDeleteStmt(pCurrStmt)) {
     return ((SDeleteStmt*)pCurrStmt)->precision;
   }
+  if (pCurrStmt && nodeType(pCurrStmt) == QUERY_NODE_CREATE_TSMA_STMT)
+    return ((SCreateTSMAStmt*)pCurrStmt)->precision;
   return defaultVal;
 }

@@ -2194,11 +2196,17 @@ static int32_t translateIndefiniteRowsFunc(STranslateContext* pCxt, SFunctionNod
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
   }
   SSelectStmt* pSelect = (SSelectStmt*)pCxt->pCurrStmt;
-  if (pSelect->hasAggFuncs || pSelect->hasMultiRowsFunc ||
-      (pSelect->hasIndefiniteRowsFunc &&
-       (FUNC_RETURN_ROWS_INDEFINITE == pSelect->returnRows || pSelect->returnRows != fmGetFuncReturnRows(pFunc)))) {
+  if (pSelect->hasAggFuncs || pSelect->hasMultiRowsFunc) {
+    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
+  }
+  if (pSelect->hasIndefiniteRowsFunc &&
+      (FUNC_RETURN_ROWS_INDEFINITE == pSelect->returnRows || pSelect->returnRows != fmGetFuncReturnRows(pFunc)) &&
+      (pSelect->lastProcessByRowFuncId == -1 || !fmIsProcessByRowFunc(pFunc->funcId))) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
   }
+  if (pSelect->lastProcessByRowFuncId != -1 && pSelect->lastProcessByRowFuncId != pFunc->funcId) {
+    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_DIFFERENT_BY_ROW_FUNC);
+  }
   if (NULL != pSelect->pWindow || NULL != pSelect->pGroupByList) {
     return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
                                    "%s function is not supported in window query or group query", pFunc->functionName);

@@ -2462,6 +2470,9 @@ static void setFuncClassification(SNode* pCurrStmt, SFunctionNode* pFunc) {
     } else if (fmIsInterpFunc(pFunc->funcId)) {
       pSelect->returnRows = fmGetFuncReturnRows(pFunc);
     }
+    if (fmIsProcessByRowFunc(pFunc->funcId)) {
+      pSelect->lastProcessByRowFuncId = pFunc->funcId;
+    }

     pSelect->hasMultiRowsFunc = pSelect->hasMultiRowsFunc ? true : fmIsMultiRowsFunc(pFunc->funcId);
     if (fmIsSelectFunc(pFunc->funcId)) {

@@ -3398,6 +3409,7 @@ static int32_t checkIsEmptyResult(STranslateContext* pCxt, SSelectStmt* pSelect)
 static int32_t resetSelectFuncNumWithoutDup(SSelectStmt* pSelect) {
   if (pSelect->selectFuncNum <= 1) return TSDB_CODE_SUCCESS;
   pSelect->selectFuncNum = 0;
+  pSelect->lastProcessByRowFuncId = -1;
   SNodeList* pNodeList = nodesMakeList();
   int32_t    code = nodesCollectSelectFuncs(pSelect, SQL_CLAUSE_FROM, NULL, fmIsSelectFunc, pNodeList);
   if (TSDB_CODE_SUCCESS != code) {
@@ -11125,22 +11137,41 @@ static int32_t rewriteTSMAFuncs(STranslateContext* pCxt, SCreateTSMAStmt* pStmt,

 static int32_t buildCreateTSMAReq(STranslateContext* pCxt, SCreateTSMAStmt* pStmt, SMCreateSmaReq* pReq,
                                   SName* useTbName) {
   SName name;
+  SDbCfgInfo pDbInfo = {0};
+  int32_t    code = TSDB_CODE_SUCCESS;
   tNameExtractFullName(toName(pCxt->pParseCxt->acctId, pStmt->dbName, pStmt->tsmaName, &name), pReq->name);
   memset(&name, 0, sizeof(SName));
   toName(pCxt->pParseCxt->acctId, pStmt->dbName, pStmt->tableName, useTbName);
   tNameExtractFullName(useTbName, pReq->stb);
   pReq->igExists = pStmt->ignoreExists;

+  code = getDBCfg(pCxt, pStmt->dbName, &pDbInfo);
+  if (code != TSDB_CODE_SUCCESS) {
+    return code;
+  }
+  pStmt->precision = pDbInfo.precision;
+  code = translateValue(pCxt, (SValueNode*)pStmt->pOptions->pInterval);
+  if (code == DEAL_RES_ERROR) {
+    return code;
+  }
   pReq->interval = ((SValueNode*)pStmt->pOptions->pInterval)->datum.i;
-  pReq->intervalUnit = TIME_UNIT_MILLISECOND;
+  pReq->intervalUnit = ((SValueNode*)pStmt->pOptions->pInterval)->unit;

 #define TSMA_MIN_INTERVAL_MS 1000 * 60                              // 1m
-#define TSMA_MAX_INTERVAL_MS (60 * 60 * 1000)                       // 1h
-  if (pReq->interval > TSMA_MAX_INTERVAL_MS || pReq->interval < TSMA_MIN_INTERVAL_MS) {
-    return TSDB_CODE_TSMA_INVALID_INTERVAL;
-  }
-
-  int32_t code = TSDB_CODE_SUCCESS;
+#define TSMA_MAX_INTERVAL_MS (60UL * 60UL * 1000UL * 24UL * 365UL)  // 1y
+  if (!IS_CALENDAR_TIME_DURATION(pReq->intervalUnit)) {
+    int64_t factor = TSDB_TICK_PER_SECOND(pDbInfo.precision) / TSDB_TICK_PER_SECOND(TSDB_TIME_PRECISION_MILLI);
+    if (pReq->interval > TSMA_MAX_INTERVAL_MS * factor || pReq->interval < TSMA_MIN_INTERVAL_MS * factor) {
+      return TSDB_CODE_TSMA_INVALID_INTERVAL;
+    }
+  } else {
+    if (pReq->intervalUnit == TIME_UNIT_MONTH && (pReq->interval < 1 || pReq->interval > 12))
+      return TSDB_CODE_TSMA_INVALID_INTERVAL;
+    if (pReq->intervalUnit == TIME_UNIT_YEAR && (pReq->interval != 1))
+      return TSDB_CODE_TSMA_INVALID_INTERVAL;
+  }

   STableMeta*     pTableMeta = NULL;
   STableTSMAInfo* pRecursiveTsma = NULL;

@@ -11153,7 +11184,8 @@ static int32_t buildCreateTSMAReq(STranslateContext* pCxt, SCreateTSMAStmt* pStm
     pReq->recursiveTsma = true;
     tNameExtractFullName(useTbName, pReq->baseTsmaName);
     SValueNode* pInterval = (SValueNode*)pStmt->pOptions->pInterval;
-    if (pRecursiveTsma->interval < pInterval->datum.i && pInterval->datum.i % pRecursiveTsma->interval == 0) {
+    if (checkRecursiveTsmaInterval(pRecursiveTsma->interval, pRecursiveTsma->unit, pInterval->datum.i,
+                                   pInterval->unit, pDbInfo.precision, true)) {
     } else {
       code = TSDB_CODE_TSMA_INVALID_PARA;
     }

@@ -11224,7 +11256,8 @@ static int32_t buildCreateTSMAReq(STranslateContext* pCxt, SCreateTSMAStmt* pStm
 }

 static int32_t translateCreateTSMA(STranslateContext* pCxt, SCreateTSMAStmt* pStmt) {
-  int32_t code = doTranslateValue(pCxt, (SValueNode*)pStmt->pOptions->pInterval);
+  pCxt->pCurrStmt = (SNode*)pStmt;
+  int32_t code = TSDB_CODE_SUCCESS;

   SName useTbName = {0};
   if (code == TSDB_CODE_SUCCESS) {
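With the new bounds, fixed-length TSMA intervals are validated in the database's own precision: the millisecond limits are scaled by the tick ratio between the DB precision and milliseconds. A sketch of that arithmetic, assuming the usual tick values (1 000 ticks/s for ms, 1 000 000 for us, 1 000 000 000 for ns):

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed tick-per-second values for the three TDengine precisions. */
static int64_t ticksPerSecond(char p) {
  return p == 'm' ? 1000LL : (p == 'u' ? 1000000LL : 1000000000LL);
}

int main(void) {
  const int64_t minMs = 1000LL * 60;                 /* 1m, TSMA_MIN_INTERVAL_MS */
  const int64_t maxMs = 60LL * 60 * 1000 * 24 * 365; /* 1y, TSMA_MAX_INTERVAL_MS */
  for (const char *p = "mun"; *p; ++p) {
    int64_t factor = ticksPerSecond(*p) / ticksPerSecond('m');
    printf("precision '%cs': interval must lie in [%lld, %lld] ticks\n",
           *p, (long long)(minMs * factor), (long long)(maxMs * factor));
  }
  return 0;
}
```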
@@ -221,6 +221,8 @@ static char* getSyntaxErrFormat(int32_t errCode) {
       return "Table name:%s duplicated";
     case TSDB_CODE_PAR_TAG_NAME_DUPLICATED:
       return "Tag name:%s duplicated";
+    case TSDB_CODE_PAR_NOT_ALLOWED_DIFFERENT_BY_ROW_FUNC:
+      return "Some functions cannot appear in the select list at the same time";
     default:
       return "Unknown error";
   }

@@ -772,6 +774,7 @@ SNode* createSelectStmtImpl(bool isDistinct, SNodeList* pProjectionList, SNode*
   select->onlyHasKeepOrderFunc = true;
   select->timeRange = TSWINDOW_INITIALIZER;
   select->pHint = pHint;
+  select->lastProcessByRowFuncId = -1;
   return (SNode*)select;
 }
@@ -6018,6 +6018,7 @@ typedef struct STSMAOptUsefulTsma {
   SArray*  pTsmaScanCols;                      // SArray<int32_t> index of tsmaFuncs array
   char     targetTbName[TSDB_TABLE_NAME_LEN];  // the scanning table name, used only when pTsma is not NULL
   uint64_t targetTbUid;                        // the scanning table uid, used only when pTsma is not NULL
+  int8_t   precision;
 } STSMAOptUsefulTsma;

 typedef struct STSMAOptCtx {

@@ -6085,12 +6086,19 @@ static void clearTSMAOptCtx(STSMAOptCtx* pTsmaOptCtx) {
   taosMemoryFreeClear(pTsmaOptCtx->queryInterval);
 }

-static bool tsmaOptCheckValidInterval(int64_t tsmaInterval, int8_t tsmaIntevalUnit, const STSMAOptCtx* pTsmaOptCtx) {
+static bool tsmaOptCheckValidInterval(int64_t tsmaInterval, int8_t unit, const STSMAOptCtx* pTsmaOptCtx) {
   if (!pTsmaOptCtx->queryInterval) return true;

-  bool validInterval = pTsmaOptCtx->queryInterval->interval % tsmaInterval == 0;
-  bool validSliding = pTsmaOptCtx->queryInterval->sliding % tsmaInterval == 0;
-  bool validOffset = pTsmaOptCtx->queryInterval->offset % tsmaInterval == 0;
+  bool validInterval = checkRecursiveTsmaInterval(tsmaInterval, unit, pTsmaOptCtx->queryInterval->interval,
+                                                  pTsmaOptCtx->queryInterval->intervalUnit,
+                                                  pTsmaOptCtx->queryInterval->precision, false);
+  bool validSliding =
+      checkRecursiveTsmaInterval(tsmaInterval, unit, pTsmaOptCtx->queryInterval->sliding,
+                                 pTsmaOptCtx->queryInterval->slidingUnit, pTsmaOptCtx->queryInterval->precision, false);
+  bool validOffset =
+      pTsmaOptCtx->queryInterval->offset == 0 ||
+      checkRecursiveTsmaInterval(tsmaInterval, unit, pTsmaOptCtx->queryInterval->offset,
+                                 pTsmaOptCtx->queryInterval->offsetUnit, pTsmaOptCtx->queryInterval->precision, false);
   return validInterval && validSliding && validOffset;
 }

@@ -6171,7 +6179,8 @@ static bool tsmaOptCheckTags(STSMAOptCtx* pCtx, const STableTSMAInfo* pTsma) {
 }

 static int32_t tsmaOptFilterTsmas(STSMAOptCtx* pTsmaOptCtx) {
-  STSMAOptUsefulTsma usefulTsma = {.pTsma = NULL, .scanRange = {.skey = TSKEY_MIN, .ekey = TSKEY_MAX}};
+  STSMAOptUsefulTsma usefulTsma = {
+      .pTsma = NULL, .scanRange = {.skey = TSKEY_MIN, .ekey = TSKEY_MAX}, .precision = pTsmaOptCtx->precision};
   SArray*            pTsmaScanCols = NULL;

   for (int32_t i = 0; i < pTsmaOptCtx->pTsmas->size; ++i) {

@@ -6208,31 +6217,26 @@ static int32_t tsmaOptFilterTsmas(STSMAOptCtx* pTsmaOptCtx) {
 }

 static int32_t tsmaInfoCompWithIntervalDesc(const void* pLeft, const void* pRight) {
+  const int64_t factors[3] = {NANOSECOND_PER_MSEC, NANOSECOND_PER_USEC, 1};
   const STSMAOptUsefulTsma *p = pLeft, *q = pRight;
   int64_t pInterval = p->pTsma->interval, qInterval = q->pTsma->interval;
-  int32_t code = getDuration(pInterval, p->pTsma->unit, &pInterval, TSDB_TIME_PRECISION_MILLI);
-  ASSERT(code == TSDB_CODE_SUCCESS);
-  code = getDuration(qInterval, q->pTsma->unit, &qInterval, TSDB_TIME_PRECISION_MILLI);
-  ASSERT(code == TSDB_CODE_SUCCESS);
+  int8_t  pUnit = p->pTsma->unit, qUnit = q->pTsma->unit;
+  if (TIME_UNIT_MONTH == pUnit) {
+    pInterval = pInterval * 31 * (NANOSECOND_PER_DAY / factors[p->precision]);
+  } else if (TIME_UNIT_YEAR == pUnit) {
+    pInterval = pInterval * 365 * (NANOSECOND_PER_DAY / factors[p->precision]);
+  }
+  if (TIME_UNIT_MONTH == qUnit) {
+    qInterval = qInterval * 31 * (NANOSECOND_PER_DAY / factors[q->precision]);
+  } else if (TIME_UNIT_YEAR == qUnit) {
+    qInterval = qInterval * 365 * (NANOSECOND_PER_DAY / factors[q->precision]);
+  }
+
   if (pInterval > qInterval) return -1;
   if (pInterval < qInterval) return 1;
   return 0;
 }

-static const STSMAOptUsefulTsma* tsmaOptFindUsefulTsma(const SArray* pUsefulTsmas, int32_t startIdx,
-                                                       int64_t alignInterval, int64_t alignInterval2,
-                                                       int8_t precision) {
-  int64_t tsmaInterval;
-  for (int32_t i = startIdx; i < pUsefulTsmas->size; ++i) {
-    const STSMAOptUsefulTsma* pUsefulTsma = taosArrayGet(pUsefulTsmas, i);
-    getDuration(pUsefulTsma->pTsma->interval, pUsefulTsma->pTsma->unit, &tsmaInterval, precision);
-    if (alignInterval % tsmaInterval == 0 && alignInterval2 % tsmaInterval == 0) {
-      return pUsefulTsma;
-    }
-  }
-  return NULL;
-}
-
 static void tsmaOptInitIntervalFromTsma(SInterval* pInterval, const STableTSMAInfo* pTsma, int8_t precision) {
   pInterval->interval = pTsma->interval;
   pInterval->intervalUnit = pTsma->unit;

@@ -6243,14 +6247,28 @@ static void tsmaOptInitIntervalFromTsma(SInterval* pInterval, const STableTSMAIn
   pInterval->precision = precision;
 }

+static const STSMAOptUsefulTsma* tsmaOptFindUsefulTsma(const SArray* pUsefulTsmas, int32_t startIdx,
+                                                       int64_t startAlignInterval, int64_t endAlignInterval,
+                                                       int8_t precision) {
+  SInterval tsmaInterval;
+  for (int32_t i = startIdx; i < pUsefulTsmas->size; ++i) {
+    const STSMAOptUsefulTsma* pUsefulTsma = taosArrayGet(pUsefulTsmas, i);
+    tsmaOptInitIntervalFromTsma(&tsmaInterval, pUsefulTsma->pTsma, precision);
+    if (taosTimeTruncate(startAlignInterval, &tsmaInterval) == startAlignInterval &&
+        taosTimeTruncate(endAlignInterval, &tsmaInterval) == endAlignInterval) {
+      return pUsefulTsma;
+    }
+  }
+  return NULL;
+}
+
 static void tsmaOptSplitWindows(STSMAOptCtx* pTsmaOptCtx, const STimeWindow* pScanRange) {
   bool    needTailWindow = false;
   bool    isSkeyAlignedWithTsma = true, isEkeyAlignedWithTsma = true;
   int64_t winSkey = TSKEY_MIN, winEkey = TSKEY_MAX;
   int64_t startOfSkeyFirstWin = pScanRange->skey, endOfSkeyFirstWin;
   int64_t startOfEkeyFirstWin = pScanRange->ekey, endOfEkeyFirstWin;
-  int64_t tsmaInterval;
-  SInterval interval;
+  SInterval interval, tsmaInterval;
   STimeWindow scanRange = *pScanRange;
   const SInterval*          pInterval = pTsmaOptCtx->queryInterval;
   const STSMAOptUsefulTsma* pUsefulTsma = taosArrayGet(pTsmaOptCtx->pUsefulTsmas, 0);
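Replacing the modulo test with taosTimeTruncate makes the alignment check correct for calendar units (months, years) that have no fixed length. For fixed-length units the two are equivalent; a minimal model using plain modular arithmetic as the stand-in for truncation:

```c
#include <stdint.h>
#include <stdio.h>

/* For fixed-length intervals, truncate(ts) == ts - ts % interval, so the
 * "truncate(ts) == ts" test from tsmaOptFindUsefulTsma reduces to
 * ts % interval == 0. Calendar units (1n, 1y) need the real taosTimeTruncate,
 * which this model deliberately skips. */
static int64_t truncateTs(int64_t ts, int64_t interval) { return ts - ts % interval; }

int main(void) {
  int64_t tsmaInterval = 60000;  /* 1m in ms, an illustrative value */
  int64_t skey = 120000, ekey = 150000 - 1;
  printf("skey aligned:   %d\n", truncateTs(skey, tsmaInterval) == skey);         /* 1 */
  printf("ekey+1 aligned: %d\n", truncateTs(ekey + 1, tsmaInterval) == ekey + 1); /* 0 */
  return 0;
}
```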
@@ -6263,14 +6281,14 @@ static void tsmaOptSplitWindows(STSMAOptCtx* pTsmaOptCtx, const STimeWindow* pSc
     pInterval = &interval;
   }

-  tsmaInterval = pTsma->interval;
+  tsmaOptInitIntervalFromTsma(&tsmaInterval, pTsma, pTsmaOptCtx->precision);

   // check for head windows
   if (pScanRange->skey != TSKEY_MIN) {
     startOfSkeyFirstWin = taosTimeTruncate(pScanRange->skey, pInterval);
     endOfSkeyFirstWin =
         taosTimeAdd(startOfSkeyFirstWin, pInterval->interval, pInterval->intervalUnit, pTsmaOptCtx->precision);
-    isSkeyAlignedWithTsma = ((pScanRange->skey - startOfSkeyFirstWin) % tsmaInterval == 0);
+    isSkeyAlignedWithTsma = taosTimeTruncate(pScanRange->skey, &tsmaInterval) == pScanRange->skey;
   } else {
     endOfSkeyFirstWin = TSKEY_MIN;
   }

@@ -6280,7 +6298,7 @@ static void tsmaOptSplitWindows(STSMAOptCtx* pTsmaOptCtx, const STimeWindow* pSc
     startOfEkeyFirstWin = taosTimeTruncate(pScanRange->ekey, pInterval);
     endOfEkeyFirstWin =
         taosTimeAdd(startOfEkeyFirstWin, pInterval->interval, pInterval->intervalUnit, pTsmaOptCtx->precision);
-    isEkeyAlignedWithTsma = ((pScanRange->ekey + 1 - startOfEkeyFirstWin) % tsmaInterval == 0);
+    isEkeyAlignedWithTsma = taosTimeTruncate(pScanRange->ekey + 1, &tsmaInterval) == (pScanRange->ekey + 1);
     if (startOfEkeyFirstWin > startOfSkeyFirstWin) {
       needTailWindow = true;
     }

@@ -6292,8 +6310,7 @@ static void tsmaOptSplitWindows(STSMAOptCtx* pTsmaOptCtx, const STimeWindow* pSc
         scanRange.ekey,
         taosTimeAdd(startOfSkeyFirstWin, pInterval->interval * 1, pInterval->intervalUnit, pTsmaOptCtx->precision) - 1);
     const STSMAOptUsefulTsma* pTsmaFound =
-        tsmaOptFindUsefulTsma(pTsmaOptCtx->pUsefulTsmas, 1, scanRange.skey - startOfSkeyFirstWin,
-                              (scanRange.ekey + 1 - startOfSkeyFirstWin), pTsmaOptCtx->precision);
+        tsmaOptFindUsefulTsma(pTsmaOptCtx->pUsefulTsmas, 1, scanRange.skey, scanRange.ekey + 1, pTsmaOptCtx->precision);
     STSMAOptUsefulTsma usefulTsma = {.pTsma = pTsmaFound ? pTsmaFound->pTsma : NULL,
                                      .scanRange = scanRange,
                                      .pTsmaScanCols = pTsmaFound ? pTsmaFound->pTsmaScanCols : NULL};
@@ -201,6 +201,7 @@ void schedulerDestroy(void) {
     }
     SCH_UNLOCK(SCH_WRITE, &schMgmt.hbLock);

+    taosTmrCleanUp(schMgmt.timer);
     qWorkerDestroy(&schMgmt.queryMgmt);
     schMgmt.queryMgmt = NULL;
   }
@@ -131,20 +131,21 @@ typedef struct {
   TdThreadRwlock rwLock;
 } SBkdMgt;

-bool streamBackendDataIsExist(const char* path, int64_t chkpId, int32_t vgId);
+#define META_ON_S3_FORMATE "%s_%" PRId64 "\n%s_%" PRId64 "\n%s_%" PRId64 ""
+
+bool streamBackendDataIsExist(const char* path, int64_t chkpId);
 void* streamBackendInit(const char* path, int64_t chkpId, int32_t vgId);
 void streamBackendCleanup(void* arg);
 void streamBackendHandleCleanup(void* arg);
 int32_t streamBackendLoadCheckpointInfo(void* pMeta);
-int32_t streamBackendDoCheckpoint(void* pMeta, int64_t checkpointId);
+int32_t streamBackendDoCheckpoint(void* pMeta, int64_t checkpointId, int64_t processver);
 SListNode* streamBackendAddCompare(void* backend, void* arg);
 void streamBackendDelCompare(void* backend, void* arg);
 int32_t streamStateCvtDataFormat(char* path, char* key, void* cfInst);

-STaskDbWrapper* taskDbOpen(const char* path, const char* key, int64_t chkptId);
+STaskDbWrapper* taskDbOpen(const char* path, const char* key, int64_t chkptId, int64_t* processVer);
 void taskDbDestroy(void* pBackend, bool flush);
 void taskDbDestroy2(void* pBackend);
-int32_t taskDbDoCheckpoint(void* arg, int64_t chkpId);

 void taskDbUpdateChkpId(void* pTaskDb, int64_t chkpId);

@@ -249,7 +250,7 @@ int32_t streamBackendDelInUseChkp(void* arg, int64_t chkpId);
 int32_t taskDbBuildSnap(void* arg, SArray* pSnap);
 int32_t taskDbDestroySnap(void* arg, SArray* pSnapInfo);

-int32_t taskDbDoCheckpoint(void* arg, int64_t chkpId);
+int32_t taskDbDoCheckpoint(void* arg, int64_t chkpId, int64_t processId);

 SBkdMgt* bkdMgtCreate(char* path);
 int32_t bkdMgtAddChkp(SBkdMgt* bm, char* task, char* path);

@@ -259,6 +260,7 @@ void bkdMgtDestroy(SBkdMgt* bm);

 int32_t taskDbGenChkpUploadData(void* arg, void* bkdMgt, int64_t chkpId, int8_t type, char** path, SArray* list,
                                 const char* id);
+int32_t remoteChkpGetDelFile(char* path, SArray* toDel);

 void* taskAcquireDb(int64_t refId);
 void taskReleaseDb(int64_t refId);
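The new META_ON_S3_FORMATE macro fixes the layout of the checkpoint META file that remoteChkpGetDelFile parses: three name/id pairs separated by newlines. A hedged sketch of producing such a buffer; the pair names are illustrative, not necessarily the engine's:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define META_ON_S3_FORMATE "%s_%" PRId64 "\n%s_%" PRId64 "\n%s_%" PRId64 ""

int main(void) {
  char    buf[128] = {0};
  int64_t chkpId = 5; /* illustrative checkpoint id */
  snprintf(buf, sizeof(buf), META_ON_S3_FORMATE, "state", chkpId, "fill", chkpId, "fun", chkpId);
  printf("%s\n", buf); /* three "<name>_<id>" lines, one per backend file */
  return 0;
}
```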
@@ -57,6 +57,13 @@ SStreamDataBlock* createChkptTriggerBlock(SStreamTask* pTask, int32_t checkpoint
   pBlock->info.childId = pTask->info.selfChildId;

   pChkpoint->blocks = taosArrayInit(4, sizeof(SSDataBlock));  // pBlock;
+  if (pChkpoint->blocks == NULL) {
+    taosMemoryFree(pBlock);
+    taosFreeQitem(pChkpoint);
+    terrno = TSDB_CODE_OUT_OF_MEMORY;
+    return NULL;
+  }
+
   taosArrayPush(pChkpoint->blocks, pBlock);

   taosMemoryFree(pBlock);

@@ -112,7 +119,12 @@ int32_t streamTaskProcessCheckpointTriggerRsp(SStreamTask* pTask, SCheckpointTri
 int32_t streamTaskSendCheckpointTriggerMsg(SStreamTask* pTask, int32_t dstTaskId, int32_t downstreamNodeId,
                                            SRpcHandleInfo* pRpcInfo, int32_t code) {
   int32_t size = sizeof(SMsgHead) + sizeof(SCheckpointTriggerRsp);
-  void*   pBuf = rpcMallocCont(size);
+
+  void* pBuf = rpcMallocCont(size);
+  if (pBuf == NULL) {
+    terrno = TSDB_CODE_OUT_OF_MEMORY;
+    return terrno;
+  }

   SCheckpointTriggerRsp* pRsp = POINTER_SHIFT(pBuf, sizeof(SMsgHead));

@@ -133,6 +145,7 @@ int32_t streamTaskSendCheckpointTriggerMsg(SStreamTask* pTask, int32_t dstTaskId

   SRpcMsg rspMsg = {.code = 0, .pCont = pBuf, .contLen = size, .info = *pRpcInfo};
   tmsgSendRsp(&rspMsg);

   return 0;
 }
|
@ -533,65 +546,57 @@ void streamTaskSetFailedCheckpointId(SStreamTask* pTask) {
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t getCheckpointDataMeta(const char* id, const char* path, SArray* list) {
|
static int32_t getCheckpointDataMeta(const char* id, const char* path, SArray* list) {
|
||||||
char buf[128] = {0};
|
int32_t code = 0;
|
||||||
|
int32_t cap = strlen(path) + 64;
|
||||||
|
|
||||||
char* file = taosMemoryCalloc(1, strlen(path) + 32);
|
char* filePath = taosMemoryCalloc(1, cap);
|
||||||
sprintf(file, "%s%s%s", path, TD_DIRSEP, "META_TMP");
|
if (filePath == NULL) {
|
||||||
|
return TSDB_CODE_OUT_OF_MEMORY;
|
||||||
|
}
|
||||||
|
|
||||||
int32_t code = downloadCheckpointDataByName(id, "META", file);
|
int32_t nBytes = snprintf(filePath, cap, "%s%s%s", path, TD_DIRSEP, "META_TMP");
|
||||||
|
if (nBytes <= 0 || nBytes >= cap) {
|
||||||
|
taosMemoryFree(filePath);
|
||||||
|
return TSDB_CODE_OUT_OF_RANGE;
|
||||||
|
}
|
||||||
|
|
||||||
|
code = downloadCheckpointDataByName(id, "META", filePath);
|
||||||
if (code != 0) {
|
if (code != 0) {
|
||||||
stDebug("%s chkp failed to download meta file:%s", id, file);
|
stError("%s chkp failed to download meta file:%s", id, filePath);
|
||||||
taosMemoryFree(file);
|
taosMemoryFree(filePath);
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
TdFilePtr pFile = taosOpenFile(file, TD_FILE_READ);
|
code = remoteChkpGetDelFile(filePath, list);
|
||||||
if (pFile == NULL) {
|
if (code != 0) {
|
||||||
stError("%s failed to open meta file:%s for checkpoint", id, file);
|
stError("%s chkp failed to get to del:%s", id, filePath);
|
||||||
code = -1;
|
taosMemoryFree(filePath);
|
||||||
return code;
|
|
||||||
}
|
}
|
||||||
|
return 0;
|
||||||
if (taosReadFile(pFile, buf, sizeof(buf)) <= 0) {
|
|
||||||
stError("%s failed to read meta file:%s for checkpoint", id, file);
|
|
||||||
code = -1;
|
|
||||||
} else {
|
|
||||||
int32_t len = strnlen(buf, tListLen(buf));
|
|
||||||
for (int i = 0; i < len; i++) {
|
|
||||||
if (buf[i] == '\n') {
|
|
||||||
char* item = taosMemoryCalloc(1, i + 1);
|
|
||||||
memcpy(item, buf, i);
|
|
||||||
taosArrayPush(list, &item);
|
|
||||||
|
|
||||||
item = taosMemoryCalloc(1, len - i);
|
|
||||||
memcpy(item, buf + i + 1, len - i - 1);
|
|
||||||
taosArrayPush(list, &item);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
taosCloseFile(&pFile);
|
|
||||||
taosRemoveFile(file);
|
|
||||||
taosMemoryFree(file);
|
|
||||||
return code;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
 int32_t uploadCheckpointData(SStreamTask* pTask, int64_t checkpointId, int64_t dbRefId, ECHECKPOINT_BACKUP_TYPE type) {
-  char*   path = NULL;
-  int32_t code = 0;
-  SArray* toDelFiles = taosArrayInit(4, POINTER_BYTES);
-  int64_t now = taosGetTimestampMs();
+  int32_t code = 0;
+  char*   path = NULL;
   SStreamMeta* pMeta = pTask->pMeta;
   const char*  idStr = pTask->id.idStr;
+  int64_t now = taosGetTimestampMs();
+
+  SArray* toDelFiles = taosArrayInit(4, POINTER_BYTES);
+  if (toDelFiles == NULL) {
+    return TSDB_CODE_OUT_OF_MEMORY;
+  }

   if ((code = taskDbGenChkpUploadData(pTask->pBackend, pMeta->bkdChkptMgt, checkpointId, type, &path, toDelFiles,
                                       pTask->id.idStr)) != 0) {
-    stError("s-task:%s failed to gen upload checkpoint:%" PRId64, idStr, checkpointId);
+    stError("s-task:%s failed to gen upload checkpoint:%" PRId64 ", reason:%s", idStr, checkpointId, tstrerror(code));
   }

   if (type == DATA_UPLOAD_S3) {
     if (code == TSDB_CODE_SUCCESS && (code = getCheckpointDataMeta(idStr, path, toDelFiles)) != 0) {
-      stError("s-task:%s failed to get checkpointData for checkpointId:%" PRId64 " meta", idStr, checkpointId);
+      stError("s-task:%s failed to get checkpointData for checkpointId:%" PRId64 ", reason:%s", idStr, checkpointId,
+              tstrerror(code));
     }
   }
@@ -600,7 +605,8 @@ int32_t uploadCheckpointData(SStreamTask* pTask, int64_t checkpointId, int64_t d
   if (code == TSDB_CODE_SUCCESS) {
     stDebug("s-task:%s upload checkpointId:%" PRId64 " to remote succ", idStr, checkpointId);
   } else {
-    stError("s-task:%s failed to upload checkpointId:%" PRId64 " data:%s", idStr, checkpointId, path);
+    stError("s-task:%s failed to upload checkpointId:%" PRId64 " path:%s,reason:%s", idStr, checkpointId, path,
+            tstrerror(code));
   }
 }
@@ -673,7 +679,8 @@ int32_t streamTaskBuildCheckpoint(SStreamTask* pTask) {
   if (pTask->info.taskLevel != TASK_LEVEL__SINK) {
     stDebug("s-task:%s level:%d start gen checkpoint, checkpointId:%" PRId64, id, pTask->info.taskLevel, ckId);

-    code = streamBackendDoCheckpoint(pTask->pBackend, ckId);
+    int64_t ver = pTask->chkInfo.processedVer;
+    code = streamBackendDoCheckpoint(pTask->pBackend, ckId, ver);
     if (code != TSDB_CODE_SUCCESS) {
       stError("s-task:%s gen checkpoint:%" PRId64 " failed, code:%s", id, ckId, tstrerror(terrno));
     }
@@ -781,6 +788,11 @@ void checkpointTriggerMonitorFn(void* param, void* tmrId) {
   SArray* pList = pTask->upstreamInfo.pList;
   ASSERT(pTask->info.taskLevel > TASK_LEVEL__SOURCE);
   SArray* pNotSendList = taosArrayInit(4, sizeof(SStreamUpstreamEpInfo));
+  if (pNotSendList == NULL) {
+    terrno = TSDB_CODE_OUT_OF_MEMORY;
+    stDebug("s-task:%s start to triggerMonitor, reason:%s", id, tstrerror(terrno));
+    return;
+  }

   for (int32_t i = 0; i < taosArrayGetSize(pList); ++i) {
     SStreamUpstreamEpInfo* pInfo = taosArrayGetP(pList, i);
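The hunk above adds the previously missing NULL check after `taosArrayInit` in the timer callback. Sketched with a plain-C stand-in container (not the real `SArray` API), the guard is simply: never touch a container whose allocation failed, log and return instead.

```c
#include <stdio.h>
#include <stdlib.h>

// Stand-in for taosArrayInit: may return NULL under memory pressure.
static int *intArrayInit(size_t n) { return calloc(n, sizeof(int)); }

static void monitorFn(void) {
  int *pNotSend = intArrayInit(4);
  if (pNotSend == NULL) {
    fprintf(stderr, "triggerMonitor aborted: out of memory\n");
    return;  // bail out before any dereference, as the patched timer fn does
  }
  /* ... use pNotSend ... */
  free(pNotSend);
}
```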
@@ -987,52 +999,77 @@ void streamTaskSetTriggerDispatchConfirmed(SStreamTask* pTask, int32_t vgId) {
 }

 static int32_t uploadCheckpointToS3(const char* id, const char* path) {
+  int32_t code = 0;
+  int32_t nBytes = 0;
+
+  if (s3Init() != 0) {
+    return TSDB_CODE_THIRDPARTY_ERROR;
+  }
+
   TdDirPtr pDir = taosOpenDir(path);
-  if (pDir == NULL) return -1;
+  if (pDir == NULL) {
+    return TAOS_SYSTEM_ERROR(errno);
+  }

   TdDirEntryPtr de = NULL;
-  s3Init();
   while ((de = taosReadDir(pDir)) != NULL) {
     char* name = taosGetDirEntryName(de);
     if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0 || taosDirEntryIsDir(de)) continue;

     char filename[PATH_MAX] = {0};
     if (path[strlen(path) - 1] == TD_DIRSEP_CHAR) {
-      snprintf(filename, sizeof(filename), "%s%s", path, name);
+      nBytes = snprintf(filename, sizeof(filename), "%s%s", path, name);
+      if (nBytes <= 0 || nBytes >= sizeof(filename)) {
+        code = TSDB_CODE_OUT_OF_RANGE;
+        break;
+      }
     } else {
-      snprintf(filename, sizeof(filename), "%s%s%s", path, TD_DIRSEP, name);
+      nBytes = snprintf(filename, sizeof(filename), "%s%s%s", path, TD_DIRSEP, name);
+      if (nBytes <= 0 || nBytes >= sizeof(filename)) {
+        code = TSDB_CODE_OUT_OF_RANGE;
+        break;
+      }
     }

     char object[PATH_MAX] = {0};
-    snprintf(object, sizeof(object), "%s%s%s", id, TD_DIRSEP, name);
-
-    if (s3PutObjectFromFile2(filename, object, 0) != 0) {
-      taosCloseDir(&pDir);
-      return -1;
+    nBytes = snprintf(object, sizeof(object), "%s%s%s", id, TD_DIRSEP, name);
+    if (nBytes <= 0 || nBytes >= sizeof(object)) {
+      code = TSDB_CODE_OUT_OF_RANGE;
+      break;
     }
-    stDebug("[s3] upload checkpoint:%s", filename);
-    // break;
-  }

+    code = s3PutObjectFromFile2(filename, object, 0);
+    if (code != 0) {
+      stError("[s3] failed to upload checkpoint:%s, reason:%s", filename, tstrerror(code));
+    } else {
+      stDebug("[s3] upload checkpoint:%s", filename);
+    }
+  }
   taosCloseDir(&pDir);
-  return 0;
+  return code;
 }

 int32_t downloadCheckpointByNameS3(const char* id, const char* fname, const char* dstName) {
-  int32_t code = 0;
-  char*   buf = taosMemoryCalloc(1, strlen(id) + strlen(dstName) + 4);
+  int32_t nBytes;
+  int32_t cap = strlen(id) + strlen(dstName) + 16;
+
+  char* buf = taosMemoryCalloc(1, cap);
   if (buf == NULL) {
-    code = terrno = TSDB_CODE_OUT_OF_MEMORY;
-    return code;
+    return TSDB_CODE_OUT_OF_MEMORY;
   }

-  sprintf(buf, "%s/%s", id, fname);
-  if (s3GetObjectToFile(buf, dstName) != 0) {
-    code = errno;
+  nBytes = snprintf(buf, cap, "%s/%s", id, fname);
+  if (nBytes <= 0 || nBytes >= cap) {
+    taosMemoryFree(buf);
+    return TSDB_CODE_OUT_OF_RANGE;
+  }
+
+  int32_t code = s3GetObjectToFile(buf, dstName);
+  if (code != 0) {
+    taosMemoryFree(buf);
+    return TAOS_SYSTEM_ERROR(errno);
   }

   taosMemoryFree(buf);
-  return code;
+  return 0;
 }

 ECHECKPOINT_BACKUP_TYPE streamGetCheckpointBackupType() {
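`uploadCheckpointToS3` now initializes the S3 client up front, bounds-checks every constructed path, and keeps iterating after a failed upload instead of bailing out mid-scan. A simplified POSIX sketch of that directory walk (the `UploadFn` callback is a placeholder, and the error-keeping policy here is illustrative; the real function logs each failure and returns the last nonzero code):

```c
#include <dirent.h>
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

typedef int (*UploadFn)(const char *localFile, const char *objectKey);

// Uploads every regular entry in `dir`; remembers the first error seen
// but still attempts the remaining files, in the spirit of the patched loop.
static int uploadDir(const char *dir, const char *prefix, UploadFn uploadOne) {
  DIR *d = opendir(dir);
  if (d == NULL) return -errno;

  int code = 0;
  struct dirent *de;
  while ((de = readdir(d)) != NULL) {
    if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0) continue;

    char file[PATH_MAX], object[PATH_MAX];
    int  n1 = snprintf(file, sizeof(file), "%s/%s", dir, de->d_name);
    int  n2 = snprintf(object, sizeof(object), "%s/%s", prefix, de->d_name);
    if (n1 <= 0 || n1 >= (int)sizeof(file) || n2 <= 0 || n2 >= (int)sizeof(object)) {
      code = -ENAMETOOLONG;  // truncated path: stop, the key set is unusable
      break;
    }
    int rc = uploadOne(file, object);
    if (rc != 0 && code == 0) code = rc;  // record failure, keep walking
  }
  closedir(d);
  return code;
}
```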
@@ -1046,13 +1083,17 @@ ECHECKPOINT_BACKUP_TYPE streamGetCheckpointBackupType() {
 }

 int32_t streamTaskUploadCheckpoint(const char* id, const char* path) {
+  int32_t code = 0;
   if (id == NULL || path == NULL || strlen(id) == 0 || strlen(path) == 0 || strlen(path) >= PATH_MAX) {
     stError("invalid parameters in upload checkpoint, %s", id);
-    return -1;
+    return TSDB_CODE_INVALID_CFG;
   }

   if (strlen(tsSnodeAddress) != 0) {
-    return uploadByRsync(id, path);
+    code = uploadByRsync(id, path);
+    if (code != 0) {
+      return TAOS_SYSTEM_ERROR(errno);
+    }
   } else if (tsS3StreamEnabled) {
     return uploadCheckpointToS3(id, path);
   }
@@ -1064,7 +1105,7 @@ int32_t streamTaskUploadCheckpoint(const char* id, const char* path) {
 int32_t downloadCheckpointDataByName(const char* id, const char* fname, const char* dstName) {
   if (id == NULL || fname == NULL || strlen(id) == 0 || strlen(fname) == 0 || strlen(fname) >= PATH_MAX) {
     stError("down load checkpoint data parameters invalid");
-    return -1;
+    return TSDB_CODE_INVALID_PARA;
   }

   if (strlen(tsSnodeAddress) != 0) {
@@ -1094,7 +1135,7 @@ int32_t streamTaskDownloadCheckpointData(const char* id, char* path) {
 int32_t deleteCheckpoint(const char* id) {
   if (id == NULL || strlen(id) == 0) {
     stError("deleteCheckpoint parameters invalid");
-    return -1;
+    return TSDB_CODE_INVALID_PARA;
   }
   if (strlen(tsSnodeAddress) != 0) {
     return deleteRsync(id);
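These validation hunks replace bare `-1` returns with named codes such as `TSDB_CODE_INVALID_PARA` and `TSDB_CODE_INVALID_CFG`, so callers can render the failure via `tstrerror` instead of guessing. A tiny hedged sketch of the idea, with made-up code values:

```c
#include <limits.h>
#include <string.h>

enum { CODE_OK = 0, CODE_INVALID_PARA = 0x0116 };  // illustrative values only

// Validate caller-supplied identifiers before doing any I/O, returning a
// named code instead of a bare -1 so the failure can be rendered later.
static int validateIdAndPath(const char *id, const char *path) {
  if (id == NULL || path == NULL) return CODE_INVALID_PARA;
  if (strlen(id) == 0 || strlen(path) == 0) return CODE_INVALID_PARA;
  if (strlen(path) >= PATH_MAX) return CODE_INVALID_PARA;
  return CODE_OK;
}
```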
@@ -1106,11 +1147,18 @@ int32_t deleteCheckpoint(const char* id) {

 int32_t deleteCheckpointFile(const char* id, const char* name) {
   char object[128] = {0};
-  snprintf(object, sizeof(object), "%s/%s", id, name);
-
-  char* tmp = object;
-  s3DeleteObjects((const char**)&tmp, 1);
-  return 0;
+  int32_t nBytes = snprintf(object, sizeof(object), "%s/%s", id, name);
+  if (nBytes <= 0 || nBytes >= sizeof(object)) {
+    return TSDB_CODE_OUT_OF_RANGE;
+  }
+
+  char* tmp = object;
+  int32_t code = s3DeleteObjects((const char**)&tmp, 1);
+  if (code != 0) {
+    return TSDB_CODE_THIRDPARTY_ERROR;
+  }
+  return code;
 }

 int32_t streamTaskSendRestoreChkptMsg(SStreamTask* pTask) {
@@ -182,9 +182,10 @@ int32_t streamMetaCheckBackendCompatible(SStreamMeta* pMeta) {
 int32_t streamMetaCvtDbFormat(SStreamMeta* pMeta) {
   int32_t code = 0;
   int64_t chkpId = streamMetaGetLatestCheckpointId(pMeta);
-  bool exist = streamBackendDataIsExist(pMeta->path, chkpId, pMeta->vgId);
+  terrno = 0;
+  bool exist = streamBackendDataIsExist(pMeta->path, chkpId);
   if (exist == false) {
+    code = terrno;
     return code;
   }
@@ -252,8 +253,9 @@ int32_t streamTaskSetDb(SStreamMeta* pMeta, SStreamTask* pTask, const char* key)
   }

   STaskDbWrapper* pBackend = NULL;
+  int64_t processVer = -1;
   while (1) {
-    pBackend = taskDbOpen(pMeta->path, key, chkpId);
+    pBackend = taskDbOpen(pMeta->path, key, chkpId, &processVer);
     if (pBackend != NULL) {
       break;
     }
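`taskDbOpen` gains an out-parameter so the backend can report the version it recovered, and the caller adopts the value only when the sentinel was overwritten (see the next hunk). A self-contained sketch of that sentinel-out-parameter idiom (the `dbOpen` stub and the version value are invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

// Opens a (stub) backend and reports the recovered version via *processVer.
// The sentinel -1 means "backend had nothing to report".
static void *dbOpen(const char *key, int64_t *processVer) {
  static int dummyBackend;  // stand-in handle for the sketch
  *processVer = 42;         // pretend recovery found version 42
  (void)key;
  return &dummyBackend;
}

int main(void) {
  int64_t processVer = -1;  // seed with the sentinel
  void   *backend = dbOpen("task-1", &processVer);
  if (backend != NULL && processVer != -1) {
    printf("adopt recovered version %lld\n", (long long)processVer);
  }
  return 0;
}
```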
@@ -271,6 +273,8 @@ int32_t streamTaskSetDb(SStreamMeta* pMeta, SStreamTask* pTask, const char* key)
   pBackend->pTask = pTask;
   pBackend->pMeta = pMeta;

+  if (processVer != -1) pTask->chkInfo.processedVer = processVer;
+
   taosHashPut(pMeta->pTaskDbUnique, key, strlen(key), &pBackend, sizeof(void*));
   taosThreadMutexUnlock(&pMeta->backendMutex);
@@ -308,7 +312,8 @@ SStreamMeta* streamMetaOpen(const char* path, void* ahandle, FTaskBuild buildTas
   }

   if (streamMetaMayCvtDbFormat(pMeta) < 0) {
-    stError("vgId:%d convert sub info format failed, open stream meta failed", pMeta->vgId);
+    stError("vgId:%d convert sub info format failed, open stream meta failed, reason: %s", pMeta->vgId,
+            tstrerror(terrno));
     goto _err;
   }
@@ -393,6 +398,9 @@ SStreamMeta* streamMetaOpen(const char* path, void* ahandle, FTaskBuild buildTas
   pMeta->qHandle = taosInitScheduler(32, 1, "stream-chkp", NULL);

   pMeta->bkdChkptMgt = bkdMgtCreate(tpath);
+  if (pMeta->bkdChkptMgt == NULL) {
+    goto _err;
+  }
   taosThreadMutexInit(&pMeta->backendMutex, NULL);

   return pMeta;
@@ -408,9 +416,10 @@ _err:
   if (pMeta->updateInfo.pTasks) taosHashCleanup(pMeta->updateInfo.pTasks);
   if (pMeta->startInfo.pReadyTaskSet) taosHashCleanup(pMeta->startInfo.pReadyTaskSet);
   if (pMeta->startInfo.pFailedTaskSet) taosHashCleanup(pMeta->startInfo.pFailedTaskSet);
+  if (pMeta->bkdChkptMgt) bkdMgtDestroy(pMeta->bkdChkptMgt);
   taosMemoryFree(pMeta);

-  stError("failed to open stream meta");
+  stError("failed to open stream meta, reason:%s", tstrerror(terrno));
   return NULL;
 }
@@ -900,7 +909,7 @@ void streamMetaLoadAllTasks(SStreamMeta* pMeta) {
     if (p == NULL) {
       code = pMeta->buildTaskFn(pMeta->ahandle, pTask, pTask->chkInfo.checkpointVer + 1);
       if (code < 0) {
-        stError("failed to load s-task:0x%"PRIx64", code:%s, continue", id.taskId, tstrerror(terrno));
+        stError("failed to load s-task:0x%" PRIx64 ", code:%s, continue", id.taskId, tstrerror(terrno));
         tFreeStreamTask(pTask);
         continue;
       }
@@ -985,7 +994,7 @@ void streamMetaNotifyClose(SStreamMeta* pMeta) {
   streamMetaGetHbSendInfo(pMeta->pHbInfo, &startTs, &sendCount);

   stInfo("vgId:%d notify all stream tasks that current vnode is closing. isLeader:%d startHb:%" PRId64 ", totalHb:%d",
          vgId, (pMeta->role == NODE_ROLE_LEADER), startTs, sendCount);

   // wait for the stream meta hb function stopping
   streamMetaWaitForHbTmrQuit(pMeta);
@@ -1171,7 +1180,7 @@ int32_t streamMetaStartAllTasks(SStreamMeta* pMeta) {
   int64_t now = taosGetTimestampMs();

   int32_t numOfTasks = taosArrayGetSize(pMeta->pTaskList);
-  stInfo("vgId:%d start to consensus checkpointId for all %d task(s), start ts:%"PRId64, vgId, numOfTasks, now);
+  stInfo("vgId:%d start to consensus checkpointId for all %d task(s), start ts:%" PRId64, vgId, numOfTasks, now);

   if (numOfTasks == 0) {
     stInfo("vgId:%d no tasks exist, quit from consensus checkpointId", pMeta->vgId);
@@ -242,7 +242,6 @@ _end:

 int32_t getSessionFlushedBuff(SStreamFileState* pFileState, SSessionKey* pKey, void** pVal, int32_t* pVLen) {
   SRowBuffPos* pNewPos = getNewRowPosForWrite(pFileState);
-  memcpy(pNewPos->pKey, pKey, sizeof(SSessionKey));
   pNewPos->needFree = true;
   pNewPos->beFlushed = true;
   void* pBuff = NULL;
@@ -250,6 +249,7 @@ int32_t getSessionFlushedBuff(SStreamFileState* pFileState, SSessionKey* pKey, v
   if (code != TSDB_CODE_SUCCESS) {
     return code;
   }
+  memcpy(pNewPos->pKey, pKey, sizeof(SSessionKey));
   memcpy(pNewPos->pRowBuff, pBuff, *pVLen);
   taosMemoryFreeClear(pBuff);
   (*pVal) = pNewPos;
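The pair of hunks above moves the key `memcpy` below the error check, so `pNewPos` is only written once the row buffer lookup has succeeded and an early `return code` no longer leaves a half-initialized position behind. The shape of that fix, sketched:

```c
#include <string.h>

typedef struct { int key; char payload[64]; } Slot;

// Fill `out` only after every fallible step has succeeded, so callers
// never observe a partially written slot on the error path.
static int fillSlot(Slot *out, int key, const char *src, int srcLen, int fetchCode) {
  if (fetchCode != 0) {
    return fetchCode;  // nothing written yet: the slot stays untouched
  }
  out->key = key;      // commit only on the success path
  memcpy(out->payload, src, (size_t)srcLen);
  return 0;
}
```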
@@ -24,6 +24,7 @@ enum SBackendFileType {
   ROCKSDB_SST_TYPE = 3,
   ROCKSDB_CURRENT_TYPE = 4,
   ROCKSDB_CHECKPOINT_META_TYPE = 5,
+  ROCKSDB_CHECKPOINT_SELFCHECK_TYPE = 6,
 };

 typedef struct SBackendFileItem {
@@ -49,6 +50,7 @@ typedef struct SBackendSnapFiles2 {
   char*   pOptions;
   SArray* pSst;
   char*   pCheckpointMeta;
+  char*   pCheckpointSelfcheck;
   char*   path;

   int64_t checkpointId;
@@ -111,6 +113,7 @@ const char* ROCKSDB_MAINFEST = "MANIFEST";
 const char* ROCKSDB_SST = "sst";
 const char* ROCKSDB_CURRENT = "CURRENT";
 const char* ROCKSDB_CHECKPOINT_META = "CHECKPOINT";
+const char* ROCKSDB_CHECKPOINT_SELF_CHECK = "info";
 static int64_t kBlockSize = 64 * 1024;

 int32_t streamSnapHandleInit(SStreamSnapHandle* handle, char* path, void* pMeta);
@@ -127,6 +130,7 @@ int32_t streamGetFileSize(char* path, char* name, int64_t* sz) {
   int32_t ret = 0;

   char* fullname = taosMemoryCalloc(1, strlen(path) + 32);

   sprintf(fullname, "%s%s%s", path, TD_DIRSEP, name);

   ret = taosStatFile(fullname, sz, NULL, NULL);
@@ -148,8 +152,20 @@ int32_t streamDestroyTaskDbSnapInfo(void* arg, SArray* snap) { return taskDbDest

 void snapFileDebugInfo(SBackendSnapFile2* pSnapFile) {
   if (qDebugFlag & DEBUG_DEBUG) {
-    char* buf = taosMemoryCalloc(1, 512);
-    sprintf(buf + strlen(buf), "[");
+    int16_t cap = 512;
+
+    char* buf = taosMemoryCalloc(1, cap);
+    if (buf == NULL) {
+      stError("%s failed to alloc memory, reason:%s", STREAM_STATE_TRANSFER, tstrerror(TSDB_CODE_OUT_OF_MEMORY));
+      return;
+    }
+
+    int32_t nBytes = snprintf(buf + strlen(buf), cap, "[");
+    if (nBytes <= 0 || nBytes >= cap) {
+      taosMemoryFree(buf);
+      stError("%s failed to write buf, reason:%s", STREAM_STATE_TRANSFER, tstrerror(TSDB_CODE_OUT_OF_RANGE));
+      return;
+    }

     if (pSnapFile->pCurrent) sprintf(buf, "current: %s,", pSnapFile->pCurrent);
     if (pSnapFile->pMainfest) sprintf(buf + strlen(buf), "MANIFEST: %s,", pSnapFile->pMainfest);
@@ -157,10 +173,10 @@ void snapFileDebugInfo(SBackendSnapFile2* pSnapFile) {
     if (pSnapFile->pSst) {
       for (int32_t i = 0; i < taosArrayGetSize(pSnapFile->pSst); i++) {
         char* name = taosArrayGetP(pSnapFile->pSst, i);
-        sprintf(buf + strlen(buf), "%s,", name);
+        if (strlen(buf) + strlen(name) < cap) sprintf(buf + strlen(buf), "%s,", name);
       }
     }
-    sprintf(buf + strlen(buf) - 1, "]");
+    if ((strlen(buf)) < cap) sprintf(buf + strlen(buf) - 1, "]");

     stInfo("%s %" PRId64 "-%" PRId64 " get file list: %s", STREAM_STATE_TRANSFER, pSnapFile->snapInfo.streamId,
            pSnapFile->snapInfo.taskId, buf);
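`snapFileDebugInfo` now checks remaining capacity before each append rather than `sprintf`-ing blindly into a 512-byte buffer. A small sketch of a bounded append helper in the same spirit (illustrative, not the function the patch uses):

```c
#include <string.h>

// Appends `src` to `buf` (total capacity `cap`) only when it still fits,
// returning 0 on success and -1 when the append would overflow.
static int appendBounded(char *buf, size_t cap, const char *src) {
  size_t used = strlen(buf);
  size_t need = strlen(src);
  if (used + need + 1 > cap) return -1;  // +1 for the trailing NUL
  memcpy(buf + used, src, need + 1);
  return 0;
}
```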
@@ -199,16 +215,25 @@ int32_t snapFileGenMeta(SBackendSnapFile2* pSnapFile) {
   // meta
   item.name = pSnapFile->pCheckpointMeta;
   item.type = ROCKSDB_CHECKPOINT_META_TYPE;
+  if (streamGetFileSize(pSnapFile->path, item.name, &item.size) == 0) {
+    taosArrayPush(pSnapFile->pFileList, &item);
+  }
+
+  item.name = pSnapFile->pCheckpointSelfcheck;
+  item.type = ROCKSDB_CHECKPOINT_SELFCHECK_TYPE;
+
   if (streamGetFileSize(pSnapFile->path, item.name, &item.size) == 0) {
     taosArrayPush(pSnapFile->pFileList, &item);
   }
   return 0;
 }
 int32_t snapFileReadMeta(SBackendSnapFile2* pSnapFile) {
+  int32_t  code = 0;
   TdDirPtr pDir = taosOpenDir(pSnapFile->path);
   if (NULL == pDir) {
-    stError("%s failed to open %s", STREAM_STATE_TRANSFER, pSnapFile->path);
-    return -1;
+    code = TAOS_SYSTEM_ERROR(errno);
+    stError("%s failed to open %s, reason:%s", STREAM_STATE_TRANSFER, pSnapFile->path, tstrerror(code));
+    return code;
   }

   TdDirEntryPtr pDirEntry;
@@ -216,43 +241,88 @@ int32_t snapFileReadMeta(SBackendSnapFile2* pSnapFile) {
     char* name = taosGetDirEntryName(pDirEntry);
     if (strlen(name) >= strlen(ROCKSDB_CURRENT) && 0 == strncmp(name, ROCKSDB_CURRENT, strlen(ROCKSDB_CURRENT))) {
       pSnapFile->pCurrent = taosStrdup(name);
+      if (pSnapFile->pCurrent == NULL) {
+        code = TSDB_CODE_OUT_OF_MEMORY;
+        break;
+      }
       continue;
     }
     if (strlen(name) >= strlen(ROCKSDB_MAINFEST) && 0 == strncmp(name, ROCKSDB_MAINFEST, strlen(ROCKSDB_MAINFEST))) {
       pSnapFile->pMainfest = taosStrdup(name);
+      if (pSnapFile->pMainfest == NULL) {
+        code = TSDB_CODE_OUT_OF_MEMORY;
+        break;
+      }
       continue;
     }
     if (strlen(name) >= strlen(ROCKSDB_OPTIONS) && 0 == strncmp(name, ROCKSDB_OPTIONS, strlen(ROCKSDB_OPTIONS))) {
       pSnapFile->pOptions = taosStrdup(name);
+      if (pSnapFile->pOptions == NULL) {
+        code = TSDB_CODE_OUT_OF_MEMORY;
+        break;
+      }
       continue;
     }
     if (strlen(name) >= strlen(ROCKSDB_CHECKPOINT_META) &&
         0 == strncmp(name, ROCKSDB_CHECKPOINT_META, strlen(ROCKSDB_CHECKPOINT_META))) {
       pSnapFile->pCheckpointMeta = taosStrdup(name);
+      if (pSnapFile->pCheckpointMeta == NULL) {
+        code = TSDB_CODE_OUT_OF_MEMORY;
+        break;
+      }
+      continue;
+    }
+    if (strlen(name) >= strlen(ROCKSDB_CHECKPOINT_SELF_CHECK) &&
+        0 == strncmp(name, ROCKSDB_CHECKPOINT_SELF_CHECK, strlen(ROCKSDB_CHECKPOINT_SELF_CHECK))) {
+      pSnapFile->pCheckpointSelfcheck = taosStrdup(name);
+      if (pSnapFile->pCheckpointSelfcheck == NULL) {
+        code = TSDB_CODE_OUT_OF_MEMORY;
+        break;
+      }
       continue;
     }
     if (strlen(name) >= strlen(ROCKSDB_SST) &&
         0 == strncmp(name + strlen(name) - strlen(ROCKSDB_SST), ROCKSDB_SST, strlen(ROCKSDB_SST))) {
       char* sst = taosStrdup(name);
+      if (sst == NULL) {
+        code = TSDB_CODE_OUT_OF_MEMORY;
+        break;
+      }
       taosArrayPush(pSnapFile->pSst, &sst);
     }
   }
   taosCloseDir(&pDir);
-  return 0;
+  return code;
 }
 int32_t streamBackendSnapInitFile(char* metaPath, SStreamTaskSnap* pSnap, SBackendSnapFile2* pSnapFile) {
-  int32_t code = -1;
+  int32_t code = 0;
+  int32_t nBytes = 0;
+  int32_t cap = strlen(pSnap->dbPrefixPath) + 256;
+
+  char* path = taosMemoryCalloc(1, cap);
+  if (path == NULL) {
+    return TSDB_CODE_OUT_OF_MEMORY;
+  }
+
+  nBytes = snprintf(path, cap, "%s%s%s%s%s%" PRId64 "", pSnap->dbPrefixPath, TD_DIRSEP, "checkpoints", TD_DIRSEP,
+                    "checkpoint", pSnap->chkpId);
+  if (nBytes <= 0 || nBytes >= cap) {
+    code = TSDB_CODE_OUT_OF_RANGE;
+    goto _ERROR;
+  }

-  char* path = taosMemoryCalloc(1, strlen(pSnap->dbPrefixPath) + 256);
-  // char idstr[64] = {0};
-  sprintf(path, "%s%s%s%s%s%" PRId64 "", pSnap->dbPrefixPath, TD_DIRSEP, "checkpoints", TD_DIRSEP, "checkpoint",
-          pSnap->chkpId);
   if (!taosIsDir(path)) {
+    code = TSDB_CODE_INVALID_MSG;
     goto _ERROR;
   }

   pSnapFile->pSst = taosArrayInit(16, sizeof(void*));
   pSnapFile->pFileList = taosArrayInit(64, sizeof(SBackendFileItem));
+  if (pSnapFile->pSst == NULL || pSnapFile->pFileList == NULL) {
+    code = TSDB_CODE_OUT_OF_MEMORY;
+    goto _ERROR;
+  }
+
   pSnapFile->path = path;
   pSnapFile->snapInfo = *pSnap;
   if ((code = snapFileReadMeta(pSnapFile)) != 0) {
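Every `taosStrdup` in the directory scan above now gets an OOM check that breaks out of the loop with `TSDB_CODE_OUT_OF_MEMORY`, and the single `return code` at the bottom reports whichever failure stopped the scan. The break-and-report shape, reduced to standard C:

```c
#include <stdlib.h>
#include <string.h>

// Duplicate each name in `names`, stopping at the first allocation failure
// and reporting it once at the bottom, as the patched scan loop does.
static int dupAll(const char **names, int n, char **out) {
  int code = 0;
  for (int i = 0; i < n; i++) {
    out[i] = strdup(names[i]);
    if (out[i] == NULL) {
      code = -1;  // stand-in for TSDB_CODE_OUT_OF_MEMORY
      break;      // stop scanning; cleanup happens in the caller
    }
  }
  return code;
}
```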
@@ -264,7 +334,6 @@ int32_t streamBackendSnapInitFile(char* metaPath, SStreamTaskSnap* pSnap, SBacke

   snapFileDebugInfo(pSnapFile);
   path = NULL;
-  code = 0;

 _ERROR:
   taosMemoryFree(path);
@@ -276,6 +345,7 @@ void snapFileDestroy(SBackendSnapFile2* pSnap) {
   taosMemoryFree(pSnap->pMainfest);
   taosMemoryFree(pSnap->pOptions);
   taosMemoryFree(pSnap->path);
+  taosMemoryFree(pSnap->pCheckpointSelfcheck);
   for (int32_t i = 0; i < taosArrayGetSize(pSnap->pSst); i++) {
     char* sst = taosArrayGetP(pSnap->pSst, i);
     taosMemoryFree(sst);
@@ -295,14 +365,25 @@ void snapFileDestroy(SBackendSnapFile2* pSnap) {
 }
 int32_t streamSnapHandleInit(SStreamSnapHandle* pHandle, char* path, void* pMeta) {
   // impl later
+  int32_t code = 0;
   SArray* pSnapInfoSet = taosArrayInit(4, sizeof(SStreamTaskSnap));
-  int32_t code = streamCreateTaskDbSnapInfo(pMeta, path, pSnapInfoSet);
+  if (pSnapInfoSet == NULL) {
+    return TSDB_CODE_OUT_OF_MEMORY;
+  }
+
+  code = streamCreateTaskDbSnapInfo(pMeta, path, pSnapInfoSet);
   if (code != 0) {
+    stError("failed to do task db snap info, reason:%s", tstrerror(code));
     taosArrayDestroy(pSnapInfoSet);
-    return -1;
+    return code;
   }

   SArray* pDbSnapSet = taosArrayInit(8, sizeof(SBackendSnapFile2));
+  if (pDbSnapSet == NULL) {
+    taosArrayDestroy(pSnapInfoSet);
+    code = TSDB_CODE_OUT_OF_MEMORY;
+    return code;
+  }

   for (int32_t i = 0; i < taosArrayGetSize(pSnapInfoSet); i++) {
     SStreamTaskSnap* pSnap = taosArrayGet(pSnapInfoSet, i);
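`streamSnapHandleInit` also picks up an `_err:` label (next hunk) so every failure after partial initialization funnels through `streamSnapHandleDestroy`. That is the classic goto-cleanup idiom; a minimal sketch, assuming the destroy routine tolerates NULL members:

```c
#include <stdlib.h>

typedef struct { void *a; void *b; } Handle;

// Safe on partially built handles because unset members stay NULL.
static void handleDestroy(Handle *h) {
  free(h->a);
  free(h->b);
}

static int handleInit(Handle *h) {
  int code = 0;
  h->a = NULL;
  h->b = NULL;

  h->a = malloc(64);
  if (h->a == NULL) { code = -1; goto _err; }

  h->b = malloc(64);
  if (h->b == NULL) { code = -1; goto _err; }
  return 0;

_err:
  handleDestroy(h);  // one cleanup path for every failure point
  return code;
}
```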
@@ -318,6 +399,10 @@ int32_t streamSnapHandleInit(SStreamSnapHandle* pHandle, char* path, void* pMeta
   pHandle->currIdx = 0;
   pHandle->pMeta = pMeta;
   return 0;

+_err:
+  streamSnapHandleDestroy(pHandle);
+  return code;
 }

 void streamSnapHandleDestroy(SStreamSnapHandle* handle) {
@@ -348,9 +433,10 @@ int32_t streamSnapReaderOpen(void* pMeta, int64_t sver, int64_t chkpId, char* pa
     return TSDB_CODE_OUT_OF_MEMORY;
   }

-  if (streamSnapHandleInit(&pReader->handle, (char*)path, pMeta) < 0) {
+  int32_t code = streamSnapHandleInit(&pReader->handle, (char*)path, pMeta);
+  if (code != 0) {
     taosMemoryFree(pReader);
-    return -1;
+    return code;
   }

   *ppReader = pReader;
@@ -410,10 +496,10 @@ _NEXT:
   int64_t nread = taosPReadFile(pSnapFile->fd, buf + sizeof(SStreamSnapBlockHdr), kBlockSize, pSnapFile->offset);
   if (nread == -1) {
     taosMemoryFree(buf);
-    code = TAOS_SYSTEM_ERROR(terrno);
+    code = TAOS_SYSTEM_ERROR(errno);
     stError("%s snap failed to read snap, file name:%s, type:%d,reason:%s", STREAM_STATE_TRANSFER, item->name,
             item->type, tstrerror(code));
-    return -1;
+    return code;
   } else if (nread > 0 && nread <= kBlockSize) {
     // left bytes less than kBlockSize
     stDebug("%s read file %s, current offset:%" PRId64 ",size:% " PRId64 ", file no.%d", STREAM_STATE_TRANSFER,
@@ -473,6 +559,7 @@ _NEXT:
 // SMetaSnapWriter ========================================
 int32_t streamSnapWriterOpen(void* pMeta, int64_t sver, int64_t ever, char* path, SStreamSnapWriter** ppWriter) {
   // impl later
+  int32_t code = 0;
   SStreamSnapWriter* pWriter = taosMemoryCalloc(1, sizeof(SStreamSnapWriter));
   if (pWriter == NULL) {
     return TSDB_CODE_OUT_OF_MEMORY;
@@ -480,11 +567,27 @@ int32_t streamSnapWriterOpen(void* pMeta, int64_t sver, int64_t ever, char* path

   SStreamSnapHandle* pHandle = &pWriter->handle;
   pHandle->currIdx = 0;

   pHandle->metaPath = taosStrdup(path);
+  if (pHandle->metaPath == NULL) {
+    taosMemoryFree(pWriter);
+    code = TSDB_CODE_OUT_OF_MEMORY;
+    return code;
+  }

   pHandle->pDbSnapSet = taosArrayInit(8, sizeof(SBackendSnapFile2));
+  if (pHandle->pDbSnapSet == NULL) {
+    streamSnapWriterClose(pWriter, 0);
+    code = TSDB_CODE_OUT_OF_MEMORY;
+    return code;
+  }

   SBackendSnapFile2 snapFile = {0};
-  taosArrayPush(pHandle->pDbSnapSet, &snapFile);
+  if (taosArrayPush(pHandle->pDbSnapSet, &snapFile) == NULL) {
+    streamSnapWriterClose(pWriter, 0);
+    code = TSDB_CODE_OUT_OF_MEMORY;
+    return code;
+  }

   *ppWriter = pWriter;
   return 0;
@@ -506,7 +609,7 @@ int32_t streamSnapWriteImpl(SStreamSnapWriter* pWriter, uint8_t* pData, uint32_t
   if (pSnapFile->fd == 0) {
     pSnapFile->fd = streamOpenFile(pSnapFile->path, pItem->name, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
     if (pSnapFile->fd == NULL) {
-      code = TAOS_SYSTEM_ERROR(terrno);
+      code = TAOS_SYSTEM_ERROR(errno);
       stError("%s failed to open file name:%s%s%s, reason:%s", STREAM_STATE_TRANSFER, pHandle->metaPath, TD_DIRSEP,
               pHdr->name, tstrerror(code));
     }
@@ -514,7 +617,7 @@ int32_t streamSnapWriteImpl(SStreamSnapWriter* pWriter, uint8_t* pData, uint32_t
   if (strlen(pHdr->name) == strlen(pItem->name) && strcmp(pHdr->name, pItem->name) == 0) {
     int64_t bytes = taosPWriteFile(pSnapFile->fd, pHdr->data, pHdr->size, pSnapFile->offset);
     if (bytes != pHdr->size) {
-      code = TAOS_SYSTEM_ERROR(terrno);
+      code = TAOS_SYSTEM_ERROR(errno);
       stError("%s failed to write snap, file name:%s, reason:%s", STREAM_STATE_TRANSFER, pHdr->name, tstrerror(code));
       return code;
     } else {
@@ -535,12 +638,16 @@ int32_t streamSnapWriteImpl(SStreamSnapWriter* pWriter, uint8_t* pData, uint32_t
     SBackendFileItem* pItem = taosArrayGet(pSnapFile->pFileList, pSnapFile->currFileIdx);
     pSnapFile->fd = streamOpenFile(pSnapFile->path, pItem->name, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
     if (pSnapFile->fd == NULL) {
-      code = TAOS_SYSTEM_ERROR(terrno);
+      code = TAOS_SYSTEM_ERROR(errno);
       stError("%s failed to open file name:%s%s%s, reason:%s", STREAM_STATE_TRANSFER, pSnapFile->path, TD_DIRSEP,
               pHdr->name, tstrerror(code));
     }

-    taosPWriteFile(pSnapFile->fd, pHdr->data, pHdr->size, pSnapFile->offset);
+    if (taosPWriteFile(pSnapFile->fd, pHdr->data, pHdr->size, pSnapFile->offset) != pHdr->size) {
+      code = TAOS_SYSTEM_ERROR(errno);
+      stError("%s failed to write snap, file name:%s, reason:%s", STREAM_STATE_TRANSFER, pHdr->name, tstrerror(code));
+      return code;
+    }
     stInfo("succ to write data %s", pItem->name);
     pSnapFile->offset += pHdr->size;
   }
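The second `taosPWriteFile` call site previously discarded the byte count; the patch treats a short write as an error, matching the first call site. A POSIX sketch of the full-write check (production code sometimes retries short writes in a loop; the patch, like this sketch, simply surfaces them):

```c
#include <errno.h>
#include <unistd.h>

// Positional write that treats anything other than a full write as failure.
static int pwriteAll(int fd, const void *data, size_t size, off_t offset) {
  ssize_t n = pwrite(fd, data, size, offset);
  if (n < 0 || (size_t)n != size) {  // error or short write
    return (n < 0) ? -errno : -EIO;
  }
  return 0;
}
```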
@@ -29,7 +29,7 @@ class BackendEnv : public ::testing::Test {

 void *backendCreate() {
   const char *streamPath = "/tmp";
-  void * p = NULL;
+  void *p = NULL;

   // char *absPath = NULL;
   // // SBackendWrapper *p = (SBackendWrapper *)streamBackendInit(streamPath, -1, 2);

@@ -52,7 +52,7 @@ SStreamState *stateCreate(const char *path) {
 }
 void *backendOpen() {
   streamMetaInit();
-  const char * path = "/tmp/backend";
+  const char *path = "/tmp/backend";
   SStreamState *p = stateCreate(path);
   ASSERT(p != NULL);

@@ -79,7 +79,7 @@ void *backendOpen() {

     const char *val = "value data";
     int32_t len = 0;
-    char * newVal = NULL;
+    char *newVal = NULL;
     streamStateGet_rocksdb(p, &key, (void **)&newVal, &len);
     ASSERT(len == strlen(val));
   }

@@ -100,7 +100,7 @@ void *backendOpen() {

     const char *val = "value data";
     int32_t len = 0;
-    char * newVal = NULL;
+    char *newVal = NULL;
     int32_t code = streamStateGet_rocksdb(p, &key, (void **)&newVal, &len);
     ASSERT(code != 0);
   }

@@ -130,7 +130,7 @@ void *backendOpen() {

   winkey.groupId = 0;
   winkey.ts = tsArray[0];
-  char * val = NULL;
+  char *val = NULL;
   int32_t len = 0;

   pCurr = streamStateSeekKeyNext_rocksdb(p, &winkey);

@@ -157,7 +157,7 @@ void *backendOpen() {
     key.ts = tsArray[i];
     key.exprIdx = i;

-    char * val = NULL;
+    char *val = NULL;
     int32_t len = 0;
     streamStateFuncGet_rocksdb(p, &key, (void **)&val, &len);
     ASSERT(len == strlen("Value"));

@@ -168,7 +168,7 @@ void *backendOpen() {
     key.ts = tsArray[i];
     key.exprIdx = i;

-    char * val = NULL;
+    char *val = NULL;
     int32_t len = 0;
     streamStateFuncDel_rocksdb(p, &key);
   }

@@ -213,7 +213,7 @@ void *backendOpen() {
   {
     SSessionKey key;
     memset(&key, 0, sizeof(key));
-    char * val = NULL;
+    char *val = NULL;
     int32_t vlen = 0;
     code = streamStateSessionGetKVByCur_rocksdb(pCurr, &key, (void **)&val, &vlen);
     ASSERT(code == 0);

@@ -260,7 +260,7 @@ void *backendOpen() {
     SWinKey key = {0};  // {.groupId = (uint64_t)(i), .ts = tsArray[i]};
     key.groupId = (uint64_t)(i);
     key.ts = tsArray[i];
-    char * val = NULL;
+    char *val = NULL;
     int32_t vlen = 0;
     ASSERT(streamStateFillGet_rocksdb(p, &key, (void **)&val, &vlen) == 0);
     taosMemoryFreeClear(val);

@@ -272,7 +272,7 @@ void *backendOpen() {
   SStreamStateCur *pCurr = streamStateFillGetCur_rocksdb(p, &key);
   ASSERT(pCurr != NULL);

-  char * val = NULL;
+  char *val = NULL;
   int32_t vlen = 0;
   ASSERT(0 == streamStateFillGetKVByCur_rocksdb(pCurr, &key, (const void **)&val, &vlen));
   ASSERT(vlen == strlen("Value"));

@@ -296,7 +296,7 @@ void *backendOpen() {
     SWinKey key = {0};  // {.groupId = (uint64_t)(i), .ts = tsArray[i]};
     key.groupId = (uint64_t)(i);
     key.ts = tsArray[i];
-    char * val = NULL;
+    char *val = NULL;
    int32_t vlen = 0;
     ASSERT(streamStateFillDel_rocksdb(p, &key) == 0);
     taosMemoryFreeClear(val);

@@ -338,7 +338,7 @@ void *backendOpen() {
     char key[128] = {0};
     sprintf(key, "tbname_%d", i);

-    char * val = NULL;
+    char *val = NULL;
     int32_t len = 0;
     code = streamDefaultGet_rocksdb(p, key, (void **)&val, &len);
     ASSERT(code == 0);

@@ -354,7 +354,7 @@ TEST_F(BackendEnv, checkOpen) {
   SStreamState *p = (SStreamState *)backendOpen();
   int64_t tsStart = taosGetTimestampMs();
   {
-    void * pBatch = streamStateCreateBatch();
+    void *pBatch = streamStateCreateBatch();
     int32_t size = 0;
     for (int i = 0; i < size; i++) {
       char key[128] = {0};

@@ -368,7 +368,7 @@ TEST_F(BackendEnv, checkOpen) {
     streamStateDestroyBatch(pBatch);
   }
   {
-    void * pBatch = streamStateCreateBatch();
+    void *pBatch = streamStateCreateBatch();
     int32_t size = 0;
     char valBuf[256] = {0};
     for (int i = 0; i < size; i++) {

@@ -383,9 +383,9 @@ TEST_F(BackendEnv, checkOpen) {
     streamStateDestroyBatch(pBatch);
   }
   // do checkpoint 2
-  taskDbDoCheckpoint(p->pTdbState->pOwner->pBackend, 2);
+  taskDbDoCheckpoint(p->pTdbState->pOwner->pBackend, 2, 0);
   {
-    void * pBatch = streamStateCreateBatch();
+    void *pBatch = streamStateCreateBatch();
     int32_t size = 0;
     char valBuf[256] = {0};
     for (int i = 0; i < size; i++) {

@@ -400,17 +400,17 @@ TEST_F(BackendEnv, checkOpen) {
     streamStateDestroyBatch(pBatch);
   }

-  taskDbDoCheckpoint(p->pTdbState->pOwner->pBackend, 3);
+  taskDbDoCheckpoint(p->pTdbState->pOwner->pBackend, 3, 0);

   const char *path = "/tmp/backend/stream";
   const char *dump = "/tmp/backend/stream/dump";
   // taosMkDir(dump);
   taosMulMkDir(dump);
   SBkdMgt *mgt = bkdMgtCreate((char *)path);
-  SArray * result = taosArrayInit(4, sizeof(void *));
+  SArray *result = taosArrayInit(4, sizeof(void *));
   bkdMgtGetDelta(mgt, p->pTdbState->idstr, 3, result, (char *)dump);

-  taskDbDoCheckpoint(p->pTdbState->pOwner->pBackend, 4);
+  taskDbDoCheckpoint(p->pTdbState->pOwner->pBackend, 4, 0);

   taosArrayClear(result);
   bkdMgtGetDelta(mgt, p->pTdbState->idstr, 4, result, (char *)dump);
@@ -98,6 +98,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_NOT_FOUND, "Not found")
 TAOS_DEFINE_ERROR(TSDB_CODE_NO_DISKSPACE, "Out of disk space")
 TAOS_DEFINE_ERROR(TSDB_CODE_TIMEOUT_ERROR, "Operation timeout")
 TAOS_DEFINE_ERROR(TSDB_CODE_NO_ENOUGH_DISKSPACE, "No enough disk space")
+TAOS_DEFINE_ERROR(TSDB_CODE_THIRDPARTY_ERROR, "third party error, please check the log")

 TAOS_DEFINE_ERROR(TSDB_CODE_APP_IS_STARTING, "Database is starting up")
 TAOS_DEFINE_ERROR(TSDB_CODE_APP_IS_STOPPING, "Database is closing down")

@@ -681,6 +682,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_PAR_PRIMARY_KEY_IS_NONE, "Primary key column
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_TBNAME_ERROR, "Pseudo tag tbname not set")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_TBNAME_DUPLICATED, "Table name duplicated")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_TAG_NAME_DUPLICATED, "Tag name duplicated")
+TAOS_DEFINE_ERROR(TSDB_CODE_PAR_NOT_ALLOWED_DIFFERENT_BY_ROW_FUNC, "Some functions cannot appear in the select list at the same time")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTERNAL_ERROR, "Parser internal error")

 //planner
@@ -577,7 +577,7 @@ static int64_t atomicCompareExchangeRunning(int64_t* ptr, int32_t* expectedVal,
   }
 }

-static int64_t atomicComapreAndExchangeActiveAndRunning(int64_t *ptr, int32_t *expectedActive, int32_t newActive,
+static int64_t atomicCompareExchangeActiveAndRunning(int64_t *ptr, int32_t *expectedActive, int32_t newActive,
                                                      int32_t *expectedRunning, int32_t newRunning) {
   int64_t oldVal64 = *expectedActive, newVal64 = newActive;
   oldVal64 <<= 32;

@@ -683,7 +683,7 @@ static bool tQueryAutoQWorkerTryDecActive(void* p, int32_t minActive) {
   int64_t val64 = pPool->activeRunningN;
   int32_t active = GET_ACTIVE_N(val64), running = GET_RUNNING_N(val64);
   while (active > minActive) {
-    if (atomicComapreAndExchangeActiveAndRunning(&pPool->activeRunningN, &active, active - 1, &running, running - 1))
+    if (atomicCompareExchangeActiveAndRunning(&pPool->activeRunningN, &active, active - 1, &running, running - 1))
       return true;
   }
   atomicFetchSubRunning(&pPool->activeRunningN, 1);

@@ -691,13 +691,18 @@ static bool tQueryAutoQWorkerTryDecActive(void* p, int32_t minActive) {
 }

 static int32_t tQueryAutoQWorkerWaitingCheck(SQueryAutoQWorkerPool* pPool) {
-  int32_t running = GET_RUNNING_N(pPool->activeRunningN);
-  while (running < pPool->num) {
-    if (atomicCompareExchangeRunning(&pPool->activeRunningN, &running, running + 1)) {
-      return TSDB_CODE_SUCCESS;
+  while (1) {
+    int64_t val64 = pPool->activeRunningN;
+    int32_t running = GET_RUNNING_N(val64), active = GET_ACTIVE_N(val64);
+    while (running < pPool->num) {
+      if (atomicCompareExchangeActiveAndRunning(&pPool->activeRunningN, &active, active, &running, running + 1)) {
+        return TSDB_CODE_SUCCESS;
+      }
+    }
+    if (atomicCompareExchangeActive(&pPool->activeRunningN, &active, active - 1)) {
+      break;
     }
   }
-  atomicFetchSubActive(&pPool->activeRunningN, 1);
   // to wait for process
   taosThreadMutexLock(&pPool->waitingBeforeProcessMsgLock);
   atomic_fetch_add_32(&pPool->waitingBeforeProcessMsgN, 1);

@@ -976,7 +981,7 @@ static int32_t tQueryAutoQWorkerRecoverFromBlocking(void *p) {
   int64_t val64 = pPool->activeRunningN;
   int32_t running = GET_RUNNING_N(val64), active = GET_ACTIVE_N(val64);
   while (running < pPool->num) {
-    if (atomicComapreAndExchangeActiveAndRunning(&pPool->activeRunningN, &active, active + 1, &running, running + 1)) {
+    if (atomicCompareExchangeActiveAndRunning(&pPool->activeRunningN, &active, active + 1, &running, running + 1)) {
      return TSDB_CODE_SUCCESS;
     }
   }
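The pool packs its `active` and `running` counters into one 64-bit word so both can move under a single compare-and-swap, and `tQueryAutoQWorkerWaitingCheck` is reworked to re-read that packed word and retry, closing the race where `active` changed between the read and the exchange. A hedged C11 sketch of the packed-counter CAS (the macro layout mirrors the `GET_ACTIVE_N`/`GET_RUNNING_N` idea but is not the engine's code):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define GET_ACTIVE(v)  ((int32_t)((v) >> 32))
#define GET_RUNNING(v) ((int32_t)((v) & 0xFFFFFFFF))
#define PACK(a, r)     (((int64_t)(a) << 32) | (uint32_t)(r))

// Atomically bump `running` while requiring `active` to be unchanged,
// retrying from a fresh snapshot whenever another thread wins the race.
static bool tryIncRunning(_Atomic int64_t *word, int32_t maxRunning) {
  int64_t cur = atomic_load(word);
  for (;;) {
    int32_t active = GET_ACTIVE(cur), running = GET_RUNNING(cur);
    if (running >= maxRunning) return false;
    int64_t next = PACK(active, running + 1);
    if (atomic_compare_exchange_weak(word, &cur, next)) return true;
    // CAS failed: `cur` now holds the latest value; loop and re-derive both
  }
}
```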
@ -2,6 +2,7 @@
|
||||||
#include <stdlib.h>
|
#include <stdlib.h>
|
||||||
#include <tutil.h>
|
#include <tutil.h>
|
||||||
#include <random>
|
#include <random>
|
||||||
|
#include "ttime.h"
|
||||||
|
|
||||||
#include "tarray.h"
|
#include "tarray.h"
|
||||||
#include "tcompare.h"
|
#include "tcompare.h"
|
||||||
|
@@ -382,3 +383,94 @@ TEST(utilTest, intToHextStr) {
     ASSERT_STREQ(buf, destBuf);
   }
 }
+
+static int64_t getIntervalValWithPrecision(int64_t interval, int8_t unit, int8_t precision) {
+  if (IS_CALENDAR_TIME_DURATION(unit)) {
+    return interval;
+  }
+  if (0 != getDuration(interval, unit, &interval, precision)) {
+    assert(0);
+  }
+  return interval;
+}
+
+static bool tsmaIntervalCheck(int64_t baseInterval, int8_t baseUnit, int64_t interval, int8_t unit, int8_t precision) {
+  auto ret = checkRecursiveTsmaInterval(getIntervalValWithPrecision(baseInterval, baseUnit, precision), baseUnit,
+                                        getIntervalValWithPrecision(interval, unit, precision), unit, precision, true);
+  using namespace std;
+  cout << interval << unit << " on " << baseInterval << baseUnit << ": " << ret << endl;
+  return ret;
+}
+
+TEST(tsma, reverse_unit) {
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'm', 120, 's', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'h', 120, 'm', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_TRUE(tsmaIntervalCheck(20, 's', 2 * 20 * 1000, 'a', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_TRUE(tsmaIntervalCheck(20, 's', 2 * 20 * 1000 * 1000, 'u', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_TRUE(tsmaIntervalCheck(20, 's', 2UL * 20UL * 1000UL * 1000UL * 1000UL, 'b', TSDB_TIME_PRECISION_MILLI));
+
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'h', 60, 'm', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'h', 6, 'm', TSDB_TIME_PRECISION_MILLI));
+
+  ASSERT_FALSE(tsmaIntervalCheck(2, 'h', 120, 'm', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_TRUE(tsmaIntervalCheck(2, 'h', 240, 'm', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'd', 240, 'm', TSDB_TIME_PRECISION_MILLI));
+
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'd', 1440, 'm', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'd', 2880, 'm', TSDB_TIME_PRECISION_MILLI));
+
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'y', 365, 'd', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'n', 30, 'd', TSDB_TIME_PRECISION_MILLI));
+
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'y', 24, 'n', TSDB_TIME_PRECISION_MILLI));
+
+  ASSERT_FALSE(tsmaIntervalCheck(55, 's', 55, 'm', TSDB_TIME_PRECISION_MILLI));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 1, 'm', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 2, 'm', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 20, 'm', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 50, 'm', TSDB_TIME_PRECISION_MICRO));
+
+  ASSERT_TRUE(tsmaIntervalCheck(120, 's', 30, 'm', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(360, 's', 30, 'm', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(600, 's', 30, 'm', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_FALSE(tsmaIntervalCheck(600, 's', 15, 'm', TSDB_TIME_PRECISION_MICRO));
+
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 1, 'h', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(15, 's', 1, 'h', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_FALSE(tsmaIntervalCheck(7*60, 's', 1, 'h', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 1, 'd', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 1, 'w', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'd', 1, 'w', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 1, 'n', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(10, 's', 1, 'y', TSDB_TIME_PRECISION_MICRO));
+
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'd', 1, 'w', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'd', 1, 'n', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'd', 2, 'n', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_FALSE(tsmaIntervalCheck(2, 'd', 2, 'n', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_FALSE(tsmaIntervalCheck(2, 'd', 2, 'y', TSDB_TIME_PRECISION_MICRO));
+  ASSERT_FALSE(tsmaIntervalCheck(2, 'd', 1, 'y', TSDB_TIME_PRECISION_MICRO));
+
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'w', 1, 'n', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(4, 'w', 1, 'n', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'w', 1, 'y', TSDB_TIME_PRECISION_NANO));
+
+  ASSERT_TRUE(tsmaIntervalCheck(1, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_TRUE(tsmaIntervalCheck(2, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_TRUE(tsmaIntervalCheck(3, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_TRUE(tsmaIntervalCheck(4, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(5, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_TRUE(tsmaIntervalCheck(6, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(7, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(8, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(9, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(10, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(11, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+
+  ASSERT_FALSE(tsmaIntervalCheck(1, 'w', 1, 'w', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(120, 's', 2, 'm', TSDB_TIME_PRECISION_NANO));
+
+  ASSERT_FALSE(tsmaIntervalCheck(2, 'n', 2, 'n', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(2, 'y', 2, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_FALSE(tsmaIntervalCheck(12, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+  ASSERT_TRUE(tsmaIntervalCheck(3, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
+}
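The asserts above encode the rule that a recursive TSMA interval must be a whole multiple of its base interval, with calendar units (`n`, `y`) compared on the calendar. A minimal SQL sketch of the same rule, assuming a TDengine 3.x build with TSMA support and an existing `test.meters` table; the tsma names are illustrative:

```sql
CREATE TSMA tsma_30m ON test.meters FUNCTION(avg(c1)) INTERVAL(30m);
-- A recursive TSMA must sit on an integer multiple of its base interval:
CREATE RECURSIVE TSMA tsma_2h  ON test.tsma_30m INTERVAL(2h);   -- accepted: 2h = 4 x 30m
CREATE RECURSIVE TSMA tsma_45m ON test.tsma_30m INTERVAL(45m);  -- rejected: not a multiple of 30m
-- Calendar units follow the calendar: 1y divides evenly into 1n/2n/3n/4n/6n, but not 5n.
```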
@@ -74,6 +74,44 @@ class TDTestCase(TBase):
         tdSql.checkData(0, 0, ts2)
         tdSql.checkData(0, 1, ts2)

+    def FIX_TS_5143(self):
+        tdLog.info("check bug TS_5143 ...\n")
+        # 2024-07-11 17:07:38
+        base_ts = 1720688857000
+        new_ts = base_ts + 10
+        sqls = [
+            "drop database if exists ts_5143",
+            "create database ts_5143 cachemodel 'both';",
+            "use ts_5143;",
+            "create table stb1 (ts timestamp, vval varchar(50), ival2 int, ival3 int, ival4 int) tags (itag int);",
+            "create table ntb1 using stb1 tags(1);",
+            f"insert into ntb1 values({base_ts}, 'nihao1', 12, 13, 14);",
+            f"insert into ntb1 values({base_ts + 2}, 'nihao2', NULL, NULL, NULL);",
+            f"delete from ntb1 where ts = {base_ts};",
+            f"insert into ntb1 values('{new_ts}', 'nihao3', 32, 33, 34);",
+        ]
+        tdSql.executes(sqls)
+
+        last_sql = "select last(vval), last(ival2), last(ival3), last(ival4) from stb1;"
+        tdLog.debug(f"execute sql: {last_sql}")
+        tdSql.query(last_sql)
+
+        for i in range(1, 10):
+            new_ts = base_ts + i * 1000
+            num = i * 100
+            sqls = [
+                f"insert into ntb1 values({new_ts}, 'nihao{num}', {10*i}, {10*i}, {10*i});",
+                f"insert into ntb1 values({new_ts + 1}, 'nihao{num + 1}', NULL, NULL, NULL);",
+                f"delete from ntb1 where ts = {new_ts};",
+                f"insert into ntb1 values({new_ts + 2}, 'nihao{num + 2}', {11*i}, {11*i}, {11*i});",
+            ]
+            tdSql.executes(sqls)
+
+            tdLog.debug(f"{i}th execute sql: {last_sql}")
+            tdSql.query(last_sql)
+            tdSql.checkData(0, 0, f"nihao{num + 2}")
+            tdSql.checkData(0, 1, f"{11*i}")
+
     # run
     def run(self):
         tdLog.debug(f"start to excute {__file__}")

@@ -83,6 +121,7 @@ class TDTestCase(TBase):
         # TS BUGS
         self.FIX_TS_5105()
+        self.FIX_TS_5143()

         tdLog.success(f"{__file__} successfully executed")
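The scenario this test automates can be reproduced by hand. A minimal SQL sketch using the same statements the test issues (columns trimmed for brevity; timestamps as in the test):

```sql
create database ts_5143 cachemodel 'both';
use ts_5143;
create table stb1 (ts timestamp, vval varchar(50), ival2 int) tags (itag int);
create table ntb1 using stb1 tags(1);
insert into ntb1 values(1720688857000, 'nihao1', 12);
insert into ntb1 values(1720688857002, 'nihao2', NULL);
delete from ntb1 where ts = 1720688857000;
insert into ntb1 values(1720688857010, 'nihao3', 32);
-- With cachemodel 'both', the cached last row must track the delete + re-insert:
select last(vval), last(ival2) from stb1;  -- expected: 'nihao3', 32
```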
@@ -131,6 +131,11 @@
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma.py -Q 2
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma.py -Q 3
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma.py -Q 4
+,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py
+,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -R
+,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 2
+,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 3
+,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 4
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqShow.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqDropStb.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/subscribeStb0.py
@@ -21,8 +21,37 @@ sql create table ts3 using st tags(3,2,2);
 sql create table ts4 using st tags(4,2,2);
 sql create stream streams1 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 watermark 1d into streamt1 as select _wstart, count(*) c1, sum(a) c3 , max(b) c4 from st interval(10s);
+
+print ====check task status start
+
+$loop_count = 0
+
+loopCheck:
+
 sleep 1000
+
+$loop_count = $loop_count + 1
+if $loop_count == 30 then
+  return -1
+endi
+
+print 1 select * from information_schema.ins_stream_tasks;
+sql select * from information_schema.ins_stream_tasks;
+
+if $rows == 0 then
+  print rows=$rows
+  goto loopCheck
+endi
+
+print 1 select * from information_schema.ins_stream_tasks where status != "ready";
+sql select * from information_schema.ins_stream_tasks where status != "ready";
+
+if $rows != 0 then
+  print rows=$rows
+  goto loopCheck
+endi
+
+print ====check task status end
+
 sql insert into ts1 values(1648791213000,1,1,3,4.1);
 sql insert into ts1 values(1648791223000,2,2,3,1.1);
 sql insert into ts1 values(1648791233000,3,3,3,2.1);

@@ -123,6 +152,8 @@ if $data31 != 2 then
   goto loop1
 endi

+sleep 5000
+
 sql delete from ts2 where ts = 1648791243000 ;

 $loop_count = 0
@@ -41,8 +41,37 @@ sql create table ts1 using st tags(1,1,1);
 sql create table ts2 using st tags(2,2,2);
 sql create stream stream_t1 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamtST as select _wstart, count(*) c1, sum(a) c2 , max(b) c3 from st session(ts, 10s) ;
+
+print ====check task status start
+
+$loop_count = 0
+
+loopCheck:
+
 sleep 1000
+
+$loop_count = $loop_count + 1
+if $loop_count == 30 then
+  return -1
+endi
+
+print 1 select * from information_schema.ins_stream_tasks;
+sql select * from information_schema.ins_stream_tasks;
+
+if $rows == 0 then
+  print rows=$rows
+  goto loopCheck
+endi
+
+print 1 select * from information_schema.ins_stream_tasks where status != "ready";
+sql select * from information_schema.ins_stream_tasks where status != "ready";
+
+if $rows != 0 then
+  print rows=$rows
+  goto loopCheck
+endi
+
+print ====check task status end
+
 sql insert into ts1 values(1648791211000,1,1,1) (1648791211005,1,1,1);
 sql insert into ts2 values(1648791221004,1,2,3) (1648791221008,2,2,3);
 sql insert into ts1 values(1648791211005,1,1,1);
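The loop added to both scripts polls a single system table until every stream task reports ready before any data is inserted. The core check, as plain SQL:

```sql
-- All stream tasks are ready when this returns zero rows
-- (ins_stream_tasks is the system table the scripts themselves query):
SELECT * FROM information_schema.ins_stream_tasks WHERE status != 'ready';
```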
@@ -79,4 +108,73 @@ if $data03 != 7 then
   return -1
 endi
+
+print ===== step3
+
+sql create database test1 vgroups 4;
+sql use test1;
+sql create stable st(ts timestamp,a int,b int,c int) tags(ta int,tb int,tc int);
+sql create table ts1 using st tags(1,1,1);
+sql create table ts2 using st tags(2,2,2);
+sql create stream stream_t2 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamtST2 as select _wstart, count(*) c1, sum(a) c2 , max(b) c3 from st partition by a session(ts, 10s) ;
+
+print ====check task status start
+
+$loop_count = 0
+
+loopCheck1:
+
+sleep 1000
+
+$loop_count = $loop_count + 1
+if $loop_count == 30 then
+  return -1
+endi
+
+print 1 select * from information_schema.ins_stream_tasks;
+sql select * from information_schema.ins_stream_tasks;
+
+if $rows == 0 then
+  print rows=$rows
+  goto loopCheck1
+endi
+
+print 1 select * from information_schema.ins_stream_tasks where status != "ready";
+sql select * from information_schema.ins_stream_tasks where status != "ready";
+
+if $rows != 0 then
+  print rows=$rows
+  goto loopCheck1
+endi
+
+print ====check task status end
+
+sql insert into ts1 values(1648791201000,1,1,1) (1648791210000,1,1,1);
+sql insert into ts1 values(1648791211000,2,1,1) (1648791212000,2,1,1);
+sql insert into ts2 values(1648791211000,3,1,1) (1648791212000,3,1,1);
+
+sql delete from st where ts = 1648791211000;
+
+$loop_count = 0
+loop2:
+
+$loop_count = $loop_count + 1
+if $loop_count == 10 then
+  return -1
+endi
+
+sleep 1000
+print 2 select * from streamtST2;
+sql select * from streamtST2;
+
+print $data00 $data01 $data02 $data03
+print $data10 $data11 $data12 $data13
+print $data20 $data21 $data22 $data23
+print $data30 $data31 $data32 $data33
+print $data40 $data41 $data42 $data43
+
+if $rows != 3 then
+  print =====rows=$rows
+  goto loop2
+endi
+
 system sh/stop_dnodes.sh
@@ -97,13 +97,45 @@ class TDTestCase:
         tdSql.checkEqual(tdSql.queryResult[0][1], 123)
         tdSql.checkEqual(tdSql.queryResult[0][2], 4)

-        tdSql.execute(f'drop database {self.dbname}')
+        # tdSql.execute(f'drop database {self.dbname}')
+
+    def view_name_check(self):
+        """Cover the view name with backquote"""
+        tdSql.execute(f'create view `v1` as select * from (select `ts`, `c0` from ntb1) `t1`;')
+        tdSql.query('select * from `v1`;')
+        tdSql.checkRows(1)
+        tdSql.execute(f'drop view `v1`;')
+
+    def query_check(self):
+        """Cover the table name, column name with backquote in query statement"""
+        tdSql.query(f'select `t1`.`ts`, `t1`.`c0` + 2 as `c1` from `{self.ntbname1}` `t1` union select `t2`.`ts`, `t2`.`c0` from `{self.ntbname2}` `t2`')
+        tdSql.checkRows(2)
+
+        tdSql.query(f'select `t1`.`ts`, `t1`.`c0` + `t2`.`c0` as `c0`, `t1`.`c1` * `t2`.`c1` as `c1` from `{self.ntbname1}` `t1` join `{self.ntbname2}` `t2` on timetruncate(`t1`.`ts`, 1s) = timetruncate(`t2`.`ts`, 1s);')
+        tdSql.checkRows(1)
+        tdSql.checkEqual(tdSql.queryResult[0][1], 3)
+
+        tdSql.query(f'select `t1`.`ts`, `t1`.`c1`, `t1`.`c2` from (select `ts`, `c0` + 1 as `c1`, `c1` + 2 as `c2` from `{self.ntbname1}`) `t1`;')
+        tdSql.checkEqual(tdSql.queryResult[0][1], 2)
+        tdSql.checkEqual(tdSql.queryResult[0][2], 3)
+
+        tdSql.query(f'select `t`.`ts`, cast(`t`.`v1` as int) + `t`.`c0` as `v` from (select `ts`, "12" as `v1`, `c0`, `c1` from `ntb1`) `t`;')
+        tdSql.checkRows(1)
+        tdSql.checkEqual(tdSql.queryResult[0][1], 13)
+
+        tdSql.query(f'select count(`t1`.`ts`) from (select `t`.`ts` from `{self.ntbname1}` `t`) `t1`;')
+        tdSql.checkRows(1)

     def run(self):
         self.topic_name_check()
         self.db_name_check()
         self.stream_name_check()
         self.table_name_check()
+        self.view_name_check()
+        self.query_check()

     def stop(self):
+        tdSql.execute(f'drop database {self.dbname}')
         tdSql.close()
         tdLog.success("%s successfully executed" % __file__)
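The two new checks boil down to a handful of backquoted-identifier statements. A minimal SQL sketch (`ntb1` and the view name are the ones the test uses; expected behaviour per the checks above):

```sql
-- Backquotes keep identifiers usable in views, aliases and subqueries:
CREATE VIEW `v1` AS SELECT * FROM (SELECT `ts`, `c0` FROM ntb1) `t1`;
SELECT `t1`.`ts`, `t1`.`c0` + 2 AS `c1` FROM ntb1 `t1`;
DROP VIEW `v1`;
```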
@@ -45,6 +45,346 @@ class TDTestCase:
         else:
             tdSql.checkData(i, j, 1)

+    def ignoreTest(self):
+        dbname = "db"
+
+        ts1 = 1694912400000
+        tdSql.execute(f'''create table {dbname}.stb30749(ts timestamp, col1 tinyint, col2 smallint) tags(loc nchar(20))''')
+        tdSql.execute(f"create table {dbname}.stb30749_1 using {dbname}.stb30749 tags('shanghai')")
+
+        tdSql.execute(f"insert into {dbname}.stb30749_1 values(%d, null, 1)" % (ts1 + 1))
+        tdSql.execute(f"insert into {dbname}.stb30749_1 values(%d, 3, null)" % (ts1 + 2))
+        tdSql.execute(f"insert into {dbname}.stb30749_1 values(%d, 4, 3)" % (ts1 + 3))
+        tdSql.execute(f"insert into {dbname}.stb30749_1 values(%d, 1, 1)" % (ts1 + 4))
+        tdSql.execute(f"insert into {dbname}.stb30749_1 values(%d, 2, null)" % (ts1 + 5))
+        tdSql.execute(f"insert into {dbname}.stb30749_1 values(%d, null, null)" % (ts1 + 6))
+
+        tdSql.query(f"select ts, diff(col1) from {dbname}.stb30749_1")
+        tdSql.checkRows(5)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(0, 1, None)
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(1, 1, 1)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.004')
+        tdSql.checkData(2, 1, -3)
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(4, 0, '2023-09-17 09:00:00.006')
+        tdSql.checkData(4, 1, None)
+
+        tdSql.query(f"select ts, diff(col1, 1) from {dbname}.stb30749_1")
+        tdSql.checkRows(5)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(0, 1, None)
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(1, 1, 1)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.004')
+        tdSql.checkData(2, 1, None)
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(4, 0, '2023-09-17 09:00:00.006')
+        tdSql.checkData(4, 1, None)
+
+        tdSql.query(f"select ts, diff(col1, 2) from {dbname}.stb30749_1")
+        tdSql.checkRows(3)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(0, 1, 1)
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.004')
+        tdSql.checkData(1, 1, -3)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(2, 1, 1)
+
+        tdSql.query(f"select ts, diff(col1, 3) from {dbname}.stb30749_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(0, 1, 1)
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(1, 1, 1)
+
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 0) from {dbname}.stb30749_1")
+        tdSql.checkRows(5)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(1, 2, 2)
+        tdSql.checkData(2, 1, None)
+        tdSql.checkData(2, 2, -2)
+
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 1) from {dbname}.stb30749_1")
+        tdSql.checkRows(5)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(1, 2, 2)
+        tdSql.checkData(2, 1, None)
+        tdSql.checkData(2, 2, None)
+
+        tdSql.query(f"select ts, diff(col1, 2), diff(col2, 2) from {dbname}.stb30749_1")
+        tdSql.checkRows(3)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.004')
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(0, 1, 1)
+        tdSql.checkData(1, 1, -3)
+        tdSql.checkData(2, 1, 1)
+        tdSql.checkData(0, 2, 2)
+        tdSql.checkData(1, 2, -2)
+        tdSql.checkData(2, 2, None)
+
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 2) from {dbname}.stb30749_1")
+        tdSql.checkRows(3)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.004')
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(0, 1, 1)
+        tdSql.checkData(1, 1, None)
+        tdSql.checkData(2, 1, 1)
+        tdSql.checkData(0, 2, 2)
+        tdSql.checkData(1, 2, -2)
+        tdSql.checkData(2, 2, None)
+
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 3) from {dbname}.stb30749_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(0, 1, 1)
+        tdSql.checkData(1, 1, 1)
+        tdSql.checkData(0, 2, 2)
+        tdSql.checkData(1, 2, None)
+
+        tdSql.execute(f"create table {dbname}.stb30749_2 using {dbname}.stb30749 tags('shanghai')")
+
+        tdSql.execute(f"insert into {dbname}.stb30749_2 values(%d, null, 1)" % (ts1 - 1))
+        tdSql.execute(f"insert into {dbname}.stb30749_2 values(%d, 4, 3)" % (ts1 + 0))
+        tdSql.execute(f"insert into {dbname}.stb30749_2 values(%d, null, null)" % (ts1 + 10))
+
+        tdSql.query(f"select ts, diff(col1), diff(col2, 1) from {dbname}.stb30749")
+        tdSql.checkRows(8)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 1, -1)
+        tdSql.checkData(2, 2, None)
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(3, 2, 2)
+
+        tdSql.query(f"select ts, diff(col1), diff(col2) from {dbname}.stb30749")
+        tdSql.checkRows(8)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 1, -1)
+        tdSql.checkData(2, 2, None)
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(3, 2, 2)
+
+        tdSql.query(f"select ts, diff(col1), diff(col2, 3) from {dbname}.stb30749")
+        tdSql.checkRows(8)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 1, -1)
+        tdSql.checkData(2, 2, None)
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(3, 2, 2)
+
+        tdSql.query(f"select ts, diff(col1, 1), diff(col2, 2) from {dbname}.stb30749")
+        tdSql.checkRows(8)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 1, None)
+        tdSql.checkData(2, 2, None)
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(3, 2, 2)
+
+        tdSql.query(f"select ts, diff(col1, 1), diff(col2, 3) from {dbname}.stb30749")
+        tdSql.checkRows(8)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 1, None)
+        tdSql.checkData(2, 2, None)
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(3, 2, 2)
+
+        tdSql.query(f"select ts, diff(col1, 2), diff(col2, 2) from {dbname}.stb30749")
+        tdSql.checkRows(6)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 1, -1)
+        tdSql.checkData(2, 2, None)
+        tdSql.checkData(3, 1, 1)
+        tdSql.checkData(3, 2, 2)
+
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 2) from {dbname}.stb30749")
+        tdSql.checkRows(5)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.004')
+        tdSql.checkData(2, 1, 1)
+        tdSql.checkData(2, 2, 2)
+        tdSql.checkData(3, 1, None)
+        tdSql.checkData(3, 2, -2)
+
+        tdSql.query(f"select ts, diff(col1, 2), diff(col2, 3) from {dbname}.stb30749")
+        tdSql.checkRows(5)
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.004')
+        tdSql.checkData(2, 1, 1)
+        tdSql.checkData(2, 2, 2)
+        tdSql.checkData(3, 1, -3)
+        tdSql.checkData(3, 2, None)
+
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 3) from {dbname}.stb30749")
+        tdSql.checkRows(3)
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 0, '2023-09-17 09:00:00.005')
+        tdSql.checkData(1, 1, 1)
+        tdSql.checkData(1, 2, 2)
+        tdSql.checkData(2, 1, 1)
+        tdSql.checkData(2, 2, None)
+
+        tdSql.query(f"select ts, diff(col1), diff(col2) from {dbname}.stb30749 partition by tbname")
+        tdSql.checkRows(7)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(0, 1, None)
+        tdSql.checkData(0, 2, None)
+        tdSql.checkData(1, 1, 1)
+        tdSql.checkData(1, 2, 2)
+
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 2) from {dbname}.stb30749 partition by tbname")
+        tdSql.checkRows(4)
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.000')
+        tdSql.checkData(3, 1, None)
+        tdSql.checkData(3, 2, 2)
+
+        tdSql.execute(f"insert into {dbname}.stb30749_2 values(%d, null, 1)" % (ts1 + 1))
+        tdSql.error(f"select ts, diff(col1, 3), diff(col2, 2) from {dbname}.stb30749")
+
+    def withPkTest(self):
+        dbname = "db"
+
+        ts1 = 1694912400000
+        tdSql.execute(f'''create table {dbname}.stb5(ts timestamp, col1 int PRIMARY KEY, col2 smallint) tags(loc nchar(20))''')
+        tdSql.execute(f"create table {dbname}.stb5_1 using {dbname}.stb5 tags('shanghai')")
+
+        tdSql.execute(f"insert into {dbname}.stb5_1 values(%d, 2, 1)" % (ts1 + 1))
+        tdSql.execute(f"insert into {dbname}.stb5_1 values(%d, 3, null)" % (ts1 + 2))
+        tdSql.execute(f"insert into {dbname}.stb5_1 values(%d, 4, 3)" % (ts1 + 3))
+
+        tdSql.execute(f"create table {dbname}.stb5_2 using {dbname}.stb5 tags('shanghai')")
+
+        tdSql.execute(f"insert into {dbname}.stb5_2 values(%d, 5, 4)" % (ts1 + 1))
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 2) from {dbname}.stb5")
+        tdSql.checkRows(2)
+
+        tdSql.execute(f"insert into {dbname}.stb5_2 values(%d, 3, 3)" % (ts1 + 2))
+        tdSql.query(f"select ts, diff(col1, 3), diff(col2, 2) from {dbname}.stb5")
+        tdSql.checkRows(2)
+
+    def intOverflowTest(self):
+        dbname = "db"
+
+        ts1 = 1694912400000
+        tdSql.execute(f'''create table {dbname}.stb6(ts timestamp, c1 int, c2 smallint, c3 int unsigned, c4 BIGINT, c5 BIGINT unsigned) tags(loc nchar(20))''')
+        tdSql.execute(f"create table {dbname}.stb6_1 using {dbname}.stb6 tags('shanghai')")
+
+        tdSql.execute(f"insert into {dbname}.stb6_1 values(%d, -2147483648, -32768, 0, 9223372036854775806, 9223372036854775806)" % (ts1 + 1))
+        tdSql.execute(f"insert into {dbname}.stb6_1 values(%d, 2147483647, 32767, 4294967295, 0, 0)" % (ts1 + 2))
+        tdSql.execute(f"insert into {dbname}.stb6_1 values(%d, -10, -10, 0, -9223372036854775806, 16223372036854775806)" % (ts1 + 3))
+
+        tdSql.query(f"select ts, diff(c1), diff(c2), diff(c3), diff(c4), diff(c5) from {dbname}.stb6_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.002')
+        tdSql.checkData(0, 1, 4294967295)
+        tdSql.checkData(0, 2, 65535)
+        tdSql.checkData(0, 3, 4294967295)
+        tdSql.checkData(0, 4, -9223372036854775806)
+        tdSql.checkData(0, 5, -9223372036854775806)
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(1, 1, -2147483657)
+        tdSql.checkData(1, 2, -32777)
+        tdSql.checkData(1, 3, -4294967295)
+        tdSql.checkData(1, 4, -9223372036854775806)
+
+        tdSql.query(f"select ts, diff(c1, 1), diff(c2) from {dbname}.stb6_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 1, 4294967295)
+        tdSql.checkData(0, 2, 65535)
+        tdSql.checkData(1, 1, None)
+        tdSql.checkData(1, 2, -32777)
+
+        tdSql.query(f"select ts, diff(c1, 1), diff(c2, 1) from {dbname}.stb6_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 1, 4294967295)
+        tdSql.checkData(0, 2, 65535)
+        tdSql.checkData(1, 1, None)
+        tdSql.checkData(1, 2, None)
+
+        tdSql.query(f"select ts, diff(c1, 2), diff(c2, 3) from {dbname}.stb6_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 1, 4294967295)
+        tdSql.checkData(0, 2, 65535)
+        tdSql.checkData(1, 1, -2147483657)
+        tdSql.checkData(1, 2, None)
+
+        tdSql.query(f"select ts, diff(c1, 3), diff(c2, 3) from {dbname}.stb6_1")
+        tdSql.checkRows(1)
+        tdSql.checkData(0, 1, 4294967295)
+        tdSql.checkData(0, 2, 65535)
+
+        tdSql.execute(f"insert into {dbname}.stb6_1 values(%d, -10, -10, 0, 9223372036854775800, 0)" % (ts1 + 4))
+        tdSql.execute(f"insert into {dbname}.stb6_1 values(%d, -10, -10, 0, 9223372036854775800, 16223372036854775806)" % (ts1 + 5))
+
+        tdSql.query(f"select ts, diff(c4, 0) from {dbname}.stb6_1")
+        tdSql.checkRows(4)
+
+        tdSql.query(f"select ts, diff(c4, 1) from {dbname}.stb6_1")
+        tdSql.checkRows(4)
+        tdSql.checkData(2, 1, -10)
+
+        tdSql.query(f"select ts, diff(c4, 2) from {dbname}.stb6_1")
+        tdSql.checkRows(4)
+
+        tdSql.query(f"select ts, diff(c4, 3) from {dbname}.stb6_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 1, -10)
+        tdSql.checkData(1, 1, 0)
+
+        tdSql.query(f"select ts, diff(c5, 0) from {dbname}.stb6_1")
+        tdSql.checkRows(4)
+
+        tdSql.query(f"select ts, diff(c5, 1) from {dbname}.stb6_1")
+        tdSql.checkRows(4)
+        tdSql.checkData(0, 1, None)
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(2, 1, None)
+        tdSql.checkData(3, 0, '2023-09-17 09:00:00.005')
+
+        tdSql.query(f"select ts, diff(c5, 2) from {dbname}.stb6_1")
+        tdSql.checkRows(4)
+
+        tdSql.query(f"select ts, diff(c5, 3) from {dbname}.stb6_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.003')
+        tdSql.checkData(1, 0, '2023-09-17 09:00:00.005')
+
+    def doubleOverflowTest(self):
+        dbname = "db"
+
+        ts1 = 1694912400000
+        tdSql.execute(f'''create table {dbname}.stb7(ts timestamp, c1 float, c2 double) tags(loc nchar(20))''')
+        tdSql.execute(f"create table {dbname}.stb7_1 using {dbname}.stb7 tags('shanghai')")
+
+        tdSql.execute(f"insert into {dbname}.stb7_1 values(%d, 334567777777777777777343434343333333733, 334567777777777777777343434343333333733)" % (ts1 + 1))
+        tdSql.execute(f"insert into {dbname}.stb7_1 values(%d, -334567777777777777777343434343333333733, -334567777777777777777343434343333333733)" % (ts1 + 2))
+        tdSql.execute(f"insert into {dbname}.stb7_1 values(%d, 334567777777777777777343434343333333733, 334567777777777777777343434343333333733)" % (ts1 + 3))
+
+        tdSql.query(f"select ts, diff(c1), diff(c2) from {dbname}.stb7_1")
+        tdSql.checkRows(2)
+
+        tdSql.query(f"select ts, diff(c1, 1), diff(c2, 1) from {dbname}.stb7_1")
+        tdSql.checkRows(2)
+        tdSql.checkData(0, 1, None)
+        tdSql.checkData(0, 2, None)
+
+        tdSql.query(f"select ts, diff(c1, 3), diff(c2, 3) from {dbname}.stb7_1")
+        tdSql.checkRows(1)
+        tdSql.checkData(0, 0, '2023-09-17 09:00:00.003')
+
     def run(self):
         tdSql.prepare()
         dbname = "db"
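Read together, these cases pin down the four `ignore_option` behaviours of `diff()`. A compact SQL sketch (illustrative table and timestamps; expected outputs follow the checks above):

```sql
CREATE TABLE db.t (ts TIMESTAMP, v INT);
INSERT INTO db.t VALUES (1694912400001, 4) (1694912400002, 1)
                        (1694912400003, NULL) (1694912400004, 2);
SELECT DIFF(v)    FROM db.t;  -- -3, NULL, NULL : keep negative results, keep NULLs
SELECT DIFF(v, 1) FROM db.t;  -- NULL, NULL, NULL: negative results become NULL
SELECT DIFF(v, 2) FROM db.t;  -- -3, 1          : NULL input rows are skipped
SELECT DIFF(v, 3) FROM db.t;  -- 1              : negative results are dropped as well
```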
@@ -52,6 +392,11 @@ class TDTestCase:
         # full type test
         self.full_datatype_test()

+        self.ignoreTest()
+        self.withPkTest()
+        self.intOverflowTest()
+        self.doubleOverflowTest()
+
         tdSql.execute(
             f"create table {dbname}.ntb(ts timestamp,c1 int,c2 double,c3 float)")
         tdSql.execute(
@@ -219,9 +564,18 @@ class TDTestCase:
         tdSql.error(f"select diff(col1,1.23) from {dbname}.stb_1")
         tdSql.error(f"select diff(col1,-1) from {dbname}.stb_1")
         tdSql.query(f"select ts,diff(col1),ts from {dbname}.stb_1")
-        tdSql.error(f"select diff(col1, 1),diff(col2) from {dbname}.stb_1")
-        tdSql.error(f"select diff(col1, 1),diff(col2, 0) from {dbname}.stb_1")
-        tdSql.error(f"select diff(col1, 1),diff(col2, 1) from {dbname}.stb_1")
+        tdSql.error(f"select diff(col1, -1) from {dbname}.stb_1")
+        tdSql.error(f"select diff(col1, 4) from {dbname}.stb_1")
+        tdSql.error(f"select diff(col1, 1),diff(col2, 4) from {dbname}.stb_1")
+
+        tdSql.query(f"select diff(col1, 1),diff(col2) from {dbname}.stb_1")
+        tdSql.checkRows(self.rowNum)
+
+        tdSql.query(f"select diff(col1, 1),diff(col2, 0) from {dbname}.stb_1")
+        tdSql.checkRows(self.rowNum)
+
+        tdSql.query(f"select diff(col1, 1),diff(col2, 1) from {dbname}.stb_1")
+        tdSql.checkRows(self.rowNum)
+
         tdSql.query(f"select diff(ts) from {dbname}.stb_1")
         tdSql.checkRows(10)
@@ -172,7 +172,7 @@ class TDTestCase:
         tdSql.checkRows(90)

         tdSql.query(f"select c1 , diff(c1 , 0) from {dbname}.stb partition by c1")
-        tdSql.checkRows(140)
+        tdSql.checkRows(139)

         tdSql.query(f"select c1 , csum(c1) from {dbname}.stb partition by c1")
         tdSql.checkRows(100)
@@ -687,10 +687,10 @@ class TDTestCase:
         tdLog.debug("insert data ............ [OK]")
         return

-    def init_data(self, ctb_num: int = 10, rows_per_ctb: int = 10000, start_ts: int = 1537146000000, ts_step: int = 500):
+    def init_data(self, db: str = 'test', ctb_num: int = 10, rows_per_ctb: int = 10000, start_ts: int = 1537146000000, ts_step: int = 500):
         tdLog.printNoPrefix(
             "======== prepare test env include database, stable, ctables, and insert data: ")
-        paraDict = {'dbName': 'test',
+        paraDict = {'dbName': db,
                     'dropFlag': 1,
                     'vgroups': 2,
                     'stbName': 'meters',

@@ -707,8 +707,8 @@ class TDTestCase:
                     'tsStep': ts_step}

         paraDict['vgroups'] = self.vgroups
-        paraDict['ctbNum'] = self.ctbNum
-        paraDict['rowsPerTbl'] = self.rowsPerTbl
+        paraDict['ctbNum'] = ctb_num
+        paraDict['rowsPerTbl'] = rows_per_ctb

         tdLog.info("create database")
         self.create_database(tsql=tdSql, dbName=paraDict["dbName"], dropFlag=paraDict["dropFlag"],
@@ -972,16 +972,16 @@ class TDTestCase:
         sql = 'select avg(c2), "recursive test.tsma4" from test.meters'
         ctx = TSMAQCBuilder().with_sql(sql).should_query_with_tsma(
             'tsma4', UsedTsma.TS_MIN, UsedTsma.TS_MAX).get_qc()
-        #time.sleep(999999)
         self.tsma_tester.check_sql(sql, ctx)
         self.check(self.test_query_tsma_all(select_func_list))
         self.create_recursive_tsma(
-            'tsma4', 'tsma6', 'test', '1h', 'meters', tsma_func_list)
+            'tsma4', 'tsma6', 'test', '5h', 'meters', tsma_func_list)
         ctx = TSMAQCBuilder().with_sql(sql).should_query_with_tsma(
             'tsma6', UsedTsma.TS_MIN, UsedTsma.TS_MAX).get_qc()
         self.tsma_tester.check_sql(sql, ctx)

         self.check(self.test_query_tsma_all(select_func_list))
+        #time.sleep(999999)

         tdSql.error('drop tsma test.tsma3', -2147482491)
         tdSql.error('drop tsma test.tsma4', -2147482491)
@@ -1218,7 +1218,6 @@
         self.test_drop_tsma()
         self.test_tb_ddl_with_created_tsma()

-
     def run(self):
         self.init_data()
         self.test_ddl()
@@ -1358,18 +1357,19 @@
             'create table nsdb.meters(ts timestamp, c1 int, c2 int, c3 varchar(255)) tags(t1 int, t2 int)', queryTimes=1)
         self.create_tsma('tsma1', 'nsdb', 'meters', ['avg(c1)', 'avg(c2)'], '5m')
         # Invalid tsma interval, 1ms ~ 1h is allowed
-        tdSql.error(
-            'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(2h)', -2147471097)
-        tdSql.error(
-            'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(3601s)', -2147471097)
-        tdSql.error(
-            'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(3600001a)', -2147471097)
-        tdSql.error(
-            'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(3600001000u)', -2147471097)
-        tdSql.error(
-            'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(999999b)', -2147471097)
-        tdSql.error(
-            'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(999u)', -2147471097)
+        def _():
+            tdSql.error(
+                'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(2h)', -2147471097)
+            tdSql.error(
+                'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(3601s)', -2147471097)
+            tdSql.error(
+                'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(3600001a)', -2147471097)
+            tdSql.error(
+                'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(3600001000u)', -2147471097)
+            tdSql.error(
+                'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(999999b)', -2147471097)
+            tdSql.error(
+                'create tsma tsma2 on nsdb.meters function(avg(c1), avg(c2)) interval(999u)', -2147471097)
         # invalid tsma func param
         tdSql.error(
             'create tsma tsma2 on nsdb.meters function(avg(c1, c2), avg(c2)) interval(10m)', -2147471096)
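The now-disabled asserts encode the bounds from the comment above them: a TSMA interval must lie between 1 millisecond and 1 hour. A minimal sketch (names are illustrative):

```sql
CREATE TSMA tsma_ok  ON nsdb.meters FUNCTION(avg(c1), avg(c2)) INTERVAL(5m);  -- within 1ms ~ 1h
-- Anything above one hour (2h, 3601s, 3600001a) or below one millisecond (999u) is rejected:
CREATE TSMA tsma_bad ON nsdb.meters FUNCTION(avg(c1), avg(c2)) INTERVAL(2h);
```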
@@ -1446,8 +1446,6 @@
             ['avg(c1)', 'avg(c2)'], 'nsdb', 'meters', '10m', 'tsma1')
         tdSql.execute('drop tsma nsdb.tsma1', queryTimes=1)

-        tdSql.error(
-            'create tsma tsma1 on test.meters function(avg(c1), avg(c2)) interval(2h)', -2147471097)
         self.wait_query('show transactions', 0, 10, lambda row: row[3] != 'stream-chkpt-u')
         tdSql.execute('drop database nsdb')
@@ -1611,7 +1609,6 @@

     # def test_split_dnode(self):

-
     def stop(self):
         tdSql.close()
         tdLog.success(f"{__file__} successfully executed")
@ -0,0 +1,909 @@
|
||||||
|
from random import randrange
|
||||||
|
import time
|
||||||
|
import threading
|
||||||
|
import secrets
|
||||||
|
from util.log import *
|
||||||
|
from util.sql import *
|
||||||
|
from util.cases import *
|
||||||
|
from util.dnodes import *
|
||||||
|
from util.common import *
|
||||||
|
# from tmqCommon import *
|
||||||
|
|
||||||
|
ROUND = 100
|
||||||
|
|
||||||
|
ignore_some_tests: int = 1
|
||||||
|
|
||||||
|
class TSMA:
|
||||||
|
def __init__(self):
|
||||||
|
self.tsma_name = ''
|
||||||
|
self.db_name = ''
|
||||||
|
self.original_table_name = ''
|
||||||
|
self.funcs = []
|
||||||
|
self.cols = []
|
||||||
|
self.interval: str = ''
|
||||||
|
|
||||||
|
|
||||||
|
class UsedTsma:
|
||||||
|
TS_MIN = '-9223372036854775808'
|
||||||
|
TS_MAX = '9223372036854775806'
|
||||||
|
TSMA_RES_STB_POSTFIX = '_tsma_res_stb_'
|
||||||
|
|
||||||
|
def __init__(self) -> None:
|
||||||
|
self.name = '' # tsma name or table name
|
||||||
|
self.time_range_start: float = float(UsedTsma.TS_MIN)
|
||||||
|
self.time_range_end: float = float(UsedTsma.TS_MAX)
|
||||||
|
self.is_tsma_ = False
|
||||||
|
|
||||||
|
def __eq__(self, __value: object) -> bool:
|
||||||
|
if isinstance(__value, self.__class__):
|
||||||
|
return self.name == __value.name \
|
||||||
|
and self.time_range_start == __value.time_range_start \
|
||||||
|
and self.time_range_end == __value.time_range_end \
|
||||||
|
and self.is_tsma_ == __value.is_tsma_
|
||||||
|
else:
|
||||||
|
return False
|
||||||
|
|
||||||
|
def __ne__(self, __value: object) -> bool:
|
||||||
|
return not self.__eq__(__value)
|
||||||
|
|
||||||
|
def __str__(self) -> str:
|
||||||
|
return "%s: from %s to %s is_tsma: %d" % (self.name, self.time_range_start, self.time_range_end, self.is_tsma_)
|
||||||
|
|
||||||
|
def __repr__(self) -> str:
|
||||||
|
return self.__str__()
|
||||||
|
|
||||||
|
def setIsTsma(self):
|
||||||
|
self.is_tsma_ = self.name.endswith(self.TSMA_RES_STB_POSTFIX)
|
||||||
|
if not self.is_tsma_:
|
||||||
|
self.is_tsma_ = len(self.name) == 32 # for tsma output child table
|
||||||
|
|
||||||
|
class TSMAQueryContext:
|
||||||
|
def __init__(self) -> None:
|
||||||
|
self.sql = ''
|
||||||
|
self.used_tsmas: List[UsedTsma] = []
|
||||||
|
self.ignore_tsma_check_ = False
|
||||||
|
self.ignore_res_order_ = False
|
||||||
|
|
||||||
|
def __eq__(self, __value) -> bool:
|
||||||
|
if isinstance(__value, self.__class__):
|
||||||
|
if self.ignore_tsma_check_ or __value.ignore_tsma_check_:
|
||||||
|
return True
|
||||||
|
if len(self.used_tsmas) != len(__value.used_tsmas):
|
||||||
|
return False
|
||||||
|
for used_tsma1, used_tsma2 in zip(self.used_tsmas, __value.used_tsmas):
|
||||||
|
if not used_tsma1 == used_tsma2:
|
||||||
|
return False
|
||||||
|
return True
|
||||||
|
else:
|
||||||
|
return False
|
||||||
|
|
||||||
|
def __ne__(self, __value: object) -> bool:
|
||||||
|
return self.__eq__(__value)
|
||||||
|
|
||||||
|
def __str__(self) -> str:
|
||||||
|
return str(self.used_tsmas)
|
||||||
|
|
||||||
|
def has_tsma(self) -> bool:
|
||||||
|
for tsma in self.used_tsmas:
|
||||||
|
if tsma.is_tsma_:
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
class TSMAQCBuilder:
|
||||||
|
def __init__(self) -> None:
|
||||||
|
self.qc_: TSMAQueryContext = TSMAQueryContext()
|
||||||
|
|
||||||
|
def get_qc(self) -> TSMAQueryContext:
|
||||||
|
return self.qc_
|
||||||
|
|
||||||
|
def with_sql(self, sql: str):
|
||||||
|
self.qc_.sql = sql
|
||||||
|
return self
|
||||||
|
|
||||||
|
def to_timestamp(self, ts: str) -> float:
|
||||||
|
if ts == UsedTsma.TS_MAX or ts == UsedTsma.TS_MIN:
|
||||||
|
return float(ts)
|
||||||
|
tdSql.query(
|
||||||
|
"select to_timestamp('%s', 'yyyy-mm-dd hh24-mi-ss.ms')" % (ts))
|
||||||
|
res = tdSql.queryResult[0][0]
|
||||||
|
return res.timestamp() * 1000
|
||||||
|
|
||||||
|
def md5(self, buf: str) -> str:
|
||||||
|
tdSql.query(f'select md5("{buf}")')
|
||||||
|
res = tdSql.queryResult[0][0]
|
||||||
|
return res
|
||||||
|
|
||||||
|
def should_query_with_table(self, tb_name: str, ts_begin: str = UsedTsma.TS_MIN, ts_end: str = UsedTsma.TS_MAX) -> 'TSMAQCBuilder':
|
||||||
|
used_tsma: UsedTsma = UsedTsma()
|
||||||
|
used_tsma.name = tb_name
|
||||||
|
used_tsma.time_range_start = self.to_timestamp(ts_begin)
|
||||||
|
used_tsma.time_range_end = self.to_timestamp(ts_end)
|
||||||
|
used_tsma.is_tsma_ = False
|
||||||
|
self.qc_.used_tsmas.append(used_tsma)
|
||||||
|
return self
|
||||||
|
|
||||||
|
def should_query_with_tsma_ctb(self, db_name: str, tsma_name: str, ctb_name: str, ts_begin: str = UsedTsma.TS_MIN, ts_end: str = UsedTsma.TS_MAX) -> 'TSMAQCBuilder':
|
||||||
|
used_tsma: UsedTsma = UsedTsma()
|
||||||
|
name = f'1.{db_name}.{tsma_name}_{ctb_name}'
|
||||||
|
used_tsma.name = self.md5(name)
|
||||||
|
used_tsma.time_range_start = self.to_timestamp(ts_begin)
|
||||||
|
used_tsma.time_range_end = self.to_timestamp(ts_end)
|
||||||
|
used_tsma.is_tsma_ = True
|
||||||
|
self.qc_.used_tsmas.append(used_tsma)
|
||||||
|
return self
|
||||||
|
|
||||||
|
def ignore_query_table(self):
|
||||||
|
self.qc_.ignore_tsma_check_ = True
|
||||||
|
return self
|
||||||
|
|
||||||
|
def ignore_res_order(self, ignore: bool):
|
||||||
|
self.qc_.ignore_res_order_ = ignore
|
||||||
|
return self
|
||||||
|
|
||||||
|
def should_query_with_tsma(self, tsma_name: str, ts_begin: str = UsedTsma.TS_MIN, ts_end: str = UsedTsma.TS_MAX, child_tb: bool = False) -> 'TSMAQCBuilder':
|
||||||
|
used_tsma: UsedTsma = UsedTsma()
|
||||||
|
if child_tb:
|
||||||
|
used_tsma.name = tsma_name
|
||||||
|
else:
|
||||||
|
used_tsma.name = tsma_name + UsedTsma.TSMA_RES_STB_POSTFIX
|
||||||
|
used_tsma.time_range_start = self.to_timestamp(ts_begin)
|
||||||
|
used_tsma.time_range_end = self.to_timestamp(ts_end)
|
||||||
|
used_tsma.is_tsma_ = True
|
||||||
|
self.qc_.used_tsmas.append(used_tsma)
|
||||||
|
return self
|
||||||
|
|
||||||
|
|
||||||
|
class TSMATester:
|
||||||
|
def __init__(self, tdSql: TDSql) -> None:
|
||||||
|
self.tsmas = []
|
||||||
|
self.tdSql: TDSql = tdSql
|
||||||
|
|
||||||
|
def explain_sql(self, sql: str):
|
||||||
|
tdSql.execute("alter local 'querySmaOptimize' '1'")
|
||||||
|
sql = "explain verbose true " + sql
|
||||||
|
tdSql.query(sql, queryTimes=1)
|
||||||
|
res = self.tdSql.queryResult
|
||||||
|
if self.tdSql.queryResult is None:
|
||||||
|
raise
|
||||||
|
return res
|
||||||
|
|
||||||
|
def get_tsma_query_ctx(self, sql: str):
|
||||||
|
explain_res = self.explain_sql(sql)
|
||||||
|
query_ctx: TSMAQueryContext = TSMAQueryContext()
|
||||||
|
query_ctx.sql = sql
|
||||||
|
query_ctx.used_tsmas = []
|
||||||
|
used_tsma: UsedTsma = UsedTsma()
|
||||||
|
for row in explain_res:
|
||||||
|
row = str(row)
|
||||||
|
if len(used_tsma.name) == 0:
|
||||||
|
idx = row.find("Table Scan on ")
|
||||||
|
if idx >= 0:
|
||||||
|
words = row[idx:].split(' ')
|
||||||
|
used_tsma.name = words[3]
|
||||||
|
used_tsma.setIsTsma()
|
||||||
|
else:
|
||||||
|
idx = row.find('Time Range:')
|
||||||
|
if idx >= 0:
|
||||||
|
row = row[idx:].split('[')[1]
|
||||||
|
row = row.split(']')[0]
|
||||||
|
words = row.split(',')
|
||||||
|
used_tsma.time_range_start = float(words[0].strip())
|
||||||
|
used_tsma.time_range_end = float(words[1].strip())
|
||||||
|
query_ctx.used_tsmas.append(used_tsma)
|
||||||
|
used_tsma = UsedTsma()
|
||||||
|
|
||||||
|
deduplicated_tsmas: list[UsedTsma] = []
|
||||||
|
if len(query_ctx.used_tsmas) > 0:
|
||||||
|
deduplicated_tsmas.append(query_ctx.used_tsmas[0])
|
||||||
|
for tsma in query_ctx.used_tsmas:
|
||||||
|
if tsma == deduplicated_tsmas[-1]:
|
||||||
|
continue
|
||||||
|
else:
|
||||||
|
deduplicated_tsmas.append(tsma)
|
||||||
|
query_ctx.used_tsmas = deduplicated_tsmas
|
||||||
|
|
||||||
|
return query_ctx
|
||||||
|
|
||||||
|
def check_explain(self, sql: str, expect: TSMAQueryContext) -> TSMAQueryContext:
|
||||||
|
query_ctx = self.get_tsma_query_ctx(sql)
|
||||||
|
if not query_ctx == expect:
|
||||||
|
tdLog.exit('check explain failed for sql: %s \nexpect: %s \nactual: %s' % (
|
||||||
|
sql, str(expect), str(query_ctx)))
|
||||||
|
elif expect.has_tsma():
|
||||||
|
tdLog.debug('check explain succeed for sql: %s \ntsma: %s' %
|
||||||
|
(sql, str(expect.used_tsmas)))
|
||||||
|
has_tsma = False
|
||||||
|
for tsma in query_ctx.used_tsmas:
|
||||||
|
has_tsma = has_tsma or tsma.is_tsma_
|
||||||
|
if not has_tsma and len(query_ctx.used_tsmas) > 1:
|
||||||
|
tdLog.exit(
|
||||||
|
f'explain err for sql: {sql}, has multi non tsmas, {query_ctx.used_tsmas}')
|
||||||
|
return query_ctx
|
||||||
|
|
||||||
|
def check_result(self, sql: str, skip_order: bool = False):
|
||||||
|
tdSql.execute("alter local 'querySmaOptimize' '1'")
|
||||||
|
tsma_res = tdSql.getResult(sql)
|
||||||
|
|
||||||
|
tdSql.execute("alter local 'querySmaOptimize' '0'")
|
||||||
|
no_tsma_res = tdSql.getResult(sql)
|
||||||
|
|
||||||
|
if no_tsma_res is None or tsma_res is None:
|
||||||
|
if no_tsma_res != tsma_res:
|
||||||
|
tdLog.exit("comparing tsma res for: %s got different rows of result: without tsma: %s, with tsma: %s" % (
|
||||||
|
sql, str(no_tsma_res), str(tsma_res)))
|
||||||
|
else:
|
||||||
|
return
|
||||||
|
|
||||||
|
if len(no_tsma_res) != len(tsma_res):
|
||||||
|
tdLog.exit("comparing tsma res for: %s got different rows of result: \nwithout tsma: %s\nwith tsma: %s" % (
|
||||||
|
sql, str(no_tsma_res), str(tsma_res)))
|
||||||
|
if skip_order:
|
||||||
|
try:
|
||||||
|
no_tsma_res.sort(
|
||||||
|
key=lambda x: [v is None for v in x] + list(x))
|
||||||
|
tsma_res.sort(key=lambda x: [v is None for v in x] + list(x))
|
||||||
|
except Exception as e:
|
||||||
|
tdLog.exit("comparing tsma res for: %s got different data: \nno tsma res: %s \n tsma res: %s err: %s" % (
|
||||||
|
sql, str(no_tsma_res), str(tsma_res), str(e)))
|
||||||
|
|
||||||
|
for row_no_tsma, row_tsma in zip(no_tsma_res, tsma_res):
|
||||||
|
if row_no_tsma != row_tsma:
|
||||||
|
tdLog.exit("comparing tsma res for: %s got different row data: no tsma row: %s, tsma row: %s \nno tsma res: %s \n tsma res: %s" % (
|
||||||
|
sql, str(row_no_tsma), str(row_tsma), str(no_tsma_res), str(tsma_res)))
|
||||||
|
tdLog.info('result check succeed for sql: %s. \n tsma-res: %s. \nno_tsma-res: %s' %
|
||||||
|
(sql, str(tsma_res), str(no_tsma_res)))
|
||||||
|
|
||||||
|
def check_sql(self, sql: str, expect: TSMAQueryContext):
|
||||||
|
tdLog.debug(f"start to check sql: {sql}")
|
||||||
|
actual_ctx = self.check_explain(sql, expect=expect)
|
||||||
|
tdLog.debug(f"ctx: {actual_ctx}")
|
||||||
|
if actual_ctx.has_tsma():
|
||||||
|
self.check_result(sql, expect.ignore_res_order_)
|
||||||
|
|
||||||
|
def check_sqls(self, sqls, expects):
|
||||||
|
for sql, query_ctx in zip(sqls, expects):
|
||||||
|
self.check_sql(sql, query_ctx)
|
||||||
|
|
||||||
|
|
||||||
|
class TSMATesterSQLGeneratorOptions:
|
||||||
|
def __init__(self) -> None:
|
||||||
|
self.ts_min: int = 1537146000000 - 1000 * 60 * 60
|
||||||
|
self.ts_max: int = 1537150999000 + 1000 * 60 * 60
|
||||||
|
self.times: int = 100
|
||||||
|
self.pk_col: str = 'ts'
|
||||||
|
self.column_prefix: str = 'c'
|
||||||
|
self.column_num: int = 9 # c1 - c10
|
||||||
|
self.tags_prefix: str = 't'
|
||||||
|
self.tag_num: int = 6 # t1 - t6
|
||||||
|
self.str_tag_idx: List = [2, 3]
|
||||||
|
self.child_table_name_prefix: str = 't'
|
||||||
|
self.child_table_num: int = 10 # t0 - t9
|
||||||
|
self.interval: bool = False
|
||||||
|
# 70% generating a partition by, 30% no partition by, same as group by
|
||||||
|
self.partition_by: bool = False
|
||||||
|
self.group_by: bool = False
|
||||||
|
# generating no ts range condition is also possible
|
||||||
|
self.where_ts_range: bool = False
|
||||||
|
self.where_tbname_func: bool = False
|
||||||
|
self.where_tag_func: bool = False
|
||||||
|
self.where_col_func: bool = False
|
||||||
|
self.slimit_max = 10
|
||||||
|
self.limit_max = 10
|
||||||
|
self.norm_tb = False
|
||||||
|
|
||||||
|
|
||||||
|
class TSMATesterSQLGeneratorRes:
|
||||||
|
def __init__(self):
|
||||||
|
self.has_where_ts_range: bool = False
|
||||||
|
self.has_interval: bool = False
|
||||||
|
self.partition_by: bool = False
|
||||||
|
self.group_by: bool = False
|
||||||
|
self.has_slimit: bool = False
|
||||||
|
self.has_limit: bool = False
|
||||||
|
self.has_user_order_by: bool = False
|
||||||
|
|
||||||
|
def can_ignore_res_order(self):
|
||||||
|
return not (self.has_limit and self.has_slimit)
|
||||||
|
|
||||||
|
|
||||||
|
class TSMATestSQLGenerator:
|
||||||
|
def __init__(self, opts: TSMATesterSQLGeneratorOptions = TSMATesterSQLGeneratorOptions()):
|
||||||
|
self.db_name_: str = ''
|
||||||
|
self.tb_name_: str = ''
|
||||||
|
self.ts_scan_range_: List[float] = [
|
||||||
|
float(UsedTsma.TS_MIN), float(UsedTsma.TS_MAX)]
|
||||||
|
self.agg_funcs_: List[str] = []
|
||||||
|
self.tsmas_: List[TSMA] = [] # currently created tsmas
|
||||||
|
self.opts_: TSMATesterSQLGeneratorOptions = opts
|
||||||
|
self.res_: TSMATesterSQLGeneratorRes = TSMATesterSQLGeneratorRes()
|
||||||
|
|
||||||
|
self.select_list_: List[str] = []
|
||||||
|
self.where_list_: List[str] = []
|
||||||
|
self.group_or_partition_by_list: List[str] = []
|
||||||
|
self.interval: str = ''
|
||||||
|
|
||||||
|
def get_depth_one_str_funcs(self, name: str) -> List[str]:
|
||||||
|
concat1 = f'CONCAT({name}, "_concat")'
|
||||||
|
concat2 = f'CONCAT({name}, {name})'
|
||||||
|
concat3 = f'CONCAT({name}, {name}, {name})'
|
||||||
|
start = random.randint(1, 3)
|
||||||
|
len = random.randint(0, 3)
|
||||||
|
substr = f'SUBSTR({name}, {start}, {len})'
|
||||||
|
lower = f'LOWER({name})'
|
||||||
|
ltrim = f'LTRIM({name})'
|
||||||
|
return [concat1, concat2, concat3, substr, substr, lower, lower, ltrim, name]
|
||||||
|
|
||||||
|
    def generate_depthed_str_func(self, name: str, depth: int) -> str:
        if depth == 1:
            return random.choice(self.get_depth_one_str_funcs(name))
        name = self.generate_depthed_str_func(name, depth - 1)
        return random.choice(self.get_depth_one_str_funcs(name))

    def generate_str_func(self, column_name: str, depth: int = 0) -> str:
        if depth == 0:
            depth = random.randint(1, 3)

        ret = self.generate_depthed_str_func(column_name, depth)
        tdLog.debug(f'generating str func: {ret}')
        return ret

    def get_random_type(self, funcs):
        # pick one of the type-generator funcs uniformly and call it
        return random.choice(funcs)()

    def generate_select_list(self, user_select_list: str, partition_by_list: str):
        res = user_select_list
        if self.res_.has_interval and random.random() < 0.8:
            res = res + ',_wstart, _wend'
        if (self.res_.partition_by or self.res_.group_by) and random.random() < 0.8:
            res = res + f',{partition_by_list}'
        return res

    def generate_order_by(self, user_order_by: str, partition_by_list: str):
        auto_order_by = 'ORDER BY'
        has_limit = self.res_.has_limit or self.res_.has_slimit
        if has_limit and (self.res_.group_by or self.res_.partition_by):
            auto_order_by = f'{auto_order_by} {partition_by_list},'
        if has_limit and self.res_.has_interval:
            auto_order_by = f'{auto_order_by} _wstart, _wend,'
        if len(user_order_by) > 0:
            self.res_.has_user_order_by = True
            auto_order_by = f'{auto_order_by} {user_order_by},'
        if auto_order_by == 'ORDER BY':
            return ''
        else:
            return auto_order_by[:-1]

    def generate_one(self, select_list: str, possible_tbs: List, order_by_list: str, interval_list: List[str] = []) -> str:
        tb = random.choice(possible_tbs)
        where = self.generate_where()
        interval = self.generate_interval(interval_list)
        (partition_by, partition_by_list) = self.generate_partition_by()
        limit = self.generate_limit()
        auto_select_list = self.generate_select_list(
            select_list, partition_by_list)
        order_by = self.generate_order_by(order_by_list, partition_by_list)
        sql = f"SELECT {auto_select_list} FROM {tb} {where} {partition_by} {partition_by_list} {interval} {order_by} {limit}"
        tdLog.debug(sql)
        return sql

    def can_ignore_res_order(self):
        return self.res_.can_ignore_res_order()

    def generate_where(self) -> str:
        v = random.random()
        where = ''
        if not self.opts_.norm_tb:
            if v < 0.2:
                where = f'{self.generate_tbname_where()}'
            elif v < 0.5:
                where = f'{self.generate_tag_where()}'
            elif v < 0.7:
                op = random.choice(['AND', 'OR'])
                where = f'{self.generate_tbname_where()} {op} {self.generate_tag_where()}'
        ts_where = self.generate_ts_where_range()
        if len(ts_where) > 0 or len(where) > 0:
            op = ''
            if len(where) > 0 and len(ts_where) > 0:
                op = random.choice(['AND', 'AND', 'AND', 'AND', 'OR'])
            return f'WHERE {ts_where} {op} {where}'
        return ''

    def generate_str_equal_operator(self, column_name: str, opts: List) -> str:
        opt = random.choice(opts)
        return f'{column_name} = "{opt}"'

    # TODO support it
    def generate_str_in_operator(self, column_name: str, opts: List) -> str:
        # quote each option separately so the IN list has one element per option
        in_list = ', '.join(f'"{opt}"' for opt in opts)
        return f'{column_name} in ({in_list})'

    def generate_str_like_operator(self, column_name: str, opts: List) -> str:
        opt = random.choice(opts)
        return f'{column_name} like "{opt}"'

    def generate_tbname_where(self) -> str:
        tbs = []
        for idx in range(1, self.opts_.tag_num + 1):
            tbs.append(f'{self.opts_.child_table_name_prefix}{idx}')

        if random.random() < 0.5:
            return self.generate_str_equal_operator('tbname', tbs)
        else:
            return self.generate_str_like_operator('tbname', ['t%', '%2'])

    def generate_tag_where(self) -> str:
        idx = random.randrange(1, self.opts_.tag_num + 1)
        if random.random() < 0.5 and idx in self.opts_.str_tag_idx:
            if random.random() < 0.5:
                return self.generate_str_equal_operator(f'{self.opts_.tags_prefix}{idx}', [f'tb{random.randint(1,100)}'])
            else:
                return self.generate_str_like_operator(f'{self.opts_.tags_prefix}{idx}', ['%1', 'tb%', 'tb1%', '%1%'])
        else:
            operator = random.choice(['>', '>=', '<', '<=', '=', '!='])
            val = random.randint(1, 100)
            return f'{self.opts_.tags_prefix}{idx} {operator} {val}'

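    # Pick a random ts in [min_ts, max_ts], then randomly round it down to a
    # second/minute/hour boundary (clamped to min_ts), which makes it more
    # likely that generated scan ranges coincide with whole tsma windows.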
    def generate_timestamp(self, min_ts: float = -1, max_ts: float = 0) -> int:
        milliseconds_aligned: float = random.randint(int(min_ts), int(max_ts))
        seconds_aligned = int(milliseconds_aligned / 1000) * 1000
        if seconds_aligned < min_ts:
            seconds_aligned = int(min_ts)
        minutes_aligned = int(milliseconds_aligned / 1000 / 60) * 1000 * 60
        if minutes_aligned < min_ts:
            minutes_aligned = int(min_ts)
        hour_aligned = int(milliseconds_aligned / 1000 /
                           60 / 60) * 1000 * 60 * 60
        if hour_aligned < min_ts:
            hour_aligned = int(min_ts)

        return random.choice([milliseconds_aligned, seconds_aligned, seconds_aligned, minutes_aligned, minutes_aligned, hour_aligned, hour_aligned])

    def generate_ts_where_range(self):
        if not self.opts_.where_ts_range:
            return ''
        left_operators = ['>', '>=', '']
        right_operators = ['<', '<=', '']
        left_operator = left_operators[random.randrange(0, 3)]
        right_operator = right_operators[random.randrange(0, 3)]
        a = ''
        left_value = None
        if left_operator:
            left_value = self.generate_timestamp(
                self.opts_.ts_min, self.opts_.ts_max)
            a += f'{self.opts_.pk_col} {left_operator} {left_value}'
        if right_operator:
            if left_value:
                start = left_value
            else:
                start = self.opts_.ts_min
            right_value = self.generate_timestamp(start, self.opts_.ts_max)
            if left_operator:
                a += ' AND '
            a += f'{self.opts_.pk_col} {right_operator} {right_value}'
        # tdLog.debug(f'{self.opts_.pk_col} range with: {a}')
        if len(a) > 0:
            self.res_.has_where_ts_range = True
        return a

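    # SLIMIT is only meaningful together with PARTITION BY/GROUP BY; each of
    # SLIMIT and LIMIT is then added with 40% probability.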
    def generate_limit(self) -> str:
        ret = ''
        can_have_slimit = self.res_.partition_by or self.res_.group_by
        if can_have_slimit:
            if random.random() < 0.4:
                ret = f'SLIMIT {random.randint(0, self.opts_.slimit_max)}'
                self.res_.has_slimit = True
        if random.random() < 0.4:
            self.res_.has_limit = True
            ret = ret + f' LIMIT {random.randint(0, self.opts_.limit_max)}'
        return ret

    ## if offset is True, offset cannot be the same as interval
    def generate_random_offset_sliding(self, interval: str, offset: bool = False) -> str:
        unit = interval[-1]
        hasUnit = unit.isalpha()
        if not hasUnit:
            start = 1
            if offset:
                start = 2
            ret: int = int(int(interval) / random.randint(start, 5))
            return str(ret)
        return ''

    # add sliding offset
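    # NOTE: has_offset/has_sliding are hard-coded to False below, so OFFSET and
    # SLIDING clauses are never actually generated yet.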
    def generate_interval(self, intervals: List[str]) -> str:
        if not self.opts_.interval:
            return ''
        if random.random() < 0.4:  # no interval
            return ''
        value = random.choice(intervals)
        self.res_.has_interval = True
        has_offset = False
        offset = ''
        has_sliding = False
        sliding = ''
        num: int = int(value[:-1])
        unit = value[-1]
        if has_offset and num > 1:
            offset = f', {self.generate_random_offset_sliding(value, True)}'
        if has_sliding:
            sliding = f'sliding({self.generate_random_offset_sliding(value)})'
        return f'INTERVAL({value} {offset}) {sliding}'

    def generate_tag_list(self):
        used_tag_num = random.randrange(1, self.opts_.tag_num + 1)
        ret = ''
        for _ in range(used_tag_num):
            tag_idx = random.randint(1, self.opts_.tag_num)
            tag_name = self.opts_.tags_prefix + f'{tag_idx}'
            if random.random() < 0.5 and tag_idx in self.opts_.str_tag_idx:
                tag_func = self.generate_str_func(tag_name, 2)
            else:
                tag_func = tag_name
            ret = ret + f'{tag_func},'
        return ret[:-1]

    def generate_tbname_tag_list(self):
        tag_num = random.randrange(1, self.opts_.tag_num)
        ret = ''
        tbname_idx = random.randint(0, tag_num + 1)
        for i in range(tag_num + 1):
            if i == tbname_idx:
                ret = ret + 'tbname,'
            else:
                tag_idx = random.randint(1, self.opts_.tag_num)
                ret = ret + self.opts_.tags_prefix + f'{tag_idx},'
        return ret[:-1]

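    # 30% of the time neither clause is generated; otherwise pick tbname (40%),
    # a tag list (40%) or tbname mixed with tags (20%), and emit PARTITION BY
    # rather than GROUP BY whenever an INTERVAL is present.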
    def generate_partition_by(self):
        if not self.opts_.partition_by and not self.opts_.group_by:
            return ('', '')
        # no partition or group
        if random.random() < 0.3:
            return ('', '')
        ret = ''
        rand = random.random()
        if rand < 0.4:
            if random.random() < 0.5:
                ret = self.generate_str_func('tbname', 3)
            else:
                ret = 'tbname'
        elif rand < 0.8:
            ret = self.generate_tag_list()
        else:
            # tbname and tag
            ret = self.generate_tbname_tag_list()
        # tdLog.debug(f'partition by: {ret}')
        if self.res_.has_interval or random.random() < 0.5:
            self.res_.partition_by = True
            return ('PARTITION BY', f'{ret}')
        else:
            self.res_.group_by = True
            return ('GROUP BY', f'{ret}')

    def generate_where_tbname(self) -> str:
        return self.generate_str_func('tbname')

    def generate_where_tag(self) -> str:
        # tag_idx = random.randint(1, self.opts_.tag_num)
        # tag = self.opts_.tags_prefix + str(tag_idx)
        return self.generate_str_func('t3')

    def generate_where_conditions(self) -> str:
        pass

    # generate func in tsmas(select list)
    def _generate_agg_func_for_select(self) -> str:
        pass

    # order by, limit, having, subquery...

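# A minimal sketch of how the generator above is driven (mirrors the real call
# site in test_bigger_tsma_interval_query below):
#
#     opts = TSMATesterSQLGeneratorOptions()
#     opts.interval = True
#     opts.where_ts_range = True
#     gen = TSMATestSQLGenerator(opts)
#     sql = gen.generate_one('avg(c1)', ['db.meters', 'db.t1'], '', ['1h', '1d'])
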
class TDTestCase:
    updatecfgDict = {'asynclog': 0, 'ttlUnit': 1, 'ttlPushInterval': 5, 'ratioOfVnodeStreamThrea': 4, 'maxTsmaNum': 3}

    def __init__(self):
        self.vgroups = 4
        self.ctbNum = 10
        self.rowsPerTbl = 10000
        self.duration = '1h'

    def init(self, conn, logSql, replicaVar=1):
        self.replicaVar = int(replicaVar)
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor(), False)
        self.tsma_tester: TSMATester = TSMATester(tdSql)
        self.tsma_sql_generator: TSMATestSQLGenerator = TSMATestSQLGenerator()

    def create_database(self, tsql, dbName, dropFlag=1, vgroups=2, replica=1, duration: str = '1d'):
        if dropFlag == 1:
            tsql.execute("drop database if exists %s" % (dbName))

        tsql.execute("create database if not exists %s vgroups %d replica %d duration %s" % (
            dbName, vgroups, replica, duration))
        tdLog.debug("complete to create database %s" % (dbName))
        return

    def create_stable(self, tsql, paraDict):
        colString = tdCom.gen_column_type_str(
            colname_prefix=paraDict["colPrefix"], column_elm_list=paraDict["colSchema"])
        tagString = tdCom.gen_tag_type_str(
            tagname_prefix=paraDict["tagPrefix"], tag_elm_list=paraDict["tagSchema"])
        sqlString = "create table if not exists %s.%s (%s) tags (%s)" % (
            paraDict["dbName"], paraDict["stbName"], colString, tagString)
        tdLog.debug("%s" % (sqlString))
        tsql.execute(sqlString)
        return

    def create_ctable(self, tsql=None, dbName='dbx', stbName='stb', ctbPrefix='ctb', ctbNum=1, ctbStartIdx=0):
        for i in range(ctbNum):
            sqlString = "create table %s.%s%d using %s.%s tags(%d, 'tb%d', 'tb%d', %d, %d, %d)" % (dbName, ctbPrefix, i+ctbStartIdx, dbName, stbName, (i+ctbStartIdx) % 5, i+ctbStartIdx + random.randint(
                1, 100), i+ctbStartIdx + random.randint(1, 100), i+ctbStartIdx + random.randint(1, 100), i+ctbStartIdx + random.randint(1, 100), i+ctbStartIdx + random.randint(1, 100))
            tsql.execute(sqlString)

        tdLog.debug("complete to create %d child tables by %s.%s" %
                    (ctbNum, dbName, stbName))
        return

    def init_normal_tb(self, tsql, db_name: str, tb_name: str, rows: int, start_ts: int, ts_step: int):
        sql = 'CREATE TABLE %s.%s (ts timestamp, c1 INT, c2 INT, c3 INT, c4 double, c5 VARCHAR(255))' % (
            db_name, tb_name)
        tsql.execute(sql)
        sql = 'INSERT INTO %s.%s values' % (db_name, tb_name)
        for j in range(rows):
            sql += f'(%d, %d,%d,%d,{random.random()},"varchar_%d"),' % (start_ts + j * ts_step + randrange(500), j %
                                                                        10 + randrange(200), j % 10, j % 10, j % 10 + randrange(100))
        tsql.execute(sql)

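    # The first half of the child tables get fully populated rows; the second
    # half get NULLs in c2/c4 so aggregate functions are exercised against NULLs.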
    def insert_data(self, tsql, dbName, ctbPrefix, ctbNum, rowsPerTbl, batchNum, startTs, tsStep):
        tdLog.debug("start to insert data ............")
        tsql.execute("use %s" % dbName)
        pre_insert = "insert into "
        sql = pre_insert

        for i in range(ctbNum):
            rowsBatched = 0
            sql += " %s.%s%d values " % (dbName, ctbPrefix, i)
            for j in range(rowsPerTbl):
                if (i < ctbNum/2):
                    sql += "(%d, %d, %d, %d,%d,%d,%d,true,'binary%d', 'nchar%d') " % (startTs + j*tsStep + randrange(
                        500), j % 10 + randrange(100), j % 10 + randrange(200), j % 10, j % 10, j % 10, j % 10, j % 10, j % 10)
                else:
                    sql += "(%d, %d, NULL, %d,NULL,%d,%d,true,'binary%d', 'nchar%d') " % (
                        startTs + j*tsStep + randrange(500), j % 10, j % 10, j % 10, j % 10, j % 10, j % 10)
                rowsBatched += 1
                if ((rowsBatched == batchNum) or (j == rowsPerTbl - 1)):
                    tsql.execute(sql)
                    rowsBatched = 0
                    if j < rowsPerTbl - 1:
                        sql = "insert into %s.%s%d values " % (dbName, ctbPrefix, i)
                    else:
                        sql = "insert into "
        if sql != pre_insert:
            tsql.execute(sql)
        tdLog.debug("insert data ............ [OK]")
        return

    def init_data(self, db: str = 'test', ctb_num: int = 10, rows_per_ctb: int = 10000, start_ts: int = 1537146000000, ts_step: int = 500):
        tdLog.printNoPrefix(
            "======== prepare test env include database, stable, ctables, and insert data: ")
        paraDict = {'dbName': db,
                    'dropFlag': 1,
                    'vgroups': 2,
                    'stbName': 'meters',
                    'colPrefix': 'c',
                    'tagPrefix': 't',
                    'colSchema': [{'type': 'INT', 'count': 1}, {'type': 'BIGINT', 'count': 1}, {'type': 'FLOAT', 'count': 1}, {'type': 'DOUBLE', 'count': 1}, {'type': 'smallint', 'count': 1}, {'type': 'tinyint', 'count': 1}, {'type': 'bool', 'count': 1}, {'type': 'binary', 'len': 10, 'count': 1}, {'type': 'nchar', 'len': 10, 'count': 1}],
                    'tagSchema': [{'type': 'INT', 'count': 1}, {'type': 'nchar', 'len': 20, 'count': 1}, {'type': 'binary', 'len': 20, 'count': 1}, {'type': 'BIGINT', 'count': 1}, {'type': 'smallint', 'count': 1}, {'type': 'DOUBLE', 'count': 1}],
                    'ctbPrefix': 't',
                    'ctbStartIdx': 0,
                    'ctbNum': ctb_num,
                    'rowsPerTbl': rows_per_ctb,
                    'batchNum': 3000,
                    'startTs': start_ts,
                    'tsStep': ts_step}

        paraDict['vgroups'] = self.vgroups
        paraDict['ctbNum'] = ctb_num
        paraDict['rowsPerTbl'] = rows_per_ctb

        tdLog.info("create database")
        self.create_database(tsql=tdSql, dbName=paraDict["dbName"], dropFlag=paraDict["dropFlag"],
                             vgroups=paraDict["vgroups"], replica=self.replicaVar, duration=self.duration)

        tdLog.info("create stb")
        self.create_stable(tsql=tdSql, paraDict=paraDict)

        tdLog.info("create child tables")
        self.create_ctable(tsql=tdSql, dbName=paraDict["dbName"],
                           stbName=paraDict["stbName"], ctbPrefix=paraDict["ctbPrefix"],
                           ctbNum=paraDict["ctbNum"], ctbStartIdx=paraDict["ctbStartIdx"])
        self.insert_data(tsql=tdSql, dbName=paraDict["dbName"],
                         ctbPrefix=paraDict["ctbPrefix"], ctbNum=paraDict["ctbNum"],
                         rowsPerTbl=paraDict["rowsPerTbl"], batchNum=paraDict["batchNum"],
                         startTs=paraDict["startTs"], tsStep=paraDict["tsStep"])
        self.init_normal_tb(tdSql, paraDict['dbName'], 'norm_tb',
                            paraDict['rowsPerTbl'], paraDict['startTs'], paraDict['tsStep'])

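    # Poll EXPLAIN until the optimizer starts answering this query from the new
    # tsma: either the tsma result stable name matches directly, or (for child
    # tables) the 32-char md5 of '1.<db>.<tsma>_<tb>' matches.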
    def wait_for_tsma_calculation(self, func_list: list, db: str, tb: str, interval: str, tsma_name: str, timeout_seconds: int = 600):
        start_time = time.time()
        while True:
            current_time = time.time()
            if current_time - start_time > timeout_seconds:
                error_message = "Timeout occurred while waiting for TSMA calculation to complete."
                tdLog.exit(error_message)
            sql = 'select %s from %s.%s interval(%s)' % (
                ', '.join(func_list), db, tb, interval)
            tdLog.debug(
                f'waiting for tsma {db}.{tsma_name} to be useful with sql {sql}')
            ctx: TSMAQueryContext = self.tsma_tester.get_tsma_query_ctx(sql)
            if ctx.has_tsma():
                if ctx.used_tsmas[0].name == tsma_name + UsedTsma.TSMA_RES_STB_POSTFIX:
                    break
                elif len(ctx.used_tsmas[0].name) == 32:
                    name = f'1.{db}.{tsma_name}_{tb}'
                    if ctx.used_tsmas[0].name == TSMAQCBuilder().md5(name):
                        break
                    else:
                        time.sleep(1)
                else:
                    time.sleep(1)
            else:
                time.sleep(1)
        time.sleep(1)

    def create_tsma(self, tsma_name: str, db: str, tb: str, func_list: list, interval: str, check_tsma_calculation: bool = True):
        tdSql.execute('use %s' % db)
        sql = "CREATE TSMA %s ON %s.%s FUNCTION(%s) INTERVAL(%s)" % (
            tsma_name, db, tb, ','.join(func_list), interval)
        tdSql.execute(sql, queryTimes=1)
        if check_tsma_calculation:
            self.wait_for_tsma_calculation(func_list, db, tb, interval, tsma_name)

    def create_error_tsma(self, tsma_name: str, db: str, tb: str, func_list: list, interval: str, expectedErrno: int):
        tdSql.execute('use %s' % db)
        sql = "CREATE TSMA %s ON %s.%s FUNCTION(%s) INTERVAL(%s)" % (
            tsma_name, db, tb, ','.join(func_list), interval)
        tdSql.error(sql, expectedErrno)

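    # A recursive tsma is built on top of an existing tsma rather than the raw
    # table; after creation we wait until its results are queryable.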
    def create_recursive_tsma(self, base_tsma_name: str, new_tsma_name: str, db: str, interval: str, tb_name: str, func_list: List[str] = ['avg(c1)']):
        tdSql.execute('use %s' % db, queryTimes=1)
        sql = 'CREATE RECURSIVE TSMA %s ON %s.%s INTERVAL(%s)' % (
            new_tsma_name, db, base_tsma_name, interval)
        tdSql.execute(sql, queryTimes=1)
        self.wait_for_tsma_calculation(
            func_list, db, tb_name, interval, new_tsma_name)

    def drop_tsma(self, tsma_name: str, db: str):
        sql = 'DROP TSMA %s.%s' % (db, tsma_name)
        tdSql.execute(sql, queryTimes=1)

    def check_explain_res_has_row(self, plan_str_expect: str, explain_output):
        plan_found = False
        for row in explain_output:
            if str(row).find(plan_str_expect) >= 0:
                tdLog.debug("plan: [%s] found in: [%s]" %
                            (plan_str_expect, str(row)))
                plan_found = True
                break
        if not plan_found:
            tdLog.exit("plan: %s not found in res: [%s]" % (
                plan_str_expect, str(explain_output)))

    def check(self, ctxs: List):
        for ctx in ctxs:
            self.tsma_tester.check_sql(ctx.sql, ctx)

    def run(self):
        self.test_bigger_tsma_interval()

    def test_create_recursive_tsma_interval(self, db: str, tb: str, func, interval: str, recursive_interval: str, succ: bool, code: int):
        self.create_tsma('tsma1', db, tb, func, interval)
        sql = f'CREATE RECURSIVE TSMA tsma2 ON {db}.tsma1 INTERVAL({recursive_interval})'
        if not succ:
            tdSql.error(sql, code)
        else:
            self.create_recursive_tsma('tsma1', 'tsma2', db, recursive_interval, tb, func)
            self.drop_tsma('tsma2', db)
        self.drop_tsma('tsma1', db)

    def test_bigger_tsma_interval_query(self, func_list: List):
        ## 3 tsmas, 12h, 1n, 1y
        ctxs = []
        interval_list = ['2h', '8h', '1d', '1n', '3n', '1w', '1y', '2y']
        opts: TSMATesterSQLGeneratorOptions = TSMATesterSQLGeneratorOptions()
        opts.interval = True
        opts.where_ts_range = True
        for _ in range(1, ROUND):
            opts.partition_by = True
            opts.group_by = True
            opts.norm_tb = False
            sql_generator = TSMATestSQLGenerator(opts)
            sql = sql_generator.generate_one(
                ','.join(func_list), ['db.meters', 'db.meters', 'db.t1', 'db.t9'], '', interval_list)
            ctxs.append(TSMAQCBuilder().with_sql(sql).ignore_query_table(
            ).ignore_res_order(sql_generator.can_ignore_res_order()).get_qc())
        return ctxs

    def test_bigger_tsma_interval(self):
        db = 'db'
        tb = 'meters'
        func = ['max(c1)', 'min(c1)', 'min(c2)', 'max(c2)', 'avg(c1)', 'count(ts)']
        self.init_data(db, 10, 10000, 1500000000000, 11000000)
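        # Each tuple is (tsma interval, recursive tsma interval, expected to
        # succeed); the failing cases are checked against errno -2147471099.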
        examples = [
            ('10m', '1h', True), ('10m', '1d', True), ('1m', '120s', True), ('1h', '1d', True),
            ('12h', '1y', False), ('1h', '1n', True), ('1h', '1y', True),
            ('12n', '1y', False), ('2d', '1n', False), ('55m', '55h', False), ('7m', '7d', False),
        ]
        tdSql.execute('use db')
        for (i, ri, ret) in examples:
            self.test_create_recursive_tsma_interval(db, tb, func, i, ri, ret, -2147471099)

        self.create_tsma('tsma1', db, tb, func, '1h')
        self.create_recursive_tsma('tsma1', 'tsma2', db, '1n', tb, func)
        self.create_recursive_tsma('tsma2', 'tsma3', db, '1y', tb, func)
        self.check(self.test_bigger_tsma_interval_query(func))
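        # The contexts below pin down how the optimizer splits a scan: any part
        # of the time range not covered by a whole tsma window falls back to the
        # raw table (or a finer tsma); the rest is served from tsma1/tsma2/tsma3.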
        ctxs = []
        ctxs.append(TSMAQCBuilder().with_sql('SELECT max(c1) FROM db.meters').should_query_with_tsma('tsma3').get_qc())
        ctxs.append(TSMAQCBuilder()
                    .with_sql('SELECT max(c1) FROM db.meters WHERE ts > "2024-09-03 18:40:00.324"')
                    .should_query_with_table('meters', '2024-09-03 18:40:00.325', '2024-12-31 23:59:59.999')
                    .should_query_with_tsma('tsma3', '2025-01-01 00:00:00.000', UsedTsma.TS_MAX)
                    .get_qc())

        ctxs.append(TSMAQCBuilder()
                    .with_sql('SELECT max(c1) FROM db.meters WHERE ts >= "2024-09-03 18:00:00.000"')
                    .should_query_with_tsma('tsma1', '2024-09-03 18:00:00.000', '2024-12-31 23:59:59.999')
                    .should_query_with_tsma('tsma3', '2025-01-01 00:00:00.000', UsedTsma.TS_MAX)
                    .get_qc())

        ctxs.append(TSMAQCBuilder()
                    .with_sql('SELECT max(c1) FROM db.meters WHERE ts >= "2024-09-01 00:00:00.000"')
                    .should_query_with_tsma('tsma2', '2024-09-01 00:00:00.000', '2024-12-31 23:59:59.999')
                    .should_query_with_tsma('tsma3', '2025-01-01 00:00:00.000', UsedTsma.TS_MAX)
                    .get_qc())

        ctxs.append(TSMAQCBuilder()
                    .with_sql("SELECT max(c1) FROM db.meters INTERVAL(12n)")
                    .should_query_with_tsma('tsma3')
                    .get_qc())

        ctxs.append(TSMAQCBuilder()
                    .with_sql("SELECT max(c1) FROM db.meters INTERVAL(13n)")
                    .should_query_with_tsma('tsma2')
                    .get_qc())

        ctxs.append(TSMAQCBuilder()
                    .with_sql("SELECT max(c1),min(c1),min(c2),max(c2),avg(c1),count(ts) FROM db.t9 WHERE ts > '2018-09-17 08:16:00'")
                    .should_query_with_table('t9', '2018-09-17 08:16:00.001', '2018-12-31 23:59:59.999')
                    .should_query_with_tsma_ctb('db', 'tsma3', 't9', '2019-01-01')
                    .get_qc())

        ctxs.append(TSMAQCBuilder()
                    .with_sql("SELECT max(c1), _wstart FROM db.meters WHERE ts >= '2024-09-03 18:40:00.324' INTERVAL(1d)")
                    .should_query_with_table('meters', '2024-09-03 18:40:00.324', '2024-09-03 23:59:59.999')
                    .should_query_with_tsma('tsma1', '2024-09-04 00:00:00.000')
                    .get_qc())

        ctxs.append(TSMAQCBuilder()
                    .with_sql("SELECT max(c1), _wstart FROM db.meters WHERE ts >= '2024-09-03 18:40:00.324' INTERVAL(1n)")
                    .should_query_with_table('meters', '2024-09-03 18:40:00.324', '2024-09-30 23:59:59.999')
                    .should_query_with_tsma('tsma2', '2024-10-01 00:00:00.000')
                    .get_qc())

        self.check(ctxs)
        tdSql.execute('drop database db')

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

@@ -134,6 +134,8 @@ class TDTestCase:
                 if offset_value != "earliest" and offset_value != "":
                     if offset_value == "latest":
                         offset_value_list = list(map(lambda x: (x[-2].replace("wal:", "").replace("earliest", "0").replace("latest", "0").replace(offset_value, "0")), subscription_info))
+                        if None in offset_value_list:
+                            continue
                         offset_value_list1 = list(map(lambda x: int(x.split("/")[0]), offset_value_list))
                         offset_value_list2 = list(map(lambda x: int(x.split("/")[1]), offset_value_list))
                         tdSql.checkEqual(offset_value_list1 == offset_value_list2, True)
@@ -142,6 +144,8 @@ class TDTestCase:
                         tdSql.checkEqual(sum(rows_value_list), expected_res)
                     elif offset_value == "none":
                         offset_value_list = list(map(lambda x: x[-2], subscription_info))
+                        if None in offset_value_list:
+                            continue
                         offset_value_list1 = list(map(lambda x: (x.split("/")[0]), offset_value_list))
                         tdSql.checkEqual(offset_value_list1, ['none']*len(subscription_info))
                         rows_value_list = list(map(lambda x: x[-1], subscription_info))
@@ -155,6 +159,8 @@ class TDTestCase:
                         # tdSql.checkEqual(sum(rows_value_list), expected_res)
                     else:
                         offset_value_list = list(map(lambda x: x[-2], subscription_info))
+                        if None in offset_value_list:
+                            continue
                         offset_value_list1 = list(map(lambda x: (x.split("/")[0]), offset_value_list))
                         tdSql.checkEqual(offset_value_list1, [None]*len(subscription_info))
                         rows_value_list = list(map(lambda x: x[-1], subscription_info))
@@ -162,6 +168,8 @@ class TDTestCase:
                 else:
                     if offset_value != "none":
                         offset_value_list = list(map(lambda x: (x[-2].replace("wal:", "").replace("earliest", "0").replace("latest", "0").replace(offset_value, "0")), subscription_info))
+                        if None in offset_value_list:
+                            continue
                         offset_value_list1 = list(map(lambda x: int(x.split("/")[0]), offset_value_list))
                         offset_value_list2 = list(map(lambda x: int(x.split("/")[1]), offset_value_list))
                         tdSql.checkEqual(offset_value_list1 <= offset_value_list2, True)
@@ -170,6 +178,8 @@ class TDTestCase:
                         tdSql.checkEqual(sum(rows_value_list), expected_res)
                     else:
                         offset_value_list = list(map(lambda x: x[-2], subscription_info))
+                        if None in offset_value_list:
+                            continue
                         offset_value_list1 = list(map(lambda x: (x.split("/")[0]), offset_value_list))
                         tdSql.checkEqual(offset_value_list1, ['none']*len(subscription_info))
                         rows_value_list = list(map(lambda x: x[-1], subscription_info))