fix: [TS-4592] merge conflicts
|
@ -378,7 +378,7 @@ The configuration parameters in properties are as follows.
|
|||
- TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION: Whether to disable SSL certificate validation. It only takes effect when using WebSocket connections. true: the option is enabled (certificate validation is skipped); false: the option is disabled. The default is false.
|
||||
|
||||
|
||||
For JDBC native connections, you can specify other parameters, such as log level, SQL length, etc., by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](../../reference/config/#configuration-file-on-client-side).
|
||||
For JDBC native connections, you can specify other parameters, such as log level, SQL length, etc., by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](../../reference/config/).
|
||||
|
||||
### Priority of configuration parameters
|
||||
|
||||
|
|
|
@ -1209,27 +1209,40 @@ ignore_negative: {
|
|||
### DIFF
|
||||
|
||||
```sql
|
||||
DIFF(expr [, ignore_negative])
|
||||
DIFF(expr [, ignore_option])
|
||||
|
||||
ignore_negative: {
|
||||
ignore_option: {
|
||||
0
|
||||
| 1
|
||||
| 2
|
||||
| 3
|
||||
}
|
||||
```
|
||||
|
||||
**Description**: The different of each row with its previous row for a specific column. `ignore_negative` can be specified as 0 or 1, the default value is 1 if it's not specified. `1` means negative values are ignored. For tables with composite primary key, the data with the smallest primary key value is used to calculate the difference.
|
||||
**Description**: The difference between each row and its previous row for a specific column. `ignore_option` takes a value of 0, 1, 2, or 3; the default is 0 if it's not specified.
|
||||
- `0` means that negative values (diff results) are not ignored and null values are not ignored
|
||||
- `1` means that negative values (diff results) are treated as null values
|
||||
- `2` means that negative values (diff results) are not ignored but null values are ignored
|
||||
- `3` means that negative values (diff results) are ignored and null values are ignored
|
||||
- For tables with composite primary key, the data with the smallest primary key value is used to calculate the difference.
|
||||
|
||||
**Return value type**:Same as the data type of the column being operated upon
|
||||
**Return value type**: `bool`, `timestamp`, and integer types all return `int64`; `float` types return `double`. If the diff result overflows, the overflowed value is returned.
|
||||
|
||||
**Applicable data types**: Numeric
|
||||
**Applicable data types**: Numeric types, timestamp, and bool.
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**More explanation**:
|
||||
|
||||
- The number of result rows is the total number of rows minus one; there is no output for the first row
|
||||
- It can be used together with a selected column. For example: select \_rowts, DIFF() from.
|
||||
|
||||
- diff calculates the difference between a specific column in the current row and the **first valid data before the row**. The **first valid data before the row** refers to the nearest non-null value of the same column with a smaller timestamp.
|
||||
- The diff result of numeric types is the corresponding arithmetic difference; the timestamp difference is calculated based on the timestamp precision of the database; when calculating diff, `true` is treated as 1 and `false` is treated as 0
|
||||
- If the data of the current row is NULL, or the **first valid data before the current row** cannot be found, the diff result is NULL
|
||||
- When negative values are ignored (`ignore_option` set to 1 or 3), if the diff result is negative, the result is set to null and then filtered according to the null value filtering rule
|
||||
- When the diff result overflows, whether the value is ignored as a negative value depends on whether the logical result of the operation is positive or negative. For example, the value of 9223372036854775800 - (-9223372036854775806) exceeds the range of BIGINT, so the diff result displays the overflow value -10, but it is not ignored as a negative value
|
||||
- A single statement can contain one or more diffs, and each diff can specify the same or a different `ignore_option`. When a statement contains multiple diffs, a row is removed from the result set if and only if every diff result in that row is NULL and every diff's `ignore_option` is set to ignore NULL values
|
||||
- It can be used together with associated selected columns. For example: `select _rowts, DIFF()`
|
||||
- When there is no composite primary key, if different subtables have rows with the same timestamp, the error "Duplicate timestamps not allowed" is reported
|
||||
- When a composite primary key is used, the same combination of timestamp and primary key may exist across subtables; which row is used depends on which row is found first, which means the result of running diff() multiple times may differ in such a case
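A minimal sketch of how `ignore_option` changes the result, assuming a hypothetical subtable `d1001` with a `current` column:

```sql
-- hypothetical table: d1001(ts TIMESTAMP, current FLOAT)
-- ignore_option = 3: negative diff results and NULL diff results are both dropped
SELECT _rowts, DIFF(current, 3) FROM d1001;

-- default (ignore_option = 0): negative and NULL diff results are kept
SELECT _rowts, DIFF(current) FROM d1001;
```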
|
||||
|
||||
### IRATE
|
||||
|
||||
|
|
|
@ -80,7 +80,7 @@ These pseudocolumns occur after the aggregation clause.
|
|||
`FILL` clause is used to specify how to fill when there is data missing in any window, including:
|
||||
|
||||
1. NONE: No fill (the default fill mode)
|
||||
2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)` Note: The value filled depends on the data type. For example, if you run FILL(VALUE 1.23) on an integer column, the value 1 is filled.
|
||||
2. VALUE: Fill with a fixed value, which must be specified together, for example `FILL(VALUE, 1.23)`. Note: the value actually filled depends on the column's data type; for example, running `FILL(VALUE, 1.23)` on an integer column fills the value 1. If multiple columns in the select list need to be filled, the FILL clause must contain a fill value for each of these columns, for example `SELECT _wstart, min(c1), max(c1) FROM ... FILL(VALUE, 0, 0)`.
|
||||
3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
|
||||
4. NULL: Fill with NULL, `FILL(NULL)`
|
||||
5. LINEAR: Fill by linear interpolation based on the nearest non-NULL values before and after, `FILL(LINEAR)`
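As a sketch of the multi-column VALUE rule above (assuming the `power.meters` supertable used elsewhere in these docs):

```sql
-- two filled columns, so FILL(VALUE, ...) carries one fill value per column
SELECT _wstart, MIN(current), MAX(current)
FROM power.meters
WHERE ts > NOW - 1d
INTERVAL(10m)
FILL(VALUE, 0, 0);
```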
|
||||
|
|
|
@ -30,7 +30,7 @@ subquery: SELECT [DISTINCT] select_list
|
|||
from_clause
|
||||
[WHERE condition]
|
||||
[PARTITION BY tag_list]
|
||||
[window_clause]
|
||||
window_clause
|
||||
```
|
||||
|
||||
Session windows, state windows, sliding windows, event windows, and count windows are supported. When you configure a state, event, or count window for a supertable, you must use PARTITION BY TBNAME. If the source table has a composite primary key, state windows, event windows, and count windows are not supported.
|
||||
|
@ -193,11 +193,32 @@ All [scalar functions](../function/#scalar-functions) are available in stream pr
|
|||
- [unique](../function/#unique)
|
||||
- [mode](../function/#mode)
|
||||
|
||||
## Pause\Resume stream
|
||||
## Pause/Resume stream
|
||||
1. Pause stream
|
||||
```sql
|
||||
PAUSE STREAM [IF EXISTS] stream_name;
|
||||
```
|
||||
If "IF EXISTS" is not specified and the stream does not exist, an error will be reported; If "IF EXISTS" is specified and the stream does not exist, success is returned; If the stream exists, paused all stream tasks.
|
||||
|
||||
2. Resume stream
|
||||
```sql
|
||||
RESUME STREAM [IF EXISTS] [IGNORE UNTREATED] stream_name;
|
||||
```
|
||||
If "IF EXISTS" is not specified and the stream does not exist, an error will be reported. If "IF EXISTS" is specified and the stream does not exist, success is returned; If the stream exists, all of the stream tasks will be resumed. If "IGNORE UntREATED" is specified, data written during the pause period of stream is ignored when resuming stream.
|
||||
|
||||
## Stream State Backup
|
||||
The intermediate processing results of a stream, a.k.a. the stream state, need to be properly persisted on disk during stream processing. The stream state, consisting of multiple files on disk, may be transferred between different computing nodes during stream processing as a result of a leader/follower switch or a physical computing node going offline. Since version 3.3.2.1, you need to deploy rsync on each physical node to enable the backup and restore processing. To ensure it works correctly, please refer to the following instructions:
|
||||
1. Add the option "snodeAddress" in the configuration file.
|
||||
2. Add the option "checkpointBackupDir" in the configuration file to set the backup data directory.
|
||||
3. Create a _snode_ before creating a stream to ensure the backup service is activated; otherwise, checkpoints may not be generated during stream processing.
|
||||
|
||||
>snodeAddress 127.0.0.1:873
|
||||
>
|
||||
>checkpointBackupDir /home/user/stream/backup/checkpoint/
|
||||
|
||||
## Create snode
|
||||
The snode (short for stream node), on which aggregate tasks can be deployed, is a stateful computing node dedicated to stream processing. An important feature is backing up and restoring the stream state files. The snode needs to be created before creating stream tasks. Use the following SQL statement to create a snode in a TDengine cluster; only one snode is allowed in a TDengine cluster for now.
|
||||
```sql
|
||||
CREATE SNODE ON DNODE id
|
||||
```
|
||||
`id` is the ordinal number of a dnode, which can be acquired by using the `show dnodes` statement.
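As a minimal sketch, assuming `show dnodes` lists a dnode whose id is 1:

```sql
SHOW DNODES;
-- create the single snode on dnode 1 (id taken from the output above)
CREATE SNODE ON DNODE 1;
```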
|
||||
|
|
|
@ -22,9 +22,7 @@ Record these values:
|
|||
- TDengine REST API url: `http://tdengine.local:6041`.
|
||||
- TDengine cluster authorization, with user + password.
|
||||
|
||||
## Configuring Grafana
|
||||
|
||||
### Install Grafana Plugin and Configure Data Source
|
||||
## Install Grafana Plugin and Configure Data Source
|
||||
|
||||
<Tabs defaultValue="script">
|
||||
<TabItem value="gui" label="With GUI">
|
||||
|
@ -34,8 +32,6 @@ Under Grafana 8, plugin catalog allows you to [browse and manage plugins within
|
|||
Installation may take a few minutes; you can **Create a TDengine data source** when the installation has finished.
|
||||
Then you can add a TDengine data source by filling up the configuration options.
|
||||
|
||||

|
||||
|
||||
- Host: IP address of the server where the TDengine cluster provides the REST service, plus the REST service port number (6041); by default use `http://localhost:6041`.
|
||||
- User: TDengine user name.
|
||||
- Password: TDengine user password.
|
||||
|
@ -99,8 +95,6 @@ Now users can log in to the Grafana server (username/password: admin/admin) dire
|
|||
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it.
|
||||
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
|
||||
|
||||

|
||||
|
||||
- Host: IP address of the server where the TDengine cluster provides the REST service, plus the REST service port number (6041); by default use `http://localhost:6041`.
|
||||
- User: TDengine user name.
|
||||
- Password: TDengine user password.
|
||||
|
@ -177,37 +171,112 @@ Open Grafana (http://localhost:3000), and you can add dashboard with TDengine no
|
|||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Create Dashboard
|
||||
:::info
|
||||
|
||||
Go back to the main interface to create a dashboard and click Add Query to enter the panel query page:
|
||||
In the following introduction, we take Grafana v11.0.0 as an example. Other versions may have different features, please refer to [Grafana's official website](https://grafana.com/docs/grafana/latest/).
|
||||
|
||||
:::
|
||||
|
||||
## Built-in Variables and Custom Variables
|
||||
The Variable feature in Grafana is very powerful. It can be used in queries, panel titles, labels, etc., to create more dynamic and interactive Dashboards, improving user experience and efficiency.
|
||||
|
||||
The main functions and characteristics of variables include:
|
||||
|
||||
- Dynamic data query: Variables can be used in query statements, allowing users to dynamically change query conditions by selecting different variable values, thus viewing different data views. This is very useful for scenarios that need to dynamically display data based on user input.
|
||||
|
||||
- Improved reusability: By defining variables, the same configuration or query logic can be reused in multiple places without the need to rewrite the same code. This makes the maintenance and updating of Dashboards simpler and more efficient.
|
||||
|
||||
- Flexible configuration options: Variables offer a variety of configuration options, such as predefined static value lists, dynamic value querying from data sources, regular expression filtering, etc., making the application of variables more flexible and powerful.
|
||||
|
||||
|
||||
Grafana provides both built-in variables and custom variables, which can be referenced in SQL writing. We can use `$variableName` to reference the variable, where `variableName` is the name of the variable. For detailed reference, please refer to [Variable reference](https://grafana.com/docs/grafana/latest/dashboards/variables/variable-syntax/).
|
||||
|
||||
### Built-in Variables
|
||||
Grafana has built-in variables such as `from`, `to`, and `interval`, all derived from Grafana plugin panels. Their meanings are as follows:
|
||||
- `from` is the start time of the query range
|
||||
- `to` is the end time of the query range
|
||||
- `interval` represents the time span used to split the windows
|
||||
|
||||
It is recommended to set the start and end times of the query range for each query; this effectively reduces the amount of data the TDengine server scans during query execution. `interval` is the window split size, which, in Grafana 11, is calculated from the time range and the number of returned points.
|
||||
In addition to the above three common variables, Grafana also provides variables such as `__timezone`, `__org`, `__user`, etc. For details, please refer to [Built-in Variables](https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#global-variables).
|
||||
|
||||
### Custom Variables
|
||||
We can add custom variables in the Dashboard. The usage of custom variables is no different from that of built-in variables; they are referenced in SQL with `$variableName`.
|
||||
Custom variables support multiple types, such as `Query`, `Constant`, `Interval`, `Data source`, etc.
|
||||
Custom variables can reference other custom variables, for example, one variable represents a region, and another variable can reference the value of the region to query devices in that region.
|
||||
|
||||
#### Adding Query Type Variables
|
||||
In the Dashboard configuration, select `Variables`, then click `New variable`:
|
||||
1. In the `Name` field, enter your variable name, here we set the variable name as `selected_groups`.
|
||||
2. In the `Select variable type` dropdown menu, select `Query`.
|
||||
Depending on the selected variable type, configure the corresponding options. For example, if you choose `Query`, you need to specify the data source and the query statement for obtaining variable values. Here, taking smart meters as an example, we set the query type, select the data source, and configure the SQL as `select distinct(groupid) from power.meters where groupid < 3 and ts > $from and ts < $to;`
|
||||
3. After clicking `Run Query` at the bottom, you can see the variable values generated based on your configuration in the `Preview of values` section.
|
||||
4. Other configurations are not detailed here. After completing the configuration, click the `Apply` button at the bottom of the page, then click `Save dashboard` in the top right corner to save.
|
||||
|
||||
After completing the above steps, we have successfully added a new custom variable `$selected_groups` to the Dashboard. We can later reference this variable in the Dashboard's queries with `$selected_groups`.
|
||||
|
||||
We can also add another custom variable to reference this `selected_groups` variable, for example, we add a query variable named `tbname_max_current`, with the SQL as `select tbname from power.meters where groupid = $selected_groups and ts > $from and ts < $to;`
|
||||
|
||||
#### Adding Interval Type Variables
|
||||
We can customize the time window interval to better fit business needs.
|
||||
1. In the `Name` field, enter the variable name as `interval`.
|
||||
2. In the `Select variable type` dropdown menu, select `Interval`.
|
||||
3. In the `Interval options` enter `1s,2s,5s,10s,15s,30s,1m`.
|
||||
4. Other configurations are not detailed here. After completing the configuration, click the `Apply` button at the bottom of the page, then click `Save dashboard` in the top right corner to save.
|
||||
|
||||
After completing the above steps, we have successfully added a new custom variable `$interval` to the Dashboard. We can later reference this variable in the Dashboard's queries with `$interval`.
|
||||
|
||||
## TDengine Time Series Query Support
|
||||
On top of supporting standard SQL, TDengine also provides a series of special query syntaxes that meet the needs of time series business scenarios, bringing great convenience to the development of applications in time series scenarios.
|
||||
- `partition by` this clause can split data by certain dimensions and then perform a series of calculations within the split data space, which can replace `group by` in most cases.
|
||||
- `interval` this clause is used to generate time windows of the same time interval.
|
||||
- `fill` this clause is used to specify how to fill when there is data missing in any window.
|
||||
- `Window Pseudocolumns` If you need to output the time window information corresponding to the aggregation result in the results, you need to use window pseudocolumns in the SELECT clause: the start time of the time window (_wstart), the end time of the time window (_wend), etc.
|
||||
|
||||
For a detailed introduction to these features, please refer to [Time-Series Extensions](../../taos-sql/distinguished/).
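As an illustrative sketch combining these clauses (the `power.meters` supertable and its columns are taken from the examples below and are assumptions here):

```sql
-- average current per 1-minute window for each group over the last hour;
-- empty windows are filled with NULL, window bounds come from pseudocolumns
SELECT _wstart, _wend, groupid, AVG(current) AS avg_current
FROM power.meters
WHERE ts > NOW - 1h
PARTITION BY groupid
INTERVAL(1m)
FILL(NULL);
```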
|
||||
|
||||
|
||||
## Create Dashboard
|
||||
|
||||
Return to the main interface to create a Dashboard, click Add Query to enter the panel query page:
|
||||
|
||||

|
||||
|
||||
As shown above, select the `TDengine` data source in the `Query` section and enter the corresponding SQL in the query box below. We will continue to use power meters as an example; to demonstrate smooth curves, **virtual data** is used here.
|
||||
|
||||
- INPUT SQL: Enter the desired query (the result set should be two columns and multiple rows), such as `select _wstart as ts, avg(current) as current from power.meters where ts > $from and ts < $to interval($interval) fill(null)`. In this statement, `$from`, `$to`, and `$interval` are variables that Grafana replaces with the query time range and window interval. In addition to the built-in variables, custom template variables are also supported.
|
||||
- ALIAS BY: This allows you to set the current query alias.
|
||||
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
|
||||
- Group by column(s): comma-separated `group by` or `partition by` column names. If the SQL is a `group by` or `partition by` query, setting `Group by column(s)` displays multidimensional data. For example, data can be shown per `groupid` if the SQL is `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)` and `Group by column(s)` is set to `groupid`.
|
||||
- Group By Format: legend format for `group by` or `partition by` queries. For example, with the above INPUT SQL, setting `Group By Format` to `groupid-{{groupid}}` displays the legend name as the formatted group name.
|
||||
## Time Series Data Display
|
||||
Suppose we want to query the average current over a period of time, with the time window divided by `$interval`, filling with null if data is missing in any time window.
|
||||
- INPUT SQL: Enter the statement to be queried (the result set of this SQL statement should be two columns and multiple rows), here enter: `select _wstart as ts, avg(current) as current from power.meters where groupid in ($selected_groups) and ts > $from and ts < $to interval($interval) fill(null)`, where `from`, `to`, and `interval` are built-in Grafana variables and `selected_groups` is a custom variable.
|
||||
- ALIAS BY: You can set the current query alias.
|
||||
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final execution statement.
|
||||
|
||||
In the custom variables at the top, if the value of `selected_groups` is selected as 1, then the query for the average value change of all device currents in the `meters` supertable where `groupid` is 1 is as shown in the following figure:
|
||||
|
||||

|
||||
|
||||
:::note
|
||||
|
||||
Since the REST connection because is stateless. Grafana plugin can use <db_name>.<table_name> in the SQL command to specify the database name.
|
||||
Since the REST interface is stateless, it is not possible to use the `use db` statement to switch databases. In the SQL statement in the Grafana plugin, you can use \<db_name>.\<table_name> to specify the database.
|
||||
|
||||
:::
|
||||
|
||||
Query the average current changes of all devices in the `meters` stable as shown in the following figure:
|
||||
## Time Series Data Group Display
|
||||
Suppose we want to query the average current value over a period of time, displayed by `groupid` grouping, we can modify the previous SQL to `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)`
|
||||
|
||||

|
||||
- Group by column(s): **Half-width** comma-separated `group by` or `partition by` column names. If it is a `group by` or `partition by` query statement, setting the `Group by` column can display multidimensional data. Here, set the Group by column name to `groupid`, which can display data grouped by `groupid`.
|
||||
- Group By Format: legend format for multidimensional data in `group by` or `partition by` scenarios. For example, with the above INPUT SQL, setting `Group By Format` to `groupid-{{groupid}}` displays the legend name as the formatted group name.
|
||||
|
||||
Query the average current value of all devices in the 'meters' stable and display it in groups according to the `groupid` as shown in the following figure:
|
||||
After completing the settings, the data is displayed grouped by `groupid` as shown in the following figure:
|
||||
|
||||

|
||||

|
||||
|
||||
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
|
||||
|
||||
### Importing the Dashboard
|
||||
## Performance Suggestions
|
||||
- **Include a time range in all queries.** In a time series database, a query without a time range causes a table scan and poor performance. A common SQL pattern is `select column_name from db.table where ts > $from and ts < $to;`
|
||||
- For latest-status queries, we generally recommend **enabling cache when creating the database** (`CACHEMODEL` set to last_row or both). A common SQL pattern is `select last(column_name) from db.table where ts > $from and ts < $to;`
|
||||
|
||||
## Importing the Dashboard
|
||||
|
||||
You can install the TDinsight dashboard from the data source configuration page (like `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for the TDengine cluster. Ensure that you use TDinsight for 3.x. Please note that TDinsight for 3.x requires taosKeeper to be configured and running correctly.
|
||||
|
||||
|
@ -221,3 +290,137 @@ For more dashboards using TDengine data source, [search here in Grafana](https:/
|
|||
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine alert demo.
|
||||
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight.
|
||||
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf node metrics dashboard using TDengine data source.
|
||||
|
||||
|
||||
## Alert Configuration Introduction
|
||||
### Alert Configuration Steps
|
||||
The TDengine Grafana plugin supports alerts. To configure alerts, the following steps are required:
|
||||
1. Configure Contact Points: Set up notification channels, including DingDing, Email, Slack, WebHook, Prometheus Alertmanager, etc.
|
||||
2. Configure Notification Policies: Set up routing for which channel to send alerts to, as well as the timing and frequency of notifications.
|
||||
3. Configure "Alert rules": Set up detailed alert rules.
|
||||
3.1 Configure alert name.
|
||||
3.2 Configure query and alert trigger conditions.
|
||||
3.3 Configure evaluation behavior.
|
||||
3.4 Configure labels and notifications.
|
||||
3.5 Configure annotations.
|
||||
|
||||
### Alert Configuration Web UI
|
||||
In Grafana 11, the alert Web UI has 6 tabs: "Alert rules", "Contact points", "Notification policies", "Silences", "Groups", and "Settings".
|
||||
- "Alert rules" displays and configures alert rules.
|
||||
- "Contact points" support notification channels such as DingDing, Email, Slack, WebHook, Prometheus Alertmanager, etc.
|
||||
- "Notification policies" sets up routing for which channel to send alerts to, as well as the timing and frequency of notifications.
|
||||
- "Silences" configures silent periods for alerts.
|
||||
- "Groups" displays grouped alerts after they are triggered.
|
||||
- "Admin" allows modifying alert configurations through JSON.
|
||||
|
||||
## Configuring Email Contact Point
|
||||
### Modifying Grafana Server Configuration File
|
||||
Add SMTP/Emailing and Alerting modules to the Grafana service configuration file. For Linux systems, the configuration file is usually located at `/etc/grafana/grafana.ini`.
|
||||
Add the following content to the configuration file:
|
||||
|
||||
```ini
|
||||
#################################### SMTP / Emailing ##########################
|
||||
[smtp]
|
||||
enabled = true
|
||||
host = smtp.qq.com:465 #Email service used
|
||||
user = receiver@foxmail.com
|
||||
password = *********** #Use mail authorization code
|
||||
skip_verify = true
|
||||
from_address = sender@foxmail.com
|
||||
```
|
||||
|
||||
Then restart the Grafana service. For example, on a Linux system, execute `systemctl restart grafana-server.service`
|
||||
|
||||
### Grafana Configuration for Email Contact Point
|
||||
|
||||
Find "Home" -> "Alerting" -> "Contact points" on the Grafana page to create a new contact point
|
||||
"Name": Email Contact Point
|
||||
"Integration": Select the contact type, here choose Email, fill in the email receiving address, and save the contact point after completion
|
||||
|
||||

|
||||
|
||||
## Configuring Feishu Contact Point
|
||||
|
||||
### Feishu Robot Configuration
|
||||
1. "Feishu Workspace" -> "Get Apps" -> "Search for Feishu Robot Assistant" -> "Create Command"
|
||||
2. Choose Trigger: Grafana
|
||||
3. Choose Action: Send a message through the official robot, fill in the recipient and message content
|
||||
|
||||

|
||||
|
||||
### Grafana Configuration for Feishu Contact Point
|
||||
|
||||
Find "Home" -> "Alerting" -> "Contact points" on the Grafana page to create a new contact point
|
||||
"Name": Feishu Contact Point
|
||||
"Integration": Select the contact type, here choose Webhook, and fill in the URL (the Grafana trigger Webhook address in Feishu Robot Assistant), then save the contact point
|
||||
|
||||

|
||||
|
||||
## Notification Policy
|
||||
After configuring the contact points, you can see there is a Default Policy
|
||||
|
||||

|
||||
|
||||
Click on the "..." on the right -> "Edit", then edit the default notification policy, a configuration window pops up:
|
||||
|
||||

|
||||
|
||||
Configure the parameters as shown in the screenshot above:
- "Group wait": the waiting time before the first alert is sent.
- "Group interval": after the first alert, the waiting time before the next batch of new alerts is sent for the group.
- "Repeat interval": the waiting time before a successfully sent alert is sent again.
|
||||
|
||||
## Configuring Alert Rules
|
||||
|
||||
### Define Query and Alert Conditions
|
||||
|
||||
Select "Edit" -> "Alert" -> "New alert rule" in the panel where you want to configure the alert.
|
||||
|
||||
1. "Enter alert rule name": Here, enter `power meters alert` as an example for smart meters.
|
||||
2. "Define query and alert condition":
|
||||
2.1 Choose data source: `TDengine Datasource`
|
||||
2.2 Query statement:
|
||||
```sql
|
||||
select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)
|
||||
```
|
||||
2.3 Set "Expression": `Threshold is above 100`
|
||||
2.4 Click "Set as alert condition"
|
||||
2.5 "Preview": View the results of the set rules
|
||||
|
||||
After completing the settings, you can see the image displayed below:
|
||||
|
||||

|
||||
|
||||
### Configuring Expressions and Calculation Rules
|
||||
|
||||
Grafana's "Expression" supports various operations and calculations on data, which are divided into:
|
||||
1. "Reduce": Aggregates the values of a time series within the selected time range into a single value
|
||||
1.1 "Function" is used to set the aggregation method, supporting Min, Max, Last, Mean, Sum, and Count.
|
||||
1.2 "Mode" supports the following three:
|
||||
- "Strict": If no data is queried, the data will be assigned NaN.
|
||||
- "Drop Non-numeric Value": Remove illegal data results.
|
||||
- "Replace Non-numeric Value": If it is illegal data, replace it with a constant value.
|
||||
2. "Threshold": Checks whether the time series data meets the threshold judgment condition. Returns 0 when the condition is false, and 1 when true. Supports the following methods:
|
||||
- Is above (x > y)
|
||||
- Is below (x < y)
|
||||
- Is within range (x > y1 AND x < y2)
|
||||
- Is outside range (x < y1 OR x > y2)
|
||||
3. "Math": Performs mathematical operations on the data of the time series.
|
||||
4. "Resample": Changes the timestamps in each time series to have a consistent interval, so that mathematical operations can be performed between them.
|
||||
5. "Classic condition (legacy)": Multiple logical conditions can be configured to determine whether to trigger an alert.
|
||||
|
||||
As shown in the screenshot above, here we set the alert to trigger when the maximum value exceeds 100.
|
||||
|
||||
### Configuring Evaluation behavior
|
||||
|
||||

|
||||
|
||||
Configure the parameters as shown in the screenshot above:
- "Folder": the folder to which the alert rule belongs.
- "Evaluation group": the evaluation group of the alert rule; choose an existing group or create a new one with its own name and evaluation interval.
- "Pending period": how long an abnormal value must persist after the rule's threshold is breached before the alert fires; a reasonable setting helps avoid false alarms.
|
||||
|
||||
### Configuring Labels and Notifications
|
||||

|
||||
|
||||
Configure the parameters as shown in the screenshot above:
- "Labels": add labels to the rule for searching, silencing, or routing to a notification policy.
- "Contact point": select the contact point used to send the notification when the alert fires.
|
||||
|
||||
### Configuring annotations
|
||||
|
||||

|
||||
|
||||
After setting "Summary" and "Description", you will receive an alert notification if the alert is triggered.
|
||||
|
|
After Width: | Height: | Size: 83 KiB |
After Width: | Height: | Size: 39 KiB |
After Width: | Height: | Size: 105 KiB |
After Width: | Height: | Size: 41 KiB |
After Width: | Height: | Size: 36 KiB |
After Width: | Height: | Size: 116 KiB |
After Width: | Height: | Size: 37 KiB |
After Width: | Height: | Size: 98 KiB |
After Width: | Height: | Size: 204 KiB |
Before Width: | Height: | Size: 124 KiB After Width: | Height: | Size: 154 KiB |
Before Width: | Height: | Size: 122 KiB After Width: | Height: | Size: 157 KiB |
|
@ -379,7 +379,7 @@ properties 中的配置参数如下:
|
|||
- TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION:关闭 SSL 证书验证。仅在使用 Websocket 连接时生效。true:启用,false:不启用。默认为 false。
|
||||
|
||||
|
||||
此外对 JDBC 原生连接,通过指定 URL 和 Properties 还可以指定其他参数,比如日志级别、SQL 长度等。更多详细配置请参考[客户端配置](../../reference/config/#仅客户端适用)。
|
||||
此外对 JDBC 原生连接,通过指定 URL 和 Properties 还可以指定其他参数,比如日志级别、SQL 长度等。更多详细配置请参考[客户端配置](../../reference/config/)。
|
||||
|
||||
### 配置参数的优先级
|
||||
|
||||
|
|
|
@ -1200,27 +1200,40 @@ ignore_negative: {
|
|||
### DIFF
|
||||
|
||||
```sql
|
||||
DIFF(expr [, ignore_negative])
|
||||
DIFF(expr [, ignore_option])
|
||||
|
||||
ignore_negative: {
|
||||
ignore_option: {
|
||||
0
|
||||
| 1
|
||||
| 2
|
||||
| 3
|
||||
}
|
||||
```
|
||||
|
||||
**功能说明**:统计表中某列的值与前一行对应值的差。 ignore_negative 取值为 0|1 , 可以不填,默认值为 0. 不忽略负值。ignore_negative 为 1 时表示忽略负数。对于你存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
||||
**功能说明**:统计表中特定列与之前行的当前列有效值之差。ignore_option 取值为 0|1|2|3,可以不填,默认值为 0。
|
||||
- `0` 表示不忽略(diff 结果)负值,也不忽略 null 值
|
||||
- `1` 表示(diff结果)负值作为 null 值
|
||||
- `2` 表示不忽略(diff结果)负值但忽略 null 值
|
||||
- `3` 表示忽略(diff结果)负值且忽略 null 值
|
||||
- 对于存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
||||
|
||||
**返回数据类型**:同应用字段。
|
||||
**返回数据类型**:bool、时间戳及整型数值类型均返回 int64,浮点类型返回 double,若 diff 结果溢出则返回溢出后的值。
|
||||
|
||||
**适用数据类型**:数值类型。
|
||||
**适用数据类型**:数值类型、时间戳和 bool 类型。
|
||||
|
||||
**适用于**:表和超级表。
|
||||
|
||||
**使用说明**:
|
||||
|
||||
- 输出结果行数是范围内总行数减一,第一行没有结果输出。
|
||||
- 可以与选择相关联的列一起使用。 例如: select \_rowts, DIFF() from。
|
||||
|
||||
- diff 是计算本行特定列与同列的前一个有效数据的差值,同列的前一个有效数据:指的是同一列中时间戳较小的最临近的非空值。
|
||||
- 数值类型 diff 结果为对应的算术差值;时间戳类型根据数据库的时间戳精度进行差值计算;bool 类型计算差值时 true 视为 1, false 视为 0
|
||||
- 如当前行数据为 null 或者没有找到同列前一个有效数据时,diff 结果为 null
|
||||
- 忽略负值时( ignore_option 设置为 1 或 3 ),如果 diff 结果为负值,则结果设置为 null,然后根据 null 值过滤规则进行过滤
|
||||
- 当 diff 结果发生溢出时,结果是否是`应该忽略的负值`取决于逻辑运算结果是正数还是负数,例如 9223372036854775800 - (-9223372036854775806) 的值超出 BIGINT 的范围 ,diff 结果会显示溢出值 -10,但并不会被作为负值忽略
|
||||
- 单个语句中可以使用单个或者多个 diff,并且每个 diff 可以指定相同或不同的 ignore_option ,当单个语句中存在多个 diff 时当且仅当某行所有 diff 的结果都为 null ,并且 ignore_option 都设置为忽略 null 值,该行才从结果集中剔除
|
||||
- 可以与选择的相关联列一起使用。例如:`select _rowts, DIFF() from`。
|
||||
- 当没有复合主键时,如果不同的子表有相同时间戳的数据,会提示 "Duplicate timestamps not allowed"
|
||||
- 当使用复合主键时,不同子表的时间戳和主键组合可能相同,使用哪一行取决于先找到哪一行,这意味着在这种情况下多次运行 diff() 的结果可能会不同。
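下面是一个简单示例(假设存在包含 current 列的子表 d1001),演示 ignore_option 对结果的影响:

```sql
-- 假设的表:d1001(ts TIMESTAMP, current FLOAT)
-- ignore_option = 3:忽略负值且忽略 null 值
SELECT _rowts, DIFF(current, 3) FROM d1001;

-- 默认(ignore_option = 0):保留负值和 null 值
SELECT _rowts, DIFF(current) FROM d1001;
```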
|
||||
|
||||
### IRATE
|
||||
|
||||
|
|
|
@ -76,7 +76,7 @@ window_clause: {
|
|||
FILL 语句指定某一窗口区间数据缺失的情况下的填充模式。填充模式包括以下几种:
|
||||
|
||||
1. 不进行填充:NONE(默认填充模式)。
|
||||
2. VALUE 填充:固定值填充,此时需要指定填充的数值。例如:FILL(VALUE, 1.23)。这里需要注意,最终填充的值受由相应列的类型决定,如 FILL(VALUE, 1.23),相应列为 INT 类型,则填充值为 1。
|
||||
2. VALUE 填充:固定值填充,此时需要指定填充的数值。例如:FILL(VALUE, 1.23)。这里需要注意,最终填充的值由相应列的类型决定,如 FILL(VALUE, 1.23),相应列为 INT 类型,则填充值为 1。若查询列表中有多列需要 FILL,则需要给每一个 FILL 列指定 VALUE,如 `SELECT _wstart, min(c1), max(c1) FROM ... FILL(VALUE, 0, 0)`。
|
||||
3. PREV 填充:使用前一个非 NULL 值填充数据。例如:FILL(PREV)。
|
||||
4. NULL 填充:使用 NULL 填充数据。例如:FILL(NULL)。
|
||||
5. LINEAR 填充:根据前后距离最近的非 NULL 值做线性插值填充。例如:FILL(LINEAR)。
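下面是一个多列 VALUE 填充的简单示例(假设使用文档中的 power.meters 超级表):

```sql
-- 两个需要填充的列,FILL(VALUE, ...) 中需给出两个填充值
SELECT _wstart, MIN(current), MAX(current)
FROM power.meters
WHERE ts > NOW - 1d
INTERVAL(10m)
FILL(VALUE, 0, 0);
```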
|
||||
|
|
|
@ -27,7 +27,7 @@ subquery: SELECT select_list
|
|||
from_clause
|
||||
[WHERE condition]
|
||||
[PARTITION BY tag_list]
|
||||
[window_clause]
|
||||
window_clause
|
||||
```
|
||||
|
||||
支持会话窗口、状态窗口、滑动窗口、事件窗口和计数窗口。其中,状态窗口、事件窗口和计数窗口搭配超级表时必须与 partition by tbname 一起使用。对于数据源表是复合主键的流,不支持状态窗口、事件窗口、计数窗口的计算。
|
||||
|
@ -272,3 +272,22 @@ PAUSE STREAM [IF EXISTS] stream_name;
|
|||
2.流计算恢复计算任务
|
||||
RESUME STREAM [IF EXISTS] [IGNORE UNTREATED] stream_name;
|
||||
没有指定 IF EXISTS,如果该 stream 不存在,则报错;如果存在,则恢复流计算。指定了 IF EXISTS,如果 stream 不存在,则返回成功;如果存在,则恢复流计算。如果指定 IGNORE UNTREATED,则恢复流计算时,忽略流计算暂停期间写入的数据。
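例如,假设存在名为 power_stream 的流,先暂停,之后恢复并忽略暂停期间写入的数据:

```sql
PAUSE STREAM IF EXISTS power_stream;
-- 恢复时忽略暂停期间写入的数据
RESUME STREAM IF EXISTS IGNORE UNTREATED power_stream;
```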
|
||||
|
||||
## 状态数据备份与同步
|
||||
流计算的中间结果称为计算的状态数据,需要在流计算整个生命周期中进行持久化保存。为了确保流计算中间状态能够在集群环境下在不同的节点间可靠地同步和迁移,自 3.3.2.1 版本开始,需要在运行环境中部署 rsync 软件,还需要增加以下的步骤:
|
||||
1. 在配置文件中配置 snode 的地址(IP+端口)和状态数据备份目录(该目录系 snode 所在的物理节点的目录)。
|
||||
2. 然后创建 snode。
|
||||
完成上述两个步骤以后才能创建流。
|
||||
如果没有创建 snode 并正确配置 snode 的地址,流计算过程中将无法生成检查点(checkpoint),并可能导致后续的计算结果产生错误。
|
||||
|
||||
> snodeAddress 127.0.0.1:873
|
||||
>
|
||||
> checkpointBackupDir /home/user/stream/backup/checkpoint/
|
||||
|
||||
|
||||
## 创建 snode 的方式
|
||||
使用以下命令创建 snode(stream node), snode 是流计算中有状态的计算节点,可用于部署聚合任务,同时负责备份不同的流计算任务生成的检查点数据。
|
||||
```sql
|
||||
CREATE SNODE ON DNODE [id]
|
||||
```
|
||||
其中的 id 是集群中的 dnode 的序号。请注意选择的 dnode,流计算的中间状态将自动在其上进行备份。
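下面是一个简单示例,假设 `show dnodes` 的输出中存在序号为 1 的 dnode:

```sql
SHOW DNODES;
-- 在序号为 1 的 dnode 上创建 snode
CREATE SNODE ON DNODE 1;
```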
|
||||
|
|
|
@ -13,7 +13,7 @@ TDengine 能够与开源数据可视化系统 [Grafana](https://www.grafana.com/
|
|||
|
||||
要让 Grafana 能正常添加 TDengine 数据源,需要以下几方面的准备工作。
|
||||
|
||||
- Grafana 服务已经部署并正常运行。目前 TDengine 支持 Grafana 7.5 以上的版本。用户可以根据当前的操作系统,到 Grafana 官网下载安装包,并执行安装。下载地址如下:\<https://grafana.com/grafana/download>。
|
||||
- Grafana 服务已经部署并正常运行。目前 TDengine 支持 Grafana 7.5 以上的版本。用户可以根据当前的操作系统,到 Grafana 官网下载安装包,并执行安装。下载地址如下:[https://grafana.com/grafana/download](https://grafana.com/grafana/download) 。
|
||||
- TDengine 集群已经部署并正常运行。
|
||||
- taosAdapter 已经安装并正常运行。具体细节请参考 [taosAdapter 的使用手册](../../reference/taosadapter)
|
||||
|
||||
|
@ -22,26 +22,19 @@ TDengine 能够与开源数据可视化系统 [Grafana](https://www.grafana.com/
|
|||
- TDengine 集群 REST API 地址,如:`http://tdengine.local:6041`。
|
||||
- TDengine 集群认证信息,可使用用户名及密码。
|
||||
|
||||
## 配置 Grafana
|
||||
|
||||
### 安装 Grafana Plugin 并配置数据源
|
||||
## 安装 Grafana Plugin 并配置数据源
|
||||
|
||||
<Tabs defaultValue="script">
|
||||
<TabItem value="gui" label="图形化界面安装">
|
||||
|
||||
使用 Grafana 最新版本(8.5+),您可以在 Grafana 中[浏览和管理插件](https://grafana.com/docs/grafana/next/administration/plugin-management/#plugin-catalog)(对于 7.x 版本,请采用 **安装脚本** 或 **手动安装** 方式)。在 Grafana 管理界面中的 **Configurations > Plugins** 页面直接搜索 `TDengine` 并按照提示安装。
|
||||
|
||||
安装完毕后,按照指示 **Create a TDengine data source** 添加数据源。
|
||||
输入 TDengine 相关配置,如下图所示:
|
||||
|
||||

|
||||
|
||||
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 \<http://localhost:6041>。
|
||||
安装完毕后,按照指示 **Create a TDengine data source** 添加数据源,输入 TDengine 相关配置:
|
||||
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 `http://localhost:6041`
|
||||
- User:TDengine 用户名。
|
||||
- Password:TDengine 用户密码。
|
||||
|
||||
点击 `Save & Test` 进行测试,成功会提示:`TDengine Data source is working`。
|
||||
配置完毕,现在可以使用 TDengine 创建 Dashboard 了。
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="script" label="安装脚本">
|
||||
|
@ -93,13 +86,11 @@ sudo unzip tdengine-datasource-$GF_VERSION.zip -d /var/lib/grafana/plugins/
|
|||
GF_INSTALL_PLUGINS=tdengine-datasource
|
||||
```
|
||||
|
||||
之后,用户可以直接通过 \<http://localhost:3000> 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,
|
||||
之后,用户可以直接通过 http://localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,
|
||||
|
||||
点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine, 然后点击 `select` 选择添加后会进入数据源配置页面,按照默认提示修改相应配置即可:
|
||||
点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine, 然后点击 `select` 选择添加后会进入数据源配置页面,按照默认提示修改相应配置:
|
||||
|
||||

|
||||
|
||||
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 \<http://localhost:6041>。
|
||||
- Host: TDengine 集群中提供 REST 服务的 IP 地址与端口号,默认 `http://localhost:6041`
|
||||
- User:TDengine 用户名。
|
||||
- Password:TDengine 用户密码。
|
||||
|
||||
|
@ -170,25 +161,92 @@ docker run -d \
|
|||
|
||||
3. 使用 docker-compose 命令启动 TDengine + Grafana :`docker-compose up -d`。
|
||||
|
||||
打开 Grafana \<http://localhost:3000>,现在可以添加 Dashboard 了。
|
||||
打开 Grafana [http://localhost:3000](http://localhost:3000),现在可以添加 Dashboard 了。
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### 创建 Dashboard
|
||||
|
||||
回到主界面创建 Dashboard,点击 Add Query 进入面板查询页面:
|
||||
:::info
|
||||
|
||||
下文介绍中,都以 Grafana v11.0.0 版本为例,其他版本功能可能有差异,请参考 [Grafana 官网](https://grafana.com/docs/grafana/latest/)。
|
||||
|
||||
:::
|
||||
|
||||
## 内置变量和自定义变量
|
||||
Grafana 中的 Variable(变量)功能非常强大,可以在 Dashboard 的查询、面板标题、标签等地方使用,用来创建更加动态和交互式的 Dashboard,提高用户体验和效率。
|
||||
|
||||
变量的主要作用和特点包括:
|
||||
|
||||
- 动态数据查询:变量可以用于查询语句中,使得用户可以通过选择不同的变量值来动态更改查询条件,从而查看不同的数据视图。这对于需要根据用户输入动态展示数据的场景非常有用。
|
||||
|
||||
- 提高可重用性:通过定义变量,可以在多个地方重用相同的配置或查询逻辑,而不需要重复编写相同的代码。这使得 Dashboard 的维护和更新变得更加简单高效。
|
||||
|
||||
- 灵活的配置选项:变量提供了多种配置选项,如预定义的静态值列表、从数据源动态查询值、正则表达式过滤等,使得变量的应用更加灵活和强大。
|
||||
|
||||
|
||||
Grafana 提供了内置变量和自定义变量,它们都可以在编写 SQL 时引用,引用的方式是 `$variableName`,`variableName` 是变量的名字,其他引用方式请参考 [引用方式](https://grafana.com/docs/grafana/latest/dashboards/variables/variable-syntax/)。
|
||||
|
||||
### 内置变量
|
||||
Grafana 内置了 `from`、`to` 和 `interval` 等变量,都取自于 Grafana 插件面板。其含义如下:
|
||||
- `from` 查询范围的起始时间
|
||||
- `to` 查询范围的结束时间
|
||||
- `interval` 窗口切分间隔
|
||||
|
||||
对于每个查询都建议设置查询范围的起始时间和结束时间,可以有效地减少 TDengine 服务端执行查询扫描的数据量。`interval` 是窗口切分的大小,在 Grafana 11 版本中,其大小由时间范围和返回点数计算而得。
|
||||
除了上述三个常用变量,Grafana 还提供了如 `__timezone`, `__org`, `__user` 等变量,详情请参考 [内置变量](https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#global-variables)。
|
||||
|
||||
### 自定义变量
|
||||
我们可以在 Dashboard 中增加自定义变量。自定义变量和内置变量的使用方式没有区别,都是在 SQL 中用 `$variableName` 进行引用。
|
||||
自定义变量支持多种类型,常见的类型包括 `Query`(查询)、`Constant`(常量)、`Interval`(间隔)、`Data source`(数据源)等。
|
||||
自定义变量可以引用其他自定义变量,比如一个变量表示区域,另一个变量可以引用区域的值,来查询这个区域的设备。
|
||||
#### 添加查询类型变量
|
||||
在 Dashboard 的配置中,选择【Variables】,然后点击【New variable】:
|
||||
1. 在 “Name” 字段中,输入你的变量名,此处我们设置变量名为 `selected_groups`。
|
||||
2. 在【Select variable type】下拉菜单中,选择 “Query”(查询)。
|
||||
根据选择的变量类型,配置相应的选项。例如,如果选择了 “Query” 类型,你需要指定数据源和用于获取变量值的查询语句。此处我们还以智能电表为例,设置查询类型,选择数据源后,配置 SQL 为 `select distinct(groupid) from power.meters where groupid < 3 and ts > $from and ts < $to;`
|
||||
3. 点击底部的【Run Query】后,可以在 “Preview of values”(值预览)部分,查看到根据你的配置生成的变量值。
|
||||
4. 还有其他配置不再赘述,完成配置后,点击页面底部的【Apply】(应用)按钮,然后点击右上角的【Save dashboard】保存。
|
||||
|
||||
完成以上步骤后,我们就成功在 Dashboard 中添加了一个新的自定义变量 `$selected_groups`。我们可以后面在 Dashboard 的查询中通过 `$selected_groups` 的方式引用这个变量。
|
||||
|
||||
我们还可以再新增自定义变量来引用这个 `selected_groups` 变量,比如我们新增一个名为 `tbname_max_current` 的查询变量,其 SQL 为 `select tbname from power.meters where groupid = $selected_groups and ts > $from and ts < $to;`
|
||||
|
||||
#### 添加间隔类型变量
|
||||
我们可以自定义时间窗口间隔,可以更加贴合业务需求。
|
||||
1. 在 “Name” 字段中,输入变量名为 `interval`。
|
||||
2. 在 【Select variable type】下拉菜单中,选择 “Interval”(间隔)。
|
||||
3. 在 【Interval options】选项中输入 `1s,2s,5s,10s,15s,30s,1m`。
|
||||
4. 还有其他配置不再赘述,完成配置后,点击页面底部的【Apply】(应用)按钮,然后点击右上角的【Save dashboard】保存。
|
||||
|
||||
完成以上步骤后,我们就成功在 Dashboard 中添加了一个新的自定义变量 `$interval`。我们可以后面在 Dashboard 的查询中通过 `$interval` 的方式引用这个变量。
|
||||
|
||||
## TDengine 时间序列查询支持
|
||||
TDengine 在支持标准 SQL 的基础之上,还提供了一系列满足时序业务场景需求的特色查询语法,这些语法能够为时序场景的应用的开发带来极大的便利。
|
||||
- `partition by` 子句可以按一定的维度对数据进行切分,然后在切分出的数据空间内再进行一系列的计算。绝大多数情况可以替代 `group by`。
|
||||
- `interval` 子句用于产生相等时间周期的窗口
|
||||
- `fill` 语句指定某一窗口区间数据缺失的情况下的填充模式
|
||||
- `时间戳伪列` 如果需要在结果中输出聚合结果所对应的时间窗口信息,需要在 SELECT 子句中使用时间戳相关的伪列: 时间窗口起始时间 (_wstart), 时间窗口结束时间 (_wend) 等
|
||||
|
||||
上述特性详细介绍可以参考 [特色查询](../../taos-sql/distinguished/)。
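下面给出一个结合上述子句的示例(使用下文示例中的 power.meters 超级表,表名和列名为假设):

```sql
-- 最近一小时内,按 groupid 分组、每 1 分钟窗口的平均电流;
-- 空窗口以 NULL 填充,窗口起止时间来自伪列
SELECT _wstart, _wend, groupid, AVG(current) AS avg_current
FROM power.meters
WHERE ts > NOW - 1h
PARTITION BY groupid
INTERVAL(1m)
FILL(NULL);
```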
|
||||
|
||||
## 创建 Dashboard
|
||||
|
||||
回到主界面创建 Dashboard,点击【Add Query】进入面板查询页面:
|
||||
|
||||

|
||||
|
||||
如上图所示,在 Query 中选中 `TDengine` 数据源,在下方查询框可输入相应 SQL 进行查询。 我们继续用智能电表来举例,为了展示曲线美观,此处**用了虚拟数据**。
|
||||
具体说明如下:
|
||||
如上图所示,在 “Query” 中选中 `TDengine` 数据源,在下方查询框可输入相应 SQL 进行查询。我们继续用智能电表来举例,为了展示曲线美观,此处**用了虚拟数据**。
|
||||
|
||||
- INPUT SQL:输入要查询的语句(该 SQL 语句的结果集应为两列多行),例如:`select _wstart as ts, avg(current) as current from power.meters where ts > $from and ts < $to interval($interval) fill(null)` ,其中,from、to 和 interval 为 TDengine 插件的内置变量,表示从 Grafana 插件面板获取的时间查询范围和窗口切分间隔。除了内置变量外,也支持使用自定义模板变量。
|
||||
- ALIAS BY:可设置当前查询别名。
|
||||
- GENERATE SQL: 点击该按钮会自动替换相应变量,并生成最终执行的语句。
|
||||
- Group by column(s): **半角**逗号分隔的 `group by` 或 `partition by` 列名。如果是 `group by` or `partition by` 查询语句,设置 `Group by` 列,可以展示多维数据。例如:INPUT SQL 为 `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)`,设置 Group by 列名为 `groupid`,可以按 `groupid` 展示数据。
|
||||
- Group By Format: Group by 或 Partition by 场景下多维数据 legend 格式化格式。例如上述 INPUT SQL,将 `Group By Format` 设置为 `groupid-{{groupid}}`,展示的 legend 名字为格式化的分组名。
|
||||
## 时间序列数据展示
|
||||
假设我们想查询一段时间内的平均电流大小,时间窗口按 `$interval` 切分,若某一时间窗口区间数据缺失,填充 null。
|
||||
- “INPUT SQL”:输入要查询的语句(该 SQL 语句的结果集应为两列多行),此处输入:`select _wstart as ts, avg(current) as current from power.meters where groupid in ($selected_groups) and ts > $from and ts < $to interval($interval) fill(null)`,其中,from、to 和 interval 为 Grafana 内置变量,selected_groups 为自定义变量。
|
||||
- “ALIAS BY”:可设置当前查询别名。
|
||||
- “GENERATE SQL”:点击该按钮会自动替换相应变量,并生成最终执行的语句。
|
||||
|
||||
在顶部的自定义变量中,若选择 `selected_groups` 的值为 1,则查询 `meters` 超级表中 `groupid` 为 1 的所有设备电流平均值变化如下图:
|
||||
|
||||

|
||||
|
||||
:::note
|
||||
|
||||
|
@ -196,17 +254,24 @@ docker run -d \
|
|||
|
||||
:::
|
||||
|
||||
查询 `meters` 超级表所有设备电流平均值变化如下图:
|
||||
## 时间序列数据分组展示
|
||||
假设我们想查询一段时间内的平均电流大小,按 `groupid` 分组展示,我们可以修改之前的 SQL 为 `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)`
|
||||
|
||||

|
||||
- “Group by column(s)”:**半角**逗号分隔的 `group by` 或 `partition by` 列名。如果是 `group by` 或 `partition by` 查询语句,设置 “Group by” 列,可以展示多维数据。此处设置 “Group by” 列名为 `groupid`,可以按 `groupid` 分组展示数据。
|
||||
- “Group By Format”:`Group by` 或 `Partition by` 场景下多维数据 legend 格式化格式。例如上述 INPUT SQL,将 “Group By Format” 设置为 `groupid-{{groupid}}`,展示的 legend 名字为格式化的分组名。
|
||||
|
||||
查询 `meters` 超级表所有设备电流平均值,并按照 `groupid` 分组展示如下图:
|
||||
完成设置后,按照 `groupid` 分组展示如下图:
|
||||
|
||||

|
||||
|
||||
> 关于如何使用 Grafana 创建相应的监测界面以及更多有关使用 Grafana 的信息,请参考 Grafana 官方的[文档](https://grafana.com/docs/)。
|
||||
|
||||
### 导入 Dashboard
|
||||
## 性能建议
|
||||
- **所有查询加上时间范围**,在时序数据库中,如果不加查询的时间范围,会扫表导致性能低下。常见的 SQL 写法是 `select column_name from db.table where ts > $from and ts < $to;`
|
||||
- 对于最新状态类型的查询,我们一般建议在**创建数据库的时候打开缓存**(`CACHEMODEL` 设置为 last_row 或者 both),常见的 SQL 写法是 `select last(column_name) from db.table where ts > $from and ts < $to;`
|
||||
|
||||
|
||||
## 导入 Dashboard
|
||||
|
||||
在数据源配置页面,您可以为该数据源导入 TDinsight 面板,作为 TDengine 集群的监控可视化工具。如果 TDengine 服务端为 3.0 版本请选择 `TDinsight for 3.x` 导入。注意 TDinsight for 3.x 需要运行和配置 taoskeeper。
|
||||
|
||||
|
@ -216,7 +281,148 @@ docker run -d \
|
|||
|
||||
使用 TDengine 作为数据源的其他面板,可以[在此搜索](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource)。以下是一份不完全列表:
|
||||
|
||||
- [15146](https://grafana.com/grafana/dashboards/15146): 监控多个 TDengine 集群
|
||||
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine 告警示例
|
||||
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight
|
||||
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf 采集节点信息的数据展示
|
||||
- [15146](https://grafana.com/grafana/dashboards/15146): 监控多个 TDengine 集群
|
||||
- [15155](https://grafana.com/grafana/dashboards/15155): TDengine 告警示例
|
||||
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight
|
||||
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf 采集节点信息的数据展示
|
||||
|
||||
## 告警配置简介
|
||||
### 告警配置流程
|
||||
TDengine Grafana 插件支持告警,如果要配置告警,需要以下几个步骤:
|
||||
1. 配置联络点(“Contact points”):配置通知渠道,包括 DingDing、Email、Slack、WebHook、Prometheus Alertmanager 等
|
||||
2. 配置告警通知策略(“Notification policies”):配置告警发送到哪个通道的路由,以及发送通知的时间和重复频率
|
||||
3. 配置 “Alert rules”:配置详细的告警规则
|
||||
3.1 配置告警名称
|
||||
3.2 配置查询及告警触发条件
|
||||
3.3 配置规则评估策略
|
||||
3.4 配置标签和告警通道
|
||||
3.5 配置通知文案
|
||||
|
||||
### 告警配置界面
|
||||
在 Grafana 11 告警界面一共有 6 个 Tab,分别是 “Alert rules”、“Contact points”、“Notification policies”、“Silences”、“Groups” 和 “Settings”。
|
||||
- “Alert rules” 告警规则列表,用于展示和配置告警规则
|
||||
- “Contact points” 通知渠道,包括 DingDing、Email、Slack、WebHook、Prometheus Alertmanager 等
|
||||
- “Notification policies” 配置告警发送到哪个通道的路由,以及发送通知的时间和重复频率
|
||||
- “Silences” 配置告警静默时间段
|
||||
- “Groups” 告警组,配置的告警触发后会在这里分组显示
|
||||
- “Settings” 提供通过 JSON 方式修改告警配置
|
||||
|
||||
## 配置邮件联络点
|
||||
### Grafana Server 配置文件修改
|
||||
在 Grafana 服务的配置文件中添加 SMTP/Emailing 和 Alerting 模块,以 Linux 系统为例,其配置文件一般位于 `/etc/grafana/grafana.ini`
|
||||
在配置文件中增加下面内容:
|
||||
|
||||
```ini
|
||||
#################################### SMTP / Emailing ##########################
|
||||
[smtp]
|
||||
enabled = true
|
||||
host = smtp.qq.com:465 #使用的邮箱
|
||||
user = receiver@foxmail.com
|
||||
password = *********** #使用mail授权码
|
||||
skip_verify = true
|
||||
from_address = sender@foxmail.com
|
||||
```
|
||||
|
||||
然后重启 Grafana 服务即可, 以 Linux 系统为例,执行 `systemctl restart grafana-server.service`
|
||||
|
||||
### Grafana 页面创建新联络点
|
||||
|
||||
在 Grafana 页面找到 “Home” -> “Alerting” -> “Contact points”,创建新联络点
|
||||
“Name”:Email Contact Point
|
||||
“Integration”:选择联络类型,这里选择 Email,填写邮件接收地址,完成后保存联络点
|
||||
|
||||

|
||||
|
||||
## 配置飞书联络点
|
||||
|
||||
### 飞书机器人配置
|
||||
1. “飞书工作台” -> “获取应用” -> “搜索飞书机器人助手” -> “新建指令”
|
||||
2. 选择触发器:Grafana
|
||||
3. 选择操作:通过官方机器人发送消息,填写发送对象和发送内容
|
||||
|
||||

|
||||
|
||||
### Grafana 配置飞书联络点
|
||||
|
||||
在 Grafana 页面找到 “Home” -> “Alerting” -> “Contact points”,创建新联络点
|
||||
“Name”:Feishu Contact Point
|
||||
“Integration”:选择联络类型,这里选择 Webhook,并填写 URL(在飞书机器人助手的 Grafana 触发器 Webhook 地址),完成后保存联络点
|
||||
|
||||

|
||||
|
||||
## 通知策略
|
||||
配置好联络点后,可以看到已有一个 Default Policy
|
||||
|
||||

|
||||
|
||||
点击右侧 “...” -> “Edit”,然后编辑默认通知策略,弹出配置窗口:
|
||||
|
||||

|
||||
|
||||
然后配置下列参数:
|
||||
- “Group wait”:发送首次告警之前的等待时间。
|
||||
- “Group interval”:发送第一个告警后,为该组发送下一批新告警的等待时间。
|
||||
- “Repeat interval”:成功发送告警后再次重复发送告警的等待时间。
|
||||
|
||||
## 配置告警规则
|
||||
|
||||
### 配置查询和告警触发条件
|
||||
|
||||
在需要配置告警的面板中选择 “Edit” -> “Alert” -> “New alert rule”。
|
||||
|
||||
1. “Enter alert rule name”(输入告警规则名称):此处以智能电表为例输入 `power meters alert`
|
||||
2. “Define query and alert condition”(定义查询和告警触发条件)
|
||||
2.1 选择数据源:`TDengine Datasource`
|
||||
2.2 查询语句:
|
||||
```sql
|
||||
select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)
|
||||
```
|
||||
2.3 设置 “Expression”(表达式):`Threshold is above 100`
|
||||
2.4 点击【Set as alert condition】
|
||||
2.5 “Preview”:查看设置的规则的结果
|
||||
|
||||
完成设置后可以看到下面图片展示:
|
||||
|
||||

|
||||
|
||||
### 配置表达式和计算规则
|
||||
|
||||
Grafana 的 “Expression”(表达式)支持对数据做各种操作和计算,其类型分为:
|
||||
1. “Reduce”:将所选时间范围内的时间序列值聚合为单个值
|
||||
1.1 “Function” 用来设置聚合方法,支持 Min、Max、Last、Mean、Sum 和 Count。
|
||||
1.2 “Mode” 支持下面三种:
|
||||
- “Strict”:如果查询不到数据,数据会赋值为 NaN。
|
||||
- “Drop Non-numeric Value”:去掉非法数据结果。
|
||||
- “Replace Non-numeric Value”:如果是非法数据,使用固定值进行替换。
|
||||
2. “Threshold”:检查时间序列数据是否符合阈值判断条件。当条件为假时返回 0,为真则返回 1。支持下列方式:
|
||||
- Is above (x > y)
|
||||
- Is below (x < y)
|
||||
- Is within range (x > y1 AND x < y2)
|
||||
- Is outside range (x < y1 OR x > y2)
|
||||
3. “Math”:对时间序列的数据进行数学运算。
|
||||
4. “Resample”:更改每个时间序列中的时间戳使其具有一致的时间间隔,以便在它们之间执行数学运算。
|
||||
5. “Classic condition (legacy)”:可配置多个逻辑条件,判断是否触发告警。
|
||||
|
||||
如上节截图显示,此处我们设置最大值超过 100 触发告警。
|
||||
|
||||
### 配置评估策略
|
||||
|
||||

|
||||
|
||||
完成下面配置:
|
||||
- “Folder”:设置告警规则所属目录。
|
||||
- “Evaluation group”:设置告警规则评估组。“Evaluation group” 可以选择已有组或者新建组,新建组可以设置组名和评估时间间隔。
|
||||
- “Pending period”:在告警规则的阈值被触发后,异常值持续多长时间可以触发告警,合理设置可以避免误报。
|
||||
|
||||
### 配置标签和告警通道
|
||||

|
||||
|
||||
完成下面配置:
|
||||
- “Labels” 将标签添加到规则中,以便进行搜索、静默或路由到通知策略。
|
||||
- “Contact point” 选择联络点,当告警发生时通过设置的联络点进行通知。
|
||||
|
||||
### 配置通知文案
|
||||
|
||||

|
||||
|
||||
设置 “Summary” 和 “Description” 后,若告警触发,将会收到告警通知。
|
||||
|
|
After Width: | Height: | Size: 83 KiB |
After Width: | Height: | Size: 39 KiB |
After Width: | Height: | Size: 105 KiB |
After Width: | Height: | Size: 41 KiB |
After Width: | Height: | Size: 36 KiB |
After Width: | Height: | Size: 116 KiB |
After Width: | Height: | Size: 37 KiB |
After Width: | Height: | Size: 98 KiB |
After Width: | Height: | Size: 204 KiB |
Before Width: | Height: | Size: 124 KiB After Width: | Height: | Size: 154 KiB |
Before Width: | Height: | Size: 122 KiB After Width: | Height: | Size: 157 KiB |
|
@ -134,7 +134,6 @@ extern uint16_t tsMonitorPort;
|
|||
extern int32_t tsMonitorMaxLogs;
|
||||
extern bool tsMonitorComp;
|
||||
extern bool tsMonitorLogProtocol;
|
||||
extern int32_t tsMonitorIntervalForBasic;
|
||||
extern bool tsMonitorForceV2;
|
||||
|
||||
// audit
|
||||
|
|
|
@ -250,7 +250,7 @@
|
|||
TD_DEF_MSG_TYPE(TDMT_MND_DROP_TB_WITH_TSMA, "drop-tb-with-tsma", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_STREAM_UPDATE_CHKPT_EVT, "stream-update-chkpt-evt", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_STREAM_CHKPT_REPORT, "stream-chkpt-report", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_STREAM_CHKPT_CONSEN, "stream-chkpt-consen", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_STREAM_CONSEN_TIMER, "stream-consen-tmr", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_MAX_MSG, "mnd-max", NULL, NULL)
|
||||
TD_CLOSE_MSG_SEG(TDMT_END_MND_MSG)
|
||||
|
||||
|
@ -341,6 +341,7 @@
|
|||
TD_DEF_MSG_TYPE(TDMT_STREAM_CREATE, "stream-create", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_STREAM_DROP, "stream-drop", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_STREAM_RETRIEVE_TRIGGER, "stream-retri-trigger", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_STREAM_CONSEN_CHKPT, "stream-consen-chkpt", NULL, NULL)
|
||||
TD_CLOSE_MSG_SEG(TDMT_STREAM_MSG)
|
||||
|
||||
TD_NEW_MSG_SEG(TDMT_MON_MSG) //5 << 8
|
||||
|
|
|
@ -127,6 +127,9 @@ int32_t TEST_char2ts(const char* format, int64_t* ts, int32_t precision, const c
|
|||
/// @return 0 success, other fail
|
||||
int32_t offsetOfTimezone(char* tzStr, int64_t* offset);
|
||||
|
||||
bool checkRecursiveTsmaInterval(int64_t baseInterval, int8_t baseUnit, int64_t interval, int8_t unit, int8_t precision,
|
||||
bool checkEq);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
#endif
|
||||
|
|
|
@ -29,7 +29,8 @@ int32_t tqStreamTaskProcessCheckpointReadyMsg(SStreamMeta* pMeta, SRpcMsg* pMsg)
|
|||
int32_t tqStreamProcessStreamHbRsp(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamProcessReqCheckpointRsp(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamProcessChkptReportRsp(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamProcessConsensusChkptRsp(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamProcessConsensusChkptRsp2(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamTaskProcessConsenChkptIdReq(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamProcessCheckpointReadyRsp(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamTaskProcessDeployReq(SStreamMeta* pMeta, SMsgCb* cb, int64_t sversion, char* msg, int32_t msgLen,
|
||||
bool isLeader, bool restored);
|
||||
|
@ -37,12 +38,12 @@ int32_t tqStreamTaskProcessDropReq(SStreamMeta* pMeta, char* msg, int32_t msgLen
|
|||
int32_t tqStreamTaskProcessRunReq(SStreamMeta* pMeta, SRpcMsg* pMsg, bool isLeader);
|
||||
int32_t tqStartTaskCompleteCallback(SStreamMeta* pMeta);
|
||||
int32_t tqStreamTasksGetTotalNum(SStreamMeta* pMeta);
|
||||
int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, char* msg);
|
||||
int32_t tqStreamTaskProcessRetrieveTriggerReq(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamTaskProcessRetrieveTriggerRsp(SStreamMeta* pMeta, SRpcMsg* pMsg);
|
||||
int32_t tqStreamTaskProcessTaskPauseReq(SStreamMeta* pMeta, char* pMsg);
|
||||
int32_t tqStreamTaskProcessTaskResumeReq(void* handle, int64_t sversion, char* pMsg, bool fromVnode);
|
||||
int32_t tqStreamTaskProcessUpdateCheckpointReq(SStreamMeta* pMeta, bool restored, char* msg, int32_t msgLen);
|
||||
int32_t tqStreamTaskProcessUpdateCheckpointReq(SStreamMeta* pMeta, bool restored, char* msg);
|
||||
|
||||
void tqSetRestoreVersionInfo(SStreamTask* pTask);
|
||||
int32_t tqExpandStreamTask(SStreamTask* pTask);
|
||||
|
|
|
@ -41,6 +41,7 @@ typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx);
|
|||
typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock *pBlock);
|
||||
typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
|
||||
typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx);
|
||||
typedef int32_t (*processFuncByRow)(SArray* pCtx); // array of SqlFunctionCtx
|
||||
|
||||
typedef struct SScalarFuncExecFuncs {
|
||||
FExecGetEnv getEnv;
|
||||
|
@ -53,6 +54,7 @@ typedef struct SFuncExecFuncs {
|
|||
FExecProcess process;
|
||||
FExecFinalize finalize;
|
||||
FExecCombine combine;
|
||||
processFuncByRow processFuncByRow;
|
||||
} SFuncExecFuncs;
|
||||
|
||||
#define MAX_INTERVAL_TIME_WINDOW 10000000 // maximum allowed time windows in final results
|
||||
|
@ -253,6 +255,7 @@ typedef struct SqlFunctionCtx {
|
|||
bool hasPrimaryKey;
|
||||
SFuncInputRowIter rowIter;
|
||||
bool bInputFinished;
|
||||
bool hasWindowOrGroup; // denote that the function is used with time window or group
|
||||
} SqlFunctionCtx;
|
||||
|
||||
typedef struct tExprNode {
|
||||
|
|
|
@ -255,6 +255,7 @@ bool fmIsIgnoreNullFunc(int32_t funcId);
|
|||
bool fmIsConstantResFunc(SFunctionNode* pFunc);
|
||||
bool fmIsSkipScanCheckFunc(int32_t funcId);
|
||||
bool fmIsPrimaryKeyFunc(int32_t funcId);
|
||||
bool fmIsProcessByRowFunc(int32_t funcId);
|
||||
|
||||
void getLastCacheDataType(SDataType* pType, int32_t pkBytes);
|
||||
SFunctionNode* createFunction(const char* pName, SNodeList* pParameterList);
|
||||
|
|
|
@ -38,6 +38,13 @@ typedef enum {
|
|||
SLOW_LOG_READ_QUIT = 3,
|
||||
} SLOW_LOG_QUEUE_TYPE;
|
||||
|
||||
static char* queueTypeStr[] = {
|
||||
"SLOW_LOG_WRITE",
|
||||
"SLOW_LOG_READ_RUNNING",
|
||||
"SLOW_LOG_READ_BEGINNIG",
|
||||
"SLOW_LOG_READ_QUIT"
|
||||
};
|
||||
|
||||
#define SLOW_LOG_SEND_SIZE_MAX 1024*1024
|
||||
|
||||
typedef struct {
|
||||
|
@ -65,7 +72,7 @@ typedef struct {
|
|||
} MonitorSlowLogData;
|
||||
|
||||
void monitorClose();
|
||||
void monitorInit();
|
||||
int32_t monitorInit();
|
||||
|
||||
void monitorClientSQLReqInit(int64_t clusterKey);
|
||||
void monitorClientSlowQueryInit(int64_t clusterId);
|
||||
|
|
|
@ -226,7 +226,6 @@ void monSetQmInfo(SMonQmInfo *pInfo);
void monSetSmInfo(SMonSmInfo *pInfo);
void monSetBmInfo(SMonBmInfo *pInfo);
void monGenAndSendReport();
void monGenAndSendReportBasic();
void monSendContent(char *pCont, const char* uri);

void tFreeSMonMmInfo(SMonMmInfo *pInfo);

@ -640,6 +640,7 @@ typedef struct SCreateTSMAStmt {
  STSMAOptions*   pOptions;
  SNode*          pPrevQuery;
  SMCreateSmaReq* pReq;
  uint8_t         precision;
} SCreateTSMAStmt;

typedef struct SDropTSMAStmt {

@ -415,6 +415,7 @@ typedef struct SSelectStmt {
  int32_t       returnRows;  // EFuncReturnRows
  ETimeLineMode timeLineCurMode;
  ETimeLineMode timeLineResMode;
  int32_t       lastProcessByRowFuncId;
  bool          timeLineFromOrderBy;
  bool          isEmptyResult;
  bool          isSubquery;

@ -89,6 +89,7 @@ typedef struct SParseContext {
  bool        isView;
  bool        isAudit;
  bool        nodeOffline;
  bool        isStmtBind;
  const char* svrVer;
  SArray*     pTableMetaPos;    // sql table pos => catalog data pos
  SArray*     pTableVgroupPos;  // sql table pos => catalog data pos

@ -216,6 +216,7 @@ typedef struct SRestoreCheckpointInfo {
  int64_t startTs;
  int64_t streamId;
  int64_t checkpointId;  // latest checkpoint id
  int32_t transId;       // transaction id of the consensus-checkpointId update transaction
  int32_t taskId;
  int32_t nodeId;
} SRestoreCheckpointInfo;

@ -223,16 +224,6 @@ typedef struct SRestoreCheckpointInfo {
int32_t tEncodeRestoreCheckpointInfo(SEncoder* pEncoder, const SRestoreCheckpointInfo* pReq);
int32_t tDecodeRestoreCheckpointInfo(SDecoder* pDecoder, SRestoreCheckpointInfo* pReq);

typedef struct SRestoreCheckpointInfoRsp {
  int64_t streamId;
  int64_t checkpointId;
  int64_t startTs;
  int32_t taskId;
} SRestoreCheckpointInfoRsp;

int32_t tEncodeRestoreCheckpointInfoRsp(SEncoder* pCoder, const SRestoreCheckpointInfoRsp* pInfo);
int32_t tDecodeRestoreCheckpointInfoRsp(SDecoder* pCoder, SRestoreCheckpointInfoRsp* pInfo);

typedef struct {
  SMsgHead head;
  int64_t  streamId;

@ -242,7 +233,6 @@ typedef struct {

typedef struct SCheckpointConsensusEntry {
  SRestoreCheckpointInfo req;
  SRpcMsg                rsp;
  int64_t                ts;
} SCheckpointConsensusEntry;

@ -272,9 +272,9 @@ typedef struct SCheckpointInfo {
  int64_t checkpointTime;  // latest checkpoint time
  int64_t processedVer;
  int64_t nextProcessVer;  // current offset in WAL, not serialized

  SActiveCheckpointInfo* pActiveInfo;
  int64_t msgVer;
  int32_t consensusTransId;  // consensus checkpoint id
  SActiveCheckpointInfo* pActiveInfo;
} SCheckpointInfo;

typedef struct SStreamStatus {

@ -289,6 +289,8 @@ typedef struct SStreamStatus {
  int32_t inScanHistorySentinel;
  bool    appendTranstateBlock;  // has appended the transfer-state data block already
  bool    removeBackendFiles;    // remove backend files on disk when freeing stream tasks
  bool    sendConsensusChkptId;
  bool    requireConsensusChkptId;
} SStreamStatus;

typedef struct SDataRange {

@ -528,10 +530,11 @@ typedef int32_t (*__state_trans_user_fn)(SStreamTask*, void* param);

SStreamTask* tNewStreamTask(int64_t streamId, int8_t taskLevel, SEpSet* pEpset, bool fillHistory, int64_t triggerParam,
                            SArray* pTaskList, bool hasFillhistory, int8_t subtableWithoutMd5);
void tFreeStreamTask(SStreamTask* pTask);
int32_t tEncodeStreamTask(SEncoder* pEncoder, const SStreamTask* pTask);
int32_t tDecodeStreamTask(SDecoder* pDecoder, SStreamTask* pTask);
void tFreeStreamTask(SStreamTask* pTask);
int32_t streamTaskInit(SStreamTask* pTask, SStreamMeta* pMeta, SMsgCb* pMsgCb, int64_t ver);
void streamFreeTaskState(SStreamTask* pTask, ETaskStatus status);

int32_t tDecodeStreamTaskChkInfo(SDecoder* pDecoder, SCheckpointInfo* pChkpInfo);
int32_t tDecodeStreamTaskId(SDecoder* pDecoder, STaskId* pTaskId);

@ -576,6 +579,8 @@ typedef struct STaskCkptInfo {
  int64_t activeId;          // current active checkpoint id
  int32_t activeTransId;     // checkpoint trans id
  int8_t  failed;            // denote if the checkpoint is failed or not
  int8_t  consensusChkptId;  // requires the consensus-checkpointId
  int64_t consensusTs;
} STaskCkptInfo;

typedef struct STaskStatusEntry {

@ -586,8 +591,6 @@ typedef struct STaskStatusEntry {
  int32_t nodeId;
  SVersionRange verRange;      // start/end version in WAL, only valid for source task
  int64_t processedVer;        // only valid for source task
  bool    inputQChanging;      // inputQ is changing or not
  int64_t inputQUnchangeCounter;
  double  inputQUsed;          // in MiB
  double  inputRate;
  double  procsThroughput;     // duration between one element put into input queue and being processed.

@ -616,8 +619,8 @@ typedef struct SStreamTaskState {

typedef struct SCheckpointConsensusInfo {
  SArray* pTaskList;
  int64_t checkpointId;
  int64_t genTs;
  int32_t numOfTasks;
  int64_t streamId;
} SCheckpointConsensusInfo;

int32_t streamSetupScheduleTrigger(SStreamTask* pTask);

@ -800,6 +803,7 @@ int32_t streamTaskBroadcastRetrieveReq(SStreamTask* pTask, SStreamRetrieveReq* r
void streamTaskSendRetrieveRsp(SStreamRetrieveReq* pReq, SRpcMsg* pRsp);

int32_t streamProcessHeartbeatRsp(SStreamMeta* pMeta, SMStreamHbRspMsg* pRsp);
int32_t streamTaskSendPreparedCheckpointsourceRsp(SStreamTask* pTask);

#ifdef __cplusplus

@ -46,9 +46,11 @@ const char* terrstr();
char* taosGetErrMsgReturn();
char* taosGetErrMsg();
int32_t* taosGetErrno();
int32_t* taosGetErrln();
int32_t taosGetErrSize();
#define terrno (*taosGetErrno())
#define terrMsg (taosGetErrMsg())
#define terrln (*taosGetErrln())

#define SET_ERROR_MSG(MSG, ...) \
  snprintf(terrMsg, ERR_MSG_LEN, MSG, ##__VA_ARGS__)
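
Taken together, `terrno`, `terrMsg`, and `terrln` give each thread its own error slot. A minimal sketch of how a callee might use them; the helper function and the specific error code are illustrative assumptions, only the macros come from the hunk above:

```c
// Hypothetical helper: record a thread-local error code plus a formatted message.
static int32_t openSomething(const char *path) {
  if (path == NULL) {
    terrno = TSDB_CODE_INVALID_PARA;              // set the per-thread error code
    SET_ERROR_MSG("invalid path: %s", "(null)");  // format into the per-thread terrMsg buffer
    return -1;
  }
  return 0;
}
```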

@ -136,6 +138,7 @@ int32_t taosGetErrSize();
#define TSDB_CODE_TIMEOUT_ERROR        TAOS_DEF_ERROR_CODE(0, 0x012C)
#define TSDB_CODE_MSG_ENCODE_ERROR     TAOS_DEF_ERROR_CODE(0, 0x012D)
#define TSDB_CODE_NO_ENOUGH_DISKSPACE  TAOS_DEF_ERROR_CODE(0, 0x012E)
#define TSDB_CODE_THIRDPARTY_ERROR     TAOS_DEF_ERROR_CODE(0, 0x012F)

#define TSDB_CODE_APP_IS_STARTING      TAOS_DEF_ERROR_CODE(0, 0x0130)
#define TSDB_CODE_APP_IS_STOPPING      TAOS_DEF_ERROR_CODE(0, 0x0131)

@ -145,6 +148,7 @@ int32_t taosGetErrSize();
#define TSDB_CODE_IP_NOT_IN_WHITE_LIST TAOS_DEF_ERROR_CODE(0, 0x0134)
#define TSDB_CODE_FAILED_TO_CONNECT_S3 TAOS_DEF_ERROR_CODE(0, 0x0135)
#define TSDB_CODE_MSG_PREPROCESSED     TAOS_DEF_ERROR_CODE(0, 0x0136)  // internal
#define TSDB_CODE_OUT_OF_BUFFER        TAOS_DEF_ERROR_CODE(0, 0x0137)

//client
#define TSDB_CODE_TSC_INVALID_OPERATION TAOS_DEF_ERROR_CODE(0, 0x0200)

@ -831,6 +835,7 @@ int32_t taosGetErrSize();
#define TSDB_CODE_PAR_TBNAME_ERROR        TAOS_DEF_ERROR_CODE(0, 0x267D)
#define TSDB_CODE_PAR_TBNAME_DUPLICATED   TAOS_DEF_ERROR_CODE(0, 0x267E)
#define TSDB_CODE_PAR_TAG_NAME_DUPLICATED TAOS_DEF_ERROR_CODE(0, 0x267F)
#define TSDB_CODE_PAR_NOT_ALLOWED_DIFFERENT_BY_ROW_FUNC TAOS_DEF_ERROR_CODE(0, 0x2680)
#define TSDB_CODE_PAR_INTERNAL_ERROR      TAOS_DEF_ERROR_CODE(0, 0x26FF)

//planner

@ -198,15 +198,6 @@ int32_t tsCompressBigint2(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int3
                          int32_t nBuf);
int32_t tsDecompressBigint2(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint32_t cmprAlg,
                            void *pBuf, int32_t nBuf);
// for internal usage
int32_t getWordLength(char type);

int32_t tsDecompressIntImpl_Hw(const char *const input, const int32_t nelements, char *const output, const char type);
int32_t tsDecompressFloatImplAvx512(const char *const input, const int32_t nelements, char *const output);
int32_t tsDecompressFloatImplAvx2(const char *const input, const int32_t nelements, char *const output);
int32_t tsDecompressTimestampAvx512(const char *const input, const int32_t nelements, char *const output,
                                    bool bigEndian);
int32_t tsDecompressTimestampAvx2(const char *const input, const int32_t nelements, char *const output, bool bigEndian);

/*************************************************************************
 *                  STREAM COMPRESSION

@ -117,6 +117,15 @@ static FORCE_INLINE int32_t taosGetTbHashVal(const char *tbname, int32_t tblen,
  }
}

#define TAOS_CHECK_ERRNO(CODE)         \
  do {                                 \
    terrno = (CODE);                   \
    if (terrno != TSDB_CODE_SUCCESS) { \
      terrln = __LINE__;               \
      goto _exit;                      \
    }                                  \
  } while (0)
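
This macro underpins the serializer rewrites later in this diff: it stores the failing code in `terrno`, records the line in `terrln`, and jumps to a local `_exit` label. A minimal sketch of the expected call shape, modeled on the tmsg.c hunks below; the function itself is illustrative, not part of the patch:

```c
// Illustrative only: each call site must declare an _exit label and avoid
// early returns between the first check and that label.
static int32_t encodeTwoInts(SEncoder *pEncoder, int32_t a, int32_t b) {
  TAOS_CHECK_ERRNO(tEncodeI32(pEncoder, a));  // on failure: terrno/terrln set, jump to _exit
  TAOS_CHECK_ERRNO(tEncodeI32(pEncoder, b));
_exit:
  return terrno;  // 0 on success, otherwise the recorded error code
}
```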

#define TSDB_CHECK_CODE(CODE, LINO, LABEL) \
  do {                                     \
    if (TSDB_CODE_SUCCESS != (CODE)) {     \

@ -283,6 +283,7 @@ typedef struct SRequestObj {
  bool     inRetry;
  bool     isSubReq;
  bool     inCallback;
  bool     isStmtBind;  // is statement bind parameter
  uint32_t prevCode;    // previous error code: todo refactor, add update flag for catalog
  uint32_t retry;
  int64_t  allocatorRefId;

@ -454,6 +455,8 @@ enum {

void sqlReqLog(int64_t rid, bool killed, int32_t code, int8_t type);

void tmqMgmtClose(void);

#ifdef __cplusplus
}
#endif

@ -864,10 +864,15 @@ void taos_init_imp(void) {
  initQueryModuleMsgHandle();

  if (taosConvInit() != 0) {
    tscInitRes = -1;
    tscError("failed to init conv");
    return;
  }

  if (monitorInit() != 0) {
    tscInitRes = -1;
    tscError("failed to init monitor");
    return;
  }
  rpcInit();

  SCatalogCfg cfg = {.maxDBCacheNum = 100, .maxTblCacheNum = 100};

@ -891,7 +896,6 @@ void taos_init_imp(void) {
  taosThreadMutexInit(&appInfo.mutex, NULL);

  tscCrashReportInit();
  monitorInit();

  tscDebug("client is initialized successfully");
}

@ -206,6 +206,7 @@ int32_t buildRequest(uint64_t connId, const char* sql, int sqlLen, void* param,
  (*pRequest)->sqlstr[sqlLen] = 0;
  (*pRequest)->sqlLen = sqlLen;
  (*pRequest)->validateOnly = validateSql;
  (*pRequest)->isStmtBind = false;

  ((SSyncQueryParam*)(*pRequest)->body.interParam)->userParam = param;

@ -266,7 +267,8 @@ int32_t parseSql(SRequestObj* pRequest, bool topicQuery, SQuery** pQuery, SStmtC
      .isSuperUser = (0 == strcmp(pTscObj->user, TSDB_DEFAULT_USER)),
      .enableSysInfo = pTscObj->sysInfo,
      .svrVer = pTscObj->sVer,
      .nodeOffline = (pTscObj->pAppInfo->onlineDnodes < pTscObj->pAppInfo->totalDnodes)};
      .nodeOffline = (pTscObj->pAppInfo->onlineDnodes < pTscObj->pAppInfo->totalDnodes),
      .isStmtBind = pRequest->isStmtBind};

  cxt.mgmtEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
  int32_t code = catalogGetHandle(pTscObj->pAppInfo->clusterId, &cxt.pCatalog);

@ -57,8 +57,6 @@ void taos_cleanup(void) {
  }

  monitorClose();
  taosHashCleanup(appInfo.pInstMap);
  taosHashCleanup(appInfo.pInstMapByClusterId);
  tscStopCrashReport();

  hbMgrCleanUp();

@ -85,6 +83,8 @@ void taos_cleanup(void) {

  taosConvDestroy();

  tmqMgmtClose();

  tscInfo("all local resources released");
  taosCleanupCfg();
  taosCloseLog();

@ -18,6 +18,7 @@ int32_t quitCnt = 0;
tsem2_t     monitorSem;
STaosQueue* monitorQueue;
SHashObj*   monitorSlowLogHash;
char        tmpSlowLogPath[PATH_MAX] = {0};

static int32_t getSlowLogTmpDir(char* tmpPath, int32_t size){
  if (tsTempDir == NULL) {

@ -453,6 +454,10 @@ static int64_t getFileSize(char* path){
}

static int32_t sendSlowLog(int64_t clusterId, char* data, TdFilePtr pFile, int64_t offset, SLOW_LOG_QUEUE_TYPE type, char* fileName, void* pTransporter, SEpSet *epSet){
  if (data == NULL){
    taosMemoryFree(fileName);
    return -1;
  }
  MonitorSlowLogData* pParam = taosMemoryMalloc(sizeof(MonitorSlowLogData));
  if(pParam == NULL){
    taosMemoryFree(data);

@ -468,19 +473,27 @@ static int32_t sendSlowLog(int64_t clusterId, char* data, TdFilePtr pFile, int64
  return sendReport(pTransporter, epSet, data, MONITOR_TYPE_SLOW_LOG, pParam);
}

static void monitorSendSlowLogAtBeginning(int64_t clusterId, char** fileName, TdFilePtr pFile, int64_t offset, void* pTransporter, SEpSet *epSet){
static int32_t monitorReadSend(int64_t clusterId, TdFilePtr pFile, int64_t* offset, int64_t size, SLOW_LOG_QUEUE_TYPE type, char* fileName){
  SAppInstInfo* pInst = getAppInstByClusterId(clusterId);
  if(pInst == NULL){
    tscError("failed to get app instance by clusterId:%" PRId64, clusterId);
    return -1;
  }
  SEpSet ep = getEpSet_s(&pInst->mgmtEp);
  char* data = readFile(pFile, offset, size);
  return sendSlowLog(clusterId, data, (type == SLOW_LOG_READ_BEGINNIG ? pFile : NULL), *offset, type, fileName, pInst->pTransporter, &ep);
}

static void monitorSendSlowLogAtBeginning(int64_t clusterId, char** fileName, TdFilePtr pFile, int64_t offset){
  int64_t size = getFileSize(*fileName);
  if(size <= offset){
    processFileInTheEnd(pFile, *fileName);
    tscDebug("[monitor] monitorSendSlowLogAtBeginning delete file:%s", *fileName);
  }else{
    char* data = readFile(pFile, &offset, size);
    if(data != NULL){
      sendSlowLog(clusterId, data, pFile, offset, SLOW_LOG_READ_BEGINNIG, *fileName, pTransporter, epSet);
    int32_t code = monitorReadSend(clusterId, pFile, &offset, size, SLOW_LOG_READ_BEGINNIG, *fileName);
    tscDebug("[monitor] monitorSendSlowLogAtBeginning send slow log clusterId:%"PRId64",ret:%d", clusterId, code);
    *fileName = NULL;
    }
    tscDebug("[monitor] monitorSendSlowLogAtBeginning send slow log file:%p, data:%s", pFile, data);
  }
}

static void monitorSendSlowLogAtRunning(int64_t clusterId){

@ -500,17 +513,8 @@ static void monitorSendSlowLogAtRunning(int64_t clusterId){
    tscDebug("[monitor] monitorSendSlowLogAtRunning truncate file to 0 file:%p", pClient->pFile);
    pClient->offset = 0;
  }else{
    SAppInstInfo* pInst = getAppInstByClusterId(clusterId);
    if(pInst == NULL){
      tscError("failed to get app instance by clusterId:%" PRId64, clusterId);
      return;
    }
    SEpSet ep = getEpSet_s(&pInst->mgmtEp);
    char* data = readFile(pClient->pFile, &pClient->offset, size);
    if(data != NULL){
      sendSlowLog(clusterId, data, pClient->pFile, pClient->offset, SLOW_LOG_READ_RUNNING, NULL, pInst->pTransporter, &ep);
    }
    tscDebug("[monitor] monitorSendSlowLogAtRunning send slow log:%s", data);
    int32_t code = monitorReadSend(clusterId, pClient->pFile, &pClient->offset, size, SLOW_LOG_READ_RUNNING, NULL);
    tscDebug("[monitor] monitorSendSlowLogAtRunning send slow log clusterId:%"PRId64",ret:%d", clusterId, code);
  }
}

@ -532,16 +536,8 @@ static bool monitorSendSlowLogAtQuit(int64_t clusterId) {
      return true;
    }
  }else{
    SAppInstInfo* pInst = getAppInstByClusterId(clusterId);
    if(pInst == NULL) {
      return true;
    }
    SEpSet ep = getEpSet_s(&pInst->mgmtEp);
    char* data = readFile(pClient->pFile, &pClient->offset, size);
    if(data != NULL){
      sendSlowLog(clusterId, data, pClient->pFile, pClient->offset, SLOW_LOG_READ_QUIT, NULL, pInst->pTransporter, &ep);
    }
    tscInfo("[monitor] monitorSendSlowLogAtQuit send slow log:%s", data);
    int32_t code = monitorReadSend(clusterId, pClient->pFile, &pClient->offset, size, SLOW_LOG_READ_QUIT, NULL);
    tscDebug("[monitor] monitorSendSlowLogAtQuit send slow log clusterId:%"PRId64",ret:%d", clusterId, code);
  }
  return false;
}

@ -558,16 +554,11 @@ static void monitorSendAllSlowLogAtQuit(){
      pClient->pFile = NULL;
    }else if(pClient->offset == 0){
      int64_t* clusterId = (int64_t*)taosHashGetKey(pIter, NULL);
      SAppInstInfo* pInst = getAppInstByClusterId(*clusterId);
      if(pInst == NULL) {
        continue;
      }
      SEpSet ep = getEpSet_s(&pInst->mgmtEp);
      char* data = readFile(pClient->pFile, &pClient->offset, size);
      if(data != NULL && sendSlowLog(*clusterId, data, NULL, pClient->offset, SLOW_LOG_READ_QUIT, NULL, pInst->pTransporter, &ep) == 0){
      int32_t code = monitorReadSend(*clusterId, pClient->pFile, &pClient->offset, size, SLOW_LOG_READ_QUIT, NULL);
      tscDebug("[monitor] monitorSendAllSlowLogAtQuit send slow log clusterId:%"PRId64",ret:%d", *clusterId, code);
      if (code == 0){
        quitCnt++;
      }
      tscInfo("[monitor] monitorSendAllSlowLogAtQuit send slow log :%s", data);
    }
  }
}

@ -613,12 +604,8 @@ static void monitorSendAllSlowLog(){
      }
      continue;
    }
    SEpSet ep = getEpSet_s(&pInst->mgmtEp);
    char* data = readFile(pClient->pFile, &pClient->offset, size);
    if(data != NULL){
      sendSlowLog(*clusterId, data, NULL, pClient->offset, SLOW_LOG_READ_RUNNING, NULL, pInst->pTransporter, &ep);
    }
    tscDebug("[monitor] monitorSendAllSlowLog send slow log :%s", data);
    int32_t code = monitorReadSend(*clusterId, pClient->pFile, &pClient->offset, size, SLOW_LOG_READ_RUNNING, NULL);
    tscDebug("[monitor] monitorSendAllSlowLog send slow log clusterId:%"PRId64",ret:%d", *clusterId, code);
  }
}

@ -631,7 +618,7 @@ static void monitorSendAllSlowLogFromTempDir(int64_t clusterId){
    return;
  }
  char namePrefix[PATH_MAX] = {0};
  if (snprintf(namePrefix, sizeof(namePrefix), "%s%"PRIx64, TD_TMP_FILE_PREFIX, pInst->clusterId) < 0) {
  if (snprintf(namePrefix, sizeof(namePrefix), "%s%"PRIx64, TD_TMP_FILE_PREFIX, clusterId) < 0) {
    tscError("failed to generate slow log file name prefix");
    return;
  }

@ -656,7 +643,7 @@ static void monitorSendAllSlowLogFromTempDir(int64_t clusterId){
    if (strcmp(name, ".") == 0 ||
        strcmp(name, "..") == 0 ||
        strstr(name, namePrefix) == NULL) {
      tscInfo("skip file:%s, for cluster id:%"PRIx64, name, pInst->clusterId);
      tscInfo("skip file:%s, for cluster id:%"PRIx64, name, clusterId);
      continue;
    }

@ -672,9 +659,8 @@ static void monitorSendAllSlowLogFromTempDir(int64_t clusterId){
      taosCloseFile(&pFile);
      continue;
    }
    SEpSet ep = getEpSet_s(&pInst->mgmtEp);
    char *tmp = taosStrdup(filename);
    monitorSendSlowLogAtBeginning(pInst->clusterId, &tmp, pFile, 0, pInst->pTransporter, &ep);
    monitorSendSlowLogAtBeginning(clusterId, &tmp, pFile, 0);
    taosMemoryFree(tmp);
  }

@ -690,35 +676,13 @@ static void* monitorThreadFunc(void *param){
  }
#endif

  char tmpPath[PATH_MAX] = {0};
  if (getSlowLogTmpDir(tmpPath, sizeof(tmpPath)) < 0){
    return NULL;
  }

  if (taosMulModeMkDir(tmpPath, 0777, true) != 0) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    printf("failed to create dir:%s since %s", tmpPath, terrstr());
    return NULL;
  }

  if (tsem2_init(&monitorSem, 0, 0) != 0) {
    tscError("sem init error since %s", terrstr());
    return NULL;
  }

  monitorQueue = taosOpenQueue();
  if(monitorQueue == NULL){
    tscError("open queue error since %s", terrstr());
    return NULL;
  }

  if (-1 != atomic_val_compare_exchange_32(&slowLogFlag, -1, 0)) {
    return NULL;
  }
  tscDebug("monitorThreadFunc start");
  int64_t quitTime = 0;
  while (1) {
    if (slowLogFlag > 0) {
    if (atomic_load_32(&slowLogFlag) > 0) {
      if(quitCnt == 0){
        monitorSendAllSlowLogAtQuit();
        if(quitCnt == 0){

@ -738,16 +702,12 @@ static void* monitorThreadFunc(void *param){
    if (slowLogData != NULL) {
      if (slowLogData->type == SLOW_LOG_READ_BEGINNIG){
        if(slowLogData->pFile != NULL){
          SAppInstInfo* pInst = getAppInstByClusterId(slowLogData->clusterId);
          if(pInst != NULL) {
            SEpSet ep = getEpSet_s(&pInst->mgmtEp);
            monitorSendSlowLogAtBeginning(slowLogData->clusterId, &(slowLogData->fileName), slowLogData->pFile, slowLogData->offset, pInst->pTransporter, &ep);
          }
          monitorSendSlowLogAtBeginning(slowLogData->clusterId, &(slowLogData->fileName), slowLogData->pFile, slowLogData->offset);
        }else{
          monitorSendAllSlowLogFromTempDir(slowLogData->clusterId);
        }
      } else if(slowLogData->type == SLOW_LOG_WRITE){
        monitorWriteSlowLog2File(slowLogData, tmpPath);
        monitorWriteSlowLog2File(slowLogData, tmpSlowLogPath);
      } else if(slowLogData->type == SLOW_LOG_READ_RUNNING){
        monitorSendSlowLogAtRunning(slowLogData->clusterId);
      } else if(slowLogData->type == SLOW_LOG_READ_QUIT){

@ -767,10 +727,7 @@ static void* monitorThreadFunc(void *param){
    }
    tsem2_timewait(&monitorSem, 100);
  }

  taosCloseQueue(monitorQueue);
  tsem2_destroy(&monitorSem);
  slowLogFlag = -2;
  atomic_store_32(&slowLogFlag, -2);
  return NULL;
}

@ -799,27 +756,59 @@ static void tscMonitorStop() {
  }
}

void monitorInit() {
int32_t monitorInit() {
  tscInfo("[monitor] tscMonitor init");
  monitorCounterHash = (SHashObj*)taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_ENTRY_LOCK);
  if (monitorCounterHash == NULL) {
    tscError("failed to create monitorCounterHash");
    terrno = TSDB_CODE_OUT_OF_MEMORY;
    return -1;
  }
  taosHashSetFreeFp(monitorCounterHash, destroyMonitorClient);

  monitorSlowLogHash = (SHashObj*)taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_ENTRY_LOCK);
  if (monitorSlowLogHash == NULL) {
    tscError("failed to create monitorSlowLogHash");
    terrno = TSDB_CODE_OUT_OF_MEMORY;
    return -1;
  }
  taosHashSetFreeFp(monitorSlowLogHash, destroySlowLogClient);

  monitorTimer = taosTmrInit(0, 0, 0, "MONITOR");
  if (monitorTimer == NULL) {
    tscError("failed to create monitor timer");
    terrno = TSDB_CODE_OUT_OF_MEMORY;
    return -1;
  }

  if (getSlowLogTmpDir(tmpSlowLogPath, sizeof(tmpSlowLogPath)) < 0){
    terrno = TSDB_CODE_TSC_INTERNAL_ERROR;
    return -1;
  }

  if (taosMulModeMkDir(tmpSlowLogPath, 0777, true) != 0) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    tscError("failed to create dir:%s since %s", tmpSlowLogPath, terrstr());
    return -1;
  }

  if (tsem2_init(&monitorSem, 0, 0) != 0) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    tscError("sem init error since %s", terrstr());
    return -1;
  }

  monitorQueue = taosOpenQueue();
  if(monitorQueue == NULL){
    tscError("open queue error since %s", terrstr());
    return -1;
  }

  taosInitRWLatch(&monitorLock);
  tscMonitortInit();
  if (tscMonitortInit() != 0){
    return -1;
  }
  return 0;
}

void monitorClose() {

@ -834,20 +823,23 @@ void monitorClose() {
  taosHashCleanup(monitorCounterHash);
  taosHashCleanup(monitorSlowLogHash);
  taosTmrCleanUp(monitorTimer);
  taosCloseQueue(monitorQueue);
  tsem2_destroy(&monitorSem);
  taosWUnLockLatch(&monitorLock);
}

int32_t monitorPutData2MonitorQueue(MonitorSlowLogData data){
  if (atomic_load_32(&slowLogFlag) == -2) {
    tscError("[monitor] slow log thread is exiting");
    return -1;
  }
  MonitorSlowLogData* slowLogData = taosAllocateQitem(sizeof(MonitorSlowLogData), DEF_QITEM, 0);
  if (slowLogData == NULL) {
    tscError("[monitor] failed to allocate slow log data");
    return -1;
  }
  *slowLogData = data;
  tscDebug("[monitor] write slow log to queue, clusterId:%"PRIx64 " type:%d", slowLogData->clusterId, slowLogData->type);
  while (atomic_load_32(&slowLogFlag) == -1) {
    taosMsleep(5);
  }
  tscDebug("[monitor] write slow log to queue, clusterId:%"PRIx64 " type:%s, data:%s", slowLogData->clusterId, queueTypeStr[slowLogData->type], slowLogData->data);
  if (taosWriteQitem(monitorQueue, slowLogData) == 0){
    tsem2_post(&monitorSem);
  }else{

@ -154,7 +154,7 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
    if(taosHashGet(appInfo.pInstMapByClusterId, &connectRsp.clusterId, LONG_BYTES) == NULL){
      if(taosHashPut(appInfo.pInstMapByClusterId, &connectRsp.clusterId, LONG_BYTES, &pTscObj->pAppInfo, POINTER_BYTES) != 0){
        tscError("failed to put appInfo into appInfo.pInstMapByClusterId");
      }
    }else{
      MonitorSlowLogData data = {0};
      data.clusterId = pTscObj->pAppInfo->clusterId;
      data.type = SLOW_LOG_READ_BEGINNIG;

@ -162,6 +162,7 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
      monitorClientSlowQueryInit(connectRsp.clusterId);
      monitorClientSQLReqInit(connectRsp.clusterId);
    }
  }

  taosThreadMutexLock(&clientHbMgr.lock);
  SAppHbMgr* pAppHbMgr = taosArrayGetP(clientHbMgr.appHbMgrs, pTscObj->appHbMgrIdx);

@ -72,6 +72,7 @@ static int32_t stmtCreateRequest(STscStmt* pStmt) {
  }
  if (TSDB_CODE_SUCCESS == code) {
    pStmt->exec.pRequest->syncQuery = true;
    pStmt->exec.pRequest->isStmtBind = true;
  }
}

@ -830,6 +831,7 @@ TAOS_STMT* stmtInit(STscObj* taos, int64_t reqid, TAOS_STMT_OPTIONS* pOptions) {
  pStmt->bInfo.needParse = true;
  pStmt->sql.status = STMT_INIT;
  pStmt->reqid = reqid;
  pStmt->errCode = TSDB_CODE_SUCCESS;

  if (NULL != pOptions) {
    memcpy(&pStmt->options, pOptions, sizeof(pStmt->options));

@ -882,6 +884,10 @@ int stmtPrepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {

  STMT_DLOG_E("start to prepare");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  if (pStmt->sql.status >= STMT_PREPARE) {
    STMT_ERR_RET(stmtResetStmt(pStmt));
  }
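
These guards make the first recorded error sticky: each bind-path entry point now returns `pStmt->errCode` instead of proceeding. From the caller's side the effect looks roughly like this sketch; it is illustrative only and assumes the failing call recorded its code in `errCode`, which happens outside the hunks shown here:

```c
#include <assert.h>
#include "taos.h"  // TAOS_STMT, TAOS_MULTI_BIND, taos_stmt_* client API

// Illustrative only: once an earlier stmt call has recorded an error,
// later entry points short-circuit with the same code instead of running.
static void demoStickyError(TAOS_STMT *stmt, TAOS_MULTI_BIND *bind) {
  int32_t firstErr = taos_stmt_bind_param_batch(stmt, bind);  // suppose this fails
  if (firstErr != 0) {
    int32_t nextErr = taos_stmt_add_batch(stmt);  // returns the recorded code immediately
    assert(nextErr == firstErr);
  }
}
```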

@ -953,6 +959,10 @@ int stmtSetTbName(TAOS_STMT* stmt, const char* tbName) {

  STMT_DLOG("start to set tbName: %s", tbName);

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_SETTBNAME));

  int32_t insert = 0;

@ -999,8 +1009,18 @@ int stmtSetTbTags(TAOS_STMT* stmt, TAOS_MULTI_BIND* tags) {

  STMT_DLOG_E("start to set tbTags");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_SETTAGS));

  SBoundColInfo *tags_info = (SBoundColInfo*)pStmt->bInfo.boundTags;
  if (tags_info->numOfBound <= 0 || tags_info->numOfCols <= 0) {
    tscWarn("no tags bound in sql, will not bind tags");
    return TSDB_CODE_SUCCESS;
  }

  if (pStmt->bInfo.inExecCache) {
    return TSDB_CODE_SUCCESS;
  }

@ -1021,6 +1041,10 @@ int stmtSetTbTags(TAOS_STMT* stmt, TAOS_MULTI_BIND* tags) {
}

int stmtFetchTagFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD_E** fields) {
  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  if (STMT_TYPE_QUERY == pStmt->sql.type) {
    tscError("invalid operation to get query tag fields");
    STMT_ERR_RET(TSDB_CODE_TSC_STMT_API_ERROR);

@ -1039,6 +1063,10 @@ int stmtFetchTagFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD_E** fields
}

int stmtFetchColFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD_E** fields) {
  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  if (STMT_TYPE_QUERY == pStmt->sql.type) {
    tscError("invalid operation to get query column fields");
    STMT_ERR_RET(TSDB_CODE_TSC_STMT_API_ERROR);

@ -1150,8 +1178,13 @@ int stmtBindBatch(TAOS_STMT* stmt, TAOS_MULTI_BIND* bind, int32_t colIdx) {

  STMT_DLOG("start to bind stmt data, colIdx: %d", colIdx);

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_BIND));

  if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&
      STMT_TYPE_MULTI_INSERT != pStmt->sql.type) {
    pStmt->bInfo.needParse = false;

@ -1307,6 +1340,10 @@ int stmtAddBatch(TAOS_STMT* stmt) {

  STMT_DLOG_E("start to add batch");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_ADD_BATCH));

  if (pStmt->sql.stbInterlaceMode) {

@ -1471,6 +1508,10 @@ int stmtExec(TAOS_STMT* stmt) {

  STMT_DLOG_E("start to exec");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_EXECUTE));

  if (STMT_TYPE_QUERY == pStmt->sql.type) {

@ -1599,6 +1640,10 @@ int stmtGetTagFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) {

  STMT_DLOG_E("start to get tag fields");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  if (STMT_TYPE_QUERY == pStmt->sql.type) {
    STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR);
  }

@ -1637,6 +1682,10 @@ int stmtGetColFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) {

  STMT_DLOG_E("start to get col fields");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  if (STMT_TYPE_QUERY == pStmt->sql.type) {
    STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR);
  }

@ -1674,6 +1723,10 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) {

  STMT_DLOG_E("start to get param num");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));

  if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&

@ -1706,6 +1759,10 @@ int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) {

  STMT_DLOG_E("start to get param");

  if (pStmt->errCode != TSDB_CODE_SUCCESS) {
    return pStmt->errCode;
  }

  if (STMT_TYPE_QUERY == pStmt->sql.type) {
    STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR);
  }

@ -27,9 +27,9 @@
#define EMPTY_BLOCK_POLL_IDLE_DURATION 10
#define DEFAULT_AUTO_COMMIT_INTERVAL   5000
#define DEFAULT_HEARTBEAT_INTERVAL     3000
#define DEFAULT_ASKEP_INTERVAL         1000

struct SMqMgmt {
  int8_t  inited;
  tmr_h   timer;
  int32_t rsetId;
};

@ -783,35 +783,33 @@ static void generateTimedTask(int64_t refId, int32_t type) {
  if (tmq == NULL) return;

  int8_t* pTaskType = taosAllocateQitem(sizeof(int8_t), DEF_QITEM, 0);
  if (pTaskType == NULL) return;

  if (pTaskType != NULL){
    *pTaskType = type;
    taosWriteQitem(tmq->delayedTask, pTaskType);
    if (taosWriteQitem(tmq->delayedTask, pTaskType) == 0){
      tsem2_post(&tmq->rspSem);
    }
  }

  taosReleaseRef(tmqMgmt.rsetId, refId);
}

void tmqAssignAskEpTask(void* param, void* tmrId) {
  int64_t refId = *(int64_t*)param;
  int64_t refId = (int64_t)param;
  generateTimedTask(refId, TMQ_DELAYED_TASK__ASK_EP);
  taosMemoryFree(param);
}

void tmqReplayTask(void* param, void* tmrId) {
  int64_t refId = *(int64_t*)param;
  int64_t refId = (int64_t)param;
  tmq_t* tmq = taosAcquireRef(tmqMgmt.rsetId, refId);
  if (tmq == NULL) goto END;
  if (tmq == NULL) return;

  tsem2_post(&tmq->rspSem);
  taosReleaseRef(tmqMgmt.rsetId, refId);
END:
  taosMemoryFree(param);
}

void tmqAssignDelayedCommitTask(void* param, void* tmrId) {
  int64_t refId = *(int64_t*)param;
  int64_t refId = (int64_t)param;
  generateTimedTask(refId, TMQ_DELAYED_TASK__COMMIT);
  taosMemoryFree(param);
}

int32_t tmqHbCb(void* param, SDataBuf* pMsg, int32_t code) {
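
The timer callbacks now carry the 64-bit `refId` in the opaque `param` pointer itself rather than through a heap-allocated `int64_t*`, removing a malloc/free pair per timer shot. A sketch of the round-trip cast, assuming pointers are at least 64 bits wide on supported platforms (`startAskEpTimer` is a hypothetical wrapper, not part of the patch):

```c
// Pack the ref id into the opaque timer argument...
static void startAskEpTimer(tmq_t *tmq) {
  tmq->epTimer = taosTmrStart(tmqAssignAskEpTask, DEFAULT_ASKEP_INTERVAL,
                              (void*)tmq->refId, tmqMgmt.timer);
}

// ...and recover it in the callback; nothing to free anymore.
void exampleTimerCb(void *param, void *tmrId) {
  int64_t refId = (int64_t)param;
  (void)refId;
  (void)tmrId;
}
```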

@ -848,11 +846,10 @@ int32_t tmqHbCb(void* param, SDataBuf* pMsg, int32_t code) {
}

void tmqSendHbReq(void* param, void* tmrId) {
  int64_t refId = *(int64_t*)param;
  int64_t refId = (int64_t)param;

  tmq_t* tmq = taosAcquireRef(tmqMgmt.rsetId, refId);
  if (tmq == NULL) {
    taosMemoryFree(param);
    return;
  }

@ -928,7 +925,9 @@ void tmqSendHbReq(void* param, void* tmrId) {
  atomic_val_compare_exchange_8(&pollFlag, 1, 0);
OVER:
  tDestroySMqHbReq(&req);
  if(tmrId != NULL){
    taosTmrReset(tmqSendHbReq, tmq->heartBeatIntervalMs, param, tmqMgmt.timer, &tmq->hbLiveTimer);
  }
  taosReleaseRef(tmqMgmt.rsetId, refId);
}

@ -956,21 +955,14 @@ int32_t tmqHandleAllDelayedTask(tmq_t* pTmq) {
    if (*pTaskType == TMQ_DELAYED_TASK__ASK_EP) {
      askEp(pTmq, NULL, false, false);

      int64_t* pRefId = taosMemoryMalloc(sizeof(int64_t));
      *pRefId = pTmq->refId;

      tscDebug("consumer:0x%" PRIx64 " retrieve ep from mnode in 1s", pTmq->consumerId);
      taosTmrReset(tmqAssignAskEpTask, 1000, pRefId, tmqMgmt.timer, &pTmq->epTimer);
      taosTmrReset(tmqAssignAskEpTask, DEFAULT_ASKEP_INTERVAL, (void*)(pTmq->refId), tmqMgmt.timer, &pTmq->epTimer);
    } else if (*pTaskType == TMQ_DELAYED_TASK__COMMIT) {
      tmq_commit_cb* pCallbackFn = pTmq->commitCb ? pTmq->commitCb : defaultCommitCbFn;

      asyncCommitAllOffsets(pTmq, pCallbackFn, pTmq->commitCbUserParam);
      int64_t* pRefId = taosMemoryMalloc(sizeof(int64_t));
      *pRefId = pTmq->refId;

      tscDebug("consumer:0x%" PRIx64 " next commit to vnode(s) in %.2fs", pTmq->consumerId,
               pTmq->autoCommitInterval / 1000.0);
      taosTmrReset(tmqAssignDelayedCommitTask, pTmq->autoCommitInterval, pRefId, tmqMgmt.timer, &pTmq->commitTimer);
      taosTmrReset(tmqAssignDelayedCommitTask, pTmq->autoCommitInterval, (void*)(pTmq->refId), tmqMgmt.timer, &pTmq->commitTimer);
    } else {
      tscError("consumer:0x%" PRIx64 " invalid task type:%d", pTmq->consumerId, *pTaskType);
    }

@ -1112,6 +1104,16 @@ void tmqFreeImpl(void* handle) {

  taosArrayDestroyEx(tmq->clientTopics, freeClientVgImpl);
  taos_close_internal(tmq->pTscObj);

  if(tmq->commitTimer) {
    taosTmrStopA(&tmq->commitTimer);
  }
  if(tmq->epTimer) {
    taosTmrStopA(&tmq->epTimer);
  }
  if(tmq->hbLiveTimer) {
    taosTmrStopA(&tmq->hbLiveTimer);
  }
  taosMemoryFree(tmq);

  tscDebug("consumer:0x%" PRIx64 " closed", id);

@ -1131,6 +1133,18 @@ static void tmqMgmtInit(void) {
  }
}

void tmqMgmtClose(void) {
  if (tmqMgmt.timer) {
    taosTmrCleanUp(tmqMgmt.timer);
    tmqMgmt.timer = NULL;
  }

  if (tmqMgmt.rsetId >= 0) {
    taosCloseRef(tmqMgmt.rsetId);
    tmqMgmt.rsetId = -1;
  }
}

#define SET_ERROR_MSG_TMQ(MSG) \
  if (errstr != NULL) snprintf(errstr, errstrLen, MSG);

@ -1222,9 +1236,7 @@ tmq_t* tmq_consumer_new(tmq_conf_t* conf, char* errstr, int32_t errstrLen) {
    goto _failed;
  }

  int64_t* pRefId = taosMemoryMalloc(sizeof(int64_t));
  *pRefId = pTmq->refId;
  pTmq->hbLiveTimer = taosTmrStart(tmqSendHbReq, pTmq->heartBeatIntervalMs, pRefId, tmqMgmt.timer);
  pTmq->hbLiveTimer = taosTmrStart(tmqSendHbReq, pTmq->heartBeatIntervalMs, (void*)pTmq->refId, tmqMgmt.timer);

  char buf[TSDB_OFFSET_LEN] = {0};
  STqOffsetVal offset = {.type = pTmq->resetOffsetCfg};

@ -1354,18 +1366,9 @@ int32_t tmq_subscribe(tmq_t* tmq, const tmq_list_t* topic_list) {
  }

  // init ep timer
  if (tmq->epTimer == NULL) {
    int64_t* pRefId1 = taosMemoryMalloc(sizeof(int64_t));
    *pRefId1 = tmq->refId;
    tmq->epTimer = taosTmrStart(tmqAssignAskEpTask, 1000, pRefId1, tmqMgmt.timer);
  }

  tmq->epTimer = taosTmrStart(tmqAssignAskEpTask, DEFAULT_ASKEP_INTERVAL, (void*)(tmq->refId), tmqMgmt.timer);
  // init auto commit timer
  if (tmq->autoCommit && tmq->commitTimer == NULL) {
    int64_t* pRefId2 = taosMemoryMalloc(sizeof(int64_t));
    *pRefId2 = tmq->refId;
    tmq->commitTimer = taosTmrStart(tmqAssignDelayedCommitTask, tmq->autoCommitInterval, pRefId2, tmqMgmt.timer);
  }
  tmq->commitTimer = taosTmrStart(tmqAssignDelayedCommitTask, tmq->autoCommitInterval, (void*)(tmq->refId), tmqMgmt.timer);

FAIL:
  taosArrayDestroyP(req.topicNames, taosMemoryFree);

@ -2068,9 +2071,7 @@ static void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout) {
        pVg->blockReceiveTs = taosGetTimestampMs();
        pVg->blockSleepForReplay = pRsp->rsp.sleepTime;
        if (pVg->blockSleepForReplay > 0) {
          int64_t* pRefId1 = taosMemoryMalloc(sizeof(int64_t));
          *pRefId1 = tmq->refId;
          taosTmrStart(tmqReplayTask, pVg->blockSleepForReplay, pRefId1, tmqMgmt.timer);
          taosTmrStart(tmqReplayTask, pVg->blockSleepForReplay, (void*)(tmq->refId), tmqMgmt.timer);
        }
      }
      tscDebug("consumer:0x%" PRIx64 " process poll rsp, vgId:%d, offset:%s, blocks:%d, rows:%" PRId64

@ -2329,7 +2330,7 @@ int32_t tmq_consumer_close(tmq_t* tmq) {
      return code;
    }
  }
  taosSsleep(2);  // sleep 2s for hb to send offset and rows to server
  tmqSendHbReq((void*)(tmq->refId), NULL);

  tmq_list_t* lst = tmq_list_new();
  int32_t code = tmq_subscribe(tmq, lst);

@ -111,7 +111,6 @@ uint16_t tsMonitorPort = 6043;
int32_t tsMonitorMaxLogs = 100;
bool    tsMonitorComp = false;
bool    tsMonitorLogProtocol = false;
int32_t tsMonitorIntervalForBasic = 30;
bool    tsMonitorForceV2 = true;

// audit

@ -712,7 +711,6 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
  if (cfgAddInt32(pCfg, "monitorMaxLogs", tsMonitorMaxLogs, 1, 1000000, CFG_SCOPE_SERVER, CFG_DYN_NONE) != 0) return -1;
  if (cfgAddBool(pCfg, "monitorComp", tsMonitorComp, CFG_SCOPE_SERVER, CFG_DYN_NONE) != 0) return -1;
  if (cfgAddBool(pCfg, "monitorLogProtocol", tsMonitorLogProtocol, CFG_SCOPE_SERVER, CFG_DYN_SERVER) != 0) return -1;
  if (cfgAddInt32(pCfg, "monitorIntervalForBasic", tsMonitorIntervalForBasic, 1, 200000, CFG_SCOPE_SERVER, CFG_DYN_NONE) != 0) return -1;
  if (cfgAddBool(pCfg, "monitorForceV2", tsMonitorForceV2, CFG_SCOPE_SERVER, CFG_DYN_NONE) != 0) return -1;

  if (cfgAddBool(pCfg, "audit", tsEnableAudit, CFG_SCOPE_SERVER, CFG_DYN_ENT_SERVER) != 0) return -1;

@ -1165,7 +1163,6 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
  tsMonitorComp = cfgGetItem(pCfg, "monitorComp")->bval;
  tsQueryRspPolicy = cfgGetItem(pCfg, "queryRspPolicy")->i32;
  tsMonitorLogProtocol = cfgGetItem(pCfg, "monitorLogProtocol")->bval;
  tsMonitorIntervalForBasic = cfgGetItem(pCfg, "monitorIntervalForBasic")->i32;
  tsMonitorForceV2 = cfgGetItem(pCfg, "monitorForceV2")->i32;

  tsEnableAudit = cfgGetItem(pCfg, "audit")->bval;

@ -69,7 +69,7 @@
  pReq->sql = NULL; \
} while (0)

static int32_t tSerializeSMonitorParas(SEncoder *encoder, const SMonitorParas* pMonitorParas) {
static int32_t tSerializeSMonitorParas(SEncoder *encoder, const SMonitorParas *pMonitorParas) {
  if (tEncodeI8(encoder, pMonitorParas->tsEnableMonitor) < 0) return -1;
  if (tEncodeI32(encoder, pMonitorParas->tsMonitorInterval) < 0) return -1;
  if (tEncodeI32(encoder, pMonitorParas->tsSlowLogScope) < 0) return -1;

@ -80,7 +80,7 @@ static int32_t tSerializeSMonitorParas(SEncoder *encoder, const SMonitorParas* p
  return 0;
}

static int32_t tDeserializeSMonitorParas(SDecoder *decoder, SMonitorParas* pMonitorParas){
static int32_t tDeserializeSMonitorParas(SDecoder *decoder, SMonitorParas *pMonitorParas) {
  if (tDecodeI8(decoder, (int8_t *)&pMonitorParas->tsEnableMonitor) < 0) return -1;
  if (tDecodeI32(decoder, &pMonitorParas->tsMonitorInterval) < 0) return -1;
  if (tDecodeI32(decoder, &pMonitorParas->tsSlowLogScope) < 0) return -1;

@ -1577,7 +1577,7 @@ int32_t tDeserializeSStatisReq(void *buf, int32_t bufLen, SStatisReq *pReq) {
    if (tDecodeCStrTo(&decoder, pReq->pCont) < 0) return -1;
  }
  if (!tDecodeIsEnd(&decoder)) {
    if (tDecodeI8(&decoder, (int8_t*)&pReq->type) < 0) return -1;
    if (tDecodeI8(&decoder, (int8_t *)&pReq->type) < 0) return -1;
  }
  tEndDecode(&decoder);
  tDecoderClear(&decoder);

@ -5737,65 +5737,74 @@ _exit:
}

int32_t tSerializeSAlterVnodeConfigReq(void *buf, int32_t bufLen, SAlterVnodeConfigReq *pReq) {
  int32_t  tlen;
  SEncoder encoder = {0};

  tEncoderInit(&encoder, buf, bufLen);

  if (tStartEncode(&encoder) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->vgVersion) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->buffer) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->pageSize) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->pages) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->cacheLastSize) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->daysPerFile) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->daysToKeep0) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->daysToKeep1) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->daysToKeep2) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walFsyncPeriod) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->walLevel) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->strict) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->cacheLast) < 0) return -1;
  TAOS_CHECK_ERRNO(tStartEncode(&encoder));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->vgVersion));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->buffer));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->pageSize));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->pages));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->cacheLastSize));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->daysPerFile));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->daysToKeep0));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->daysToKeep1));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->daysToKeep2));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->walFsyncPeriod));
  TAOS_CHECK_ERRNO(tEncodeI8(&encoder, pReq->walLevel));
  TAOS_CHECK_ERRNO(tEncodeI8(&encoder, pReq->strict));
  TAOS_CHECK_ERRNO(tEncodeI8(&encoder, pReq->cacheLast));
  for (int32_t i = 0; i < 7; ++i) {
    if (tEncodeI64(&encoder, pReq->reserved[i]) < 0) return -1;
    TAOS_CHECK_ERRNO(tEncodeI64(&encoder, pReq->reserved[i]));
  }

  // 1st modification
  if (tEncodeI16(&encoder, pReq->sttTrigger) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->minRows) < 0) return -1;
  TAOS_CHECK_ERRNO(tEncodeI16(&encoder, pReq->sttTrigger));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->minRows));
  // 2nd modification
  if (tEncodeI32(&encoder, pReq->walRetentionPeriod) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walRetentionSize) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->keepTimeOffset) < 0) return -1;
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->walRetentionPeriod));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->walRetentionSize));
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->keepTimeOffset));

  if (tEncodeI32(&encoder, pReq->s3KeepLocal) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->s3Compact) < 0) return -1;
  TAOS_CHECK_ERRNO(tEncodeI32(&encoder, pReq->s3KeepLocal));
  TAOS_CHECK_ERRNO(tEncodeI8(&encoder, pReq->s3Compact));

  tEndEncode(&encoder);

  int32_t tlen = encoder.pos;
_exit:
  if (terrno) {
    uError("%s failed at line %d since %s", __func__, terrln, terrstr());
    tlen = -1;
  } else {
    tlen = encoder.pos;
  }
  tEncoderClear(&encoder);
  return tlen;
}

int32_t tDeserializeSAlterVnodeConfigReq(void *buf, int32_t bufLen, SAlterVnodeConfigReq *pReq) {
  SDecoder decoder = {0};

  tDecoderInit(&decoder, buf, bufLen);

  if (tStartDecode(&decoder) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->vgVersion) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->buffer) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->pageSize) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->pages) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->cacheLastSize) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->daysPerFile) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->daysToKeep0) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->daysToKeep1) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->daysToKeep2) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->walFsyncPeriod) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->walLevel) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->strict) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->cacheLast) < 0) return -1;
  TAOS_CHECK_ERRNO(tStartDecode(&decoder));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->vgVersion));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->buffer));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->pageSize));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->pages));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->cacheLastSize));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->daysPerFile));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->daysToKeep0));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->daysToKeep1));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->daysToKeep2));
  TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->walFsyncPeriod));
  TAOS_CHECK_ERRNO(tDecodeI8(&decoder, &pReq->walLevel));
  TAOS_CHECK_ERRNO(tDecodeI8(&decoder, &pReq->strict));
  TAOS_CHECK_ERRNO(tDecodeI8(&decoder, &pReq->cacheLast));
  for (int32_t i = 0; i < 7; ++i) {
    if (tDecodeI64(&decoder, &pReq->reserved[i]) < 0) return -1;
    TAOS_CHECK_ERRNO(tDecodeI64(&decoder, &pReq->reserved[i]));
  }

  // 1st modification

@ -5803,8 +5812,8 @@ int32_t tDeserializeSAlterVnodeConfigReq(void *buf, int32_t bufLen, SAlterVnodeC
    pReq->sttTrigger = -1;
    pReq->minRows = -1;
  } else {
    if (tDecodeI16(&decoder, &pReq->sttTrigger) < 0) return -1;
    if (tDecodeI32(&decoder, &pReq->minRows) < 0) return -1;
    TAOS_CHECK_ERRNO(tDecodeI16(&decoder, &pReq->sttTrigger));
    TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->minRows));
  }

  // 2nd modification

@ -5812,24 +5821,29 @@ int32_t tDeserializeSAlterVnodeConfigReq(void *buf, int32_t bufLen, SAlterVnodeC
    pReq->walRetentionPeriod = -1;
    pReq->walRetentionSize = -1;
  } else {
    if (tDecodeI32(&decoder, &pReq->walRetentionPeriod) < 0) return -1;
    if (tDecodeI32(&decoder, &pReq->walRetentionSize) < 0) return -1;
    TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->walRetentionPeriod));
    TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->walRetentionSize));
  }
  pReq->keepTimeOffset = TSDB_DEFAULT_KEEP_TIME_OFFSET;
  if (!tDecodeIsEnd(&decoder)) {
    if (tDecodeI32(&decoder, &pReq->keepTimeOffset) < 0) return -1;
    TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->keepTimeOffset));
  }

  pReq->s3KeepLocal = TSDB_DEFAULT_S3_KEEP_LOCAL;
  pReq->s3Compact = TSDB_DEFAULT_S3_COMPACT;
  if (!tDecodeIsEnd(&decoder)) {
    if (tDecodeI32(&decoder, &pReq->s3KeepLocal) < 0) return -1;
    if (tDecodeI8(&decoder, &pReq->s3Compact) < 0) return -1;
    TAOS_CHECK_ERRNO(tDecodeI32(&decoder, &pReq->s3KeepLocal));
    TAOS_CHECK_ERRNO(tDecodeI8(&decoder, &pReq->s3Compact));
  }

  tEndDecode(&decoder);

_exit:
  tDecoderClear(&decoder);
  return 0;
  if (terrno) {
    uError("%s failed at line %d since %s", __func__, terrln, terrstr());
  }
  return terrno;
}

int32_t tSerializeSAlterVnodeReplicaReq(void *buf, int32_t bufLen, SAlterVnodeReplicaReq *pReq) {

@ -9300,7 +9314,7 @@ int32_t tDecodeSTqCheckInfo(SDecoder *pDecoder, STqCheckInfo *pInfo) {
}
void tDeleteSTqCheckInfo(STqCheckInfo *pInfo) { taosArrayDestroy(pInfo->colIdList); }

int32_t tEncodeSMqRebVgReq(SEncoder* pCoder, const SMqRebVgReq* pReq) {
int32_t tEncodeSMqRebVgReq(SEncoder *pCoder, const SMqRebVgReq *pReq) {
  if (tStartEncode(pCoder) < 0) return -1;
  if (tEncodeI64(pCoder, pReq->leftForVer) < 0) return -1;
  if (tEncodeI32(pCoder, pReq->vgId) < 0) return -1;

@ -9320,7 +9334,7 @@ int32_t tEncodeSMqRebVgReq(SEncoder* pCoder, const SMqRebVgReq* pReq) {
  return 0;
}

int32_t tDecodeSMqRebVgReq(SDecoder* pCoder, SMqRebVgReq* pReq) {
int32_t tDecodeSMqRebVgReq(SDecoder *pCoder, SMqRebVgReq *pReq) {
  if (tStartDecode(pCoder) < 0) return -1;

  if (tDecodeI64(pCoder, &pReq->leftForVer) < 0) return -1;

@ -9345,7 +9359,6 @@ int32_t tDecodeSMqRebVgReq(SDecoder* pCoder, SMqRebVgReq* pReq) {
  return 0;
}

int32_t tEncodeDeleteRes(SEncoder *pCoder, const SDeleteRes *pRes) {
  int32_t nUid = taosArrayGetSize(pRes->uidList);

@ -1969,3 +1969,98 @@ int32_t TEST_char2ts(const char* format, int64_t* ts, int32_t precision, const c
  taosArrayDestroy(formats);
  return code;
}

static int8_t UNIT_INDEX[26] = {/*a*/  2,  0, -1,  6, -1, -1, -1,
                                /*h*/  5, -1, -1, -1, -1,  4,  8,
                                /*o*/ -1, -1, -1, -1,  3, -1,
                                /*u*/  1, -1,  7, -1,  9, -1};

#define GET_UNIT_INDEX(idx) UNIT_INDEX[(idx) - 97]  // 97 == 'a'
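
`GET_UNIT_INDEX` maps a lowercase unit character to its row in `UNIT_MATRIX` below. A few spot checks, read directly off the initializer comments above and shown here as illustrative assertions:

```c
#include <assert.h>

// Spot checks of the unit-to-index mapping (ns=0, us=1, ms=2, s=3, min=4,
// h=5, d=6, w=7, month=8, y=9), read off the UNIT_INDEX table above.
static void checkUnitIndex(void) {
  assert(GET_UNIT_INDEX('a') == 2);  // 'a' = millisecond
  assert(GET_UNIT_INDEX('s') == 3);  // second
  assert(GET_UNIT_INDEX('m') == 4);  // minute
  assert(GET_UNIT_INDEX('h') == 5);  // hour
  assert(GET_UNIT_INDEX('y') == 9);  // year
}
```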

static int64_t UNIT_MATRIX[10][11] = {/*  ns,   us,   ms,    s, min,  h,  d,  w, month,  y*/
    /*ns*/  {   1, 1000,    0},
    /*us*/  {1000,    1, 1000,    0},
    /*ms*/  {   0, 1000,    1, 1000,   0},
    /*s*/   {   0,    0, 1000,    1,  60,  0},
    /*min*/ {   0,    0,    0,   60,   1, 60,  0},
    /*h*/   {   0,    0,    0,    0,  60,  1,  1,  0},
    /*d*/   {   0,    0,    0,    0,   0, 24,  1,  7,  1,  0},
    /*w*/   {   0,    0,    0,    0,   0,  0,  7,  1, -1,  0},
    /*mon*/ {   0,    0,    0,    0,   0,  0,  0,  0,  1, 12,  0},
    /*y*/   {   0,    0,    0,    0,   0,  0,  0,  0, 12,  1,  0}};

static bool recursiveTsmaCheckRecursive(int64_t baseInterval, int8_t baseIdx, int64_t interval, int8_t idx, bool checkEq) {
  if (UNIT_MATRIX[baseIdx][idx] == -1) return false;
  if (baseIdx == idx) {
    if (interval < baseInterval) return false;
    if (checkEq && interval == baseInterval) return false;
    return interval % baseInterval == 0;
  }
  int8_t  next = baseIdx + 1;
  int64_t val = UNIT_MATRIX[baseIdx][next];
  while (val != 0 && next <= idx) {
    if (val == -1) {
      next++;
      val = UNIT_MATRIX[baseIdx][next];
      continue;
    }
    if (val % baseInterval == 0 || baseInterval % val == 0) {
      int8_t extra = baseInterval >= val ? 0 : 1;
      bool   needCheckEq = baseInterval >= val && !(baseIdx < next && val == 1);
      if (!recursiveTsmaCheckRecursive(baseInterval / val + extra, next, interval, idx, needCheckEq && checkEq)) {
        next++;
        val = UNIT_MATRIX[baseIdx][next];
        continue;
      } else {
        return true;
      }
    } else {
      return false;
    }
  }
  return false;
}

static bool recursiveTsmaCheckRecursiveReverse(int64_t baseInterval, int8_t baseIdx, int64_t interval, int8_t idx, bool checkEq) {
  if (UNIT_MATRIX[baseIdx][idx] == -1) return false;

  if (baseIdx == idx) {
    if (interval < baseInterval) return false;
    if (checkEq && interval == baseInterval) return false;
    return interval % baseInterval == 0;
  }

  int8_t  next = baseIdx - 1;
  int64_t val = UNIT_MATRIX[baseIdx][next];
  while (val != 0 && next >= 0) {
    return recursiveTsmaCheckRecursiveReverse(baseInterval * val, next, interval, idx, checkEq);
  }
  return false;
}

/*
 * @brief check if a tsma with param [interval], [unit] can be created based on a base tsma with baseInterval and baseUnit
 * @param baseInterval, baseUnit interval/unit of the base tsma
 * @param interval the tsma interval going to be created. Note that if unit is not a calendar unit, the interval has
 *                 already been translated to TICKS of [precision]
 * @param unit the tsma unit going to be created
 * @param precision the precision of this db
 * @param checkEq pass true if the same interval is not acceptable, false if acceptable
 * @return true if the tsma can be created, else false
 */
bool checkRecursiveTsmaInterval(int64_t baseInterval, int8_t baseUnit, int64_t interval, int8_t unit, int8_t precision, bool checkEq) {
  bool baseIsCalendarDuration = IS_CALENDAR_TIME_DURATION(baseUnit);
  if (!baseIsCalendarDuration) baseInterval = convertTimeFromPrecisionToUnit(baseInterval, precision, baseUnit);
  bool isCalendarDuration = IS_CALENDAR_TIME_DURATION(unit);
  if (!isCalendarDuration) interval = convertTimeFromPrecisionToUnit(interval, precision, unit);

  bool needCheckEq = baseIsCalendarDuration == isCalendarDuration && checkEq;

  int8_t baseIdx = GET_UNIT_INDEX(baseUnit), idx = GET_UNIT_INDEX(unit);
  if (baseIdx <= idx) {
    return recursiveTsmaCheckRecursive(baseInterval, baseIdx, interval, idx, needCheckEq);
  } else {
    return recursiveTsmaCheckRecursiveReverse(baseInterval, baseIdx, interval, idx, checkEq);
  }
  return true;
}
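
A hedged usage sketch of the checker above: deciding whether a 1-hour recursive TSMA may be layered on a 10-minute base TSMA in a millisecond-precision database. Unit characters follow the index table above; `TSDB_TIME_PRECISION_MILLI` is assumed to be the usual precision constant:

```c
#include <stdbool.h>
#include <stdint.h>

// For non-calendar units the intervals arrive in ticks of the db precision.
static void demoRecursiveTsmaCheck(void) {
  int64_t baseTicks = 10 * 60 * 1000;  // 10 minutes, in milliseconds
  int64_t newTicks  = 60 * 60 * 1000;  // 1 hour, in milliseconds
  bool ok = checkRecursiveTsmaInterval(baseTicks, 'm', newTicks, 'h',
                                       TSDB_TIME_PRECISION_MILLI, true);
  // Expected true: 1h is a whole multiple of 10m and differs from it.
  (void)ok;
}
```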
|
|
|
@ -43,7 +43,6 @@ typedef struct SDnodeMgmt {
  GetMnodeLoadsFp getMnodeLoadsFp;
  GetQnodeLoadsFp getQnodeLoadsFp;
  int32_t statusSeq;
  SendMonitorReportFp sendMonitorReportFpBasic;
} SDnodeMgmt;

// dmHandle.c

@ -65,7 +65,6 @@ static int32_t dmOpenMgmt(SMgmtInputOpt *pInput, SMgmtOutputOpt *pOutput) {
  pMgmt->processDropNodeFp = pInput->processDropNodeFp;
  pMgmt->sendMonitorReportFp = pInput->sendMonitorReportFp;
  pMgmt->sendAuditRecordsFp = pInput->sendAuditRecordFp;
  pMgmt->sendMonitorReportFpBasic = pInput->sendMonitorReportFpBasic;
  pMgmt->getVnodeLoadsFp = pInput->getVnodeLoadsFp;
  pMgmt->getVnodeLoadsLiteFp = pInput->getVnodeLoadsLiteFp;
  pMgmt->getMnodeLoadsFp = pInput->getMnodeLoadsFp;

@ -175,15 +175,6 @@ static void *dmMonitorThreadFp(void *param) {
        taosMemoryTrim(0);
      }
    }

    if (tsMonitorForceV2) {
      if (curTime < lastTimeForBasic) lastTimeForBasic = curTime;
      float intervalForBasic = (curTime - lastTimeForBasic) / 1000.0f;
      if (intervalForBasic >= tsMonitorIntervalForBasic) {
        (*pMgmt->sendMonitorReportFpBasic)();
        lastTimeForBasic = curTime;
      }
    }
  }

  return NULL;

@ -233,6 +233,7 @@ SArray *mmGetMsgHandles() {
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_RESUME_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_STOP_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_UPDATE_CHKPT_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_CONSEN_CHKPT_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_CREATE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_DROP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_CREATE_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;

@ -242,7 +243,6 @@ SArray *mmGetMsgHandles() {
  if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_RESET_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_HEARTBEAT, mmPutMsgToReadQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_REPORT, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_CONSEN, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_REQ_CHKPT, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_VND_KILL_COMPACT_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;

@ -76,6 +76,7 @@ SArray *smGetMsgHandles() {
  if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_UPDATE, smPutNodeMsgToMgmtQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_DEPLOY, smPutNodeMsgToMgmtQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_UPDATE_CHKPT, smPutNodeMsgToMgmtQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_CONSEN_CHKPT, smPutNodeMsgToMgmtQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_DROP, smPutNodeMsgToMgmtQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_RUN, smPutNodeMsgToStreamQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_DISPATCH, smPutNodeMsgToStreamQueue, 1) == NULL) goto _OVER;

@ -96,7 +97,6 @@ SArray *smGetMsgHandles() {
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_HEARTBEAT_RSP, smPutNodeMsgToStreamQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_REQ_CHKPT_RSP, smPutNodeMsgToStreamQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_REPORT_RSP, smPutNodeMsgToStreamQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_CONSEN_RSP, smPutNodeMsgToMgmtQueue, 1) == NULL) goto _OVER;

  code = 0;
_OVER:

@ -972,10 +972,10 @@ SArray *vmGetMsgHandles() {
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_HEARTBEAT_RSP, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_REQ_CHKPT_RSP, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_REPORT_RSP, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_STREAM_CHKPT_CONSEN_RSP, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_VND_GET_STREAM_PROGRESS, vmPutMsgToStreamQueue, 0) == NULL) goto _OVER;

  if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_UPDATE_CHKPT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_STREAM_CONSEN_CHKPT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;

  if (dmSetMgmtHandle(pArray, TDMT_VND_ALTER_REPLICA, vmPutMsgToMgmtQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_VND_ALTER_CONFIG, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;

@ -287,7 +287,8 @@ int32_t vmPutRpcMsgToQueue(SVnodeMgmt *pMgmt, EQueueType qtype, SRpcMsg *pRpc) {
    return -1;
  }

  SRpcMsg *pMsg = taosAllocateQitem(sizeof(SRpcMsg), RPC_QITEM, pRpc->contLen);
  EQItype  itype = APPLY_QUEUE == qtype ? DEF_QITEM : RPC_QITEM;
  SRpcMsg *pMsg = taosAllocateQitem(sizeof(SRpcMsg), itype, pRpc->contLen);
  if (pMsg == NULL) {
    rpcFreeCont(pRpc->pCont);
    pRpc->pCont = NULL;

@ -128,7 +128,6 @@ int32_t dmProcessNodeMsg(SMgmtWrapper *pWrapper, SRpcMsg *pMsg);
// dmMonitor.c
void dmSendMonitorReport();
void dmSendAuditRecords();
void dmSendMonitorReportBasic();
void dmGetVnodeLoads(SMonVloadInfo *pInfo);
void dmGetVnodeLoadsLite(SMonVloadInfo *pInfo);
void dmGetMnodeLoads(SMonMloadInfo *pInfo);

@ -394,7 +394,6 @@ SMgmtInputOpt dmBuildMgmtInputOpt(SMgmtWrapper *pWrapper) {
      .processDropNodeFp = dmProcessDropNodeReq,
      .sendMonitorReportFp = dmSendMonitorReport,
      .sendAuditRecordFp = auditSendRecordsInBatch,
      .sendMonitorReportFpBasic = dmSendMonitorReportBasic,
      .getVnodeLoadsFp = dmGetVnodeLoads,
      .getVnodeLoadsLiteFp = dmGetVnodeLoadsLite,
      .getMnodeLoadsFp = dmGetMnodeLoads,

@ -123,16 +123,6 @@ void dmSendMonitorReport() {
  monGenAndSendReport();
}

void dmSendMonitorReportBasic() {
  if (!tsEnableMonitor || tsMonitorFqdn[0] == 0 || tsMonitorPort == 0) return;
  dTrace("send monitor report to %s:%u", tsMonitorFqdn, tsMonitorPort);

  SDnode *pDnode = dmInstance();
  dmGetDmMonitorInfoBasic(pDnode);
  dmGetMmMonitorInfo(pDnode);
  monGenAndSendReportBasic();
}

// Todo: put this in a separate file in the future
void dmSendAuditRecords() {
  auditSendRecordsInBatch();

@ -208,7 +208,9 @@ static void dmProcessRpcMsg(SDnode *pDnode, SRpcMsg *pRpc, SEpSet *pEpSet) {
  }

  pRpc->info.wrapper = pWrapper;
  pMsg = taosAllocateQitem(sizeof(SRpcMsg), RPC_QITEM, pRpc->contLen);

  EQItype itype = IsReq(pRpc) ? RPC_QITEM : DEF_QITEM;  // rsp msg is not restricted by tsRpcQueueMemoryUsed
  pMsg = taosAllocateQitem(sizeof(SRpcMsg), itype, pRpc->contLen);
  if (pMsg == NULL) goto _OVER;

  memcpy(pMsg, pRpc, sizeof(SRpcMsg));
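The two allocation changes above (`vmPutRpcMsgToQueue` and `dmProcessRpcMsg`) share one idea: queue items that should not be throttled, apply-queue messages in one case and responses in the other, are allocated as `DEF_QITEM` instead of `RPC_QITEM`, so they bypass the RPC queue memory quota (`tsRpcQueueMemoryUsed`). A standalone sketch of that admission rule follows; the enum names and the cap are illustrative stand-ins, not the real `taosAllocateQitem` machinery.

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical mirror of the two queue-item classes: RPC items count
 * against a shared memory quota, DEF items do not. */
typedef enum { DEF_ITEM, RPC_ITEM } EItemType;

static size_t       rpcQueueMemUsed = 0;
static const size_t RPC_QUEUE_MEM_CAP = 1024;

static void *allocQueueItem(EItemType type, size_t contLen, size_t *pCharged) {
  size_t charge = (type == RPC_ITEM) ? contLen : 0; /* DEF items bypass the cap */
  if (rpcQueueMemUsed + charge > RPC_QUEUE_MEM_CAP) return NULL;
  rpcQueueMemUsed += charge;
  *pCharged = charge;
  return malloc(contLen > 0 ? contLen : 1);
}

int main(void) {
  size_t charged = 0;
  /* A response/apply-queue message is allocated as DEF_ITEM, so it is
   * admitted even when the RPC quota is nearly exhausted. */
  rpcQueueMemUsed = RPC_QUEUE_MEM_CAP - 8;
  void *p = allocQueueItem(DEF_ITEM, 512, &charged);
  printf("DEF item admitted: %s, charged %zu bytes\n", p ? "yes" : "no", charged);
  free(p);
  return 0;
}
```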
@ -155,7 +155,6 @@ typedef struct {
  ProcessDropNodeFp processDropNodeFp;
  SendMonitorReportFp sendMonitorReportFp;
  SendAuditRecordsFp sendAuditRecordFp;
  SendMonitorReportFp sendMonitorReportFpBasic;
  GetVnodeLoadsFp getVnodeLoadsFp;
  GetVnodeLoadsFp getVnodeLoadsLiteFp;
  GetMnodeLoadsFp getMnodeLoadsFp;

@ -83,7 +83,7 @@ typedef struct SOrphanTask {

typedef struct {
  SMsgHead head;
} SMStreamReqCheckpointRsp, SMStreamUpdateChkptRsp;
} SMStreamReqCheckpointRsp, SMStreamUpdateChkptRsp, SMStreamReqConsensChkptRsp;

typedef struct STaskChkptInfo {
  int32_t nodeId;

@ -133,8 +133,8 @@ int32_t mndCreateStreamResetStatusTrans(SMnode *pMnode, SStreamObj *pStream)
int32_t mndStreamSetUpdateChkptAction(SMnode *pMnode, STrans *pTrans, SStreamObj *pStream);
int32_t mndCreateStreamChkptInfoUpdateTrans(SMnode *pMnode, SStreamObj *pStream, SArray *pChkptInfoList);
int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq);
int32_t mndSendConsensusCheckpointIdRsp(SArray* pList, int64_t checkpointId);

int32_t mndCreateSetConsensusChkptIdTrans(SMnode *pMnode, SStreamObj *pStream, int32_t taskId, int64_t checkpointId,
                                          int64_t ts);
void removeTasksInBuf(SArray *pTaskIds, SStreamExecInfo *pExecInfo);

SStreamTaskIter *createStreamTaskIter(SStreamObj *pStream);

@ -146,14 +146,10 @@ void mndInitStreamExecInfo(SMnode *pMnode, SStreamExecInfo *pExecInf
int32_t removeExpiredNodeEntryAndTaskInBuf(SArray *pNodeSnapshot);
void removeStreamTasksInBuf(SStreamObj *pStream, SStreamExecInfo *pExecNode);

SCheckpointConsensusInfo *mndGetConsensusInfo(SHashObj *pHash, int64_t streamId);
void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpointInfo *pRestoreInfo, SRpcMsg *pMsg);
int64_t mndGetConsensusCheckpointId(SCheckpointConsensusInfo *pInfo, SStreamObj *pStream);
bool mndAllTaskSendCheckpointId(SCheckpointConsensusInfo *pInfo, int32_t numOfTasks, int32_t* pTotal);
SCheckpointConsensusInfo *mndGetConsensusInfo(SHashObj *pHash, int64_t streamId, int32_t numOfTasks);
void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpointInfo *pRestoreInfo);
void mndClearConsensusRspEntry(SCheckpointConsensusInfo *pInfo);
int32_t doSendConsensusCheckpointRsp(SRestoreCheckpointInfo *pInfo, SRpcMsg *pMsg, int64_t checkpointId);
int64_t mndClearConsensusCheckpointId(SHashObj* pHash, int64_t streamId);
int32_t mndRegisterConsensusChkptId(SHashObj* pHash, int64_t streamId);

#ifdef __cplusplus
}

@ -177,6 +177,15 @@ static void mndStreamCheckNode(SMnode *pMnode) {
  }
}

static void mndStreamConsensusChkpt(SMnode *pMnode) {
  int32_t contLen = 0;
  void   *pReq = mndBuildTimerMsg(&contLen);
  if (pReq != NULL) {
    SRpcMsg rpcMsg = {.msgType = TDMT_MND_STREAM_CONSEN_TIMER, .pCont = pReq, .contLen = contLen};
    tmsgPutToQueue(&pMnode->msgCb, WRITE_QUEUE, &rpcMsg);
  }
}

static void mndPullupTelem(SMnode *pMnode) {
  mTrace("pullup telem msg");
  int32_t contLen = 0;

@ -308,7 +317,6 @@ static int32_t minCronTime() {
  min = TMIN(min, tsCompactPullupInterval);
  min = TMIN(min, tsMqRebalanceInterval);
  min = TMIN(min, tsStreamCheckpointInterval);
  min = TMIN(min, 6);  // checkpointRemain
  min = TMIN(min, tsStreamNodeCheckInterval);
  min = TMIN(min, tsArbHeartBeatIntervalSec);
  min = TMIN(min, tsArbCheckSyncIntervalSec);

@ -353,6 +361,10 @@ void mndDoTimerPullupTask(SMnode *pMnode, int64_t sec) {
    mndStreamCheckNode(pMnode);
  }

  if (sec % 5 == 0) {
    mndStreamConsensusChkpt(pMnode);
  }

  if (sec % tsTelemInterval == (TMIN(60, (tsTelemInterval - 1)))) {
    mndPullupTelem(pMnode);
  }

@ -2004,8 +2004,13 @@ static int32_t mndRetrieveTSMA(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlo

  // interval
  char    interval[64 + VARSTR_HEADER_SIZE] = {0};
  int32_t len = snprintf(interval + VARSTR_HEADER_SIZE, 64, "%" PRId64 "%c", pSma->interval,
  int32_t len = 0;
  if (!IS_CALENDAR_TIME_DURATION(pSma->intervalUnit)) {
    len = snprintf(interval + VARSTR_HEADER_SIZE, 64, "%" PRId64 "%c", pSma->interval,
                   getPrecisionUnit(pSrcDb->cfg.precision));
  } else {
    len = snprintf(interval + VARSTR_HEADER_SIZE, 64, "%" PRId64 "%c", pSma->interval, pSma->intervalUnit);
  }
  varDataSetLen(interval, len);
  pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
  colDataSetVal(pColInfo, numOfRows, interval, false);

@ -59,7 +59,9 @@ static int32_t mndProcessNodeCheckReq(SRpcMsg *pMsg);
static int32_t extractNodeListFromStream(SMnode *pMnode, SArray *pNodeList);
static int32_t mndProcessStreamReqCheckpoint(SRpcMsg *pReq);
static int32_t mndProcessCheckpointReport(SRpcMsg *pReq);
static int32_t mndProcessConsensusCheckpointId(SRpcMsg *pReq);
// static int32_t mndProcessConsensusCheckpointId(SRpcMsg *pMsg);
static int32_t mndProcessConsensusInTmr(SRpcMsg *pMsg);
static void    doSendQuickRsp(SRpcHandleInfo *pInfo, int32_t msgSize, int32_t vgId, int32_t code);

static SVgroupChangeInfo mndFindChangedNodeInfo(SMnode *pMnode, const SArray *pPrevNodeList, const SArray *pNodeList);

@ -106,6 +108,7 @@ int32_t mndInitStream(SMnode *pMnode) {
  mndSetMsgHandle(pMnode, TDMT_VND_STREAM_TASK_UPDATE_RSP, mndTransProcessRsp);
  mndSetMsgHandle(pMnode, TDMT_VND_STREAM_TASK_RESET_RSP, mndTransProcessRsp);
  mndSetMsgHandle(pMnode, TDMT_STREAM_TASK_UPDATE_CHKPT_RSP, mndTransProcessRsp);
  mndSetMsgHandle(pMnode, TDMT_STREAM_CONSEN_CHKPT_RSP, mndTransProcessRsp);

  // for msgs inside mnode
  // TODO change the name

@ -118,11 +121,11 @@ int32_t mndInitStream(SMnode *pMnode) {
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_BEGIN_CHECKPOINT, mndProcessStreamCheckpoint);
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_REQ_CHKPT, mndProcessStreamReqCheckpoint);
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_CHKPT_REPORT, mndProcessCheckpointReport);
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_CHKPT_CONSEN, mndProcessConsensusCheckpointId);
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_UPDATE_CHKPT_EVT, mndScanCheckpointReportInfo);
  mndSetMsgHandle(pMnode, TDMT_STREAM_TASK_REPORT_CHECKPOINT, mndTransProcessRsp);
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_HEARTBEAT, mndProcessStreamHb);
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_NODECHANGE_CHECK, mndProcessNodeCheckReq);
  mndSetMsgHandle(pMnode, TDMT_MND_STREAM_CONSEN_TIMER, mndProcessConsensusInTmr);

  mndSetMsgHandle(pMnode, TDMT_MND_PAUSE_STREAM, mndProcessPauseStreamReq);
  mndSetMsgHandle(pMnode, TDMT_MND_RESUME_STREAM, mndProcessResumeStreamReq);

@ -803,7 +806,7 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
  taosThreadMutexLock(&execInfo.lock);
  mDebug("stream:%s start to register tasks into task nodeList and set initial checkpointId", createReq.name);
  saveTaskAndNodeInfoIntoBuf(&streamObj, &execInfo);
  mndRegisterConsensusChkptId(execInfo.pStreamConsensus, streamObj.uid);
  // mndRegisterConsensusChkptId(execInfo.pStreamConsensus, streamObj.uid);
  taosThreadMutexUnlock(&execInfo.lock);

  // execute creation

@ -1134,6 +1137,7 @@ static int32_t mndCheckTaskAndNodeStatus(SMnode *pMnode) {
      mDebug("s-task:0x%" PRIx64 "-0x%x (nodeId:%d) status:%s, checkpoint not issued", pEntry->id.streamId,
             (int32_t)pEntry->id.taskId, pEntry->nodeId, streamTaskGetStatusStr(pEntry->status));
      ready = false;
      break;
    }

    if (pEntry->hTaskId != 0) {

@ -1153,6 +1157,27 @@ static int32_t mndCheckTaskAndNodeStatus(SMnode *pMnode) {
  return ready ? 0 : -1;
}

int64_t getStreamTaskLastReadyState(SArray *pTaskList, int64_t streamId) {
  int64_t ts = -1;
  int32_t taskId = -1;

  for (int32_t i = 0; i < taosArrayGetSize(pTaskList); ++i) {
    STaskId          *p = taosArrayGet(pTaskList, i);
    STaskStatusEntry *pEntry = taosHashGet(execInfo.pTaskMap, p, sizeof(*p));
    if (pEntry == NULL || pEntry->id.streamId != streamId) {
      continue;
    }

    if (pEntry->status == TASK_STATUS__READY && ts < pEntry->startTime) {
      ts = pEntry->startTime;
      taskId = pEntry->id.taskId;
    }
  }

  mDebug("stream:0x%" PRIx64 " last ready ts:%" PRId64 " s-task:0x%x", streamId, ts, taskId);
  return ts;
}

typedef struct {
  int64_t streamId;
  int64_t duration;

@ -1191,6 +1216,15 @@ static int32_t mndProcessStreamCheckpoint(SRpcMsg *pReq) {
      continue;
    }

    taosThreadMutexLock(&execInfo.lock);
    int64_t startTs = getStreamTaskLastReadyState(execInfo.pTaskList, pStream->uid);
    if (startTs != -1 && (now - startTs) < tsStreamCheckpointInterval * 1000) {
      taosThreadMutexUnlock(&execInfo.lock);
      sdbRelease(pSdb, pStream);
      continue;
    }
    taosThreadMutexUnlock(&execInfo.lock);

    SCheckpointInterval in = {.streamId = pStream->uid, .duration = duration};
    taosArrayPush(pList, &in);

@ -1237,9 +1271,6 @@ static int32_t mndProcessStreamCheckpoint(SRpcMsg *pReq) {
      code = mndProcessStreamCheckpointTrans(pMnode, p, checkpointId, 1, true);
      sdbRelease(pSdb, p);

      // clear the consensus checkpoint info
      mndClearConsensusCheckpointId(execInfo.pStreamConsensus, p->uid);

      if (code != -1) {
        started += 1;

@ -2309,7 +2340,7 @@ static int32_t mndProcessNodeCheckReq(SRpcMsg *pMsg) {
  taosThreadMutexUnlock(&execInfo.lock);

  if (numOfNodes == 0) {
    mDebug("end to do stream task node change checking, no vgroup exists, do nothing");
    mDebug("end to do stream task(s) node change checking, no stream tasks exist, do nothing");
    execInfo.ts = ts;
    atomic_store_32(&mndNodeCheckSentinel, 0);
    return 0;

@ -2612,113 +2643,238 @@ int32_t mndProcessCheckpointReport(SRpcMsg *pReq) {

  taosThreadMutexUnlock(&execInfo.lock);

  {
    SRpcMsg rsp = {.code = 0, .info = pReq->info, .contLen = sizeof(SMStreamUpdateChkptRsp)};
    rsp.pCont = rpcMallocCont(rsp.contLen);
    SMsgHead *pHead = rsp.pCont;
    pHead->vgId = htonl(req.nodeId);

    tmsgSendRsp(&rsp);
    pReq->info.handle = NULL;  // disable auto rsp
  }

  doSendQuickRsp(&pReq->info, sizeof(SMStreamUpdateChkptRsp), req.nodeId, TSDB_CODE_SUCCESS);
  return 0;
}

static int32_t mndProcessConsensusCheckpointId(SRpcMsg *pReq) {
  SMnode  *pMnode = pReq->info.node;
  SDecoder decoder = {0};
static int64_t getConsensusId(int64_t streamId, int32_t numOfTasks, int32_t *pExistedTasks, bool *pAllSame) {
  int32_t num = 0;
  int64_t chkId = INT64_MAX;
  *pExistedTasks = 0;
  *pAllSame = true;

  SRestoreCheckpointInfo req = {0};
  tDecoderInit(&decoder, pReq->pCont, pReq->contLen);
  for (int32_t i = 0; i < taosArrayGetSize(execInfo.pTaskList); ++i) {
    STaskId *p = taosArrayGet(execInfo.pTaskList, i);
    if (p->streamId != streamId) {
      continue;
    }

  if (tDecodeRestoreCheckpointInfo(&decoder, &req)) {
    tDecoderClear(&decoder);
    terrno = TSDB_CODE_INVALID_MSG;
    mError("invalid task consensus-checkpoint msg received");
    num += 1;
    STaskStatusEntry *pe = taosHashGet(execInfo.pTaskMap, p, sizeof(*p));
    if (chkId > pe->checkpointInfo.latestId) {
      if (chkId != INT64_MAX) {
        *pAllSame = false;
      }
      chkId = pe->checkpointInfo.latestId;
    }
  }

  *pExistedTasks = num;
  if (num < numOfTasks) {  // not all tasks have sent info to mnode through hbMsg, no valid checkpoint Id
    return -1;
  }
  tDecoderClear(&decoder);

  mDebug("receive stream task consensus-checkpoint msg, vgId:%d, s-task:0x%" PRIx64 "-0x%x, checkpointId:%" PRId64,
         req.nodeId, req.streamId, req.taskId, req.checkpointId);
  return chkId;
}
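A standalone sketch of the rule `getConsensusId()` implements, under the assumption that the reported checkpoint ids arrive as a plain array: the consensus id is the minimum `latestId` across the stream's tasks, it is only valid once every task has reported, and the all-same flag mirrors the hunk's detection (the flag is cleared when a strictly smaller id shows up after the first one).

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative mirror of getConsensusId(): minimum reported id wins,
 * -1 means "not every task has reported yet". */
static int64_t consensusId(const int64_t *latestIds, int32_t reported,
                           int32_t numOfTasks, bool *pAllSame) {
  *pAllSame = true;
  int64_t chkId = INT64_MAX;
  for (int32_t i = 0; i < reported; ++i) {
    if (chkId > latestIds[i]) {
      if (chkId != INT64_MAX) *pAllSame = false; /* a smaller id appeared later */
      chkId = latestIds[i];
    }
  }
  if (reported < numOfTasks) return -1; /* wait for the remaining heartbeats */
  return chkId;
}

int main(void) {
  bool    allSame = false;
  int64_t ids[] = {7, 7, 5};
  printf("consensus:%lld allSame:%d\n",
         (long long)consensusId(ids, 3, 3, &allSame), (int)allSame); /* 5, 0 */
  return 0;
}
```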
static void doSendQuickRsp(SRpcHandleInfo *pInfo, int32_t msgSize, int32_t vgId, int32_t code) {
  SRpcMsg rsp = {.code = code, .info = *pInfo, .contLen = msgSize};
  rsp.pCont = rpcMallocCont(rsp.contLen);
  SMsgHead *pHead = rsp.pCont;
  pHead->vgId = htonl(vgId);

  tmsgSendRsp(&rsp);
  pInfo->handle = NULL;  // disable auto rsp
}

// static int32_t mndProcessConsensusCheckpointId(SRpcMsg *pMsg) {
//   SMnode  *pMnode = pMsg->info.node;
//   SDecoder decoder = {0};
//
//   SRestoreCheckpointInfo req = {0};
//   tDecoderInit(&decoder, pMsg->pCont, pMsg->contLen);
//
//   if (tDecodeRestoreCheckpointInfo(&decoder, &req)) {
//     tDecoderClear(&decoder);
//     terrno = TSDB_CODE_INVALID_MSG;
//     mError("invalid task consensus-checkpoint msg received");
//     return -1;
//   }
//   tDecoderClear(&decoder);
//
//   mDebug("receive stream task consensus-checkpoint msg, vgId:%d, s-task:0x%" PRIx64 "-0x%x, checkpointId:%" PRId64,
//          req.nodeId, req.streamId, req.taskId, req.checkpointId);
//
//   // register to the stream task done map, if all tasks have sent this kind of message, start the checkpoint trans.
//   taosThreadMutexLock(&execInfo.lock);
//
//   // mnode handling the create stream transaction too slowly may cause this problem
//   SStreamObj *pStream = mndGetStreamObj(pMnode, req.streamId);
//   if (pStream == NULL) {
//     mWarn("failed to find the stream:0x%" PRIx64 ", not handle consensus-checkpointId", req.streamId);
//
//     // not in meta-store yet, try to acquire the task in the exec buffer;
//     // the checkpoint req arrives too soon before the completion of the create stream trans.
//     STaskId id = {.streamId = req.streamId, .taskId = req.taskId};
//     void   *p = taosHashGet(execInfo.pTaskMap, &id, sizeof(id));
//     if (p == NULL) {
//       mError("failed to find the stream:0x%" PRIx64 " in buf, not handle consensus-checkpointId", req.streamId);
//       terrno = TSDB_CODE_MND_STREAM_NOT_EXIST;
//       taosThreadMutexUnlock(&execInfo.lock);
//
//       doSendQuickRsp(&pMsg->info, sizeof(SMStreamReqConsensChkptRsp), req.nodeId, terrno);
//       return -1;
//     } else {
//       mDebug("s-task:0x%" PRIx64 "-0x%x in buf not in mnode/meta, create stream trans may not complete yet",
//              req.streamId, req.taskId);
//       // todo wait for stream is created
//     }
//   }
//
//   mInfo("vgId:%d stream:0x%" PRIx64 " %s meta-stored checkpointId:%" PRId64, req.nodeId, req.streamId, pStream->name,
//         pStream->checkpointId);
//
//   int32_t numOfTasks = (pStream == NULL) ? 0 : mndGetNumOfStreamTasks(pStream);
//   if ((pStream != NULL) && (pStream->checkpointId == 0)) {  // not generated checkpoint yet, return 0 directly
//     taosThreadMutexUnlock(&execInfo.lock);
//     mndCreateSetConsensusChkptIdTrans(pMnode, pStream, req.taskId, 0, req.startTs);
//
//     doSendQuickRsp(&pMsg->info, sizeof(SMStreamReqConsensChkptRsp), req.nodeId, terrno);
//     return TSDB_CODE_SUCCESS;
//   }
//
//   int32_t num = 0;
//   int64_t chkId = getConsensusId(req.streamId, numOfTasks, &num);
//
//   // some tasks have not sent hbMsg to mnode yet, wait for 5s.
//   if (chkId == -1) {
//     mDebug("not all(%d/%d) task(s) send hbMsg yet, wait for a while and check again, s-task:0x%x", req.taskId, num,
//            numOfTasks);
//     SCheckpointConsensusInfo *pInfo = mndGetConsensusInfo(execInfo.pStreamConsensus, req.streamId, numOfTasks);
//     mndAddConsensusTasks(pInfo, &req);
//
//     taosThreadMutexUnlock(&execInfo.lock);
//     doSendQuickRsp(&pMsg->info, sizeof(SMStreamReqConsensChkptRsp), req.nodeId, terrno);
//     return 0;
//   }
//
//   if (chkId == req.checkpointId) {
//     mDebug("vgId:%d stream:0x%" PRIx64 " %s consensus-checkpointId is:%" PRId64 ", meta-stored checkpointId:%" PRId64,
//            req.nodeId, req.streamId, pStream->name, chkId, pStream->checkpointId);
//     mndCreateSetConsensusChkptIdTrans(pMnode, pStream, req.taskId, chkId, req.startTs);
//
//     taosThreadMutexUnlock(&execInfo.lock);
//     doSendQuickRsp(&pMsg->info, sizeof(SMStreamReqConsensChkptRsp), req.nodeId, terrno);
//     return 0;
//   }
//
//   // wait for 5s and check again
//   SCheckpointConsensusInfo *pInfo = mndGetConsensusInfo(execInfo.pStreamConsensus, req.streamId, numOfTasks);
//   mndAddConsensusTasks(pInfo, &req);
//
//   if (pStream != NULL) {
//     mndReleaseStream(pMnode, pStream);
//   }
//
//   taosThreadMutexUnlock(&execInfo.lock);
//   doSendQuickRsp(&pMsg->info, sizeof(SMStreamReqConsensChkptRsp), req.nodeId, terrno);
//   return 0;
// }

int32_t mndProcessConsensusInTmr(SRpcMsg *pMsg) {
  SMnode *pMnode = pMsg->info.node;
  int64_t now = taosGetTimestampMs();
  SArray *pStreamList = taosArrayInit(4, sizeof(int64_t));

  mDebug("start to process consensus-checkpointId in tmr");

  bool    allReady = true;
  SArray *pNodeSnapshot = mndTakeVgroupSnapshot(pMnode, &allReady);
  taosArrayDestroy(pNodeSnapshot);
  if (!allReady) {
    mWarn("not all vnodes are ready, end to process the consensus-checkpointId in tmr process");
    taosArrayDestroy(pStreamList);
    return 0;
  }

  // register to the stream task done map, if all tasks have sent this kind of message, start the checkpoint trans.
  taosThreadMutexLock(&execInfo.lock);

  SStreamObj *pStream = mndGetStreamObj(pMnode, req.streamId);
  if (pStream == NULL) {
    mWarn("failed to find the stream:0x%" PRIx64 ", not handle checkpoint-report, try to acquire in buf", req.streamId);
  void *pIter = NULL;
  while ((pIter = taosHashIterate(execInfo.pStreamConsensus, pIter)) != NULL) {
    SCheckpointConsensusInfo *pInfo = (SCheckpointConsensusInfo *)pIter;

    // not in meta-store yet, try to acquire the task in exec buffer
    // the checkpoint req arrives too soon before the completion of the create stream trans.
    STaskId id = {.streamId = req.streamId, .taskId = req.taskId};
    void   *p = taosHashGet(execInfo.pTaskMap, &id, sizeof(id));
    if (p == NULL) {
      mError("failed to find the stream:0x%" PRIx64 " in buf, not handle the checkpoint-report", req.streamId);
      terrno = TSDB_CODE_MND_STREAM_NOT_EXIST;
      taosThreadMutexUnlock(&execInfo.lock);
      return -1;
    int64_t streamId = -1;
    int32_t num = taosArrayGetSize(pInfo->pTaskList);
    SArray *pList = taosArrayInit(4, sizeof(int32_t));

    SStreamObj *pStream = mndGetStreamObj(pMnode, pInfo->streamId);
    if (pStream == NULL) {  // stream has been dropped already
      mDebug("stream:0x%" PRIx64 " dropped already, continue", pInfo->streamId);
      taosArrayDestroy(pList);
      continue;
    }

    for (int32_t j = 0; j < num; ++j) {
      SCheckpointConsensusEntry *pe = taosArrayGet(pInfo->pTaskList, j);
      streamId = pe->req.streamId;

      int32_t existed = 0;
      bool    allSame = true;
      int64_t chkId = getConsensusId(pe->req.streamId, pInfo->numOfTasks, &existed, &allSame);
      if (chkId == -1) {
        mDebug("not all(%d/%d) task(s) send hbMsg yet, wait for a while and check again, s-task:0x%x", existed,
               pInfo->numOfTasks, pe->req.taskId);
        break;
      }

      if (((now - pe->ts) >= 10 * 1000) || allSame) {
        mDebug("s-task:0x%x sendTs:%" PRId64 " wait %.2fs and all tasks have same checkpointId", pe->req.taskId,
               pe->req.startTs, (now - pe->ts) / 1000.0);
        ASSERT(chkId <= pe->req.checkpointId);
        mndCreateSetConsensusChkptIdTrans(pMnode, pStream, pe->req.taskId, chkId, pe->req.startTs);

        taosArrayPush(pList, &pe->req.taskId);
        streamId = pe->req.streamId;
      } else {
        mDebug("s-task:0x%" PRIx64 "-0x%x in buf not in mnode/meta, create stream trans may not complete yet",
               req.streamId, req.taskId);
        mDebug("s-task:0x%x sendTs:%" PRId64 " wait %.2fs already, wait for next round to check", pe->req.taskId,
               pe->req.startTs, (now - pe->ts) / 1000.0);
      }
    }

  int32_t numOfTasks = (pStream == NULL) ? 0 : mndGetNumOfStreamTasks(pStream);

  SCheckpointConsensusInfo *pInfo = mndGetConsensusInfo(execInfo.pStreamConsensus, req.streamId);

  int64_t ckId = mndGetConsensusCheckpointId(pInfo, pStream);
  if (ckId != -1) {  // consensus checkpoint id already exists
    SRpcMsg rsp = {0};
    rsp.code = 0;
    rsp.info = pReq->info;
    rsp.contLen = sizeof(SRestoreCheckpointInfoRsp) + sizeof(SMsgHead);
    rsp.pCont = rpcMallocCont(rsp.contLen);

    SMsgHead *pHead = rsp.pCont;
    pHead->vgId = htonl(req.nodeId);

    mDebug("stream:0x%" PRIx64 " consensus-checkpointId:%" PRId64 " exists, return directly", req.streamId, ckId);
    doSendConsensusCheckpointRsp(&req, &rsp, ckId);

    taosThreadMutexUnlock(&execInfo.lock);
    pReq->info.handle = NULL;  // disable auto rsp

    return TSDB_CODE_SUCCESS;
  }

  mndAddConsensusTasks(pInfo, &req, pReq);

  int32_t total = 0;
  if (mndAllTaskSendCheckpointId(pInfo, numOfTasks, &total)) {  // all tasks have sent the reqs
    // start transaction to set the checkpoint id
    int64_t checkpointId = mndGetConsensusCheckpointId(pInfo, pStream);
    mInfo("stream:0x%" PRIx64 " %s all %d tasks send latest checkpointId, the consensus-checkpointId is:%" PRId64
          " will be issued soon",
          req.streamId, pStream->name, numOfTasks, checkpointId);

    // start the checkpoint consensus trans
    int32_t code = mndSendConsensusCheckpointIdRsp(pInfo->pTaskList, checkpointId);
    if (code == TSDB_CODE_SUCCESS) {
      mndClearConsensusRspEntry(pInfo);
      mDebug("clear all waiting for rsp entry for stream:0x%" PRIx64, req.streamId);
    } else {
      mDebug("stream:0x%" PRIx64 " not start send consensus-checkpointId msg, due to not all task ready", req.streamId);
    }
  } else {
    mDebug("stream:0x%" PRIx64 " %d/%d tasks send consensus-checkpointId info", req.streamId, total, numOfTasks);
  }

  if (pStream != NULL) {
    mndReleaseStream(pMnode, pStream);

    if (taosArrayGetSize(pList) > 0) {
      for (int32_t i = 0; i < taosArrayGetSize(pList); ++i) {
        int32_t *taskId = taosArrayGet(pList, i);
        for (int32_t k = 0; k < taosArrayGetSize(pInfo->pTaskList); ++k) {
          SCheckpointConsensusEntry *pe = taosArrayGet(pInfo->pTaskList, k);
          if (pe->req.taskId == *taskId) {
            taosArrayRemove(pInfo->pTaskList, k);
            break;
          }
        }
      }
    }

    taosArrayDestroy(pList);

    if (taosArrayGetSize(pInfo->pTaskList) == 0) {
      mndClearConsensusRspEntry(pInfo);
      ASSERT(streamId != -1);
      taosArrayPush(pStreamList, &streamId);
    }
  }

  for (int32_t i = 0; i < taosArrayGetSize(pStreamList); ++i) {
    int64_t *pStreamId = (int64_t *)taosArrayGet(pStreamList, i);
    mndClearConsensusCheckpointId(execInfo.pStreamConsensus, *pStreamId);
  }

  taosThreadMutexUnlock(&execInfo.lock);
  pReq->info.handle = NULL;  // disable auto rsp

  return 0;
  taosArrayDestroy(pStreamList);
  mDebug("end to process consensus-checkpointId in tmr");
  return TSDB_CODE_SUCCESS;
}

static int32_t mndProcessCreateStreamReqFromMNode(SRpcMsg *pReq) {
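The dispatch decision inside the timer loop above reduces to a small predicate: a task's consensus checkpoint id is issued either when every task reported the same id, or after the entry has waited out a grace period (the literal `10 * 1000` ms in the hunk). A sketch of that rule, extracted for clarity:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Issue the consensus rsp when all tasks agree, or when the entry has
 * waited at least 10 seconds since it was queued. */
static bool shouldIssueConsensusRsp(int64_t nowMs, int64_t entryTs, bool allSame) {
  return ((nowMs - entryTs) >= 10 * 1000) || allSame;
}

int main(void) {
  printf("%d\n", (int)shouldIssueConsensusRsp(20000, 5000, false)); /* waited 15s -> 1 */
  printf("%d\n", (int)shouldIssueConsensusRsp(6000, 5000, true));   /* all same  -> 1 */
  printf("%d\n", (int)shouldIssueConsensusRsp(6000, 5000, false));  /* keep waiting -> 0 */
  return 0;
}
```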
@ -246,7 +246,7 @@ int32_t mndProcessStreamHb(SRpcMsg *pReq) {
  }
  tDecoderClear(&decoder);

  mTrace("receive stream-meta hb from vgId:%d, active numOfTasks:%d, msgId:%d", req.vgId, req.numOfTasks, req.msgId);
  mDebug("receive stream-meta hb from vgId:%d, active numOfTasks:%d, msgId:%d", req.vgId, req.numOfTasks, req.msgId);

  pFailedChkpt = taosArrayInit(4, sizeof(SFailedCheckpointInfo));
  pOrphanTasks = taosArrayInit(4, sizeof(SOrphanTask));

@ -284,6 +284,23 @@ int32_t mndProcessStreamHb(SRpcMsg *pReq) {
      continue;
    }

    STaskCkptInfo *pChkInfo = &p->checkpointInfo;
    if (pChkInfo->consensusChkptId != 0) {
      SRestoreCheckpointInfo cp = {
          .streamId = p->id.streamId,
          .taskId = p->id.taskId,
          .checkpointId = p->checkpointInfo.latestId,
          .startTs = pChkInfo->consensusTs,
      };

      SStreamObj *pStream = mndGetStreamObj(pMnode, p->id.streamId);
      int32_t     numOfTasks = mndGetNumOfStreamTasks(pStream);

      SCheckpointConsensusInfo *pInfo = mndGetConsensusInfo(execInfo.pStreamConsensus, p->id.streamId, numOfTasks);
      mndAddConsensusTasks(pInfo, &cp);
      mndReleaseStream(pMnode, pStream);
    }

    if (pTaskEntry->stage != p->stage && pTaskEntry->stage != -1) {
      updateStageInfo(pTaskEntry, p->stage);
      if (pTaskEntry->nodeId == SNODE_HANDLE) {

@ -292,7 +309,6 @@ int32_t mndProcessStreamHb(SRpcMsg *pReq) {
    } else {
      streamTaskStatusCopy(pTaskEntry, p);

      STaskCkptInfo *pChkInfo = &p->checkpointInfo;
      if ((pChkInfo->activeId != 0) && pChkInfo->failed) {
        mError("stream task:0x%" PRIx64 " checkpointId:%" PRIx64 " transId:%d failed, kill it", p->id.taskId,
               pChkInfo->activeId, pChkInfo->activeTransId);

@ -84,9 +84,10 @@ SArray *mndTakeVgroupSnapshot(SMnode *pMnode, bool *allReady) {
  SSdb   *pSdb = pMnode->pSdb;
  void   *pIter = NULL;
  SVgObj *pVgroup = NULL;
  int32_t replica = -1;  // do the replica check

  *allReady = true;
  SArray *pVgroupListSnapshot = taosArrayInit(4, sizeof(SNodeEntry));
  SArray *pVgroupList = taosArrayInit(4, sizeof(SNodeEntry));

  while (1) {
    pIter = sdbFetch(pSdb, SDB_VGROUP, pIter, (void **)&pVgroup);

@ -97,6 +98,17 @@ SArray *mndTakeVgroupSnapshot(SMnode *pMnode, bool *allReady) {
    SNodeEntry entry = {.nodeId = pVgroup->vgId, .hbTimestamp = pVgroup->updateTime};
    entry.epset = mndGetVgroupEpset(pMnode, pVgroup);

    if (replica == -1) {
      replica = pVgroup->replica;
    } else {
      if (replica != pVgroup->replica) {
        mInfo("vgId:%d replica:%d inconsistent with other vgroups replica:%d, not ready for stream operations",
              pVgroup->vgId, pVgroup->replica, replica);
        *allReady = false;
        break;
      }
    }

    // if not all ready till now, no need to check the remaining vgroups.
    if (*allReady) {
      for (int32_t i = 0; i < pVgroup->replica; ++i) {

@ -107,8 +119,10 @@ SArray *mndTakeVgroupSnapshot(SMnode *pMnode, bool *allReady) {
        }

        ESyncState state = pVgroup->vnodeGid[i].syncState;
        if (state == TAOS_SYNC_STATE_OFFLINE || state == TAOS_SYNC_STATE_ERROR) {
          mInfo("vgId:%d offline/err, not ready for checkpoint or other operations", pVgroup->vgId);
        if (state == TAOS_SYNC_STATE_OFFLINE || state == TAOS_SYNC_STATE_ERROR || state == TAOS_SYNC_STATE_LEARNER ||
            state == TAOS_SYNC_STATE_CANDIDATE) {
          mInfo("vgId:%d state:%d, not ready for checkpoint or other operations, not check other vgroups",
                pVgroup->vgId, state);
          *allReady = false;
          break;
        }

@ -119,7 +133,7 @@ SArray *mndTakeVgroupSnapshot(SMnode *pMnode, bool *allReady) {
    epsetToStr(&entry.epset, buf, tListLen(buf));

    mDebug("take node snapshot, nodeId:%d %s", entry.nodeId, buf);
    taosArrayPush(pVgroupListSnapshot, &entry);
    taosArrayPush(pVgroupList, &entry);
    sdbRelease(pSdb, pVgroup);
  }

@ -138,11 +152,11 @@ SArray *mndTakeVgroupSnapshot(SMnode *pMnode, bool *allReady) {
    epsetToStr(&entry.epset, buf, tListLen(buf));
    mDebug("take snode snapshot, nodeId:%d %s", entry.nodeId, buf);

    taosArrayPush(pVgroupListSnapshot, &entry);
    taosArrayPush(pVgroupList, &entry);
    sdbRelease(pSdb, pObj);
  }

  return pVgroupListSnapshot;
  return pVgroupList;
}

SStreamObj *mndGetStreamObj(SMnode *pMnode, int64_t streamId) {

@ -637,6 +651,7 @@ void removeTasksInBuf(SArray *pTaskIds, SStreamExecInfo* pExecInfo) {
void removeStreamTasksInBuf(SStreamObj *pStream, SStreamExecInfo *pExecNode) {
  taosThreadMutexLock(&pExecNode->lock);

  // 1. remove task entries
  SStreamTaskIter *pIter = createStreamTaskIter(pStream);
  while (streamTaskIterNextTask(pIter)) {
    SStreamTask *pTask = streamTaskIterGetCurrent(pIter);

@ -646,8 +661,11 @@ void removeStreamTasksInBuf(SStreamObj *pStream, SStreamExecInfo *pExecNode) {
  }

  ASSERT(taosHashGetSize(pExecNode->pTaskMap) == taosArrayGetSize(pExecNode->pTaskList));
  taosThreadMutexUnlock(&pExecNode->lock);

  // 2. remove stream entry in consensus hash table
  mndClearConsensusCheckpointId(execInfo.pStreamConsensus, pStream->uid);

  taosThreadMutexUnlock(&pExecNode->lock);
  destroyStreamTaskIter(pIter);
}
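A self-contained sketch of the readiness predicate the snapshot code above enforces, with toy types standing in for `SVgObj` and the real `ESyncState` values: every vgroup must share one replica count, and no replica may be offline, in error, or still a learner/candidate.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the sync states named in the diff. */
typedef enum { SYNC_LEADER, SYNC_FOLLOWER, SYNC_CANDIDATE, SYNC_LEARNER, SYNC_OFFLINE, SYNC_ERROR } ESyncStateToy;

typedef struct {
  int           replica;
  ESyncStateToy states[3];
} SVgToy;

static bool allVgroupsReady(const SVgToy *vgs, int num) {
  int replica = -1;
  for (int i = 0; i < num; ++i) {
    if (replica == -1) {
      replica = vgs[i].replica;
    } else if (replica != vgs[i].replica) {
      return false; /* mixed replica counts: not ready for stream operations */
    }
    for (int j = 0; j < vgs[i].replica; ++j) {
      ESyncStateToy s = vgs[i].states[j];
      if (s == SYNC_OFFLINE || s == SYNC_ERROR || s == SYNC_LEARNER || s == SYNC_CANDIDATE) return false;
    }
  }
  return true;
}

int main(void) {
  SVgToy vgs[2] = {{1, {SYNC_LEADER}}, {1, {SYNC_CANDIDATE}}};
  printf("ready: %d\n", (int)allVgroupsReady(vgs, 2)); /* 0: a candidate is not ready */
  return 0;
}
```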
@ -821,50 +839,113 @@ int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
  return TSDB_CODE_SUCCESS;
}

int32_t doSendConsensusCheckpointRsp(SRestoreCheckpointInfo* pInfo, SRpcMsg* pMsg, int64_t checkpointId) {
static int32_t mndStreamSetChkptIdAction(SMnode *pMnode, STrans *pTrans, SStreamTask* pTask, int64_t checkpointId, int64_t ts) {
  SRestoreCheckpointInfo req = {
      .taskId = pTask->id.taskId,
      .streamId = pTask->id.streamId,
      .checkpointId = checkpointId,
      .startTs = ts,
      .nodeId = pTask->info.nodeId,
      .transId = pTrans->id,
  };

  int32_t code = 0;
  int32_t blen;

  SRestoreCheckpointInfoRsp req = {
      .streamId = pInfo->streamId, .taskId = pInfo->taskId, .checkpointId = checkpointId, .startTs = pInfo->startTs};

  tEncodeSize(tEncodeRestoreCheckpointInfoRsp, &req, blen, code);
  tEncodeSize(tEncodeRestoreCheckpointInfo, &req, blen, code);
  if (code < 0) {
    terrno = TSDB_CODE_OUT_OF_MEMORY;
    return -1;
  }

  int32_t tlen = sizeof(SMsgHead) + blen;
  void   *abuf = POINTER_SHIFT(pMsg->pCont, sizeof(SMsgHead));

  void *pBuf = taosMemoryMalloc(tlen);
  if (pBuf == NULL) {
    terrno = TSDB_CODE_OUT_OF_MEMORY;
    return -1;
  }

  void    *abuf = POINTER_SHIFT(pBuf, sizeof(SMsgHead));
  SEncoder encoder;
  tEncoderInit(&encoder, abuf, tlen);
  tEncodeRestoreCheckpointInfoRsp(&encoder, &req);
  tEncodeRestoreCheckpointInfo(&encoder, &req);

  SMsgHead *pMsgHead = (SMsgHead *)pMsg->pCont;
  SMsgHead *pMsgHead = (SMsgHead *)pBuf;
  pMsgHead->contLen = htonl(tlen);
  pMsgHead->vgId = htonl(pInfo->nodeId);
  pMsgHead->vgId = htonl(pTask->info.nodeId);

  tEncoderClear(&encoder);

  tmsgSendRsp(pMsg);
  SEpSet epset = {0};
  bool   hasEpset = false;
  code = extractNodeEpset(pMnode, &epset, &hasEpset, pTask->id.taskId, pTask->info.nodeId);
  if (code != TSDB_CODE_SUCCESS || !hasEpset) {
    taosMemoryFree(pBuf);
    return code;
  }

  code = setTransAction(pTrans, pBuf, tlen, TDMT_STREAM_CONSEN_CHKPT, &epset, 0, TSDB_CODE_VND_INVALID_VGROUP_ID);
  if (code != TSDB_CODE_SUCCESS) {
    taosMemoryFree(pBuf);
  }

  return code;
}

int32_t mndSendConsensusCheckpointIdRsp(SArray* pInfoList, int64_t checkpointId) {
  for(int32_t i = 0; i < taosArrayGetSize(pInfoList); ++i) {
    SCheckpointConsensusEntry* pInfo = taosArrayGet(pInfoList, i);
    doSendConsensusCheckpointRsp(&pInfo->req, &pInfo->rsp, checkpointId);
int32_t mndCreateSetConsensusChkptIdTrans(SMnode *pMnode, SStreamObj *pStream, int32_t taskId, int64_t checkpointId,
                                          int64_t ts) {
  char msg[128] = {0};
  snprintf(msg, tListLen(msg), "set consen-chkpt-id for task:0x%x", taskId);

  STrans *pTrans = doCreateTrans(pMnode, pStream, NULL, TRN_CONFLICT_NOTHING, MND_STREAM_CHKPT_CONSEN_NAME, msg);
  if (pTrans == NULL) {
    return terrno;
  }
  return 0;

  STaskId      id = {.streamId = pStream->uid, .taskId = taskId};
  SStreamTask *pTask = mndGetStreamTask(&id, pStream);
  ASSERT(pTask);

  /*int32_t code = */ mndStreamRegisterTrans(pTrans, MND_STREAM_CHKPT_CONSEN_NAME, pStream->uid);
  int32_t code = mndStreamSetChkptIdAction(pMnode, pTrans, pTask, checkpointId, ts);
  if (code != 0) {
    sdbRelease(pMnode->pSdb, pStream);
    mndTransDrop(pTrans);
    return code;
  }

  code = mndPersistTransLog(pStream, pTrans, SDB_STATUS_READY);
  if (code != TSDB_CODE_SUCCESS) {
    sdbRelease(pMnode->pSdb, pStream);
    mndTransDrop(pTrans);
    return -1;
  }

  if (mndTransPrepare(pMnode, pTrans) != 0) {
    mError("trans:%d, failed to prepare set consensus-chkptId trans since %s", pTrans->id, terrstr());
    sdbRelease(pMnode->pSdb, pStream);
    mndTransDrop(pTrans);
    return -1;
  }

  sdbRelease(pMnode->pSdb, pStream);
  mndTransDrop(pTrans);

  return TSDB_CODE_ACTION_IN_PROGRESS;
}

SCheckpointConsensusInfo* mndGetConsensusInfo(SHashObj* pHash, int64_t streamId) {
SCheckpointConsensusInfo* mndGetConsensusInfo(SHashObj* pHash, int64_t streamId, int32_t numOfTasks) {
  void* pInfo = taosHashGet(pHash, &streamId, sizeof(streamId));
  if (pInfo != NULL) {
    return (SCheckpointConsensusInfo*)pInfo;
  }

  SCheckpointConsensusInfo p = {
      .genTs = -1, .checkpointId = -1, .pTaskList = taosArrayInit(4, sizeof(SCheckpointConsensusEntry))};
      .pTaskList = taosArrayInit(4, sizeof(SCheckpointConsensusEntry)),
      .numOfTasks = numOfTasks,
      .streamId = streamId,
  };

  taosHashPut(pHash, &streamId, sizeof(streamId), &p, sizeof(p));

  void* pChkptInfo = (SCheckpointConsensusInfo*)taosHashGet(pHash, &streamId, sizeof(streamId));

@ -873,87 +954,27 @@ SCheckpointConsensusInfo* mndGetConsensusInfo(SHashObj* pHash, int64_t streamId)

// no matter whether it existed or not, add the request into the info list anyway, since we need to send the rsp manually;
// discarding the msg may lead to the loss of connections.
void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpointInfo *pRestoreInfo, SRpcMsg *pMsg) {
  SCheckpointConsensusEntry info = {0};
void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpointInfo *pRestoreInfo) {
  SCheckpointConsensusEntry info = {.ts = taosGetTimestampMs()};
  memcpy(&info.req, pRestoreInfo, sizeof(info.req));

  info.rsp.code = 0;
  info.rsp.info = pMsg->info;
  info.rsp.contLen = sizeof(SRestoreCheckpointInfoRsp) + sizeof(SMsgHead);
  info.rsp.pCont = rpcMallocCont(info.rsp.contLen);

  SMsgHead *pHead = info.rsp.pCont;
  pHead->vgId = htonl(pRestoreInfo->nodeId);
  for (int32_t i = 0; i < taosArrayGetSize(pInfo->pTaskList); ++i) {
    SCheckpointConsensusEntry *p = taosArrayGet(pInfo->pTaskList, i);
    if (p->req.taskId == info.req.taskId) {
      mDebug("s-task:0x%x already in consensus-checkpointId list for stream:0x%" PRIx64 ", update ts %" PRId64
             "->%" PRId64 " total existed:%d",
             pRestoreInfo->taskId, pRestoreInfo->streamId, p->req.startTs, info.req.startTs,
             (int32_t)taosArrayGetSize(pInfo->pTaskList));
      p->req.startTs = info.req.startTs;
      return;
    }
  }

  taosArrayPush(pInfo->pTaskList, &info);
}

static int32_t entryComparFn(const void* p1, const void* p2) {
  const SCheckpointConsensusEntry* pe1 = p1;
  const SCheckpointConsensusEntry* pe2 = p2;

  if (pe1->req.taskId == pe2->req.taskId) {
    return 0;
  }

  return pe1->req.taskId < pe2->req.taskId ? -1 : 1;
}

bool mndAllTaskSendCheckpointId(SCheckpointConsensusInfo* pInfo, int32_t numOfTasks, int32_t* pTotal) {
  int32_t numOfExisted = taosArrayGetSize(pInfo->pTaskList);
  if (numOfExisted < numOfTasks) {
    if (pTotal != NULL) {
      *pTotal = numOfExisted;
    }
    return false;
  }

  taosArraySort(pInfo->pTaskList, entryComparFn);

  int32_t num = 1;
  int32_t taskId = ((SCheckpointConsensusEntry*)taosArrayGet(pInfo->pTaskList, 0))->req.taskId;
  for(int32_t i = 1; i < taosArrayGetSize(pInfo->pTaskList); ++i) {
    SCheckpointConsensusEntry* pe = taosArrayGet(pInfo->pTaskList, i);
    if (pe->req.taskId != taskId) {
      num += 1;
      taskId = pe->req.taskId;
    }
  }

  if (pTotal != NULL) {
    *pTotal = num;
  }

  ASSERT(num <= numOfTasks);
  return num == numOfTasks;
}

int64_t mndGetConsensusCheckpointId(SCheckpointConsensusInfo* pInfo, SStreamObj* pStream) {
  if (pInfo->genTs > 0) {  // there is no checkpoint ever generated if the checkpointId is 0.
    mDebug("existed consensus-checkpointId:%" PRId64 " for stream:0x%" PRIx64 " %s exist, and return",
           pInfo->checkpointId, pStream->uid, pStream->name);
    return pInfo->checkpointId;
  }

  int32_t numOfTasks = mndGetNumOfStreamTasks(pStream);
  if (!mndAllTaskSendCheckpointId(pInfo, numOfTasks, NULL)) {
    return -1;
  }

  int64_t checkpointId = INT64_MAX;

  for (int32_t i = 0; i < taosArrayGetSize(pInfo->pTaskList); ++i) {
    SCheckpointConsensusEntry *pEntry = taosArrayGet(pInfo->pTaskList, i);
    if (pEntry->req.checkpointId < checkpointId) {
      checkpointId = pEntry->req.checkpointId;
      mTrace("stream:0x%" PRIx64 " %s task:0x%x vgId:%d latest checkpointId:%" PRId64, pStream->uid, pStream->name,
             pEntry->req.taskId, pEntry->req.nodeId, pEntry->req.checkpointId);
    }
  }

  pInfo->checkpointId = checkpointId;
  pInfo->genTs = taosGetTimestampMs();
  return checkpointId;
  int32_t num = taosArrayGetSize(pInfo->pTaskList);
  mDebug("s-task:0x%x checkpointId:%" PRId64 " added into consensus-checkpointId list, stream:0x%" PRIx64
         " waiting tasks:%d",
         pRestoreInfo->taskId, pRestoreInfo->checkpointId, pRestoreInfo->streamId, num);
}

void mndClearConsensusRspEntry(SCheckpointConsensusInfo* pInfo) {

@ -968,15 +989,15 @@ int64_t mndClearConsensusCheckpointId(SHashObj* pHash, int64_t streamId) {
  return TSDB_CODE_SUCCESS;
}

int32_t mndRegisterConsensusChkptId(SHashObj* pHash, int64_t streamId) {
  void* pInfo = taosHashGet(pHash, &streamId, sizeof(streamId));
  ASSERT(pInfo == NULL);

  SCheckpointConsensusInfo p = {.genTs = taosGetTimestampMs(), .checkpointId = 0, .pTaskList = NULL};
  taosHashPut(pHash, &streamId, sizeof(streamId), &p, sizeof(p));

  SCheckpointConsensusInfo* pChkptInfo = (SCheckpointConsensusInfo*)taosHashGet(pHash, &streamId, sizeof(streamId));
  ASSERT(pChkptInfo->genTs > 0 && pChkptInfo->checkpointId == 0);
  mDebug("s-task:0x%" PRIx64 " set the initial consensus-checkpointId:0", streamId);
  return TSDB_CODE_SUCCESS;
}
// int32_t mndRegisterConsensusChkptId(SHashObj* pHash, int64_t streamId) {
//   void* pInfo = taosHashGet(pHash, &streamId, sizeof(streamId));
//   ASSERT(pInfo == NULL);
//
//   SCheckpointConsensusInfo p = {.genTs = taosGetTimestampMs(), .checkpointId = 0, .pTaskList = NULL};
//   taosHashPut(pHash, &streamId, sizeof(streamId), &p, sizeof(p));
//
//   SCheckpointConsensusInfo* pChkptInfo = (SCheckpointConsensusInfo*)taosHashGet(pHash, &streamId, sizeof(streamId));
//   ASSERT(pChkptInfo->genTs > 0 && pChkptInfo->checkpointId == 0);
//   mDebug("s-task:0x%" PRIx64 " set the initial consensus-checkpointId:0", streamId);
//   return TSDB_CODE_SUCCESS;
// }
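A sketch of the duplicate-tolerant completeness check in `mndAllTaskSendCheckpointId()` above: because a task may re-send its request, entries are sorted by taskId and distinct ids are counted before comparing against `numOfTasks`. The array-of-int version below is illustrative.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static int cmpInt(const void *a, const void *b) {
  int x = *(const int *)a, y = *(const int *)b;
  return (x > y) - (x < y);
}

/* Sort, then count distinct taskIds so duplicates are not counted twice. */
static bool allTasksReported(int *taskIds, int numOfEntries, int numOfTasks) {
  if (numOfEntries < numOfTasks) return false;
  qsort(taskIds, numOfEntries, sizeof(int), cmpInt);
  int distinct = 1;
  for (int i = 1; i < numOfEntries; ++i) {
    if (taskIds[i] != taskIds[i - 1]) distinct += 1;
  }
  return distinct == numOfTasks;
}

int main(void) {
  int ids[] = {3, 1, 3, 2}; /* task 3 reported twice */
  printf("%d\n", (int)allTasksReported(ids, 4, 3)); /* 1: tasks {1,2,3} all present */
  return 0;
}
```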
@ -1241,6 +1241,7 @@ static void mndTransResetActions(SMnode *pMnode, STrans *pTrans, SArray *pArray)
  }
}

// execute at bottom half
static int32_t mndTransWriteSingleLog(SMnode *pMnode, STrans *pTrans, STransAction *pAction, bool topHalf) {
  if (pAction->rawWritten) return 0;
  if (topHalf) {

@ -1267,6 +1268,7 @@ static int32_t mndTransWriteSingleLog(SMnode *pMnode, STrans *pTrans, STransActi
  return code;
}

// execute at top half
static int32_t mndTransSendSingleMsg(SMnode *pMnode, STrans *pTrans, STransAction *pAction, bool topHalf) {
  if (pAction->msgSent) return 0;
  if (mndCannotExecuteTransAction(pMnode, topHalf)) {

@ -1644,6 +1646,11 @@ static bool mndTransPerformCommitActionStage(SMnode *pMnode, STrans *pTrans, boo
    pTrans->stage = TRN_STAGE_FINISH;  // TRN_STAGE_PRE_FINISH is not necessary
    mInfo("trans:%d, stage from commitAction to finished", pTrans->id);
    continueExec = true;
  } else if (code == TSDB_CODE_MND_TRANS_CTX_SWITCH && topHalf) {
    pTrans->code = 0;
    pTrans->stage = TRN_STAGE_COMMIT;
    mInfo("trans:%d, back to commit stage", pTrans->id);
    continueExec = true;
  } else {
    pTrans->code = terrno;
    pTrans->failedTimes++;

@ -1783,11 +1790,13 @@ void mndTransExecuteImp(SMnode *pMnode, STrans *pTrans, bool topHalf) {
    mndTransSendRpcRsp(pMnode, pTrans);
  }

// start trans, pullup, receive rsp, kill
void mndTransExecute(SMnode *pMnode, STrans *pTrans) {
  bool topHalf = true;
  return mndTransExecuteImp(pMnode, pTrans, topHalf);
}

// update trans
void mndTransRefresh(SMnode *pMnode, STrans *pTrans) {
  bool topHalf = false;
  return mndTransExecuteImp(pMnode, pTrans, topHalf);
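The two wrappers at the end of this hunk show a small pattern worth noting: both public entry points funnel into `mndTransExecuteImp` with a boolean that records whether the call runs in the top half (message handling) or the bottom half (refresh). A toy sketch of the shape, with simplified stand-in names:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
  int id;
} STransToy;

/* One implementation, parameterized by which half of the loop calls it. */
static void transExecuteImp(STransToy *pTrans, bool topHalf) {
  printf("trans:%d executed in %s half\n", pTrans->id, topHalf ? "top" : "bottom");
}

/* start trans, pullup, receive rsp, kill */
static void transExecute(STransToy *pTrans) { transExecuteImp(pTrans, true); }

/* update trans */
static void transRefresh(STransToy *pTrans) { transExecuteImp(pTrans, false); }

int main(void) {
  STransToy t = {.id = 1};
  transExecute(&t);
  transRefresh(&t);
  return 0;
}
```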
@ -43,14 +43,14 @@ int32_t sndBuildStreamTask(SSnode *pSnode, SStreamTask *pTask, int64_t nextProce

  char *p = streamTaskGetStatus(pTask)->name;
  if (pTask->info.fillHistory) {
    sndInfo("vgId:%d expand stream task, s-task:%s, checkpointId:%" PRId64 " checkpointVer:%" PRId64
    sndInfo("vgId:%d build stream task, s-task:%s, checkpointId:%" PRId64 " checkpointVer:%" PRId64
            " nextProcessVer:%" PRId64
            " child id:%d, level:%d, status:%s fill-history:%d, related stream task:0x%x trigger:%" PRId64 " ms",
            SNODE_HANDLE, pTask->id.idStr, pChkInfo->checkpointId, pChkInfo->checkpointVer, pChkInfo->nextProcessVer,
            pTask->info.selfChildId, pTask->info.taskLevel, p, pTask->info.fillHistory,
            (int32_t)pTask->streamTaskId.taskId, pTask->info.delaySchedParam);
  } else {
    sndInfo("vgId:%d expand stream task, s-task:%s, checkpointId:%" PRId64 " checkpointVer:%" PRId64
    sndInfo("vgId:%d build stream task, s-task:%s, checkpointId:%" PRId64 " checkpointVer:%" PRId64
            " nextProcessVer:%" PRId64
            " child id:%d, level:%d, status:%s fill-history:%d, related fill-task:0x%x trigger:%" PRId64 " ms",
            SNODE_HANDLE, pTask->id.idStr, pChkInfo->checkpointId, pChkInfo->checkpointVer, pChkInfo->nextProcessVer,

@ -149,15 +149,15 @@ int32_t sndProcessWriteMsg(SSnode *pSnode, SRpcMsg *pMsg, SRpcMsg *pRsp) {
    case TDMT_VND_STREAM_TASK_UPDATE:
      return tqStreamTaskProcessUpdateReq(pSnode->pMeta, &pSnode->msgCb, pMsg, true);
    case TDMT_VND_STREAM_TASK_RESET:
      return tqStreamTaskProcessTaskResetReq(pSnode->pMeta, pMsg);
      return tqStreamTaskProcessTaskResetReq(pSnode->pMeta, pMsg->pCont);
    case TDMT_STREAM_TASK_PAUSE:
      return tqStreamTaskProcessTaskPauseReq(pSnode->pMeta, pMsg->pCont);
    case TDMT_STREAM_TASK_RESUME:
      return tqStreamTaskProcessTaskResumeReq(pSnode->pMeta, pMsg->info.conn.applyIndex, pMsg->pCont, false);
    case TDMT_STREAM_TASK_UPDATE_CHKPT:
      return tqStreamTaskProcessUpdateCheckpointReq(pSnode->pMeta, true, pMsg->pCont, pMsg->contLen);
    case TDMT_MND_STREAM_CHKPT_CONSEN_RSP:
      return tqStreamProcessConsensusChkptRsp(pSnode->pMeta, pMsg);
      return tqStreamTaskProcessUpdateCheckpointReq(pSnode->pMeta, true, pMsg->pCont);
    case TDMT_STREAM_CONSEN_CHKPT:
      return tqStreamTaskProcessConsenChkptIdReq(pSnode->pMeta, pMsg);
    default:
      ASSERT(0);
  }
@ -298,6 +298,7 @@ int32_t tqProcessTaskRetrieveRsp(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessTaskScanHistory(STQ* pTq, SRpcMsg* pMsg);
int32_t tqStreamProgressRetrieveReq(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessTaskUpdateCheckpointReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessTaskConsenChkptIdReq(STQ* pTq, SRpcMsg* pMsg);

// sma
int32_t smaInit();

@ -1016,7 +1016,11 @@ int32_t tqProcessTaskDropReq(STQ* pTq, char* msg, int32_t msgLen) {
}

int32_t tqProcessTaskUpdateCheckpointReq(STQ* pTq, char* msg, int32_t msgLen) {
  return tqStreamTaskProcessUpdateCheckpointReq(pTq->pStreamMeta, pTq->pVnode->restored, msg, msgLen);
  return tqStreamTaskProcessUpdateCheckpointReq(pTq->pStreamMeta, pTq->pVnode->restored, msg);
}

int32_t tqProcessTaskConsenChkptIdReq(STQ* pTq, SRpcMsg* pMsg) {
  return tqStreamTaskProcessConsenChkptIdReq(pTq->pStreamMeta, pMsg);
}

int32_t tqProcessTaskPauseReq(STQ* pTq, int64_t sversion, char* msg, int32_t msgLen) {

@ -1239,7 +1243,7 @@ int32_t tqProcessTaskUpdateReq(STQ* pTq, SRpcMsg* pMsg) {
}

int32_t tqProcessTaskResetReq(STQ* pTq, SRpcMsg* pMsg) {
  return tqStreamTaskProcessTaskResetReq(pTq->pStreamMeta, pMsg);
  return tqStreamTaskProcessTaskResetReq(pTq->pStreamMeta, pMsg->pCont);
}

int32_t tqProcessTaskRetrieveTriggerReq(STQ* pTq, SRpcMsg* pMsg) {

@ -1277,5 +1281,5 @@ int32_t tqProcessTaskChkptReportRsp(STQ* pTq, SRpcMsg* pMsg) {
}

int32_t tqProcessTaskConsensusChkptRsp(STQ* pTq, SRpcMsg* pMsg) {
  return tqStreamProcessConsensusChkptRsp(pTq->pStreamMeta, pMsg);
  return tqStreamProcessConsensusChkptRsp2(pTq->pStreamMeta, pMsg);
}
@@ -51,10 +51,9 @@ int32_t streamStateSnapReaderOpen(STQ* pTq, int64_t sver, int64_t ever, SStreamS

  SStreamSnapReader* pSnapReader = NULL;

-  if (streamSnapReaderOpen(meta, sver, chkpId, meta->path, &pSnapReader) == 0) {
+  if ((code = streamSnapReaderOpen(meta, sver, chkpId, meta->path, &pSnapReader)) == 0) {
    pReader->complete = 1;
  } else {
-    code = -1;
    taosMemoryFree(pReader);
    goto _err;
  }

@@ -75,7 +74,7 @@ _err:
int32_t streamStateSnapReaderClose(SStreamStateReader* pReader) {
  int32_t code = 0;
  tqDebug("vgId:%d, vnode %s snapshot reader closed", TD_VID(pReader->pTq->pVnode), STREAM_STATE_TRANSFER);
-  streamSnapReaderClose(pReader->pReaderImpl);
+  code = streamSnapReaderClose(pReader->pReaderImpl);
  taosMemoryFree(pReader);
  return code;
}

@@ -138,32 +137,36 @@ int32_t streamStateSnapWriterOpen(STQ* pTq, int64_t sver, int64_t ever, SStreamS
  pWriter->sver = sver;
  pWriter->ever = ever;

-  taosMkDir(pTq->pStreamMeta->path);
-
-  SStreamSnapWriter* pSnapWriter = NULL;
-  if (streamSnapWriterOpen(pTq, sver, ever, pTq->pStreamMeta->path, &pSnapWriter) < 0) {
+  if (taosMkDir(pTq->pStreamMeta->path) != 0) {
+    code = TAOS_SYSTEM_ERROR(errno);
+    tqError("vgId:%d, vnode %s snapshot writer failed to create directory %s since %s", TD_VID(pTq->pVnode),
+            STREAM_STATE_TRANSFER, pTq->pStreamMeta->path, tstrerror(code));
    goto _err;
  }

-  tqDebug("vgId:%d, vnode %s snapshot writer opened, path:%s", TD_VID(pTq->pVnode), STREAM_STATE_TRANSFER, pTq->pStreamMeta->path);
+  SStreamSnapWriter* pSnapWriter = NULL;
+  if ((code = streamSnapWriterOpen(pTq, sver, ever, pTq->pStreamMeta->path, &pSnapWriter)) < 0) {
+    goto _err;
+  }
+
+  tqDebug("vgId:%d, vnode %s snapshot writer opened, path:%s", TD_VID(pTq->pVnode), STREAM_STATE_TRANSFER,
+          pTq->pStreamMeta->path);
  pWriter->pWriterImpl = pSnapWriter;

  *ppWriter = pWriter;
-  return code;
+  return 0;

_err:
  tqError("vgId:%d, vnode %s snapshot writer failed to open since %s", TD_VID(pTq->pVnode), STREAM_STATE_TRANSFER,
          tstrerror(code));
  taosMemoryFree(pWriter);
  *ppWriter = NULL;
-  return -1;
+  return code;
}

int32_t streamStateSnapWriterClose(SStreamStateWriter* pWriter, int8_t rollback) {
  int32_t code = 0;
  tqDebug("vgId:%d, vnode %s snapshot writer closed", TD_VID(pWriter->pTq->pVnode), STREAM_STATE_TRANSFER);
-  code = streamSnapWriterClose(pWriter->pWriterImpl, rollback);
-
-  return code;
+  return streamSnapWriterClose(pWriter->pWriterImpl, rollback);
}

int32_t streamStateSnapWrite(SStreamStateWriter* pWriter, uint8_t* pData, uint32_t nData) {
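The snapshot reader/writer hunks above stop discarding callee return codes (`code = -1` is gone; the open and close results are captured and propagated). A minimal standalone sketch of the idiom, with a hypothetical `open_resource` standing in for `streamSnapReaderOpen`:

```c
#include <stdio.h>

// Hypothetical stand-in for streamSnapReaderOpen; returns a real error code.
static int open_resource(void **pp) {
  *pp = NULL;
  return -2;
}

int main(void) {
  int   code = 0;
  void *impl = NULL;

  // Capture the callee's code inside the condition instead of overwriting
  // it with a generic -1, so the original failure reason reaches _err.
  if ((code = open_resource(&impl)) != 0) {
    goto _err;
  }
  return 0;

_err:
  printf("open failed, code:%d\n", code);
  return 1;
}
```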
@@ -228,6 +228,9 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM
  }

  updated = streamTaskUpdateEpsetInfo(pTask, req.pNodeList);

+  // send the checkpoint-source-rsp for source task to end the checkpoint trans in mnode
+  streamTaskSendPreparedCheckpointsourceRsp(pTask);
  streamTaskResetStatus(pTask);

  streamTaskStopMonitorCheckRsp(&pTask->taskCheckInfo, pTask->id.idStr);

@@ -264,7 +267,6 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM
    tqDebug("s-task:%s vgId:%d not save task since not update epset actually, stop task", idstr, vgId);
  }

-  // stop
  streamTaskStop(pTask);
  if (ppHTask != NULL) {
    streamTaskStop(*ppHTask);

@@ -279,7 +281,10 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM
  int32_t numOfTasks = streamMetaGetNumOfTasks(pMeta);
  int32_t updateTasks = taosHashGetSize(pMeta->updateInfo.pTasks);

+  if (restored) {
+    tqDebug("vgId:%d s-task:0x%x update epset transId:%d, set the restart flag", vgId, req.taskId, req.transId);
+    pMeta->startInfo.tasksWillRestart = 1;
+  }
+
  if (updateTasks < numOfTasks) {
    tqDebug("vgId:%d closed tasks:%d, unclosed:%d, all tasks will be started when nodeEp update completed", vgId,

@@ -292,8 +297,7 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM
    streamMetaClearUpdateTaskList(pMeta);

    if (!restored) {
-      tqDebug("vgId:%d vnode restore not completed, not start the tasks, clear the start after nodeUpdate flag", vgId);
-      pMeta->startInfo.tasksWillRestart = 0;
+      tqDebug("vgId:%d vnode restore not completed, not start all tasks", vgId);
    } else {
      tqDebug("vgId:%d all %d task(s) nodeEp updated and closed, transId:%d", vgId, numOfTasks, req.transId);
#if 0

@@ -666,7 +670,7 @@ int32_t tqStreamTaskProcessDropReq(SStreamMeta* pMeta, char* msg, int32_t msgLen
  return 0;
}

-int32_t tqStreamTaskProcessUpdateCheckpointReq(SStreamMeta* pMeta, bool restored, char* msg, int32_t msgLen) {
+int32_t tqStreamTaskProcessUpdateCheckpointReq(SStreamMeta* pMeta, bool restored, char* msg) {
  SVUpdateCheckpointInfoReq* pReq = (SVUpdateCheckpointInfoReq*)msg;

  int32_t vgId = pMeta->vgId;

@@ -738,6 +742,7 @@ static int32_t restartStreamTasks(SStreamMeta* pMeta, bool isLeader) {
    streamMetaStartAllTasks(pMeta);
  } else {
    streamMetaResetStartInfo(&pMeta->startInfo, pMeta->vgId);
+    pMeta->startInfo.restartCount = 0;
    streamMetaWUnLock(pMeta);
    tqInfo("vgId:%d, follower node not start stream tasks or stream is disabled", vgId);
  }

@@ -854,8 +859,8 @@ int32_t tqStartTaskCompleteCallback(SStreamMeta* pMeta) {
  return TSDB_CODE_SUCCESS;
}

-int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, SRpcMsg* pMsg) {
-  SVPauseStreamTaskReq* pReq = (SVPauseStreamTaskReq*)pMsg->pCont;
+int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, char* pMsg) {
+  SVPauseStreamTaskReq* pReq = (SVPauseStreamTaskReq*)pMsg;

  SStreamTask* pTask = streamMetaAcquireTask(pMeta, pReq->streamId, pReq->taskId);
  if (pTask == NULL) {

@@ -867,7 +872,6 @@ int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, SRpcMsg* pMsg) {
  tqDebug("s-task:%s receive task-reset msg from mnode, reset status and ready for data processing", pTask->id.idStr);

  taosThreadMutexLock(&pTask->lock);
-
  streamTaskClearCheckInfo(pTask, true);

  // clear flag set during do checkpoint, and open inputQ for all upstream tasks

@@ -882,9 +886,10 @@ int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, SRpcMsg* pMsg) {

    streamTaskSetStatusReady(pTask);
  } else if (pState->state == TASK_STATUS__UNINIT) {
-    tqDebug("s-task:%s start task by checking downstream tasks", pTask->id.idStr);
-    ASSERT(pTask->status.downstreamReady == 0);
-    tqStreamTaskRestoreCheckpoint(pMeta, pTask->id.streamId, pTask->id.taskId);
+    // tqDebug("s-task:%s start task by checking downstream tasks", pTask->id.idStr);
+    // ASSERT(pTask->status.downstreamReady == 0);
+    // tqStreamTaskRestoreCheckpoint(pMeta, pTask->id.streamId, pTask->id.taskId);
+    tqDebug("s-task:%s status:%s do nothing after receiving reset-task from mnode", pTask->id.idStr, pState->name);
  } else {
    tqDebug("s-task:%s status:%s do nothing after receiving reset-task from mnode", pTask->id.idStr, pState->name);
  }

@@ -1111,6 +1116,8 @@ int32_t tqStreamProcessReqCheckpointRsp(SStreamMeta* pMeta, SRpcMsg* pMsg) { ret

int32_t tqStreamProcessChkptReportRsp(SStreamMeta* pMeta, SRpcMsg* pMsg) { return doProcessDummyRspMsg(pMeta, pMsg); }

+int32_t tqStreamProcessConsensusChkptRsp2(SStreamMeta* pMeta, SRpcMsg* pMsg) { return doProcessDummyRspMsg(pMeta, pMsg); }
+
int32_t tqStreamProcessCheckpointReadyRsp(SStreamMeta* pMeta, SRpcMsg* pMsg) {
  SMStreamCheckpointReadyRspMsg* pRsp = pMsg->pCont;

@@ -1126,22 +1133,21 @@ int32_t tqStreamProcessCheckpointReadyRsp(SStreamMeta* pMeta, SRpcMsg* pMsg) {
  return TSDB_CODE_SUCCESS;
}

-int32_t tqStreamProcessConsensusChkptRsp(SStreamMeta* pMeta, SRpcMsg* pMsg) {
+int32_t tqStreamTaskProcessConsenChkptIdReq(SStreamMeta* pMeta, SRpcMsg* pMsg) {
  int32_t vgId = pMeta->vgId;
+  int32_t code = 0;

  char*   msg = POINTER_SHIFT(pMsg->pCont, sizeof(SMsgHead));
  int32_t len = pMsg->contLen - sizeof(SMsgHead);
-  SRpcMsg rsp = {.info = pMsg->info, .code = TSDB_CODE_SUCCESS};
  int64_t now = taosGetTimestampMs();

-  SRestoreCheckpointInfoRsp req = {0};
+  SRestoreCheckpointInfo req = {0};

  SDecoder decoder;
  tDecoderInit(&decoder, (uint8_t*)msg, len);

-  rsp.info.handle = NULL;
-  if (tDecodeRestoreCheckpointInfoRsp(&decoder, &req) < 0) {
-    // rsp.code = TSDB_CODE_MSG_DECODE_ERROR;  // disable it temporarily
-    tqError("vgId:%d failed to decode restore task checkpointId, code:%s", vgId, tstrerror(rsp.code));
+  if (tDecodeRestoreCheckpointInfo(&decoder, &req) < 0) {
+    tqError("vgId:%d failed to decode set consensus checkpointId req, code:%s", vgId, tstrerror(code));
    tDecoderClear(&decoder);
    return TSDB_CODE_SUCCESS;
  }

@@ -1150,7 +1156,7 @@ int32_t tqStreamProcessConsensusChkptRsp(SStreamMeta* pMeta, SRpcMsg* pMsg) {

  SStreamTask* pTask = streamMetaAcquireTask(pMeta, req.streamId, req.taskId);
  if (pTask == NULL) {
-    tqError("vgId:%d process restore checkpointId req, failed to acquire task:0x%x, it may have been dropped already",
+    tqError("vgId:%d process set consensus checkpointId req, failed to acquire task:0x%x, it may have been dropped already",
            pMeta->vgId, req.taskId);
    streamMetaAddFailedTask(pMeta, req.streamId, req.taskId);
    return TSDB_CODE_SUCCESS;

@@ -1172,16 +1178,25 @@ int32_t tqStreamProcessConsensusChkptRsp(SStreamMeta* pMeta, SRpcMsg* pMsg) {
  taosThreadMutexLock(&pTask->lock);
  ASSERT(pTask->chkInfo.checkpointId >= req.checkpointId);

+  if (pTask->chkInfo.consensusTransId >= req.transId) {
+    tqDebug("s-task:%s vgId:%d latest consensus transId:%d, expired consensus trans:%d, discard",
+            pTask->id.idStr, vgId, pTask->chkInfo.consensusTransId, req.transId);
+    taosThreadMutexUnlock(&pTask->lock);
+    streamMetaReleaseTask(pMeta, pTask);
+    return TSDB_CODE_SUCCESS;
+  }
+
  if (pTask->chkInfo.checkpointId != req.checkpointId) {
-    tqDebug("s-task:%s vgId:%d update the checkpoint from %" PRId64 " to %" PRId64, pTask->id.idStr, vgId,
-            pTask->chkInfo.checkpointId, req.checkpointId);
+    tqDebug("s-task:%s vgId:%d update the checkpoint from %" PRId64 " to %" PRId64 " transId:%d", pTask->id.idStr, vgId,
+            pTask->chkInfo.checkpointId, req.checkpointId, req.transId);
    pTask->chkInfo.checkpointId = req.checkpointId;
    tqSetRestoreVersionInfo(pTask);
  } else {
-    tqDebug("s-task:%s vgId:%d consensus-checkpointId:%" PRId64 " equals to current checkpointId, no need to update",
-            pTask->id.idStr, vgId, req.checkpointId);
+    tqDebug("s-task:%s vgId:%d consensus-checkpointId:%" PRId64 " equals to current id, transId:%d not update",
+            pTask->id.idStr, vgId, req.checkpointId, req.transId);
  }

+  pTask->chkInfo.consensusTransId = req.transId;
  taosThreadMutexUnlock(&pTask->lock);

  if (pMeta->role == NODE_ROLE_LEADER) {
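The guard added in the consensus-checkpoint handler makes the operation idempotent: a request whose `transId` is not newer than the last one applied is discarded before any state changes. A reduced model of that ordering check (standalone C with simplified types, not the actual TDengine structs):

```c
#include <stdbool.h>

typedef struct {
  long long checkpointId;
  int       consensusTransId;
} ChkInfo;

// Apply only strictly newer consensus transactions; stale or duplicated
// requests (transId <= the latest seen) are discarded so a delayed mnode
// message cannot roll the task state backwards.
bool shouldApplyConsensusReq(ChkInfo *info, long long reqCheckpointId, int reqTransId) {
  if (info->consensusTransId >= reqTransId) {
    return false;  // expired consensus trans, discard
  }
  if (info->checkpointId != reqCheckpointId) {
    info->checkpointId = reqCheckpointId;  // adopt the agreed checkpoint id
  }
  info->consensusTransId = reqTransId;  // always record the newest transId
  return true;
}
```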
@@ -1072,7 +1072,7 @@ static int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, SArray
  rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch;
  for (int i = 0; i < num_keys; ++i) {
    SIdxKey *idxKey = &((SIdxKey *)TARRAY_DATA(remainCols))[i];
-    SLastUpdateCtx *updCtx = (SLastUpdateCtx *)taosArrayGet(updCtxArray, i);
+    SLastUpdateCtx *updCtx = (SLastUpdateCtx *)taosArrayGet(updCtxArray, idxKey->idx);
    SRowKey *pRowKey = &updCtx->tsdbRowKey.key;
    SColVal *pColVal = &updCtx->colVal;
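The one-line `tsdbCacheUpdate` fix matters because `remainCols` can be a filtered subset of the update contexts, so position `i` in the subset no longer lines up with position `i` in `updCtxArray`; each `SIdxKey` carries the original index for exactly this case. A toy illustration of the aliasing bug (hypothetical data):

```c
#include <assert.h>

typedef struct { int idx; } SIdxKeyLike;  // carries the original position

int main(void) {
  int values[4] = {10, 11, 12, 13};    // stands in for updCtxArray
  SIdxKeyLike remain[2] = {{1}, {3}};  // filtered subset still pending

  for (int i = 0; i < 2; ++i) {
    // Indexing updCtxArray with the loop counter would read slots 0 and 1;
    // the saved original index reads the entries the keys actually refer to.
    int v = values[remain[i].idx];
    assert(v == 11 || v == 13);
  }
  return 0;
}
```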
@@ -630,6 +630,11 @@ int32_t vnodeProcessWriteMsg(SVnode *pVnode, SRpcMsg *pMsg, int64_t ver, SRpcMsg
        goto _err;
      }
    } break;
+    case TDMT_STREAM_CONSEN_CHKPT: {
+      if (pVnode->restored) {
+        tqProcessTaskConsenChkptIdReq(pVnode->pTq, pMsg);
+      }
+    } break;
    case TDMT_STREAM_TASK_PAUSE: {
      if (pVnode->restored && vnodeIsLeader(pVnode) &&
          tqProcessTaskPauseReq(pVnode->pTq, ver, pMsg->pCont, pMsg->contLen) < 0) {

@@ -647,11 +652,6 @@ int32_t vnodeProcessWriteMsg(SVnode *pVnode, SRpcMsg *pMsg, int64_t ver, SRpcMsg
        tqProcessTaskResetReq(pVnode->pTq, pMsg);
      }
    } break;
-    case TDMT_MND_STREAM_CHKPT_CONSEN_RSP: {
-      if (pVnode->restored) {
-        tqProcessTaskConsensusChkptRsp(pVnode->pTq, pMsg);
-      }
-    } break;
    case TDMT_VND_ALTER_CONFIRM:
      needCommit = pVnode->config.hashChange;
      if (vnodeProcessAlterConfirmReq(pVnode, ver, pReq, len, pRsp) < 0) {

@@ -738,7 +738,7 @@ int32_t vnodePreprocessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg) {
  return qWorkerPreprocessQueryMsg(pVnode->pQuery, pMsg, TDMT_SCH_QUERY == pMsg->msgType);
}

-int32_t vnodeProcessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo* pInfo) {
+int32_t vnodeProcessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
  vTrace("message in vnode query queue is processing");
  if ((pMsg->msgType == TDMT_SCH_QUERY || pMsg->msgType == TDMT_VND_TMQ_CONSUME ||
       pMsg->msgType == TDMT_VND_TMQ_CONSUME_PUSH) &&

@@ -1978,6 +1978,9 @@ _exit:
  return code;
}

+extern int32_t tsdbDisableAndCancelAllBgTask(STsdb *pTsdb);
+extern int32_t tsdbEnableBgTask(STsdb *pTsdb);
+
static int32_t vnodeProcessAlterConfigReq(SVnode *pVnode, int64_t ver, void *pReq, int32_t len, SRpcMsg *pRsp) {
  bool walChanged = false;
  bool tsdbChanged = false;

@@ -2075,7 +2078,14 @@ static int32_t vnodeProcessAlterConfigReq(SVnode *pVnode, int64_t ver, void *pRe
  }

  if (req.sttTrigger != -1 && req.sttTrigger != pVnode->config.sttTrigger) {
-    pVnode->config.sttTrigger = req.sttTrigger;
+    if (req.sttTrigger > 1 && pVnode->config.sttTrigger > 1) {
+      pVnode->config.sttTrigger = req.sttTrigger;
+    } else {
+      vnodeAWait(&pVnode->commitTask);
+      tsdbDisableAndCancelAllBgTask(pVnode->pTsdb);
+      pVnode->config.sttTrigger = req.sttTrigger;
+      tsdbEnableBgTask(pVnode->pTsdb);
+    }
  }

  if (req.minRows != -1 && req.minRows != pVnode->config.tsdbCfg.minRows) {
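The new `sttTrigger` branch separates a benign in-place change (old and new values both above 1) from one that crosses the single-stt boundary, where the pending commit must drain and background merges must stop before the value flips. A hedged sketch of that decision, with the vnode/tsdb calls stubbed out:

```c
// Stand-ins for vnodeAWait/tsdbDisableAndCancelAllBgTask/tsdbEnableBgTask.
static void waitForCommit(void) {}
static void disableBgTasks(void) {}
static void enableBgTasks(void) {}

static int sttTrigger = 4;  // current vnode config value

void applySttTrigger(int newTrigger) {
  if (newTrigger > 1 && sttTrigger > 1) {
    sttTrigger = newTrigger;  // both settings keep multiple stt files: flip in place
  } else {
    // Crossing to or from sttTrigger == 1 changes how stt files are merged,
    // so drain the pending commit and park background jobs around the switch.
    waitForCommit();
    disableBgTasks();
    sttTrigger = newTrigger;
    enableBgTasks();
  }
}
```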
@@ -128,6 +128,7 @@ typedef struct SExprSupp {
  SqlFunctionCtx* pCtx;
  int32_t* rowEntryInfoOffset;  // offset value for each row result cell info
  SFilterInfo* pFilterInfo;
+  bool hasWindowOrGroup;
} SExprSupp;

typedef enum {
@@ -77,6 +77,8 @@ SOperatorInfo* createAggregateOperatorInfo(SOperatorInfo* downstream, SAggPhysiN
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = false;
+
  SSDataBlock* pResBlock = createDataBlockFromDescNode(pAggNode->node.pOutputDataBlockDesc);
  initBasicInfo(&pInfo->binfo, pResBlock);

@@ -519,6 +521,7 @@ int32_t initAggSup(SExprSupp* pSup, SAggSupporter* pAggSup, SExprInfo* pExprInfo
  }

  for (int32_t i = 0; i < numOfCols; ++i) {
+    pSup->pCtx[i].hasWindowOrGroup = pSup->hasWindowOrGroup;
    if (pState) {
      pSup->pCtx[i].saveHandle.pBuf = NULL;
      pSup->pCtx[i].saveHandle.pState = pState;
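The `hasWindowOrGroup` flag introduced here is stamped once per operator (true for window/group operators, false for plain aggregate and projection) and fanned out by `initAggSup` into every function context. A compact model of the propagation (simplified structs, not the real ones):

```c
#include <stdbool.h>

typedef struct { bool hasWindowOrGroup; } FuncCtx;                         // cf. SqlFunctionCtx
typedef struct { bool hasWindowOrGroup; FuncCtx *pCtx; int n; } ExprSupp;  // cf. SExprSupp

// Operator constructors stamp the flag once on the expression supp...
void markWindowed(ExprSupp *pSup, bool windowed) { pSup->hasWindowOrGroup = windowed; }

// ...and initAggSup-style setup fans it out to each function context, so a
// builtin can tell whether its input arrives per window/group or per block.
void fanOutFlag(ExprSupp *pSup) {
  for (int i = 0; i < pSup->n; ++i) {
    pSup->pCtx[i].hasWindowOrGroup = pSup->hasWindowOrGroup;
  }
}
```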
@@ -207,6 +207,8 @@ SOperatorInfo* createCountwindowOperatorInfo(SOperatorInfo* downstream, SPhysiNo
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
  int32_t code = TSDB_CODE_SUCCESS;
  SCountWinodwPhysiNode* pCountWindowNode = (SCountWinodwPhysiNode*)physiNode;
@@ -66,6 +66,8 @@ SOperatorInfo* createEventwindowOperatorInfo(SOperatorInfo* downstream, SPhysiNo
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
  SEventWinodwPhysiNode* pEventWindowNode = (SEventWinodwPhysiNode*)physiNode;

  int32_t tsSlotId = ((SColumnNode*)pEventWindowNode->window.pTspk)->slotId;
@@ -1716,6 +1716,7 @@ SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput,
    pCtx->param = pFunct->pParam;
    pCtx->saveHandle.currentPage = -1;
    pCtx->pStore = pStore;
+    pCtx->hasWindowOrGroup = false;
  }

  for (int32_t i = 1; i < numOfOutput; ++i) {

@@ -2187,7 +2188,7 @@ int32_t buildGroupIdMapForAllTables(STableListInfo* pTableListInfo, SReadHandle*

    for (int i = 0; i < numOfTables; i++) {
      STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
-      info->groupId = info->uid;
+      info->groupId = groupByTbname ? info->uid : 0;

      taosHashPut(pTableListInfo->remainGroups, &(info->groupId), sizeof(info->groupId), &(info->uid),
                  sizeof(info->uid));
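The `buildGroupIdMapForAllTables` change keys group ids off the `groupByTbname` flag rather than unconditionally using the table uid, so that without partitioning by table name all tables land in one group. The intent, as a one-liner (assumed semantics):

```c
#include <stdint.h>
#include <stdbool.h>

// Assumed semantics: grouping by table name keeps one group per table
// (keyed by its uid); otherwise every table collapses into group 0.
static uint64_t pickGroupId(bool groupByTbname, uint64_t uid) {
  return groupByTbname ? uid : 0;
}
```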
@@ -909,12 +909,21 @@ void initBasicInfo(SOptrBasicInfo* pInfo, SSDataBlock* pBlock) {
  initResultRowInfo(&pInfo->resultRowInfo);
}

-static void* destroySqlFunctionCtx(SqlFunctionCtx* pCtx, int32_t numOfOutput) {
+static void* destroySqlFunctionCtx(SqlFunctionCtx* pCtx, SExprInfo* pExpr, int32_t numOfOutput) {
  if (pCtx == NULL) {
    return NULL;
  }

  for (int32_t i = 0; i < numOfOutput; ++i) {
+    if (pExpr != NULL) {
+      SExprInfo* pExprInfo = &pExpr[i];
+      for (int32_t j = 0; j < pExprInfo->base.numOfParams; ++j) {
+        if (pExprInfo->base.pParam[j].type == FUNC_PARAM_TYPE_VALUE) {
+          taosMemoryFree(pCtx[i].input.pData[j]);
+          taosMemoryFree(pCtx[i].input.pColumnDataAgg[j]);
+        }
+      }
+    }
    for (int32_t j = 0; j < pCtx[i].numOfParams; ++j) {
      taosVariantDestroy(&pCtx[i].param[j].param);
    }

@@ -947,7 +956,7 @@ int32_t initExprSupp(SExprSupp* pSup, SExprInfo* pExprInfo, int32_t numOfExpr, S
}

void cleanupExprSupp(SExprSupp* pSupp) {
-  destroySqlFunctionCtx(pSupp->pCtx, pSupp->numOfExprs);
+  destroySqlFunctionCtx(pSupp->pCtx, pSupp->pExprInfo, pSupp->numOfExprs);
  if (pSupp->pExprInfo != NULL) {
    destroyExprInfo(pSupp->pExprInfo, pSupp->numOfExprs);
    taosMemoryFreeClear(pSupp->pExprInfo);
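`destroySqlFunctionCtx` now receives the expression list so it can free the per-parameter input buffers allocated for constant (`FUNC_PARAM_TYPE_VALUE`) arguments, which were previously leaked. A reduced model of that teardown (hypothetical types):

```c
#include <stdlib.h>
#include <stdbool.h>

typedef struct {
  void *pData;           // materialized input for a constant parameter
  void *pColumnDataAgg;  // its pre-aggregated companion buffer
} ParamInput;

// Free only what the setup path allocated: constant (value) parameters get
// per-slot input buffers, so teardown must walk the same parameter list.
void destroyParamInputs(ParamInput *inputs, const bool *isValueParam, int numOfParams) {
  for (int j = 0; j < numOfParams; ++j) {
    if (isValueParam[j]) {
      free(inputs[j].pData);
      free(inputs[j].pColumnDataAgg);
    }
  }
}
```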
@@ -502,6 +502,8 @@ SOperatorInfo* createGroupOperatorInfo(SOperatorInfo* downstream, SAggPhysiNode*
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
  SSDataBlock* pResBlock = createDataBlockFromDescNode(pAggNode->node.pOutputDataBlockDesc);
  initBasicInfo(&pInfo->binfo, pResBlock);
@@ -99,6 +99,7 @@ SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhys
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = false;
  pOperator->pTaskInfo = pTaskInfo;

  int32_t numOfCols = 0;

@@ -409,6 +410,7 @@ SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhy
  pOperator->pTaskInfo = pTaskInfo;

  SExprSupp* pSup = &pOperator->exprSupp;
+  pSup->hasWindowOrGroup = false;

  SIndefRowsFuncPhysiNode* pPhyNode = (SIndefRowsFuncPhysiNode*)pNode;

@@ -724,9 +726,12 @@ static void setPseudoOutputColInfo(SSDataBlock* pResult, SqlFunctionCtx* pCtx, S

int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBlock* pSrcBlock, SqlFunctionCtx* pCtx,
                              int32_t numOfOutput, SArray* pPseudoList) {
+  int32_t code = TSDB_CODE_SUCCESS;
  setPseudoOutputColInfo(pResult, pCtx, pPseudoList);
  pResult->info.dataLoad = 1;

+  SArray* processByRowFunctionCtx = NULL;
+
  if (pSrcBlock == NULL) {
    for (int32_t k = 0; k < numOfOutput; ++k) {
      int32_t outputSlotId = pExpr[k].base.resSchema.slotId;

@@ -743,7 +748,7 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
    }

    pResult->info.rows = 1;
-    return TSDB_CODE_SUCCESS;
+    goto _exit;
  }

  if (pResult != pSrcBlock) {

@@ -816,10 +821,10 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
      SColumnInfoData idata = {.info = pResColData->info, .hasNull = true};

      SScalarParam dest = {.columnData = &idata};
-      int32_t code = scalarCalculate(pExpr[k].pExpr->_optrRoot.pRootNode, pBlockList, &dest);
+      code = scalarCalculate(pExpr[k].pExpr->_optrRoot.pRootNode, pBlockList, &dest);
      if (code != TSDB_CODE_SUCCESS) {
        taosArrayDestroy(pBlockList);
-        return code;
+        goto _exit;
      }

      int32_t startOffset = createNewColModel ? 0 : pResult->info.rows;

@@ -852,11 +857,21 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
        pfCtx->pDstBlock = pResult;
      }

-      int32_t code = pfCtx->fpSet.process(pfCtx);
+      code = pfCtx->fpSet.process(pfCtx);
      if (code != TSDB_CODE_SUCCESS) {
-        return code;
+        goto _exit;
      }
      numOfRows = pResInfo->numOfRes;
+      if (fmIsProcessByRowFunc(pfCtx->functionId)) {
+        if (NULL == processByRowFunctionCtx) {
+          processByRowFunctionCtx = taosArrayInit(1, sizeof(SqlFunctionCtx*));
+          if (!processByRowFunctionCtx) {
+            code = terrno;
+            goto _exit;
+          }
+        }
+        taosArrayPush(processByRowFunctionCtx, &pfCtx);
+      }
    } else if (fmIsAggFunc(pfCtx->functionId)) {
      // selective value output should be set during corresponding function execution
      if (fmIsSelectValueFunc(pfCtx->functionId)) {

@@ -886,10 +901,10 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
      SColumnInfoData idata = {.info = pResColData->info, .hasNull = true};

      SScalarParam dest = {.columnData = &idata};
-      int32_t code = scalarCalculate((SNode*)pExpr[k].pExpr->_function.pFunctNode, pBlockList, &dest);
+      code = scalarCalculate((SNode*)pExpr[k].pExpr->_function.pFunctNode, pBlockList, &dest);
      if (code != TSDB_CODE_SUCCESS) {
        taosArrayDestroy(pBlockList);
-        return code;
+        goto _exit;
      }

      int32_t startOffset = createNewColModel ? 0 : pResult->info.rows;

@@ -905,9 +920,21 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
    }
  }

+  if (processByRowFunctionCtx && taosArrayGetSize(processByRowFunctionCtx) > 0) {
+    SqlFunctionCtx** pfCtx = taosArrayGet(processByRowFunctionCtx, 0);
+    code = (*pfCtx)->fpSet.processFuncByRow(processByRowFunctionCtx);
+    if (code != TSDB_CODE_SUCCESS) {
+      goto _exit;
+    }
+    numOfRows = (*pfCtx)->resultInfo->numOfRes;
+  }
  if (!createNewColModel) {
    pResult->info.rows += numOfRows;
  }

-  return TSDB_CODE_SUCCESS;
+_exit:
+  if (processByRowFunctionCtx) {
+    taosArrayDestroy(processByRowFunctionCtx);
+    processByRowFunctionCtx = NULL;
+  }
+  return code;
}
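The `projectApplyFunctions` rework replaces every early `return` with `goto _exit` so the newly introduced `processByRowFunctionCtx` array is released on all paths, and it defers process-by-row functions until all columns have been produced. The cleanup shape, reduced to standalone C:

```c
#include <stdlib.h>

// All failure paths converge on _exit so the lazily created context array
// is released exactly once, mirroring processByRowFunctionCtx in the patch.
int applyFunctions(int numOfOutput) {
  int    code = 0;
  void **collected = NULL;

  for (int k = 0; k < numOfOutput; ++k) {
    if (k == 3) {  // imagine a failing scalar/agg invocation here
      code = -1;
      goto _exit;  // no early return: cleanup below still runs
    }
  }

_exit:
  free(collected);  // free(NULL) is a no-op, so this is safe on all paths
  collected = NULL;
  return code;
}
```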
@@ -2187,6 +2187,17 @@ static void rebuildDeleteBlockData(SSDataBlock* pBlock, STimeWindow* pWindow, co
  taosMemoryFree(p);
}

+static int32_t colIdComparFn(const void* param1, const void* param2) {
+  int32_t p1 = *(int32_t*)param1;
+  int32_t p2 = *(int32_t*)param2;
+
+  if (p1 == p2) {
+    return 0;
+  } else {
+    return (p1 < p2) ? -1 : 1;
+  }
+}
+
static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock, STimeWindow* pTimeWindow, bool filter) {
  SDataBlockInfo* pBlockInfo = &pInfo->pRes->info;
  SOperatorInfo* pOperator = pInfo->pStreamScanOp;

@@ -2203,6 +2214,8 @@ static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock
  STableScanInfo* pTableScanInfo = pInfo->pTableScanOp->info;
  pBlockInfo->id.groupId = tableListGetTableGroupId(pTableScanInfo->base.pTableListInfo, pBlock->info.id.uid);

+  SArray* pColList = taosArrayInit(4, sizeof(int32_t));
+
  // todo extract method
  for (int32_t i = 0; i < taosArrayGetSize(pInfo->matchInfo.pList); ++i) {
    SColMatchItem* pColMatchInfo = taosArrayGet(pInfo->matchInfo.pList, i);

@@ -2217,6 +2230,7 @@ static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock
      SColumnInfoData* pDst = taosArrayGet(pInfo->pRes->pDataBlock, pColMatchInfo->dstSlotId);
      colDataAssign(pDst, pResCol, pBlock->info.rows, &pInfo->pRes->info);
      colExists = true;
+      taosArrayPush(pColList, &pColMatchInfo->dstSlotId);
      break;
    }
  }

@@ -2225,6 +2239,7 @@ static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock
    if (!colExists) {
      SColumnInfoData* pDst = taosArrayGet(pInfo->pRes->pDataBlock, pColMatchInfo->dstSlotId);
      colDataSetNNULL(pDst, 0, pBlockInfo->rows);
+      taosArrayPush(pColList, &pColMatchInfo->dstSlotId);
    }
  }

@@ -2240,7 +2255,34 @@ static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock

    // reset the error code.
    terrno = 0;
+
+    for (int32_t i = 0; i < pInfo->numOfPseudoExpr; ++i) {
+      taosArrayPush(pColList, &pInfo->pPseudoExpr[i].base.resSchema.slotId);
+    }
  }

+  taosArraySort(pColList, colIdComparFn);
+
+  int32_t i = 0, j = 0;
+  while (i < taosArrayGetSize(pColList)) {
+    int32_t slot1 = *(int32_t*)taosArrayGet(pColList, i);
+    if (slot1 > j) {
+      SColumnInfoData* pDst = taosArrayGet(pInfo->pRes->pDataBlock, j);
+      colDataSetNNULL(pDst, 0, pBlockInfo->rows);
+      j += 1;
+    } else {
+      i += 1;
+      j += 1;
+    }
+  }
+
+  while (j < taosArrayGetSize(pInfo->pRes->pDataBlock)) {
+    SColumnInfoData* pDst = taosArrayGet(pInfo->pRes->pDataBlock, j);
+    colDataSetNNULL(pDst, 0, pBlockInfo->rows);
+    j += 1;
+  }
+
+  taosArrayDestroy(pColList);
+
  if (filter) {
    doFilter(pInfo->pRes, pOperator->exprSupp.pFilterInfo, NULL);
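`setBlockIntoRes` now records every destination slot that actually received data (matched columns plus pseudo columns), sorts the list, and NULL-fills any result column that nothing wrote. The two-pointer walk, extracted into standalone C with hypothetical arrays:

```c
#include <stdio.h>

// written[]: sorted slot ids that got data; total: number of result columns.
void fillMissingSlots(const int *written, int numWritten, int total) {
  int i = 0, j = 0;
  while (i < numWritten) {
    if (written[i] > j) {
      printf("slot %d -> NULL-fill\n", j);  // colDataSetNNULL in the patch
      j++;
    } else {  // slot j was written; advance both cursors
      i++;
      j++;
    }
  }
  while (j < total) {  // trailing columns with no writer at all
    printf("slot %d -> NULL-fill\n", j);
    j++;
  }
}

int main(void) {
  int written[3] = {0, 2, 3};
  fillMissingSlots(written, 3, 5);  // fills slots 1 and 4
  return 0;
}
```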
@@ -1236,8 +1236,15 @@ static void copyIntervalDeleteKey(SSHashObj* pMap, SArray* pWins) {

static SSDataBlock* buildIntervalResult(SOperatorInfo* pOperator) {
  SStreamIntervalOperatorInfo* pInfo = pOperator->info;
+  SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
  uint16_t opType = pOperator->operatorType;

+  // check if query task is closed or not
+  if (isTaskKilled(pTaskInfo)) {
+    return NULL;
+  }
+
  if (IS_FINAL_INTERVAL_OP(pOperator)) {
    doBuildPullDataBlock(pInfo->pPullWins, &pInfo->pullIndex, pInfo->pPullDataRes);
    if (pInfo->pPullDataRes->info.rows != 0) {

@@ -1546,6 +1553,7 @@ SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream,
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = true;
  pOperator->pTaskInfo = pTaskInfo;
  SStorageAPI* pAPI = &pTaskInfo->storageAPI;

@@ -3121,6 +3129,7 @@ _error:
static void clearStreamSessionOperator(SStreamSessionAggOperatorInfo* pInfo) {
  tSimpleHashClear(pInfo->streamAggSup.pResultRows);
  pInfo->streamAggSup.stateStore.streamStateSessionClear(pInfo->streamAggSup.pState);
+  pInfo->clearState = false;
}

void deleteSessionWinState(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock, SSHashObj* pMapUpdate,

@@ -3170,7 +3179,6 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
      // semi session operator clear disk buffer
      clearStreamSessionOperator(pInfo);
      setStreamOperatorCompleted(pOperator);
-      pInfo->clearState = false;
      return NULL;
    }
  }

@@ -4209,6 +4217,8 @@ SOperatorInfo* createStreamIntervalOperatorInfo(SOperatorInfo* downstream, SPhys
  pInfo->ignoreExpiredDataSaved = false;

  SExprSupp* pSup = &pOperator->exprSupp;
+  pSup->hasWindowOrGroup = true;
+
  initBasicInfo(&pInfo->binfo, pResBlock);
  initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window);
@@ -1213,6 +1213,8 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SIntervalPh
  initBasicInfo(&pInfo->binfo, pResBlock);

  SExprSupp* pSup = &pOperator->exprSupp;
+  pSup->hasWindowOrGroup = true;
+
  pInfo->primaryTsIndex = ((SColumnNode*)pPhyNode->window.pTspk)->slotId;

  size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;

@@ -1461,6 +1463,7 @@ SOperatorInfo* createStatewindowOperatorInfo(SOperatorInfo* downstream, SStateWi
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = true;
  int32_t tsSlotId = ((SColumnNode*)pStateNode->window.pTspk)->slotId;
  SColumnNode* pColNode = (SColumnNode*)(pStateNode->pStateKey);

@@ -1558,6 +1561,8 @@ SOperatorInfo* createSessionAggOperatorInfo(SOperatorInfo* downstream, SSessionW
    goto _error;
  }

+  pOperator->exprSupp.hasWindowOrGroup = true;
+
  size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
  initResultSizeInfo(&pOperator->resultInfo, 4096);

@@ -1845,6 +1850,7 @@ SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream,

  SIntervalAggOperatorInfo* iaInfo = miaInfo->intervalAggOperatorInfo;
  SExprSupp* pSup = &pOperator->exprSupp;
+  pSup->hasWindowOrGroup = true;

  int32_t code = filterInitFromNode((SNode*)pNode->window.node.pConditions, &pOperator->exprSupp.pFilterInfo, 0);
  if (code != TSDB_CODE_SUCCESS) {

@@ -2147,6 +2153,7 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SMerge
  pIntervalInfo->binfo.outputTsOrder = pIntervalPhyNode->window.node.outputTsOrder;

  SExprSupp* pExprSupp = &pOperator->exprSupp;
+  pExprSupp->hasWindowOrGroup = true;

  size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
  initResultSizeInfo(&pOperator->resultInfo, 4096);
@@ -50,6 +50,7 @@ typedef struct SBuiltinFuncDefinition {
  const char* pStateFunc;
  FCreateMergeFuncParameters createMergeParaFuc;
  FEstimateReturnRows estimateReturnRowsFunc;
+  processFuncByRow processFuncByRow;
} SBuiltinFuncDefinition;

extern const SBuiltinFuncDefinition funcMgtBuiltins[];
@@ -133,6 +133,7 @@ int32_t getApercentileMaxSize();
bool getDiffFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool diffFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo);
int32_t diffFunction(SqlFunctionCtx* pCtx);
+int32_t diffFunctionByRow(SArray* pCtx);

bool getDerivativeFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool derivativeFuncSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo);
@@ -57,6 +57,7 @@ extern "C" {
#define FUNC_MGT_PRIMARY_KEY_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(28)
#define FUNC_MGT_TSMA_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(29)
#define FUNC_MGT_COUNT_LIKE_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(30)  // funcs that should also return 0 when no rows found
+#define FUNC_MGT_PROCESS_BY_ROW FUNC_MGT_FUNC_CLASSIFICATION_MASK(31)

#define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)
@@ -67,7 +67,7 @@ typedef struct tMemBucket {
  SHashObj *groupPagesMap;  // disk page map for different groups;
} tMemBucket;

-tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval);
+tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval, bool hasWindowOrGroup);

void tMemBucketDestroy(tMemBucket *pBucket);
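`FUNC_MGT_PROCESS_BY_ROW` follows the existing classification-mask pattern, and `fmIsProcessByRowFunc` (used in the project operator above) presumably tests it the same way as the other masks. A self-contained sketch of that test, with the mask macro's expansion assumed:

```c
#include <stdbool.h>
#include <stdint.h>

// Assumed expansion of FUNC_MGT_FUNC_CLASSIFICATION_MASK for illustration.
#define FUNC_MGT_FUNC_CLASSIFICATION_MASK(n) (1ULL << (n))
#define FUNC_MGT_PROCESS_BY_ROW FUNC_MGT_FUNC_CLASSIFICATION_MASK(31)

// Mirrors FUNC_MGT_TEST_MASK from the header above; the real
// fmIsProcessByRowFunc first resolves a function id to its classification.
static bool isProcessByRow(uint64_t classification) {
  return (classification & FUNC_MGT_PROCESS_BY_ROW) != 0;
}
```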