Merge branch '3.0' into fix/3_liaohj
This commit is contained in: commit 3c5ca17737
@@ -25,7 +25,7 @@ create_definition:
     col_name column_definition
 
 column_definition:
-    type_name [comment 'string_value']
+    type_name [comment 'string_value'] [PRIMARY KEY]
 
 table_options:
     table_option ...
@@ -41,11 +41,12 @@ table_option: {
 **More explanations**
 
 1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
-2. The maximum length of the table name is 192 bytes.
-3. The maximum length of each row is 48k(64k since version 3.0.5.0) bytes, please note that the extra 2 bytes used by each BINARY/NCHAR/GEOMETRY column are also counted.
-4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
-5. The maximum length in bytes must be specified when using BINARY/NCHAR/GEOMETRY types.
-6. Escape character "\`" can be used to avoid the conflict between table names and reserved keywords, above rules will be bypassed when using escape character on table names, but the upper limit for the name length is still valid. The table names specified using escape character are case sensitive.
+2. In addition to the timestamp primary key column, an additional primary key column can be specified using the `PRIMARY KEY` keyword. The second column specified as the primary key must be of type integer or string (VARCHAR).
+3. The maximum length of the table name is 192 bytes.
+4. The maximum length of each row is 48 KB (64 KB since version 3.0.5.0); note that the extra 2 bytes used by each BINARY/NCHAR/GEOMETRY column are also counted.
+5. The name of a subtable can only consist of characters from the English alphabet, digits, and underscores. Table names can't start with a digit. Table names are case insensitive.
+6. The maximum length in bytes must be specified when using BINARY/NCHAR/GEOMETRY types.
+7. The escape character "\`" can be used to avoid conflicts between table names and reserved keywords. The above rules are bypassed when using the escape character on table names, but the upper limit for the name length is still valid. Table names specified using the escape character are case sensitive.
 
    For example \`aBc\` and \`abc\` are different table names but `abc` and `aBc` are same table names because they are both converted to `abc` internally.
    Only ASCII visible characters can be used with escape character.
 
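A minimal sketch of the composite primary key syntax introduced above (table and column names are hypothetical):

```sql
-- Hypothetical table: ts is the timestamp primary key; deviceid is the
-- additional PRIMARY KEY column, which must be integer or varchar
CREATE TABLE meters_pk (
    ts       TIMESTAMP,
    deviceid INT PRIMARY KEY,
    voltage  FLOAT
);
```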
@@ -107,6 +108,7 @@ You can perform the following modifications on existing tables:
 2. DROP COLUMN: deletes a column from the supertable.
 3. MODIFY COLUMN: changes the length of the data type specified for the column. Note that you can only specify a length greater than the current length.
 4. RENAME COLUMN: renames a specified column in the table.
+5. The primary key column of a table cannot be added, modified, or deleted using ADD/DROP COLUMN.
 
 ### Add a Column
 
@@ -147,6 +147,7 @@ Modifications to the table schema of a supertable take effect on all subtables within the supertable.
 - DROP TAG: deletes a tag from the supertable. When you delete a tag from a supertable, it is automatically deleted from all subtables within the supertable.
 - MODIFY TAG: modifies the definition of a tag in the supertable. You can use this keyword to change the length of a BINARY or NCHAR tag column. Note that you can only specify a length greater than the current length.
 - RENAME TAG: renames a specified tag in the supertable. When you rename a tag in a supertable, it is automatically renamed in all subtables within the supertable.
+- Like ordinary tables, the primary key column of a supertable cannot be added, modified, or deleted using ADD/DROP COLUMN.
 
 ### Add a Column
 
@@ -57,6 +57,7 @@ INSERT INTO
 ```
 
 6. However, an INSERT statement that writes data to multiple subtables can succeed for some tables and fail for others. This situation is caused because vnodes perform write operations independently of each other. One vnode failing to write data does not affect the ability of other vnodes to write successfully.
+7. The primary key column value must be specified and cannot be NULL.
 
 **Normal Syntax**
 1. The USING clause automatically creates the specified subtable if it does not exist. If it's unknown whether the table already exists, the table can be created automatically while inserting using the SQL statement below. To use this functionality, a STable must be used as template and tag values must be provided. Any tags that you do not specify will be assigned a null value.
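As a sketch of the auto-create behavior described above (the STable, subtable, and tag values are hypothetical):

```sql
-- Creates subtable d1001 from STable meters on first insert;
-- any unspecified tags would be assigned NULL
INSERT INTO d1001 USING meters TAGS ('California.SanFrancisco', 2)
VALUES (NOW, 10.3);
```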
@@ -39,7 +39,7 @@ select_expr: {
 
 from_clause: {
     table_reference [, table_reference] ...
-  | join_clause [, join_clause] ...
+  | table_reference join_clause [, join_clause] ...
 }
 
 table_reference:
@@ -52,7 +52,7 @@ table_expr: {
 }
 
 join_clause:
-    table_reference [INNER] JOIN table_reference ON condition
+    [INNER|LEFT|RIGHT|FULL] [OUTER|SEMI|ANTI|ASOF|WINDOW] JOIN table_reference [ON condition] [WINDOW_OFFSET(start_offset, end_offset)] [JLIMIT jlimit_num]
 
 window_clause: {
     SESSION(ts_col, tol_val)
@@ -408,9 +408,11 @@ SELECT AVG(CASE WHEN voltage < 200 or voltage > 250 THEN 220 ELSE voltage END) FROM meters
 
 ## JOIN
 
-TDengine supports the `INTER JOIN` based on the timestamp primary key, that is, the `JOIN` condition must contain the timestamp primary key. As long as the requirement of timestamp-based primary key is met, `INTER JOIN` can be made between normal tables, sub-tables, super tables and sub-queries at will, and there is no limit on the number of tables, primary key and other conditions must be combined with `AND` operator.
+Before version 3.3.0.0, TDengine only supported inner join queries. Since version 3.3.0.0, TDengine supports a wider range of JOIN types, including LEFT JOIN, RIGHT JOIN, FULL JOIN, SEMI JOIN, and ANTI-SEMI JOIN from traditional databases, as well as ASOF JOIN and WINDOW JOIN for time-series databases. JOIN operations are supported between subtables, normal tables, super tables, and subqueries.
 
-For standard tables:
+### Examples
+
+INNER JOIN between normal tables:
 
 ```sql
 SELECT *
@@ -418,23 +420,23 @@ FROM temp_tb_1 t1, pressure_tb_1 t2
 WHERE t1.ts = t2.ts
 ```
 
-For supertables:
+LEFT JOIN between super tables:
 
 ```sql
 SELECT *
-FROM temp_stable t1, temp_stable t2
-WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
+FROM temp_stable t1 LEFT JOIN temp_stable t2
+ON t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
 ```
 
-For sub-table and super table:
+LEFT ASOF JOIN between a child table and a super table:
 
 ```sql
 SELECT *
-FROM temp_ctable t1, temp_stable t2
-WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
+FROM temp_ctable t1 LEFT ASOF JOIN temp_stable t2
+ON t1.ts = t2.ts AND t1.deviceid = t2.deviceid;
 ```
 
-Similarly, join operations can be performed on the result sets of multiple subqueries.
+For more information about JOIN operations, please refer to the page [TDengine Join](../join).
 
 ## Nested Query
 
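A rough sketch of the join options unique to the extended grammar (table names are hypothetical, and exact option support may vary by version):

```sql
-- For each row of t1, keep at most one matching row of t2 (JLIMIT 1)
-- at the closest earlier-or-equal timestamp
SELECT t1.ts, t1.temp, t2.pressure
FROM temp_ctable t1 LEFT ASOF JOIN pressure_ctable t2 JLIMIT 1;
```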
@@ -34,6 +34,13 @@ SELECT * FROM information_schema.INS_INDEXES
 
 You can also add filter conditions to limit the results.
 
+```sql
+SHOW INDEXES FROM tbl_name [FROM db_name];
+SHOW INDEXES FROM [db_name.]tbl_name;
+```
+
+Use the `SHOW INDEXES` command to show indexes that have been created for the specified database or table.
+
 ## Detailed Specification
 
 1. Indexes can improve query performance significantly if they are used properly. The operators supported by tag index include `=`, `>`, `>=`, `<`, `<=`. If you use these operators with tags, indexes can improve query performance significantly. However, for operators not in this scope, indexes don't help. More operators will be added in the future.
@@ -503,38 +503,38 @@ TO_CHAR(ts, format_str_literal)
 
 **Supported Formats**
 
 | **Format**          | **Comment**                                    | **example**               |
 | ------------------- | ---------------------------------------------- | ------------------------- |
 | AM,am,PM,pm         | Meridiem indicator (without periods)           | 07:00:00am                |
 | A.M.,a.m.,P.M.,p.m. | Meridiem indicator (with periods)              | 07:00:00a.m.              |
 | YYYY,yyyy           | year, 4 or more digits                         | 2023-10-10                |
 | YYY,yyy             | year, last 3 digits                            | 023-10-10                 |
 | YY,yy               | year, last 2 digits                            | 23-10-10                  |
 | Y,y                 | year, last digit                               | 3-10-10                   |
 | MONTH               | full uppercase of month                        | 2023-JANUARY-01           |
 | Month               | full capitalized month                         | 2023-January-01           |
 | month               | full lowercase of month                        | 2023-january-01           |
 | MON                 | abbreviated uppercase of month (3 char)        | JAN, SEP                  |
 | Mon                 | abbreviated capitalized month                  | Jan, Sep                  |
 | mon                 | abbreviated lowercase of month                 | jan, sep                  |
 | MM,mm               | month number 01-12                             | 2023-01-01                |
 | DD,dd               | month day, 01-31                               |                           |
 | DAY                 | full uppercase of week day                     | MONDAY                    |
 | Day                 | full capitalized week day                      | Monday                    |
 | day                 | full lowercase of week day                     | monday                    |
 | DY                  | abbreviated uppercase of week day              | MON                       |
 | Dy                  | abbreviated capitalized week day               | Mon                       |
 | dy                  | abbreviated lowercase of week day              | mon                       |
 | DDD                 | year day, 001-366                              |                           |
 | D,d                 | week day number, 1-7, Sunday(1) to Saturday(7) |                           |
 | HH24,hh24           | hour of day, 00-23                             | 2023-01-30 23:59:59       |
 | hh12,HH12, hh, HH   | hour of day, 01-12                             | 2023-01-30 12:59:59PM     |
 | MI,mi               | minute, 00-59                                  |                           |
 | SS,ss               | second, 00-59                                  |                           |
 | MS,ms               | millisecond, 000-999                           |                           |
 | US,us               | microsecond, 000000-999999                     |                           |
 | NS,ns               | nanosecond, 000000000-999999999                |                           |
 | TZH,tzh             | time zone hour                                 | 2023-01-30 11:59:59PM +08 |
 
 **More explanations**:
 - The output of `Month`, `Day`, etc. is left-aligned and padded to the width of the longest name, like `2023-OCTOBER -01` and `2023-SEPTEMBER-01`; `September` is the longest month name and gets no padding. Week days are similar.
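A quick sketch of the formats above in use (the table name is hypothetical):

```sql
SELECT TO_CHAR(ts, 'YYYY-MM-DD HH24:MI:SS') FROM meters LIMIT 1;
```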
@@ -955,6 +955,7 @@ FIRST(expr)
 - FIRST(\*) can be used to get the first non-null value of all columns; When querying a super table and multiResultFunctionStarReturnTags is set to 0 (default), FIRST(\*) only returns columns of super table; When set to 1, returns columns and tags of the super table.
 - NULL will be returned if the values of the specified column are all NULL.
 - A result will NOT be returned if the columns in the result set are all NULL.
+- For a table with composite primary key, the data with the smallest primary key value is returned.
 
 ### INTERP
 
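A sketch of `FIRST` applied per child table of a super table (names are hypothetical):

```sql
-- First non-null voltage for each child table of the super table
SELECT tbname, FIRST(voltage) FROM meters PARTITION BY tbname;
```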
@@ -988,6 +989,7 @@ ignore_null_values: {
 - `INTERP` can be applied to supertable by interpolating primary key sorted data of all its childtables. It can also be used with `partition by tbname` when applied to supertable to generate interpolation on each single timeline.
 - Pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported after version 3.0.2.0).
 - Pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether the results are original records or data points generated by the interpolation algorithm (supported after version 3.0.3.0).
+- For a table with composite primary key, only the data with the smallest primary key value is used to generate interpolation.
 
 **Example**
 
@@ -1017,6 +1019,7 @@ LAST(expr)
 - LAST(\*) can be used to get the last non-NULL value of all columns; When querying a super table and multiResultFunctionStarReturnTags is set to 0 (default), LAST(\*) only returns columns of super table; When set to 1, returns columns and tags of the super table.
 - If the values of a column in the result set are all NULL, NULL is returned for that column; if all columns in the result are all NULL, no result will be returned.
 - When it's used on a STable, if there are multiple values with the same timestamp in the result set, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times.
+- For a table with composite primary key, the data with the largest primary key value is returned.
 
 
 ### LAST_ROW
@@ -1038,6 +1041,7 @@ LAST_ROW(expr)
 - LAST_ROW(\*) can be used to get the last value of all columns; When querying a super table and multiResultFunctionStarReturnTags is set to 0 (default), LAST_ROW(\*) only returns columns of super table; When set to 1, returns columns and tags of the super table.
 - When it's used on a STable, if there are multiple values with the same timestamp in the result set, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times.
 - Can't be used with `INTERVAL`.
+- Like `LAST`, the data with the largest primary key value is returned for a table with composite primary key.
 
 ### MAX
 
@@ -1144,7 +1148,7 @@ TOP(expr, k)
 UNIQUE(expr)
 ```
 
-**Description**: The values that occur the first time in the specified column. The effect is similar to `distinct` keyword.
+**Description**: The values that occur for the first time in the specified column. The effect is similar to the `distinct` keyword. For a table with composite primary key, only the data with the smallest primary key value is returned.
 
 **Return value type**: Same as the data type of the column being operated upon
 
@@ -1190,7 +1194,7 @@ ignore_negative: {
 }
 ```
 
-**Description**: The derivative of a specific column. The time rage can be specified by parameter `time_interval`, the minimum allowed time range is 1 second (1s); the value of `ignore_negative` can be 0 or 1, 1 means negative values are ignored.
+**Description**: The derivative of a specific column. The time range can be specified by parameter `time_interval`; the minimum allowed time range is 1 second (1s). The value of `ignore_negative` can be 0 or 1, where 1 means negative values are ignored. For tables with composite primary key, the data with the smallest primary key value is used to calculate the derivative.
 
 **Return value type**: DOUBLE
 
@@ -1213,7 +1217,7 @@ ignore_negative: {
 }
 ```
 
-**Description**: The different of each row with its previous row for a specific column. `ignore_negative` can be specified as 0 or 1, the default value is 1 if it's not specified. `1` means negative values are ignored.
+**Description**: The difference of each row with its previous row for a specific column. `ignore_negative` can be specified as 0 or 1; the default value is 1 if it's not specified. `1` means negative values are ignored. For tables with composite primary key, the data with the smallest primary key value is used to calculate the difference.
 
 **Return value type**: Same as the data type of the column being operated upon
 
@@ -1233,7 +1237,7 @@ ignore_negative: {
 IRATE(expr)
 ```
 
-**Description**: instantaneous rate on a specific column. The last two samples in the specified time range are used to calculate instantaneous rate. If the last sample value is smaller, then only the last sample value is used instead of the difference between the last two sample values.
+**Description**: Instantaneous rate on a specific column. The last two samples in the specified time range are used to calculate the instantaneous rate. If the last sample value is smaller, then only the last sample value is used instead of the difference between the last two sample values. For tables with composite primary key, the data with the smallest primary key value is used to calculate the rate.
 
 **Return value type**: DOUBLE
 
@@ -1323,7 +1327,7 @@ STATEDURATION(expr, oper, val, unit)
 TWA(expr)
 ```
 
-**Description**: Time weighted average on a specific column within a time range
+**Description**: Time-weighted average on a specific column within a time range. For tables with composite primary key, the data with the smallest primary key value is used to calculate the average.
 
 **Return value type**: DOUBLE
 
@@ -11,13 +11,14 @@ Because stream processing is built in to TDengine, you are no longer reliant on third-party tools.
 ## Create a Stream
 
 ```sql
-CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name SUBTABLE(expression) AS subquery
+CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
 stream_options: {
  TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
  WATERMARK      time
  IGNORE EXPIRED [0|1]
  DELETE_MARK    time
  FILL_HISTORY   [0|1]
+ IGNORE UPDATE  [0|1]
 }
 
 ```
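A sketch combining some of the options above, including the new `IGNORE UPDATE` (stream, table, and column names are hypothetical):

```sql
CREATE STREAM IF NOT EXISTS avg_vol_s
  TRIGGER WINDOW_CLOSE
  IGNORE UPDATE 1
  INTO avg_vol AS
    SELECT _wstart, AVG(voltage) AS avg_voltage
    FROM meters PARTITION BY tbname INTERVAL(1m);
```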
@@ -32,7 +33,7 @@ subquery: SELECT [DISTINCT] select_list
     [window_clause]
 ```
 
-Session windows, state windows, and sliding windows are supported. When you configure a session or state window for a supertable, you must use PARTITION BY TBNAME.
+Session windows, state windows, and sliding windows are supported. When you configure a session or state window for a supertable, you must use PARTITION BY TBNAME. If the source table has a composite primary key, state windows, event windows, and count windows are not supported.
 
 The SUBTABLE clause defines the naming rules for auto-created subtables; see the Partitions of Stream section below for details.
 
@ -1,65 +1,134 @@
|
||||||
---
|
---
|
||||||
title: Indexing
|
sidebar_label: Window Pre-Aggregation
|
||||||
sidebar_label: Indexing
|
title: Window Pre-Aggregation
|
||||||
description: This document describes the SQL statements related to indexing in TDengine.
|
description: Instructions for using Window Pre-Aggregation
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine supports SMA and tag indexing.
|
To improve the performance of aggregate function queries on large datasets, you can create Time-Range Small Materialized Aggregates (TSMA) objects. These objects perform pre-computation on specified aggregate functions using fixed time windows and store the computed results. When querying, you can retrieve the pre-computed results to enhance query performance.
|
||||||
|
|
||||||
## Create an Index
|
## Creating TSMA
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE INDEX index_name ON tb_name (col_name [, col_name] ...)
|
-- Create TSMA based on a super table or regular table
|
||||||
|
CREATE TSMA tsma_name ON [dbname.]table_name FUNCTION (func_name(func_param) [, ...] ) INTERVAL(time_duration);
|
||||||
|
-- Create a large window TSMA based on a small window TSMA
|
||||||
|
CREATE RECURSIVE TSMA tsma_name ON [db_name.]tsma_name1 INTERVAL(time_duration);
|
||||||
|
|
||||||
CREATE SMA INDEX index_name ON tb_name index_option
|
time_duration:
|
||||||
|
number unit
|
||||||
index_option:
|
|
||||||
FUNCTION(functions) INTERVAL(interval_val [, interval_offset]) [SLIDING(sliding_val)] [WATERMARK(watermark_val)] [MAX_DELAY(max_delay_val)]
|
|
||||||
|
|
||||||
functions:
|
|
||||||
function [, function] ...
|
|
||||||
```
|
```
|
||||||
### tag Indexing
|
|
||||||
|
|
||||||
[tag index](../tag-index)
|
To create a TSMA, you need to specify the TSMA name, table name, function list, and window size. When creating a TSMA based on an existing TSMA, using the `RECURSIVE` keyword, you don't need to specify the `FUNCTION()`. It will create a TSMA with the same function list as the existing TSMA, and the INTERVAL must be a multiple of the window of the base TSMA.
|
||||||
|
|
||||||
### SMA Indexing
|
The naming rule for TSMA is similar to the table name, with a maximum length of the table name length minus the length of the output table suffix. The table name length limit is 193, and the output table suffix is `_tsma_res_stb_`. The maximum length of the TSMA name is 178.
|
||||||
|
|
||||||
Performs pre-aggregation on the specified column over the time window defined by the INTERVAL clause. The type is specified in functions_string. SMA indexing improves aggregate query performance for the specified time period. One supertable can only contain one SMA index.
|
TSMA can only be created based on super tables and regular tables, not on subtables.
|
||||||
|
|
||||||
- The max, min, and sum functions are supported.
|
In the function list, you can only specify supported aggregate functions (see below), and the number of function parameters must be 1, even if the current function supports multiple parameters. The function parameters must be ordinary column names, not tag columns. Duplicate functions and columns in the function list will be deduplicated. When calculating TSMA, all `intermediate results of the functions` will be output to another super table, and the output super table also includes all tag columns of the original table. The maximum number of functions in the function list is the maximum number of columns in the output table (including tag columns) minus the four additional columns added for TSMA calculation, namely `_wstart`, `_wend`, `_wduration`, and a new tag column `tbname`, minus the number of tag columns in the original table. If the number of columns exceeds the limit, an error `Too many columns` will be reported.
|
||||||
- WATERMARK: Enter a value between 0ms and 900000ms. The most precise unit supported is milliseconds. The default value is 5 seconds. This option can be used only on supertables.
|
|
||||||
- MAX_DELAY: Enter a value between 1ms and 900000ms. The most precise unit supported is milliseconds. The default value is the value of interval provided that it does not exceed 900000ms. This option can be used only on supertables. Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance.
|
Since the output of TSMA is a super table, the row length of the output table is subject to the maximum row length limit. The size of the `intermediate results of different functions` varies, but they are generally larger than the original data size. If the row length of the output table exceeds the maximum row length limit, an error `Row length exceeds max length` will be reported. In this case, you need to reduce the number of functions or split commonly used functions groups into multiple TSMA objects.
|
||||||
|
|
||||||
|
The window size is limited to [1ms ~ 1h]. The unit of INTERVAL is the same as the INTERVAL clause in the query, such as a (milliseconds), b (nanoseconds), h (hours), m (minutes), s (seconds), u (microseconds).
|
||||||
|
|
||||||
|
TSMA is a database-level object, but it is globally unique. The number of TSMA that can be created in the cluster is limited by the parameter `maxTsmaNum`, with a default value of 8 and a range of [0-12]. Note that since TSMA background calculation uses stream computing, creating a TSMA will create a stream. Therefore, the number of TSMA that can be created is also limited by the number of existing streams and the maximum number of streams that can be created.
|
||||||
|
|
||||||
|
## Supported Functions

| function | comments |
|---|---|
|min||
|max||
|sum||
|first||
|last||
|avg||
|count| If you want to use count(*), you should create the count(ts) function|
|spread||
|stddev||
|hyperloglog||

## Drop TSMA

```sql
DROP TSMA [db_name.]tsma_name;
```

If there are other TSMA created based on the TSMA being deleted, the delete operation will report an `Invalid drop base tsma, drop recursive tsma first` error. Therefore, all Recursive TSMA must be deleted first.
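
For example, if a recursive `tsma2` has been created based on `tsma1` (as in the examples in this section), the base TSMA can only be dropped after the recursive one:

```sql
DROP TSMA tsma1; -- error: Invalid drop base tsma, drop recursive tsma first
DROP TSMA tsma2; -- drop the recursive TSMA first
DROP TSMA tsma1; -- now succeeds
```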

## TSMA Calculation

The calculation result of TSMA is a super table in the same database as the original table, but it is not visible to users. It cannot be deleted and will be automatically deleted when `DROP TSMA` is executed. The calculation of TSMA is done through stream computing, which is a background asynchronous process. The calculation result of TSMA is not guaranteed to be real-time, but it can guarantee eventual correctness.

When there is a large amount of historical data, after creating TSMA, the stream computing will first calculate the historical data. During this period, newly created TSMA will not be used. The calculation will be automatically recalculated when data updates, deletions, or expired data arrive. During the recalculation period, the TSMA query results are not guaranteed to be real-time. If you want to query real-time data, you can use the hint `/*+ skip_tsma() */` in the SQL statement or disable the `querySmaOptimize` parameter to query from the original data.
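
For instance, the hint form can be used as follows; the super table `meters` is illustrative:

```sql
-- Bypass TSMA for this query only and read the original data
SELECT /*+ skip_tsma() */ COUNT(*) FROM meters;
```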

## Usage and Limitations of TSMA

Client configuration parameter: `querySmaOptimize`, used to control whether to use TSMA during queries. Set it to `True` to use TSMA, and `False` to query from the original data.
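
The parameter can be switched per client with `ALTER LOCAL`, a statement form used elsewhere in the TDengine documentation; shown here as a sketch:

```sql
ALTER LOCAL 'querySmaOptimize' '1'; -- queries may use TSMA
ALTER LOCAL 'querySmaOptimize' '0'; -- queries always read the original data
```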

Client configuration parameter: `maxTsmaCalcDelay`, in seconds, is used to control the acceptable TSMA calculation delay for users. If the calculation progress of a TSMA is within this range from the latest time, the TSMA will be used. If it exceeds this range, it will not be used. The default value is 600 (10 minutes), with a minimum value of 600 (10 minutes) and a maximum value of 86400 (1 day).

### Using TSMA During Queries

The aggregate functions defined in TSMA can be directly used in most query scenarios. If multiple TSMA are available, the one with the larger window size is preferred. For unclosed windows, the calculation can be done using smaller window TSMA or the original data. However, there are certain scenarios where TSMA cannot be used (see below). In such cases, the entire query will be calculated using the original data.

The default behavior for queries without specified window sizes is to prioritize the use of the largest window TSMA that includes all the aggregate functions used in the query. For example, `SELECT COUNT(*) FROM stable GROUP BY tbname` will use the TSMA with the largest window that includes the `count(ts)` function. Therefore, when using aggregate queries frequently, it is recommended to create TSMA objects with larger window sizes.

When a window size is specified in the query (the `INTERVAL` clause), the largest TSMA window that evenly divides the query window is used. In window queries, the `INTERVAL` window size, `OFFSET`, and `SLIDING` all affect which TSMA can be used: a usable TSMA must have a window size that evenly divides the query's `INTERVAL`, `OFFSET`, and `SLIDING`. Therefore, when using window queries frequently, consider the window size, as well as the offset and sliding size, when creating TSMA objects.

Example 1. If TSMA with window size of `5m` and `10m` is created, and the query is `INTERVAL(30m)`, the TSMA with window size of `10m` will be used. If the query is `INTERVAL(30m, 10m) SLIDING(5m)`, only the TSMA with window size of `5m` can be used for the query.
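
Example 1 can be sketched in SQL; the TSMA names and the super table `meters` are illustrative:

```sql
CREATE TSMA tsma_5m  ON meters FUNCTION(COUNT(ts)) INTERVAL(5m);
CREATE TSMA tsma_10m ON meters FUNCTION(COUNT(ts)) INTERVAL(10m);

SELECT COUNT(*) FROM meters INTERVAL(30m);                  -- uses tsma_10m: the largest window dividing 30m
SELECT COUNT(*) FROM meters INTERVAL(30m, 10m) SLIDING(5m); -- only tsma_5m divides 30m, 10m, and 5m
```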

### Limitations of Query

When the parameter `querySmaOptimize` is enabled and there is no `skip_tsma()` hint, the following query scenarios cannot use TSMA:

- When the aggregate functions defined in a TSMA do not cover the function list of the current query.
- The query uses a non-`INTERVAL` window, or the query window size (including `INTERVAL`, `SLIDING`, and `OFFSET`) is not a multiple of the defined window size. For example, if the defined window is 2m and the query uses a 5m window, that TSMA cannot be used; but if a TSMA with a 1m window is also available, it can be used.
- Query with filtering on any regular column (non-primary key time column) in the `WHERE` condition.
- When `PARTITION` or `GROUP BY` includes any regular column or its expression.
- When other faster optimization logic can be used, such as last cache optimization: if the query meets the conditions for last optimization, it is prioritized; only when last optimization is not possible is TSMA optimization considered.
- When the current TSMA calculation progress delay is greater than the configuration parameter `maxTsmaCalcDelay`.

Some examples:

```sql
SELECT agg_func_list [, pseudo_col_list] FROM stable WHERE exprs [GROUP/PARTITION BY [tbname] [, tag_list]] [HAVING ...] [INTERVAL(time_duration, offset) SLIDING(duration)]...;

-- create
CREATE TSMA tsma1 ON stable FUNCTION(COUNT(ts), SUM(c1), SUM(c3), MIN(c1), MIN(c3), AVG(c1)) INTERVAL(1m);
-- query
SELECT COUNT(*), SUM(c1) + SUM(c3) FROM stable; ---- use tsma1
SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma1
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h); ---use tsma1
SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use, spread func not defined, although SPREAD can be calculated by MIN and MAX which are defined.
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1, time_duration not fit. Normally, query_time_duration should be multiple of create_duration.
SELECT COUNT(*), MIN(c1) FROM stable where c2 > 0; ---- can't use tsma1, can't do c2 filtering
SELECT COUNT(*) FROM stable GROUP BY c2; ---- can't use any tsma
SELECT MIN(c3), MIN(c2) FROM stable INTERVAL(1m); ---- can't use tsma1, c2 is not defined in tsma1.

-- Another tsma2 created with INTERVAL(1h) based on tsma1
CREATE RECURSIVE TSMA tsma2 ON tsma1 INTERVAL(1h);
SELECT COUNT(*), SUM(c1) FROM stable; ---- use tsma2
SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma2
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(2h); ---use tsma2
SELECT COUNT(*), MIN(c1) FROM stable WHERE ts < '2023-01-01 10:10:10' INTERVAL(30m); --use tsma1
SELECT COUNT(*), MIN(c1) + MIN(c3) FROM stable INTERVAL(30m); ---use tsma1
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h) SLIDING(30m); ---use tsma1
SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use tsma1 or tsma2, spread func not defined
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1 or tsma2, time_duration not fit. Normally, query_time_duration should be multiple of create_duration.
SELECT COUNT(*), MIN(c1) FROM stable where c2 > 0; ---- can't use tsma1 or tsma2, can't do c2 filtering
```

### Limitations of Usage

After creating a TSMA, there are certain restrictions on operations that can be performed on the original table:

- You must delete all TSMAs on the table before you can delete the table itself.
- Tag columns of the original table cannot be deleted, and tag column names and subtable tag values cannot be modified; you must first delete the TSMA before you can delete a tag column.
- Columns used by a TSMA cannot be deleted; you must first delete the TSMA. Adding new columns is not affected, but newly added columns are not included in any existing TSMA, so if you want to calculate aggregates on them, you need to create new TSMA objects for them.

## Show TSMA

```sql
SHOW [db_name.]TSMAS;
SELECT * FROM information_schema.ins_tsma;
```

If many functions are specified during creation and the column names are long, the displayed function list may be truncated (the maximum output currently supported is 256KB).
---
sidebar_label: JOIN
title: JOIN
description: JOIN Description
---

## Join Concept

### Driving Table

The driving table of a join query is the left table in the Left Join series and the right table in the Right Join series.

### Join Conditions

Join conditions are the conditions specified for the join operation. All join queries supported by TDengine require join conditions to be specified. Join conditions usually appear only in the `ON` clause (except for Inner Join and Window Join): for Inner Join, conditions in `WHERE` can also be regarded as join conditions; for Window Join, join conditions are specified in the `WINDOW_OFFSET` clause.

Except for ASOF Join, all join types supported by TDengine must explicitly specify join conditions. Since ASOF Join has implicit join conditions defined by default, it is not necessary to specify them explicitly (if the default conditions meet the requirements).

Except for ASOF/Window Join, the join condition can include not only the primary join condition (see below), but also any number of other join conditions. The primary join condition must have an `AND` relationship with the other join conditions, while there is no such restriction among the other join conditions. The other join conditions can include any logical combination of primary key columns, tag columns, normal columns, constants, and their scalar functions or operations.

Taking smart meters as an example, the following SQL statements all contain valid join conditions:

```sql
SELECT a.* FROM meters a LEFT JOIN meters b ON a.ts = b.ts AND a.ts > '2023-10-18 10:00:00.000';
SELECT a.* FROM meters a LEFT JOIN meters b ON a.ts = b.ts AND (a.ts > '2023-10-18 10:00:00.000' OR a.ts < '2023-10-17 10:00:00.000');
SELECT a.* FROM meters a LEFT JOIN meters b ON timetruncate(a.ts, 1s) = timetruncate(b.ts, 1s) AND (a.ts + 1s > '2023-10-18 10:00:00.000' OR a.groupId > 0);
SELECT a.* FROM meters a LEFT ASOF JOIN meters b ON timetruncate(a.ts, 1s) < timetruncate(b.ts, 1s) AND a.groupId = b.groupId;
```

### Primary Join Condition

As a time series database, all join queries in TDengine revolve around the primary key timestamp column, so all join queries except ASOF/Window Join are required to contain an equivalent join condition on the primary key column. The first equivalent join condition on the primary key column that appears in the join conditions, in order, will be used as the primary join condition. The primary join condition of ASOF Join can contain a non-equivalent join condition; for Window Join, the primary join condition is specified by the `WINDOW_OFFSET` clause.

Except for Window Join, TDengine supports performing the `timetruncate` function operation in the primary join condition, e.g. `ON timetruncate(a.ts, 1s) = timetruncate(b.ts, 1s)`. Other functions and scalar operations on the primary key column are currently not supported in the primary join condition.

### Grouping Conditions
ASOF/Window Join supports grouping the input data of join queries, and then performing join operations within each group. Grouping only applies to the input of join queries, and the output result will not include grouping information. Equivalent conditions that appear in `ON` in ASOF/Window Join (excluding the primary join condition of ASOF) will be used as grouping conditions.
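
For example, in the ASOF join below (adapted from the valid-condition examples earlier), the equivalent tag condition on `groupId` groups the input before the join is performed within each group:

```sql
SELECT a.* FROM meters a LEFT ASOF JOIN meters b ON a.ts >= b.ts AND a.groupId = b.groupId;
```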
### Primary Key Timeline

As a time series database, TDengine requires that each table have a primary key timestamp column, which serves as the primary key timeline of the table for many time-related operations. In the results of subqueries or join operations, it must also be clear which column is regarded as the primary key timeline for subsequent time-related operations. In subqueries, the first ordered occurrence of the primary key column (or its operation), or a pseudo-column equivalent to the primary key column (`_wstart`/`_wend`), in the query results is regarded as the primary key timeline of the output table. The selection of the primary key timeline in join output results follows these rules:

- The primary key column of the driving table (subquery) in the Left/Right Join series will be used as the primary key timeline for subsequent queries. In addition, within each Window Join window, because the left and right tables are both ordered in time, the primary key column of either table can serve as the primary key timeline in the window; the primary key column of the current table is preferentially selected.

- The primary key column of either table in Inner Join can be treated as the primary key timeline. When there are grouping-like conditions (equivalent conditions on tag columns with an `AND` relationship to the primary join condition), there will be no available primary key timeline.

- Full Join will not produce any primary key timeline because it cannot generate any valid primary key time series, so no timeline-related operations can be performed in or after a Full Join.


## Syntax Conventions

Because the Left/Right Join series behave symmetrically, the introductions of Left/Right Outer, Semi, Anti-Semi, ASOF, and Window Join below use a "left/right" notation to describe both variants at once: the word before the "/" applies to Left Join, and the word after the "/" applies to Right Join.

For example:

The phrase "left/right table" means "left table" for Left Join and "right table" for Right Join.

Similarly,

The phrase "right/left table" means "right table" for Left Join and "left table" for Right Join.

## Join Function

### Inner Join

#### Definition

Only data from both left and right tables that meet the join conditions will be returned, which can be regarded as the intersection of data from two tables that meet the join conditions.

#### Grammar

```sql
SELECT ... FROM table_name1 [INNER] JOIN table_name2 [ON ...] [WHERE ...] [...]
Or
SELECT ... FROM table_name1, table_name2 WHERE ... [...]
```

#### Result set

Cartesian product set of left and right table row data that meets the join conditions.

#### Scope

Inner Join is supported between super tables, normal tables, child tables, and subqueries.

#### Notes

- For the first type syntax, the `INNER` keyword is optional. The primary join condition and other join conditions can be specified in `ON` and/or `WHERE`, and filters can also be specified in `WHERE`. At least one of `ON`/`WHERE` must be specified.

- For the second type syntax, the primary join condition, other join conditions, and filters can all be specified in `WHERE`.

- When performing Inner Join on super tables, tag column equivalent conditions that have an `AND` relationship with the primary join condition will be used as grouping conditions, so the output is not guaranteed to remain in time order.

#### Examples

The timestamps at which the voltage is greater than 220V in both table d1001 and table d1002 simultaneously, together with their respective voltage values:

```sql
SELECT a.ts, a.voltage, b.voltage FROM d1001 a JOIN d1002 b ON a.ts = b.ts and a.voltage > 220 and b.voltage > 220
```

### Left/Right Outer Join

#### Definition

It returns data sets that meet the join conditions for both left and right tables, as well as data sets that do not meet the join conditions in the left/right tables.

#### Grammar

```sql
SELECT ... FROM table_name1 LEFT|RIGHT [OUTER] JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result set

The result set of Inner Join, plus rows in the left/right table that do not meet the join conditions combined with null data (`NULL`) for the right/left table.

#### Scope

Left/Right Outer Join is supported between super tables, normal tables, child tables, and subqueries.

#### Notes

- The `OUTER` keyword is optional.

#### Examples

All timestamps and voltage values in table d1001, plus the voltage values of table d1002 at the moments when the voltage is greater than 220V in both tables simultaneously:

```sql
SELECT a.ts, a.voltage, b.voltage FROM d1001 a LEFT JOIN d1002 b ON a.ts = b.ts and a.voltage > 220 and b.voltage > 220
```

### Left/Right Semi Join

#### Definition

It usually expresses the meaning of `IN`/`EXISTS`, which means that for any data in the left/right table, only when there is any row data in the right/left table that meets the join conditions, will the left/right table row data be returned.

#### Grammar

```sql
SELECT ... FROM table_name1 LEFT|RIGHT SEMI JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result set

The row data set composed of rows that meet the join conditions in the left/right table and any one row that meets the join conditions in the right/left table.

#### Scope

Left/Right Semi Join is supported between super tables, normal tables, child tables, and subqueries.

#### Examples

The timestamps at which the voltage in table d1001 is greater than 220V and at least one other meter's voltage is also greater than 220V at the same time:

```sql
SELECT a.ts FROM d1001 a LEFT SEMI JOIN meters b ON a.ts = b.ts and a.voltage > 220 and b.voltage > 220 and b.tbname != 'd1001'
```

### Left/Right Anti-Semi Join

#### Definition

The opposite of Left/Right Semi Join. It usually expresses the meaning of `NOT IN`/`NOT EXISTS`: a row in the left/right table is returned only when there is no row in the right/left table that meets the join conditions.

#### Grammar

```sql
SELECT ... FROM table_name1 LEFT|RIGHT ANTI JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result set

A collection of rows in the left/right table that do not meet the join conditions and null data (`NULL`) in the right/left table.

#### Scope

Left/Right Anti-Semi Join is supported between super tables, normal tables, child tables, and subqueries.

#### Examples

The timestamps at which the voltage in table d1001 is greater than 220V and no other meter's voltage is greater than 220V at the same time:

```sql
SELECT a.ts FROM d1001 a LEFT ANTI JOIN meters b ON a.ts = b.ts and b.voltage > 220 and b.tbname != 'd1001' WHERE a.voltage > 220
```

### Left/Right ASOF Join

#### Definition

Unlike the exact-matching pattern of other traditional joins, ASOF Join allows incomplete matching in a specified matching pattern, that is, matching by the closest primary key timestamp.

#### Grammar

```sql
SELECT ... FROM table_name1 LEFT|RIGHT ASOF JOIN table_name2 [ON ...] [JLIMIT jlimit_num] [WHERE ...] [...]
```

#### Result set

For each row in the left/right table, the Cartesian product of that row with up to `jlimit_num` rows (ordered by primary key) in the right/left table that meet the join conditions and are closest to its timestamp, or with null data (`NULL`).

#### Scope

Left/Right ASOF Join is supported between super tables, normal tables, and child tables.

#### Notes

- Only supports ASOF Join between tables, not between subqueries.

- The `ON` clause supports a single matching rule (the primary join condition) on the primary key column or the `timetruncate` function operation of the primary key column (other scalar operations and functions are not supported). The supported operators and their meanings are as follows:

| **Operator** | **Meaning for Left ASOF Join** |
| :-------------: | ------------------------ |
| > | Match rows in the right table whose primary key timestamp is less than and closest to the left table's primary key timestamp |
| >= | Match rows in the right table whose primary key timestamp is less than or equal to and closest to the left table's primary key timestamp |
| = | Match rows in the right table whose primary key timestamp is equal to the left table's primary key timestamp |
| < | Match rows in the right table whose primary key timestamp is greater than and closest to the left table's primary key timestamp |
| <= | Match rows in the right table whose primary key timestamp is greater than or equal to and closest to the left table's primary key timestamp |

For Right ASOF Join, the above operators have the opposite meaning.

- If there is no `ON` clause, or no primary join condition is specified in the `ON` clause, the default primary join condition operator is `>=`, that is, (for Left ASOF Join) matching rows in the right table whose primary key timestamp is less than or equal to the left table's primary key timestamp. Multiple primary join conditions are not supported.

- In the `ON` clause, except for the primary key column, equivalent conditions between tag columns and ordinary columns (which do not support scalar functions and operations) can be specified for grouping calculations. Other types of conditions are not supported.

- Only the `AND` operation is supported between all `ON` conditions.

- `JLIMIT` is used to specify the maximum number of rows matched for a single row. It is optional, with a default value of 1 when not specified, meaning each row in the left/right table obtains at most one matching row from the right/left table. The value range of `JLIMIT` is [0, 1024]. The `jlimit_num` matching rows are not required to share the same timestamp; when there are fewer than `jlimit_num` qualifying rows in the right/left table, fewer rows may be returned. When there are more than `jlimit_num` qualifying rows all with the same timestamp, `jlimit_num` of them are returned at random.
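
A sketch combining the default primary join condition and `JLIMIT`; tables d1001 and d1002 follow the examples used in this document:

```sql
-- No ON clause: Left ASOF Join defaults to a.ts >= b.ts
SELECT a.ts, b.ts FROM d1001 a LEFT ASOF JOIN d1002 b;
-- Each row of d1001 matches up to 3 closest rows of d1002 with b.ts >= a.ts
SELECT a.ts, b.ts FROM d1001 a LEFT ASOF JOIN d1002 b ON a.ts <= b.ts JLIMIT 3;
```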

#### Examples

The moments at which the voltage in table d1001 is greater than 220V and, at the same moment or the closest earlier one, the voltage in table d1002 is also greater than 220V, together with their respective voltage values:

```sql
SELECT a.ts, a.voltage, b.ts, b.voltage FROM d1001 a LEFT ASOF JOIN d1002 b ON a.ts >= b.ts where a.voltage > 220 and b.voltage > 220
```

### Left/Right Window Join

#### Definition

Windows are constructed from the primary key timestamp of each row in the left/right table together with the window boundaries, and the window join is performed accordingly; projection, scalar, and aggregation operations are supported within the window.

#### Grammar

```sql
SELECT ... FROM table_name1 LEFT|RIGHT WINDOW JOIN table_name2 [ON ...] WINDOW_OFFSET(start_offset, end_offset) [JLIMIT jlimit_num] [WHERE ...] [...]
```

#### Result set

The Cartesian product of each row of data in the left/right table and null data (`NULL`) or up to `jlimit_num` rows of data in the constructed window(based on the left/right table primary key timestamp and `WINDOW_OFFSET`) in the right/left table.

Or

The Cartesian product of each row of data in the left/right table and null data (`NULL`) or the aggregation result of up to `jlimit_num` rows of data in the constructed window(based on the left/right table primary key timestamp and `WINDOW_OFFSET`) in the right/left table.

#### Scope

Left/Right Window Join is supported between super tables, normal tables, and child tables.

#### Notes

- Only supports Window Join between tables, not between subqueries.

- The `ON` clause is optional. Except for the primary key column, equivalent conditions between tag columns and ordinary columns (which do not support scalar functions and operations) can be specified in the `ON` clause for grouping calculations. Other types of conditions are not supported.

- Only the `AND` operation is supported between all `ON` conditions.

- `WINDOW_OFFSET` is used to specify the offset of the left and right boundaries of the window relative to the timestamp of the left/right table's primary key. It supports the form of built-in time units. For example: `WINDOW_OFFSET(-1a, 1a)`, for Left Window Join, means that each window boundary is [left table primary key timestamp - 1 millisecond, left table primary key timestamp + 1 millisecond], and both the left and right boundaries are closed intervals. The time unit after the number can be `b` (nanosecond), `u` (microsecond), `a` (millisecond), `s` (second), `m` (minute), `h` (hour), `d` (day), `w` (week). Natural months (`n`) and natural years (`y`) are not supported. The minimum time unit supported is the database precision. The precision of the databases where the left and right tables are located should be the same.

- `JLIMIT` is used to specify the maximum number of matching rows in a single window. It is optional; if not specified, all matching rows in each window are obtained by default. The value range of `JLIMIT` is [0, 1024]. Fewer than `jlimit_num` rows will be returned when there are not enough qualifying rows in the right table. When there are more than `jlimit_num` qualifying rows in the right table, the `jlimit_num` rows with the smallest primary key timestamps in the window will be returned.

- `GROUP BY`/`PARTITION BY`/window queries cannot be used together with Window Join in a single SQL statement.
- Supports scalar filtering in the `WHERE` clause, aggregation function filtering for each window in the `HAVING` clause (does not support scalar filtering), does not support `SLIMIT`, and does not support various window pseudo-columns.

#### Examples

The voltage value of table d1002 within 1 second before and after the moment that voltage value of table d1001 is greater than 220V:

```sql
SELECT a.ts, a.voltage, b.voltage FROM d1001 a LEFT WINDOW JOIN d1002 b WINDOW_OFFSET(-1s, 1s) where a.voltage > 220
```

The moments at which the voltage value of table d1001 is greater than 220V and the average voltage of table d1002 within 1 second before and after that moment is also greater than 220V:

```sql
SELECT a.ts, a.voltage, avg(b.voltage) FROM d1001 a LEFT WINDOW JOIN d1002 b WINDOW_OFFSET(-1s, 1s) where a.voltage > 220 HAVING(avg(b.voltage) > 220)
```

### Full Outer Join

#### Definition

It includes data sets that meet the join conditions for both left and right tables, as well as data sets that do not meet the join conditions in the left and right tables.

#### Grammar

```sql
SELECT ... FROM table_name1 FULL [OUTER] JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result set
The result set of Inner Join + rows data set composed of rows in the left table that do not meet the join conditions and null data(`NULL`) in the right table + rows data set composed of rows in the right table that do not meet the join conditions and null data(`NULL`) in the left table.

#### Scope

Full Outer Join is supported between super tables, normal tables, child tables, and subqueries.

#### Notes

- The `OUTER` keyword is optional.

#### Examples

All timestamps and voltage values recorded in both tables d1001 and d1002:

```sql
SELECT a.ts, a.voltage, b.ts, b.voltage FROM d1001 a FULL JOIN d1002 b on a.ts = b.ts
```

## Limitations
|
||||||
|
|
||||||
|
### Input timeline limits
|
||||||
|
- Currently, all join types require the input data to contain a valid primary key timeline, which every table query satisfies; for subqueries, make sure the output data contains a valid primary key timeline.
|
||||||
|
|
||||||
|
### Join conditions limits
|
||||||
|
- Except for ASOF and Window Join, the join conditions of other types of join must include the primary join condition;
|
||||||
|
- Only `AND` operation is supported between the primary join condition and other join conditions.
|
||||||
|
- The primary key column used in the primary join condition only supports `timetruncate` function operations (not other functions and scalar operations), and there are no restrictions when used as other join conditions.
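As a sketch of the `timetruncate` rule above (reusing the d1001/d1002 tables from the earlier examples), a primary join condition may truncate the primary key column before comparison:

```sql
SELECT a.ts, a.voltage, b.voltage
FROM d1001 a JOIN d1002 b
  ON timetruncate(a.ts, 1s) = timetruncate(b.ts, 1s);
```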
|
||||||
|
|
||||||
|
### Grouping conditions limits
|
||||||
|
- Only equality conditions on tags and ordinary columns (excluding primary key columns) are supported.
|
||||||
|
- Does not support scalar operations.
|
||||||
|
- Supports multiple grouping conditions, and only supports `AND` operation between conditions.
|
||||||
|
|
||||||
|
### Query result order limits
|
||||||
|
- For queries involving normal tables, subtables, or subqueries, without grouping conditions or sorting, the results are output in the order of the driving table's primary key column.
|
||||||
|
- In scenarios such as super table queries, Full Join, or with grouping conditions and without sorting, there is no fixed output order for query results.
|
||||||
|
Therefore, in scenarios where a particular order is required but the output order is not fixed, an explicit sorting operation is needed. Some functions that rely on timelines may fail to execute without sorting due to the lack of a valid timeline in the output.
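For example, assuming the d1001/d1002 tables from the earlier examples, an explicit `ORDER BY` guarantees a deterministic order where none is otherwise fixed (such as a Full Join):

```sql
SELECT a.ts, a.voltage, b.voltage
FROM d1001 a FULL JOIN d1002 b ON a.ts = b.ts
ORDER BY a.ts;
```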
|
||||||
|
|
||||||
|
### Nested join and multi-table join limits
|
||||||
|
- Currently, only Inner Join supports nesting and multi-table joins; the other join types support neither.
|
|
@ -241,6 +241,16 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo
|
||||||
| Default Value | 0 |
|
| Default Value | 0 |
|
||||||
| Notes | When this parameter is set to 0, last(\*)/last_row(\*)/first(\*) only returns the columns of the super table; When it is 1, return the columns and tags of the super table. |
|
| Notes | When this parameter is set to 0, last(\*)/last_row(\*)/first(\*) only returns the columns of the super table; When it is 1, return the columns and tags of the super table. |
|
||||||
|
|
||||||
|
### maxTsmaCalcDelay
|
||||||
|
|
||||||
|
| Attribute | Description |
|
||||||
|
| -------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
|
| Applicable | Client only |
|
||||||
|
| Meaning | Maximum TSMA calculation delay allowed in queries; if the calculation delay of a TSMA is greater than the configured value, that TSMA will not be used. |
|
||||||
|
| Value Range | 600s - 86400s (10 minutes to 1 day) |
|
||||||
|
| Default value | 600s |
|
||||||
|
|
||||||
|
|
||||||
## Locale Parameters
|
## Locale Parameters
|
||||||
|
|
||||||
### timezone
|
### timezone
|
||||||
|
@ -760,6 +770,15 @@ The charset that takes effect is UTF-8.
|
||||||
| Value Range | 1-10000|
|
| Value Range | 1-10000|
|
||||||
| Default Value | 20 |
|
| Default Value | 20 |
|
||||||
|
|
||||||
|
### maxTsmaNum
|
||||||
|
|
||||||
|
| Attribute | Description |
|
||||||
|
| --------- | ----------------------------- |
|
||||||
|
| Applicable | Server Only |
|
||||||
|
| Meaning | Maximum number of TSMAs that can be created in the cluster |
|
||||||
|
| Value Range | 0-12 |
|
||||||
|
| Default Value | 8 |
|
||||||
|
|
||||||
## 3.0 Parameters
|
## 3.0 Parameters
|
||||||
|
|
||||||
| # | **Parameter** | **Applicable to 2.x ** | **Applicable to 3.0 ** | Current behavior in 3.0 |
|
| # | **Parameter** | **Applicable to 2.x ** | **Applicable to 3.0 ** | Current behavior in 3.0 |
|
||||||
|
|
|
@ -27,6 +27,7 @@ where:
|
||||||
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`. Enter a space between `tag_set` and `field_set`.
|
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`. Enter a space between `tag_set` and `field_set`.
|
||||||
- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`. Enter a space between `field_set` and `timestamp`.
|
- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`. Enter a space between `field_set` and `timestamp`.
|
||||||
- `timestamp` is the primary key timestamp corresponding to this row of data
|
- `timestamp` is the primary key timestamp corresponding to this row of data
|
||||||
|
- schemaless writing does not support writing data to tables with a second primary key column.
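A typical line illustrating the elements above — measurement, `tag_set`, `field_set`, and `timestamp` (the names and values here are illustrative):

```text
meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500000
```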
|
||||||
|
|
||||||
All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
|
All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
|
||||||
|
|
||||||
|
@ -39,7 +40,7 @@ In the schemaless writing data line protocol, each data item in the field_set ne
|
||||||
- Spaces, equals sign (=), comma (,), double quote ("), and backslash (\\) need to be escaped with a backslash (\\) in front. (All refer to the ASCII character). The rules are as follows:
|
- Spaces, equals sign (=), comma (,), double quote ("), and backslash (\\) need to be escaped with a backslash (\\) in front. (All refer to the ASCII character). The rules are as follows:
|
||||||
|
|
||||||
| **Serial number** | **Element** | **Escape characters** |
|
| **Serial number** | **Element** | **Escape characters** |
|
||||||
| -------- | ----------- | ----------------------------- |
|
| ----------------- | ----------- | ------------------------- |
|
||||||
| 1 | Measurement | Comma, Space |
|
| 1 | Measurement | Comma, Space |
|
||||||
| 2 | Tag key | Comma, Equals Sign, Space |
|
| 2 | Tag key | Comma, Equals Sign, Space |
|
||||||
| 3 | Tag value | Comma, Equals Sign, Space |
|
| 3 | Tag value | Comma, Equals Sign, Space |
|
||||||
|
@ -49,7 +50,7 @@ In the schemaless writing data line protocol, each data item in the field_set ne
|
||||||
With two contiguous backslashes, the first is interpreted as an escape character. Examples of backslash escape rules are as follows:
|
With two contiguous backslashes, the first is interpreted as an escape character. Examples of backslash escape rules are as follows:
|
||||||
|
|
||||||
| **Serial number** | **Backslashes** | **Interpreted as** |
|
| **Serial number** | **Backslashes** | **Interpreted as** |
|
||||||
| -------- | ----------- | ----------------------------- |
|
| ----------------- | --------------- | ------------------ |
|
||||||
| 1 | \ | \ |
|
| 1 | \ | \ |
|
||||||
| 2 | \\\\ | \ |
|
| 2 | \\\\ | \ |
|
||||||
| 3 | \\\\\\ | \\\\ |
|
| 3 | \\\\\\ | \\\\ |
|
||||||
|
|
|
@ -0,0 +1,92 @@
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
title: Configurable Column Compression
|
||||||
|
description: Configurable column storage compression method
|
||||||
|
---
|
||||||
|
|
||||||
|
# Configurable Storage Compression
|
||||||
|
|
||||||
|
Since TDengine 3.3.0.0, a more advanced compression feature is available: for each column you can specify whether to compress it, which compression method to use, and the compression level.
|
||||||
|
|
||||||
|
## Compression Terminology Definition
|
||||||
|
|
||||||
|
### Compression Level Definition
|
||||||
|
|
||||||
|
- Level 1 Compression: Encoding the data, which is essentially a form of compression
|
||||||
|
- Level 2 Compression: Compressing data blocks.
|
||||||
|
|
||||||
|
### Compression Algorithm Level
|
||||||
|
|
||||||
|
In this article, this term refers specifically to the level within a level 2 compression algorithm. zstd, for example, offers at least 8 levels, each with different performance; the choice is essentially a tradeoff between compression ratio, compression speed, and decompression speed. To avoid the difficulty of choosing, it is simplified into the following three levels:
|
||||||
|
|
||||||
|
- high: The highest compression ratio, the worst compression speed and decompression speed.
|
||||||
|
- low: The best compression speed and decompression speed, the lowest compression ratio.
|
||||||
|
- medium: Balancing compression ratio, compression speed, and decompression speed.
|
||||||
|
|
||||||
|
### Compression Algorithm List
|
||||||
|
|
||||||
|
- Encoding algorithm list (Level 1 compression): simple8b, bit-packing, delta-i, delta-d, disabled
|
||||||
|
|
||||||
|
- Compression algorithm list (Level 2 compression): lz4, zlib, zstd, tsz, xz, disabled
|
||||||
|
|
||||||
|
- Default compression algorithm list and applicable range for each data type
|
||||||
|
|
||||||
|
| Data Type | Optional Encoding Algorithm | Default Encoding Algorithm | Optional Compression Algorithm|Default Compression Algorithm| Default Compression Level|
|
||||||
|
| :-----------:|:----------:|:-------:|:-------:|:----------:|:----:|
|
||||||
|
| tinyint/utinyint/smallint/usmallint/int/uint | simple8b | simple8b | lz4/zlib/zstd/xz | lz4 | medium |
|
||||||
|
| bigint/ubigint/timestamp | simple8b/delta-i | delta-i |lz4/zlib/zstd/xz | lz4| medium|
|
||||||
|
| float/double | delta-d | delta-d | lz4/zlib/zstd/xz/tsz | tsz | medium |
|
||||||
|
| binary/nchar | disabled | disabled | lz4/zlib/zstd/xz | lz4 | medium |
|
||||||
|
| bool | bit-packing | bit-packing | lz4/zlib/zstd/xz | lz4 | medium |
|
||||||
|
|
||||||
|
Note: For floating point types, if tsz is configured, the precision is determined by the global configuration of taosd. If tsz is configured but the lossy compression flag is not set, lz4 is used for compression by default.
|
||||||
|
|
||||||
|
## SQL
|
||||||
|
|
||||||
|
### Create Table with Compression
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE [dbname.]tabname (colName colType [ENCODE 'encode_type'] [COMPRESS 'compress_type' [LEVEL 'level']] [, other create_definition] ...)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Parameter Description**
|
||||||
|
|
||||||
|
- tabname: Super table or ordinary table name
|
||||||
|
- encode_type: Level 1 compression, specific parameters see the above list
|
||||||
|
- compress_type: Level 2 compression, specific parameters see the above list
|
||||||
|
- level: The level of the level 2 compression algorithm; the default value is 'medium'; the abbreviations 'h'/'l'/'m' are supported
|
||||||
|
|
||||||
|
**Function Description**
|
||||||
|
|
||||||
|
- Specify the compression method for the column when creating a table
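A minimal sketch based on the grammar above (the table and column names are hypothetical; the encoding and compression options come from the lists in the previous section):

```sql
CREATE TABLE sensors (
    ts    TIMESTAMP ENCODE 'delta-i'  COMPRESS 'lz4'  LEVEL 'medium',
    speed INT       ENCODE 'simple8b' COMPRESS 'zstd' LEVEL 'high',
    note  NCHAR(20) ENCODE 'disabled' COMPRESS 'xz'   LEVEL 'low'
);
```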
|
||||||
|
|
||||||
|
### Change Compression Method
|
||||||
|
|
||||||
|
```sql
|
||||||
|
ALTER TABLE [db_name.]tabName MODIFY COLUMN colName [ENCODE 'encode_type'] [COMPRESS 'compress_type'] [LEVEL 'level']
|
||||||
|
```
|
||||||
|
|
||||||
|
**Parameter Description**
|
||||||
|
|
||||||
|
- tabName: Table name, can be a super table or an ordinary table
|
||||||
|
- colName: The column to change the compression algorithm, can only be a normal column
|
||||||
|
|
||||||
|
**Function Description**
|
||||||
|
|
||||||
|
- Change the compression method of the column
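For example, assuming a hypothetical table `sensors` with an INT column `speed`, the compression of a single column could be changed as follows ('m' is the abbreviation for the 'medium' level):

```sql
ALTER TABLE sensors MODIFY COLUMN speed COMPRESS 'zlib' LEVEL 'm';
```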
|
||||||
|
|
||||||
|
### View Compression Method
|
||||||
|
|
||||||
|
```sql
|
||||||
|
DESCRIBE [dbname.]tabName
|
||||||
|
```
|
||||||
|
|
||||||
|
**Function Description**
|
||||||
|
|
||||||
|
- Display basic information of the column, including type and compression method
|
||||||
|
|
||||||
|
## Compatibility
|
||||||
|
|
||||||
|
- Fully compatible with existing data
|
||||||
|
- Once you upgrade to 3.3.0.0, you cannot roll back
|
|
@ -23,7 +23,7 @@ create_subtable_clause: {
|
||||||
}
|
}
|
||||||
|
|
||||||
create_definition:
|
create_definition:
|
||||||
col_name column_type
|
col_name column_type [PRIMARY KEY]
|
||||||
|
|
||||||
table_options:
|
table_options:
|
||||||
table_option ...
|
table_option ...
|
||||||
|
@ -38,12 +38,13 @@ table_option: {
|
||||||
|
|
||||||
**使用说明**
|
**使用说明**
|
||||||
|
|
||||||
1. 表的第一个字段必须是 TIMESTAMP,并且系统自动将其设为主键;
|
1. 表的第一个字段必须是 TIMESTAMP,并且系统自动将其设为主键。
|
||||||
2. 表名最大长度为 192;
|
2. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列。被指定为主键列的第二列必须为整型或字符串类型(varchar)。
|
||||||
3. 表的每行长度不能超过 48KB(从 3.0.5.0 版本开始为 64KB);(注意:每个 BINARY/NCHAR/GEOMETRY 类型的列还会额外占用 2 个字节的存储位置)
|
3. 表名最大长度为 192。
|
||||||
4. 子表名只能由字母、数字和下划线组成,且不能以数字开头,不区分大小写
|
4. 表的每行长度不能超过 48KB(从 3.0.5.0 版本开始为 64KB);(注意:每个 BINARY/NCHAR/GEOMETRY 类型的列还会额外占用 2 个字节的存储位置)。
|
||||||
5. 使用数据类型 BINARY/NCHAR/GEOMETRY,需指定其最长的字节数,如 BINARY(20),表示 20 字节;
|
5. 子表名只能由字母、数字和下划线组成,且不能以数字开头,不区分大小写。
|
||||||
6. 为了兼容支持更多形式的表名,TDengine 引入新的转义符 "\`",可以让表名与关键词不冲突,同时不受限于上述表名称合法性约束检查。但是同样具有长度限制要求。使用转义字符以后,不再对转义字符中的内容进行大小写统一。
|
6. 使用数据类型 BINARY/NCHAR/GEOMETRY,需指定其最长的字节数,如 BINARY(20),表示 20 字节。
|
||||||
|
7. 为了兼容支持更多形式的表名,TDengine 引入新的转义符 "\`",可以让表名与关键词不冲突,同时不受限于上述表名称合法性约束检查。但是同样具有长度限制要求。使用转义字符以后,不再对转义字符中的内容进行大小写统一,
|
||||||
例如:\`aBc\` 和 \`abc\` 是不同的表名,但是 abc 和 aBc 是相同的表名。
|
例如:\`aBc\` 和 \`abc\` 是不同的表名,但是 abc 和 aBc 是相同的表名。
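上述第 2 条规则(复合主键)的一个最小示例(表名和列名仅作示意):

```sql
CREATE TABLE readings (ts TIMESTAMP, device_id VARCHAR(16) PRIMARY KEY, voltage INT);
```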
|
||||||
|
|
||||||
**参数说明**
|
**参数说明**
|
||||||
|
@ -106,6 +107,7 @@ alter_table_option: {
|
||||||
2. DROP COLUMN:删除列。
|
2. DROP COLUMN:删除列。
|
||||||
3. MODIFY COLUMN:修改列定义,如果数据列的类型是可变长类型,那么可以使用此指令修改其宽度,只能改大,不能改小。
|
3. MODIFY COLUMN:修改列定义,如果数据列的类型是可变长类型,那么可以使用此指令修改其宽度,只能改大,不能改小。
|
||||||
4. RENAME COLUMN:修改列名称。
|
4. RENAME COLUMN:修改列名称。
|
||||||
|
5. 普通表的主键列不能被修改,也不能通过 ADD/DROP COLUMN 来添加/删除主键列。
|
||||||
|
|
||||||
### 增加列
|
### 增加列
|
||||||
|
|
||||||
|
|
|
@ -148,6 +148,7 @@ alter_table_option: {
|
||||||
- DROP TAG:删除超级表的一个标签。从超级表删除某个标签后,该超级表下的所有子表也会自动删除该标签。
|
- DROP TAG:删除超级表的一个标签。从超级表删除某个标签后,该超级表下的所有子表也会自动删除该标签。
|
||||||
- MODIFY TAG:修改超级表的一个标签的列宽度。标签的类型只能是 nchar 和 binary,使用此指令可以修改其宽度,只能改大,不能改小。
|
- MODIFY TAG:修改超级表的一个标签的列宽度。标签的类型只能是 nchar 和 binary,使用此指令可以修改其宽度,只能改大,不能改小。
|
||||||
- RENAME TAG:修改超级表的一个标签的名称。从超级表修改某个标签名后,该超级表下的所有子表也会自动更新该标签名。
|
- RENAME TAG:修改超级表的一个标签的名称。从超级表修改某个标签名后,该超级表下的所有子表也会自动更新该标签名。
|
||||||
|
- 与普通表一样,超级表的主键列不允许被修改,也不允许通过 ADD/DROP COLUMN 来添加或删除主键列。
|
||||||
|
|
||||||
### 增加列
|
### 增加列
|
||||||
|
|
||||||
|
|
|
@ -57,6 +57,7 @@ INSERT INTO
|
||||||
INSERT INTO d1001 USING meters TAGS('Beijing.Chaoyang', 2) VALUES('a');
|
INSERT INTO d1001 USING meters TAGS('Beijing.Chaoyang', 2) VALUES('a');
|
||||||
```
|
```
|
||||||
6. 对于向多个子表插入数据的情况,依然会有部分数据写入失败,部分数据写入成功的情况。这是因为多个子表可能分布在不同的 VNODE 上,客户端将 INSERT 语句完整解析后,将数据发往各个涉及的 VNODE 上,每个 VNODE 独立进行写入操作。如果某个 VNODE 因为某些原因(比如网络问题或磁盘故障)导致写入失败,并不会影响其他 VNODE 节点的写入。
|
6. 对于向多个子表插入数据的情况,依然会有部分数据写入失败,部分数据写入成功的情况。这是因为多个子表可能分布在不同的 VNODE 上,客户端将 INSERT 语句完整解析后,将数据发往各个涉及的 VNODE 上,每个 VNODE 独立进行写入操作。如果某个 VNODE 因为某些原因(比如网络问题或磁盘故障)导致写入失败,并不会影响其他 VNODE 节点的写入。
|
||||||
|
7. 主键列值必须指定且不能为 NULL。
|
||||||
|
|
||||||
**正常语法说明**
|
**正常语法说明**
|
||||||
|
|
||||||
|
|
|
@ -39,7 +39,7 @@ select_expr: {
|
||||||
|
|
||||||
from_clause: {
|
from_clause: {
|
||||||
table_reference [, table_reference] ...
|
table_reference [, table_reference] ...
|
||||||
| join_clause [, join_clause] ...
|
| table_reference join_clause [, join_clause] ...
|
||||||
}
|
}
|
||||||
|
|
||||||
table_reference:
|
table_reference:
|
||||||
|
@ -52,7 +52,7 @@ table_expr: {
|
||||||
}
|
}
|
||||||
|
|
||||||
join_clause:
|
join_clause:
|
||||||
table_reference [INNER] JOIN table_reference ON condition
|
[INNER|LEFT|RIGHT|FULL] [OUTER|SEMI|ANTI|ASOF|WINDOW] JOIN table_reference [ON condition] [WINDOW_OFFSET(start_offset, end_offset)] [JLIMIT jlimit_num]
|
||||||
|
|
||||||
window_clause: {
|
window_clause: {
|
||||||
SESSION(ts_col, tol_val)
|
SESSION(ts_col, tol_val)
|
||||||
|
@ -410,7 +410,9 @@ SELECT AVG(CASE WHEN voltage < 200 or voltage > 250 THEN 220 ELSE voltage END) F
|
||||||
|
|
||||||
## JOIN 子句
|
## JOIN 子句
|
||||||
|
|
||||||
TDengine 支持基于时间戳主键的内连接,即 JOIN 条件必须包含时间戳主键。只要满足基于时间戳主键这个要求,普通表、子表、超级表和子查询之间可以随意的进行内连接,且对表个数没有限制,其它连接条件与主键间必须是 AND 操作。
|
在 3.3.0.0 版本之前 TDengine 只支持内连接,自 3.3.0.0 版本起 TDengine 支持了更为广泛的 JOIN 类型,这其中既包括传统数据库中的 LEFT JOIN、RIGHT JOIN、FULL JOIN、SEMI JOIN、ANTI-SEMI JOIN,也包括时序库中特色的 ASOF JOIN、WINDOW JOIN。JOIN 操作支持在子表、普通表、超级表以及子查询间进行。
|
||||||
|
|
||||||
|
### 示例
|
||||||
|
|
||||||
普通表与普通表之间的 JOIN 操作:
|
普通表与普通表之间的 JOIN 操作:
|
||||||
|
|
||||||
|
@ -420,23 +422,23 @@ FROM temp_tb_1 t1, pressure_tb_1 t2
|
||||||
WHERE t1.ts = t2.ts
|
WHERE t1.ts = t2.ts
|
||||||
```
|
```
|
||||||
|
|
||||||
超级表与超级表之间的 JOIN 操作:
|
超级表与超级表之间的 LEFT JOIN 操作:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT *
|
SELECT *
|
||||||
FROM temp_stable t1, temp_stable t2
|
FROM temp_stable t1 LEFT JOIN temp_stable t2
|
||||||
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
ON t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
||||||
```
|
```
|
||||||
|
|
||||||
子表与超级表之间的 JOIN 操作:
|
子表与超级表之间的 LEFT ASOF JOIN 操作:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT *
|
SELECT *
|
||||||
FROM temp_ctable t1, temp_stable t2
|
FROM temp_ctable t1 LEFT ASOF JOIN temp_stable t2
|
||||||
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
ON t1.ts = t2.ts AND t1.deviceid = t2.deviceid;
|
||||||
```
|
```
|
||||||
|
|
||||||
类似地,也可以对多个子查询的查询结果进行 JOIN 操作。
|
更多 JOIN 操作相关介绍参见页面 [TDengine 关联查询](../join)
|
||||||
|
|
||||||
## 嵌套查询
|
## 嵌套查询
|
||||||
|
|
||||||
|
|
|
@ -34,6 +34,13 @@ SELECT * FROM information_schema.INS_INDEXES
|
||||||
|
|
||||||
也可以为上面的查询语句加上过滤条件以缩小查询范围。
|
也可以为上面的查询语句加上过滤条件以缩小查询范围。
|
||||||
|
|
||||||
|
或者通过 SHOW 命令查看指定表上的索引
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SHOW INDEXES FROM tbl_name [FROM db_name];
|
||||||
|
SHOW INDEXES FROM [db_name.]tbl_name;
|
||||||
|
```
|
||||||
|
|
||||||
## 使用说明
|
## 使用说明
|
||||||
|
|
||||||
1. 索引使用得当能够提升数据过滤的效率,目前支持的过滤算子有 `=`, `>`, `>=`, `<`, `<=`。如果查询过滤条件中使用了这些算子,则索引能够明显提升查询效率。但如果查询过滤条件中使用的是其它算子,则索引起不到作用,查询效率没有变化。未来会逐步添加更多的算子。
|
1. 索引使用得当能够提升数据过滤的效率,目前支持的过滤算子有 `=`, `>`, `>=`, `<`, `<=`。如果查询过滤条件中使用了这些算子,则索引能够明显提升查询效率。但如果查询过滤条件中使用的是其它算子,则索引起不到作用,查询效率没有变化。未来会逐步添加更多的算子。
|
||||||
|
|
|
@ -503,38 +503,38 @@ TO_CHAR(ts, format_str_literal)
|
||||||
|
|
||||||
**支持的格式**
|
**支持的格式**
|
||||||
|
|
||||||
| **格式** | **说明**| **例子** |
|
| **格式** | **说明** | **例子** |
|
||||||
| --- | --- | --- |
|
| ------------------- | ----------------------------------------- | ------------------------- |
|
||||||
|AM,am,PM,pm| 无点分隔的上午下午 | 07:00:00am|
|
| AM,am,PM,pm | 无点分隔的上午下午 | 07:00:00am |
|
||||||
|A.M.,a.m.,P.M.,p.m.| 有点分隔的上午下午| 07:00:00a.m.|
|
| A.M.,a.m.,P.M.,p.m. | 有点分隔的上午下午 | 07:00:00a.m. |
|
||||||
|YYYY,yyyy|年, 4个及以上数字| 2023-10-10|
|
| YYYY,yyyy | 年, 4个及以上数字 | 2023-10-10 |
|
||||||
|YYY,yyy| 年, 最后3位数字| 023-10-10|
|
| YYY,yyy | 年, 最后3位数字 | 023-10-10 |
|
||||||
|YY,yy| 年, 最后2位数字| 23-10-10|
|
| YY,yy | 年, 最后2位数字 | 23-10-10 |
|
||||||
|Y,y|年, 最后一位数字| 3-10-10|
|
| Y,y | 年, 最后一位数字 | 3-10-10 |
|
||||||
|MONTH|月, 全大写| 2023-JANUARY-01|
|
| MONTH | 月, 全大写 | 2023-JANUARY-01 |
|
||||||
|Month|月, 首字母大写| 2023-January-01|
|
| Month | 月, 首字母大写 | 2023-January-01 |
|
||||||
|month|月, 全小写| 2023-january-01|
|
| month | 月, 全小写 | 2023-january-01 |
|
||||||
|MON| 月, 缩写, 全大写(三个字符)| JAN, SEP|
|
| MON | 月, 缩写, 全大写(三个字符) | JAN, SEP |
|
||||||
|Mon| 月, 缩写, 首字母大写| Jan, Sep|
|
| Mon | 月, 缩写, 首字母大写 | Jan, Sep |
|
||||||
|mon|月, 缩写, 全小写| jan, sep|
|
| mon | 月, 缩写, 全小写 | jan, sep |
|
||||||
|MM,mm|月, 数字 01-12|2023-01-01|
|
| MM,mm | 月, 数字 01-12 | 2023-01-01 |
|
||||||
|DD,dd|月日, 01-31||
|
| DD,dd | 月日, 01-31 | |
|
||||||
|DAY|周日, 全大写|MONDAY|
|
| DAY | 周日, 全大写 | MONDAY |
|
||||||
|Day|周日, 首字符大写|Monday|
|
| Day | 周日, 首字符大写 | Monday |
|
||||||
|day|周日, 全小写|monday|
|
| day | 周日, 全小写 | monday |
|
||||||
|DY|周日, 缩写, 全大写|MON|
|
| DY | 周日, 缩写, 全大写 | MON |
|
||||||
|Dy|周日, 缩写, 首字符大写|Mon|
|
| Dy | 周日, 缩写, 首字符大写 | Mon |
|
||||||
|dy|周日, 缩写, 全小写|mon|
|
| dy | 周日, 缩写, 全小写 | mon |
|
||||||
|DDD|年日, 001-366||
|
| DDD | 年日, 001-366 | |
|
||||||
|D,d|周日, 数字, 1-7, Sunday(1) to Saturday(7)||
|
| D,d | 周日, 数字, 1-7, Sunday(1) to Saturday(7) | |
|
||||||
|HH24,hh24|小时, 00-23|2023-01-30 23:59:59|
|
| HH24,hh24 | 小时, 00-23 | 2023-01-30 23:59:59 |
|
||||||
|hh12,HH12, hh, HH| 小时, 01-12|2023-01-30 12:59:59PM|
|
| hh12,HH12, hh, HH | 小时, 01-12 | 2023-01-30 12:59:59PM |
|
||||||
|MI,mi|分钟, 00-59||
|
| MI,mi | 分钟, 00-59 | |
|
||||||
|SS,ss|秒, 00-59||
|
| SS,ss | 秒, 00-59 | |
|
||||||
|MS,ms|毫秒, 000-999||
|
| MS,ms | 毫秒, 000-999 | |
|
||||||
|US,us|微秒, 000000-999999||
|
| US,us | 微秒, 000000-999999 | |
|
||||||
|NS,ns|纳秒, 000000000-999999999||
|
| NS,ns | 纳秒, 000000000-999999999 | |
|
||||||
|TZH,tzh|时区小时|2023-01-30 11:59:59PM +08|
|
| TZH,tzh | 时区小时 | 2023-01-30 11:59:59PM +08 |
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
- `Month`, `Day`等的输出格式是左对齐的, 右侧添加空格, 如`2023-OCTOBER -01`, `2023-SEPTEMBER-01`, 9月是月份中英文字母数最长的, 因此9月没有空格. 星期类似.
|
- `Month`, `Day`等的输出格式是左对齐的, 右侧添加空格, 如`2023-OCTOBER -01`, `2023-SEPTEMBER-01`, 9月是月份中英文字母数最长的, 因此9月没有空格. 星期类似.
|
||||||
|
@ -957,6 +957,7 @@ FIRST(expr)
|
||||||
- 如果要返回各个列的首个(时间戳最小)非 NULL 值,可以使用 FIRST(\*);查询超级表,且multiResultFunctionStarReturnTags设置为 0 (默认值) 时,FIRST(\*)只返回超级表的普通列;设置为 1 时,返回超级表的普通列和标签列。
|
- 如果要返回各个列的首个(时间戳最小)非 NULL 值,可以使用 FIRST(\*);查询超级表,且multiResultFunctionStarReturnTags设置为 0 (默认值) 时,FIRST(\*)只返回超级表的普通列;设置为 1 时,返回超级表的普通列和标签列。
|
||||||
- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 NULL;
|
- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 NULL;
|
||||||
- 如果结果集中所有列全部为 NULL 值,则不返回结果。
|
- 如果结果集中所有列全部为 NULL 值,则不返回结果。
|
||||||
|
- 对于存在复合主键的表的查询,若最小时间戳的数据有多条,则只有对应的复合主键最小的数据被返回。
|
||||||
|
|
||||||
### INTERP
|
### INTERP
|
||||||
|
|
||||||
|
@ -989,6 +990,7 @@ ignore_null_values: {
|
||||||
- INTERP 作用于超级表时, 会将该超级表下的所有子表数据按照主键列排序后进行插值计算,也可以搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。
|
- INTERP 作用于超级表时, 会将该超级表下的所有子表数据按照主键列排序后进行插值计算,也可以搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。
|
||||||
- INTERP 可以与伪列 _irowts 一起使用,返回插值点所对应的时间戳(3.0.2.0版本以后支持)。
|
- INTERP 可以与伪列 _irowts 一起使用,返回插值点所对应的时间戳(3.0.2.0版本以后支持)。
|
||||||
- INTERP 可以与伪列 _isfilled 一起使用,显示返回结果是否为原始记录或插值算法产生的数据(3.0.3.0版本以后支持)。
|
- INTERP 可以与伪列 _isfilled 一起使用,显示返回结果是否为原始记录或插值算法产生的数据(3.0.3.0版本以后支持)。
|
||||||
|
- INTERP 对于带复合主键的表的查询,若存在相同时间戳的数据,则只有对应的复合主键最小的数据参与运算。
|
||||||
|
|
||||||
### LAST
|
### LAST
|
||||||
|
|
||||||
|
@ -1009,6 +1011,7 @@ LAST(expr)
|
||||||
- 如果要返回各个列的最后(时间戳最大)一个非 NULL 值,可以使用 LAST(\*);查询超级表,且multiResultFunctionStarReturnTags设置为 0 (默认值) 时,LAST(\*)只返回超级表的普通列;设置为 1 时,返回超级表的普通列和标签列。
|
- 如果要返回各个列的最后(时间戳最大)一个非 NULL 值,可以使用 LAST(\*);查询超级表,且multiResultFunctionStarReturnTags设置为 0 (默认值) 时,LAST(\*)只返回超级表的普通列;设置为 1 时,返回超级表的普通列和标签列。
|
||||||
- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 NULL;如果结果集中所有列全部为 NULL 值,则不返回结果。
|
- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 NULL;如果结果集中所有列全部为 NULL 值,则不返回结果。
|
||||||
- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。
|
- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。
|
||||||
|
- 对于存在复合主键的表的查询,若最大时间戳的数据有多条,则只有对应的复合主键最大的数据被返回。
|
||||||
|
|
||||||
|
|
||||||
### LAST_ROW
|
### LAST_ROW
|
||||||
|
@ -1029,6 +1032,7 @@ LAST_ROW(expr)
|
||||||
- 如果要返回各个列的最后一条记录(时间戳最大),可以使用 LAST_ROW(\*);查询超级表,且multiResultFunctionStarReturnTags设置为 0 (默认值) 时,LAST_ROW(\*)只返回超级表的普通列;设置为 1 时,返回超级表的普通列和标签列。
|
- 如果要返回各个列的最后一条记录(时间戳最大),可以使用 LAST_ROW(\*);查询超级表,且multiResultFunctionStarReturnTags设置为 0 (默认值) 时,LAST_ROW(\*)只返回超级表的普通列;设置为 1 时,返回超级表的普通列和标签列。
|
||||||
- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。
|
- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。
|
||||||
- 不能与 INTERVAL 一起使用。
|
- 不能与 INTERVAL 一起使用。
|
||||||
|
- 与 LAST 函数一样,对于存在复合主键的表的查询,若最大时间戳的数据有多条,则只有对应的复合主键最大的数据被返回。
|
||||||
|
|
||||||
### MAX
|
### MAX
|
||||||
|
|
||||||
|
@ -1135,7 +1139,7 @@ TOP(expr, k)
|
||||||
UNIQUE(expr)
|
UNIQUE(expr)
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:返回该列数据首次出现的值。该函数功能与 distinct 相似。
|
**功能说明**:返回该列数据首次出现的值。该函数功能与 distinct 相似。对于存在复合主键的表的查询,若最小时间戳的数据有多条,则只有对应的复合主键最小的数据被返回。
|
||||||
|
|
||||||
**返回数据类型**:同应用的字段。
|
**返回数据类型**:同应用的字段。
|
||||||
|
|
||||||
|
@ -1181,7 +1185,7 @@ ignore_negative: {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:统计表中某列数值的单位变化率。其中单位时间区间的长度可以通过 time_interval 参数指定,最小可以是 1 秒(1s);ignore_negative 参数的值可以是 0 或 1,为 1 时表示忽略负值。
|
**功能说明**:统计表中某列数值的单位变化率。其中单位时间区间的长度可以通过 time_interval 参数指定,最小可以是 1 秒(1s);ignore_negative 参数的值可以是 0 或 1,为 1 时表示忽略负值。对于存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
||||||
|
|
||||||
**返回数据类型**:DOUBLE。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
|
@ -1204,7 +1208,7 @@ ignore_negative: {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:统计表中某列的值与前一行对应值的差。 ignore_negative 取值为 0|1 , 可以不填,默认值为 0. 不忽略负值。ignore_negative 为 1 时表示忽略负数。
|
**功能说明**:统计表中某列的值与前一行对应值的差。 ignore_negative 取值为 0|1 , 可以不填,默认值为 0. 不忽略负值。ignore_negative 为 1 时表示忽略负数。对于存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
||||||
|
|
||||||
**返回数据类型**:同应用字段。
|
**返回数据类型**:同应用字段。
|
||||||
|
|
||||||
|
@ -1224,7 +1228,7 @@ ignore_negative: {
|
||||||
IRATE(expr)
|
IRATE(expr)
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:计算瞬时增长率。使用时间区间中最后两个样本数据来计算瞬时增长速率;如果这两个值呈递减关系,那么只取最后一个数用于计算,而不是使用二者差值。
|
**功能说明**:计算瞬时增长率。使用时间区间中最后两个样本数据来计算瞬时增长速率;如果这两个值呈递减关系,那么只取最后一个数用于计算,而不是使用二者差值。对于存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
||||||
|
|
||||||
**返回数据类型**:DOUBLE。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
|
@ -1314,7 +1318,7 @@ STATEDURATION(expr, oper, val, unit)
|
||||||
TWA(expr)
|
TWA(expr)
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:时间加权平均函数。统计表中某列在一段时间内的时间加权平均。
|
**功能说明**:时间加权平均函数。统计表中某列在一段时间内的时间加权平均。对于存在复合主键的表的查询,若时间戳相同的数据存在多条,则只有对应的复合主键最小的数据参与运算。
|
||||||
|
|
||||||
**返回数据类型**:DOUBLE。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
|
|
|
@ -8,7 +8,7 @@ description: 流式计算的相关 SQL 的详细语法
|
||||||
## 创建流式计算
|
## 创建流式计算
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
|
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
|
||||||
stream_options: {
|
stream_options: {
|
||||||
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
||||||
WATERMARK time
|
WATERMARK time
|
||||||
|
@ -30,9 +30,9 @@ subquery: SELECT select_list
|
||||||
[window_clause]
|
[window_clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
支持会话窗口、状态窗口、滑动窗口、事件窗口和计数窗口,其中,状态窗口、事件窗口和计数窗口搭配超级表时必须与partition by tbname一起使用
|
支持会话窗口、状态窗口、滑动窗口、事件窗口和计数窗口,其中,状态窗口、事件窗口和计数窗口搭配超级表时必须与partition by tbname一起使用。对于数据源表是复合主键的流,不支持状态窗口、事件窗口、计数窗口的计算。
|
||||||
|
|
||||||
stb_name 是保存计算结果的超级表的表名,如果该超级表不存在,会自动创建;如果已存在,则检查列的schema信息。详见 写入已存在的超级表
|
stb_name 是保存计算结果的超级表的表名,如果该超级表不存在,会自动创建;如果已存在,则检查列的schema信息。详见 写入已存在的超级表。
|
||||||
|
|
||||||
TAGS 子句定义了流计算中创建TAG的规则,可以为每个partition对应的子表生成自定义的TAG值,详见 自定义TAG
|
TAGS 子句定义了流计算中创建TAG的规则,可以为每个partition对应的子表生成自定义的TAG值,详见 自定义TAG
|
||||||
```sql
|
```sql
|
||||||
|
|
|
@ -1,66 +1,132 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 索引
|
sidebar_label: 窗口预聚集
|
||||||
title: 索引
|
title: 窗口预聚集
|
||||||
description: 索引功能的使用细节
|
description: 窗口预聚集使用说明
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 tag 索引。
|
为了提高大数据量的聚合函数查询性能,通过创建窗口预聚集 (TSMA Time-Range Small Materialized Aggregates) 对象, 使用固定时间窗口对指定的聚集函数进行预计算,并将计算结果存储下来,查询时通过查询预计算结果以提高查询性能。
|
||||||
|
|
||||||
## 创建索引
|
## 创建TSMA
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
-- 创建基于超级表或普通表的tsma
|
||||||
|
CREATE TSMA tsma_name ON [dbname.]table_name FUNCTION (func_name(func_param) [, ...] ) INTERVAL(time_duration);
|
||||||
|
-- 创建基于小窗口tsma的大窗口tsma
|
||||||
|
CREATE RECURSIVE TSMA tsma_name ON [db_name.]tsma_name1 INTERVAL(time_duration);
|
||||||
|
|
||||||
CREATE INDEX index_name ON tb_name index_option
|
time_duration:
|
||||||
|
number unit
|
||||||
CREATE SMA INDEX index_name ON tb_name index_option
|
|
||||||
|
|
||||||
index_option:
|
|
||||||
FUNCTION(functions) INTERVAL(interval_val [, interval_offset]) [SLIDING(sliding_val)] [WATERMARK(watermark_val)] [MAX_DELAY(max_delay_val)]
|
|
||||||
|
|
||||||
functions:
|
|
||||||
function [, function] ...
|
|
||||||
```
|
```
|
||||||
### tag 索引
|
|
||||||
|
|
||||||
[tag 索引](../tag-index)
|
创建 TSMA 时需要指定 TSMA 名字, 表名字, 函数列表以及窗口大小. 当基于已有 TSMA 创建 TSMA 时(即使用 `RECURSIVE` 关键字时), 不需要指定 `FUNCTION()`, 将创建与已有 TSMA 相同的函数列表的 TSMA, 且 INTERVAL 必须为所基于的 TSMA 窗口的整数倍。
|
||||||
|
|
||||||
### SMA 索引
|
其中 TSMA 命名规则与表名字类似, 长度最大限制为表名长度限制减去输出表后缀长度, 表名长度限制为193, 输出表后缀为`_tsma_res_stb_`, TSMA 名字最大长度为178.
|
||||||
|
|
||||||
对指定列按 INTERVAL 子句定义的时间窗口创建进行预聚合计算,预聚合计算类型由 functions_string 指定。SMA 索引能提升指定时间段的聚合查询的性能。目前,限制一个超级表只能创建一个 SMA INDEX。
|
TSMA只能基于超级表和普通表创建, 不能基于子表创建.
|
||||||
|
|
||||||
- 支持的函数包括 MAX、MIN 和 SUM。
|
函数列表中只能指定支持的聚集函数(见下文), 并且函数参数必须为1个, 即使当前函数支持多个参数, 函数参数内必须为普通列名, 不能为标签列. 函数列表中完全相同的函数和列会被去重, 如同时创建两个avg(c1), 则只会计算一个输出. TSMA 计算时将会把所有`函数中间结果`都输出到另一张超级表中, 输出超级表还包含了原始表的所有tag列. 函数列表中函数个数最多支持创建表最大列个数(包括tag列)减去 TSMA 计算附加的四列, 分别为`_wstart`, `_wend`, `_wduration`, 以及一个新增tag列 `tbname`, 再减去原始表的tag列数. 若列个数超出限制, 会报`Too many columns`错误.
|
||||||
- WATERMARK: 最小单位毫秒,取值范围 [0ms, 900000ms],默认值为 5 秒,只可用于超级表。
|
|
||||||
- MAX_DELAY: 最小单位毫秒,取值范围 [1ms, 900000ms],默认值为 interval 的值(但不能超过最大值),只可用于超级表。注:不建议 MAX_DELAY 设置太小,否则会过于频繁的推送结果,影响存储和查询性能,如无特殊需求,取默认值即可。
|
由于TSMA输出为一张超级表, 因此输出表的行长度受最大行长度限制, 不同函数的`中间结果`大小各异, 一般都大于原始数据大小, 若输出表的行长度大于最大行长度限制, 将会报`Row length exceeds max length`错误. 此时需要减少函数个数或者将常用的函数进行分组拆分到多个TSMA中.
|
||||||
|
|
||||||
|
窗口大小的限制为[1ms ~ 1h]. INTERVAL 的单位与查询中 INTERVAL 子句相同, 如 a (毫秒), b (纳秒), h (小时), m (分钟), s (秒), u (微秒).
|
||||||
|
|
||||||
|
TSMA为库内对象, 但名字全局唯一. 集群内一共可创建TSMA个数受参数`maxTsmaNum`限制, 参数默认值为8, 范围: [0-12]. 注意, 由于TSMA后台计算使用流计算, 因此每创建一条TSMA, 将会创建一条流, 因此能够创建的TSMA条数也受当前已经存在的流条数和最大可创建流条数限制.
|
||||||
|
|
||||||
|
## 支持的函数列表
|
||||||
|
| 函数| 备注 |
|
||||||
|
|---|---|
|
||||||
|
|min||
|
||||||
|
|max||
|
||||||
|
|sum||
|
||||||
|
|first||
|
||||||
|
|last||
|
||||||
|
|avg||
|
||||||
|
|count| 若想使用count(*), 则应创建count(ts)函数|
|
||||||
|
|spread||
|
||||||
|
|stddev||
|
||||||
|
|hyperloglog||
|
||||||
|
|||
|
||||||
|
|
||||||
|
## 删除TSMA
|
||||||
|
```sql
|
||||||
|
DROP TSMA [db_name.]tsma_name;
|
||||||
|
```
|
||||||
|
若存在其他TSMA基于当前被删除TSMA创建, 则删除操作报`Invalid drop base tsma, drop recursive tsma first`错误. 因此需先删除 所有Recursive TSMA.
|
||||||
|
|
||||||
|
## TSMA的计算
|
||||||
|
TSMA的计算结果为与原始表相同库下的一张超级表, 此表用户不可见. 不可删除, 在`DROP TSMA`时自动删除. TSMA的计算是通过流计算完成的, 此过程为后台异步过程, TSMA的计算结果不保证实时性, 但可以保证最终正确性.
|
||||||
|
|
||||||
|
当存在大量历史数据时, 创建TSMA之后, 流计算将会首先计算历史数据, 此期间新创建的TSMA不会被使用. 数据更新删除或者过期数据到来时自动重新计算影响部分数据。 在重新计算期间 TSMA 查询结果不保证实时性。若希望查询实时数据, 可以通过在 SQL 中添加 hint `/*+ skip_tsma() */` 或者关闭参数`querySmaOptimize`从原始数据查询。
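例如, 希望绕过 TSMA 结果、直接查询实时原始数据时, 可以按上文所述使用 hint 或关闭参数(示意):

```sql
SELECT /*+ skip_tsma() */ COUNT(*) FROM stable;
-- 或者在当前连接上关闭 TSMA 查询优化:
ALTER LOCAL 'querySmaOptimize' '0';
```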
|
||||||
|
|
||||||
|
## TSMA的使用与限制
|
||||||
|
|
||||||
|
客户端配置参数: `querySmaOptimize`, 用于控制查询时是否使用TSMA, `True`为使用, `False`为不使用即从原始数据查询.
|
||||||
|
|
||||||
|
客户端配置参数:`maxTsmaCalcDelay`,单位 s,用于控制用户可以接受的 TSMA 计算延迟,若 TSMA 的计算进度与最新时间差距在此范围内, 则该 TSMA 将会被使用, 若超出该范围, 则不使用, 默认值: 600(10 分钟), 最小值: 600(10 分钟), 最大值: 86400(1 天).
|
||||||
|
|
||||||
|
### 查询时使用TSMA
|
||||||
|
|
||||||
|
已在 TSMA 中定义的 agg 函数在大部分查询场景下都可直接使用, 若存在多个可用的 TSMA, 优先使用大窗口的 TSMA, 未闭合窗口通过查询小窗口TSMA或者原始数据计算。 同时也有某些场景不能使用 TSMA(见下文)。 不可用时整个查询将使用原始数据进行计算。
|
||||||
|
|
||||||
|
未指定窗口大小的查询语句默认优先使用包含所有查询聚合函数的最大窗口 TSMA 进行数据的计算。 如`SELECT COUNT(*) FROM stable GROUP BY tbname`将会使用包含count(ts)且窗口最大的TSMA。因此若使用聚合查询频率高时, 应当尽可能创建大窗口的TSMA.
|
||||||
|
|
||||||
|
指定窗口大小时即 `INTERVAL` 语句,使用最大的可整除窗口 TSMA。 窗口查询中, `INTERVAL` 的窗口大小, `OFFSET` 以及 `SLIDING` 都影响能使用的 TSMA 窗口大小, 可整除窗口 TSMA 即 TSMA 窗口大小可被查询语句的 `INTERVAL, OFFSET, SLIDING` 整除的窗口。因此若使用窗口查询较多时, 需要考虑经常查询的窗口大小, 以及 offset, sliding 大小来创建 TSMA.
|
||||||
|
|
||||||
|
例 1. 如 创建 TSMA 窗口大小 `5m` 一条, `10m` 一条, 查询时 `INTERVAL(30m)`, 那么优先使用 `10m` 的 TSMA, 若查询为 `INTERVAL(30m, 10m) SLIDING(5m)`, 那么仅可使用 `5m` 的 TSMA 查询。
|
||||||
|
|
||||||
|
|
||||||
|
### Query Limitations

With `querySmaOptimize` enabled and no `skip_tsma()` hint, a TSMA cannot be used in the following scenarios:

- The aggregate functions defined in the TSMA do not cover the function list of the current query.
- The query uses a window other than `INTERVAL`, or the `INTERVAL` query window (including `INTERVAL`, `SLIDING`, `OFFSET`) is not an integer multiple of the TSMA's window. For example, with a TSMA window of 2m, a 5-minute query window cannot use it; but if a 1m TSMA also exists, that one can be used.
- The `WHERE` clause contains a filter on any regular column (any non-primary-key timestamp column).
- `PARTITION BY` or `GROUP BY` contains any regular column or an expression over one.
- A faster optimization path applies, such as the last-row cache: if the query qualifies for the last-row optimization, that path is taken first, and TSMA optimization is considered only when the last-row path is not applicable.
- The TSMA's current calculation delay exceeds the configuration parameter `maxTsmaCalcDelay`.

Some examples:
```sql
SELECT agg_func_list [, pseudo_col_list] FROM stable WHERE exprs [GROUP/PARTITION BY [tbname] [, tag_list]] [HAVING ...] [INTERVAL(time_duration, offset) SLIDING(duration)]...;

-- create
CREATE TSMA tsma1 ON stable FUNCTION(COUNT(ts), SUM(c1), SUM(c3), MIN(c1), MIN(c3), AVG(c1)) INTERVAL(1m);
-- query
SELECT COUNT(*), SUM(c1) + SUM(c3) FROM stable; ---- use tsma1
SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma1
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h); ---use tsma1
SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use, spread func not defined, although SPREAD can be calculated by MIN and MAX which are defined.
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1, time_duration not fit. Normally, query_time_duration should be multiple of create_duration.
SELECT COUNT(*), MIN(c1) FROM stable WHERE c2 > 0; ---- can't use tsma1, can't do c2 filtering
SELECT COUNT(*) FROM stable GROUP BY c2; ---- can't use any tsma
SELECT MIN(c3), MIN(c2) FROM stable INTERVAL(1m); ---- can't use tsma1, c2 is not defined in tsma1.

-- Another tsma2 created with INTERVAL(1h) based on tsma1
CREATE RECURSIVE TSMA tsma2 ON tsma1 INTERVAL(1h);
SELECT COUNT(*), SUM(c1) FROM stable; ---- use tsma2
SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma2
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(2h); ---use tsma2
SELECT COUNT(*), MIN(c1) FROM stable WHERE ts < '2023-01-01 10:10:10' INTERVAL(30m); --use tsma1
SELECT COUNT(*), MIN(c1) + MIN(c3) FROM stable INTERVAL(30m); ---use tsma1
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h) SLIDING(30m); ---use tsma1
SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use tsma1 or tsma2, spread func not defined
SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1 or tsma2, time_duration not fit. Normally, query_time_duration should be multiple of create_duration.
SELECT COUNT(*), MIN(c1) FROM stable WHERE c2 > 0; ---- can't use tsma1 or tsma2, can't do c2 filtering
```
### Usage Limitations

After a TSMA is created, the following operations on the source supertable are restricted:

- The supertable can be dropped only after all of its TSMAs have been dropped.
- None of the source table's tag columns can be dropped, nor can tag column names or subtable tag values be modified; the TSMAs must be dropped before a tag column can be dropped.
- Columns used by a TSMA cannot be dropped until the TSMA is dropped. Adding columns is unaffected, but newly added columns are not part of any existing TSMA; to aggregate over them, create a new TSMA.
## Viewing TSMAs

```sql
SHOW [db_name.]TSMAS;
SELECT * FROM information_schema.ins_tsma;
```

If a TSMA was created with many functions and long column names, the function list in the output may be truncated (the current maximum supported output is 256 KB).
@ -0,0 +1,290 @@
---
sidebar_label: Join Queries
title: Join Queries
description: Detailed description of join queries
---
## Join Concepts

### Driving Table

The table that drives the join. In the Left Join family the left table is the driving table; in the Right Join family the right table is the driving table.
### Join Conditions

Join conditions are the conditions specified for the join. All joins supported by TDengine require join conditions, which normally appear only after `ON` (Inner Join and Window Join are exceptions). Semantically, conditions after `WHERE` in an Inner Join can also be treated as join conditions, while Window Join specifies its join condition via `WINDOW_OFFSET`.

Except for ASOF Join, all join types supported by TDengine require the join condition to be specified explicitly. ASOF Join has an implicit default join condition, so (when the default meets the requirements) it may be omitted.

For join types other than ASOF/Window Join, the join condition can include any number of additional conditions besides the primary join condition. The primary join condition and the additional conditions must be combined with `AND`, while there is no such restriction among the additional conditions themselves. Additional conditions can include any logical combination of primary key columns, tags, regular columns, constants, and their scalar functions or operations.

Taking smart meters as an example, the following SQL statements all contain valid join conditions:
```sql
SELECT a.* FROM meters a LEFT JOIN meters b ON a.ts = b.ts AND a.ts > '2023-10-18 10:00:00.000';
SELECT a.* FROM meters a LEFT JOIN meters b ON a.ts = b.ts AND (a.ts > '2023-10-18 10:00:00.000' OR a.ts < '2023-10-17 10:00:00.000');
SELECT a.* FROM meters a LEFT JOIN meters b ON timetruncate(a.ts, 1s) = timetruncate(b.ts, 1s) AND (a.ts + 1s > '2023-10-18 10:00:00.000' OR a.groupId > 0);
SELECT a.* FROM meters a LEFT ASOF JOIN meters b ON timetruncate(a.ts, 1s) < timetruncate(b.ts, 1s) AND a.groupId = b.groupId;
```
### Primary Join Condition

As a time-series database, all of TDengine's joins revolve around the primary key timestamp column. All joins other than ASOF/Window Join must therefore contain an equality condition on the primary key column, and the first primary-key equality condition that appears in the join conditions is treated as the primary join condition. ASOF Join's primary join condition may be a non-equality condition, while Window Join's primary join condition is specified via `WINDOW_OFFSET`.

Except for Window Join, TDengine supports applying the `timetruncate` function in the primary join condition, e.g. `ON timetruncate(a.ts, 1s) = timetruncate(b.ts, 1s)`; other functions and scalar operations are not currently supported there.
### Grouping Conditions

ASOF/Window Join, characteristic of time-series databases, supports grouping the join's input data and then performing the join within each group. Grouping applies only to the input; the output contains no grouping information. Equality conditions appearing after `ON` in ASOF/Window Join (other than ASOF's primary join condition) are treated as grouping conditions.
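For instance (a sketch over the smart-meter schema, where `groupId` is a tag column), an ASOF Join grouped per meter group might look like:

```sql
-- a.groupId = b.groupId is the grouping condition;
-- a.ts > b.ts is the primary join condition, applied within each group
SELECT a.ts, a.voltage, b.voltage
FROM meters a LEFT ASOF JOIN meters b
ON a.ts > b.ts AND a.groupId = b.groupId;
```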
### Primary Key Timeline

As a time-series database, TDengine requires every table (and subtable) to have a primary key timestamp column, which serves as the table's primary key timeline for many time-related computations. The result of a subquery or of a join must likewise identify which column is treated as the primary key timeline for subsequent time-related computations. In a subquery, the first ordered primary key column (or an operation on it) or an equivalent pseudo column (`_wstart`/`_wend`) appearing in the result set is treated as the output table's primary key timeline. The primary key timeline of join output is chosen by the following rules:

- In the Left/Right Join family, the primary key column of the driving table (or subquery) serves as the primary key timeline for subsequent queries. In addition, within a Window Join window, since both tables are simultaneously ordered, either table's primary key column can serve as the primary key timeline inside the window, with the table's own primary key column preferred.
- Inner Join can use either table's primary key column as the primary key timeline. When a grouping-like condition exists (a tag-column equality condition combined with the primary join condition via `AND`), no primary key timeline can be produced.
- Full Join cannot produce any valid primary key time series, so it has no primary key timeline, which means timeline-dependent computations cannot be performed with Full Join.
## Syntax Conventions

In the following sections, the Left/Right Join families are described together. Throughout the descriptions of the Outer, Semi, Anti-Semi, ASOF, and Window joins, the notation "left/right" covers both Left Join and Right Join at once: the part before the "/" applies to Left Join, and the part after the "/" applies to Right Join.

For example, "left/right table" means the "left table" for Left Join and the "right table" for Right Join.

Likewise, "right/left table" means the "right table" for Left Join and the "left table" for Right Join.
## Join Features

### Inner Join

#### Definition

Inner join: only rows that satisfy the join condition in both tables are returned. It can be viewed as the intersection of the two tables' data that satisfies the join condition.

#### Syntax

```sql
SELECT ... FROM table_name1 [INNER] JOIN table_name2 [ON ...] [WHERE ...] [...]
```

or

```sql
SELECT ... FROM table_name1, table_name2 WHERE ... [...]
```
#### Result Set

The Cartesian product of the left and right table rows that satisfy the join condition.

#### Scope

Inner Join is supported between supertables, regular tables, subtables, and subqueries.

#### Notes

- In the first syntax, the `INNER` keyword is optional. The primary join condition and additional join conditions can be specified in `ON` and/or `WHERE`, and filter conditions can also be specified in `WHERE`; at least one of `ON`/`WHERE` must be specified.
- In the second syntax, the primary join condition, additional join conditions, and filter conditions can all be specified in `WHERE`.
- When performing an Inner Join on supertables, tag-column equality conditions combined with the primary join condition via `AND` are used as grouping-like conditions, so the output is not guaranteed to be ordered.
#### Example

The moments when both table d1001 and table d1002 have voltages greater than 220V, with their respective voltages:

```sql
SELECT a.ts, a.voltage, b.voltage FROM d1001 a JOIN d1002 b ON a.ts = b.ts AND a.voltage > 220 AND b.voltage > 220
```
### Left/Right Outer Join

#### Definition

Left/right (outer) join: includes both the set of rows from both tables that satisfy the join condition and the set of rows in the left/right table that do not.

#### Syntax

```sql
SELECT ... FROM table_name1 LEFT|RIGHT [OUTER] JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result Set

The Inner Join result set, plus rows formed from the left/right table rows that do not satisfy the join condition combined with null (`NULL`) data for the right/left table.

#### Scope

Left/Right Join is supported between supertables, regular tables, subtables, and subqueries.

#### Notes

- The OUTER keyword is optional.

#### Example

All voltages recorded in table d1001, plus the moments when table d1002 simultaneously has a voltage greater than 220V, with their respective voltages:

```sql
SELECT a.ts, a.voltage, b.voltage FROM d1001 a LEFT JOIN d1002 b ON a.ts = b.ts AND a.voltage > 220 AND b.voltage > 220
```
### Left/Right Semi Join

#### Definition

Left/right semi join: typically expresses the meaning of `IN`/`EXISTS`, i.e. a left/right table row is returned only when at least one row in the right/left table satisfies the join condition.

#### Syntax

```sql
SELECT ... FROM table_name1 LEFT|RIGHT SEMI JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result Set

The set of rows formed from the left/right table rows that satisfy the join condition, each combined with any one matching row from the right/left table.

#### Scope

Left/Right Semi Join is supported between supertables, regular tables, subtables, and subqueries.

#### Example

The moments when table d1001 has a voltage greater than 220V and some other meter also has a voltage greater than 220V at the same moment:

```sql
SELECT a.ts FROM d1001 a LEFT SEMI JOIN meters b ON a.ts = b.ts AND a.voltage > 220 AND b.voltage > 220 AND b.tbname != 'd1001'
```
### Left/Right Anti-Semi Join

#### Definition

Left/right anti-semi join: the exact opposite of the left/right semi join, typically expressing the meaning of `NOT IN`/`NOT EXISTS`, i.e. a left/right table row is returned only when no row in the right/left table satisfies the join condition.

#### Syntax

```sql
SELECT ... FROM table_name1 LEFT|RIGHT ANTI JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result Set

The set of rows formed from the left/right table rows that do not satisfy the join condition, combined with null (`NULL`) data for the right/left table.

#### Scope

Left/Right Anti-Semi Join is supported between supertables, regular tables, subtables, and subqueries.

#### Example

The moments when table d1001 has a voltage greater than 220V and no other meter has a voltage greater than 220V at the same moment:

```sql
SELECT a.ts FROM d1001 a LEFT ANTI JOIN meters b ON a.ts = b.ts AND b.voltage > 220 AND b.tbname != 'd1001' WHERE a.voltage > 220
```
### Left/Right ASOF Join

#### Definition

Left/right inexact-match join: unlike the exact-match mode of other traditional joins, ASOF Join allows inexact matching according to a specified matching mode, i.e. matching by the closest primary key timestamp.
#### Syntax

```sql
SELECT ... FROM table_name1 LEFT|RIGHT ASOF JOIN table_name2 [ON ...] [JLIMIT jlimit_num] [WHERE ...] [...]
```
#### Result Set

The Cartesian product of each left/right table row with up to `jlimit_num` rows from the right/left table that satisfy the join condition and are closest in primary key timestamp (after sorting by the primary key column), or null (`NULL`) data.

#### Scope

Left/Right ASOF Join is supported between supertables, regular tables, and subtables.
#### Notes

- ASOF Join is supported only between tables, not between subqueries.
- The ON clause supports a single matching rule (the primary join condition) on the primary key column or its `timetruncate` function result (other scalar operations and functions are not supported). The supported operators and their meanings are:

| **Operator** | **Meaning for Left ASOF** |
| :-------------: | ------------------------ |
| > | Match the rows in the right table whose primary key timestamp is less than the left table's and closest to it |
| >= | Match the rows in the right table whose primary key timestamp is less than or equal to the left table's and closest to it |
| = | Match the rows in the right table whose primary key timestamp equals the left table's |
| < | Match the rows in the right table whose primary key timestamp is greater than the left table's and closest to it |
| <= | Match the rows in the right table whose primary key timestamp is greater than or equal to the left table's and closest to it |

For Right ASOF, the meanings of the above operators are reversed.

- If there is no `ON` clause, or no primary key matching rule is specified in `ON`, the default matching operator is ">=", i.e. (for Left ASOF Join) the rows in the right table whose primary key timestamp is less than or equal to the left table's. Multiple primary join conditions are not supported.
- The `ON` clause may additionally specify equality conditions between tags or regular columns (scalar functions and operations are not supported) other than the primary key column, used for grouped computation; no other condition types are supported.
- Only `AND` is supported between ON conditions.
- `JLIMIT` specifies the maximum number of matched rows per row. It is optional; the default is 1, i.e. each left/right table row obtains at most one matching row from the right/left table. `JLIMIT` ranges over [0, 1024]. The `jlimit_num` matching rows need not share the same timestamp. When fewer than `jlimit_num` qualifying rows exist in the right/left table, the number of returned rows may be less than `jlimit_num`; when more than `jlimit_num` qualifying rows with identical timestamps exist, `jlimit_num` of them are returned at random.
#### Example

The moments when table d1001's voltage is greater than 220V and, at the same moment or at the last moment before it, table d1002's voltage was also greater than 220V, with their respective voltages:

```sql
SELECT a.ts, a.voltage, b.ts, b.voltage FROM d1001 a LEFT ASOF JOIN d1002 b ON a.ts >= b.ts WHERE a.voltage > 220 AND b.voltage > 220
```
### Left/Right Window Join

#### Definition

Left/right window join: constructs a window from each row's primary key timestamp in the left/right table plus the window boundaries, and performs the join per window. Projection, scalar, and aggregation operations are supported within each window.

#### Syntax

```sql
SELECT ... FROM table_name1 LEFT|RIGHT WINDOW JOIN table_name2 [ON ...] WINDOW_OFFSET(start_offset, end_offset) [JLIMIT jlimit_num] [WHERE ...] [...]
```
#### Result Set

The Cartesian product of each left/right table row with up to `jlimit_num` rows (or null `NULL` data) from the right/left table within the window defined by the left/right table's primary key timestamp and `WINDOW_OFFSET`; or

the set of rows formed from each left/right table row together with the aggregation result (or null `NULL` data) over up to `jlimit_num` rows from the right/left table within that window.

#### Scope

Left/Right Window Join is supported between supertables, regular tables, and subtables.
#### Notes

- Window Join is supported only between tables, not between subqueries.
- The `ON` clause is optional and supports only equality conditions between tags or regular columns (scalar functions and operations are not supported) other than the primary key column, used for grouped computation; only `AND` is supported between conditions.
- `WINDOW_OFFSET` specifies the offsets of the window's left and right boundaries relative to the left/right table's primary key timestamp, with time units supported, e.g. `WINDOW_OFFSET(-1a, 1a)`. For Left Window Join this means each window is [left table primary key timestamp - 1 millisecond, left table primary key timestamp + 1 millisecond], with both boundaries closed. The time unit after the number may be `b` (nanoseconds), `u` (microseconds), `a` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), or `w` (weeks); calendar months (`n`) and years (`y`) are not supported. The smallest supported unit is the database precision, and the databases of the left and right tables must have the same precision.
- `JLIMIT` specifies the maximum number of matched rows per window. It is optional; by default all matching rows in each window are fetched. `JLIMIT` ranges over [0, 1024]. When fewer than `jlimit_num` qualifying rows exist in the right table, the number of returned rows may be less than `jlimit_num`; when more than `jlimit_num` qualifying rows exist, the `jlimit_num` rows with the smallest primary key timestamps in the window are returned first.
- The SQL statement may not contain other `GROUP BY`/`PARTITION BY`/window queries.
- Scalar filtering is supported in the `WHERE` clause; per-window aggregate-function filtering is supported in the `HAVING` clause (scalar filtering is not supported there); `SLIMIT` is not supported; window pseudo columns are not supported.
#### Example

The voltages of table d1002 within 1 second before and after the moments when table d1001's voltage is greater than 220V:

```sql
SELECT a.ts, a.voltage, b.voltage FROM d1001 a LEFT WINDOW JOIN d1002 b WINDOW_OFFSET(-1s, 1s) WHERE a.voltage > 220
```

The moments and voltages when table d1001's voltage is greater than 220V and the average voltage of table d1002 within 1 second before and after is also greater than 220V:

```sql
SELECT a.ts, a.voltage, AVG(b.voltage) FROM d1001 a LEFT WINDOW JOIN d1002 b WINDOW_OFFSET(-1s, 1s) WHERE a.voltage > 220 HAVING(AVG(b.voltage) > 220)
```
### Full Outer Join

#### Definition

Full (outer) join: includes both the set of rows from both tables that satisfy the join condition and the sets of rows in each table that do not.

#### Syntax

```sql
SELECT ... FROM table_name1 FULL [OUTER] JOIN table_name2 ON ... [WHERE ...] [...]
```

#### Result Set

The Inner Join result set, plus rows formed from left table rows that do not satisfy the join condition combined with null data for the right table, plus rows formed from right table rows that do not satisfy the join condition combined with null (`NULL`) data for the left table.

#### Scope

Full Outer Join is supported between supertables, regular tables, subtables, and subqueries.

#### Notes

- The OUTER keyword is optional.

#### Example

All moments and voltages recorded in tables d1001 and d1002:

```sql
SELECT a.ts, a.voltage, b.ts, b.voltage FROM d1001 a FULL JOIN d1002 b ON a.ts = b.ts
```
## Constraints and Limitations

### Input Timeline Limitations

- All joins currently require the input data to contain a valid primary key timeline. All table queries satisfy this; subqueries must take care that their output contains a valid primary key timeline.

### Join Condition Limitations

- Except for ASOF and Window Join, all joins must contain a primary join condition on the primary key column; and
- only `AND` is supported between the primary join condition and the other join conditions;
- the primary key column in the primary join condition supports only the `timetruncate` function (no other functions or scalar operations); there are no restrictions when it appears in other join conditions.

### Grouping Condition Limitations

- Only equality conditions on tags and regular columns (excluding the primary key column) are supported;
- scalar operations are not supported;
- multiple grouping conditions are supported, combined only with `AND`.

### Result Ordering Limitations

- For regular tables, subtables, and subqueries with no grouping conditions and no sorting, the result is output in the order of the driving table's primary key column;
- for supertable queries, Full Join, or queries with grouping conditions but no sorting, the result has no fixed output order.

Therefore, when ordering matters and the output has no fixed order, an explicit sort is required. Some timeline-dependent functions may fail to execute because no valid timeline can be produced.

### Nested Join and Multi-Table Join Limitations

- Currently only Inner Join supports nesting and joins across more than two tables; other join types do not.
@ -240,6 +240,16 @@ taos -C
|
||||||
| 缺省值 | 0 |
|
| 缺省值 | 0 |
|
||||||
| 补充说明 | 该参数设置为 0 时,last(\*)/last_row(\*)/first(\*) 只返回超级表的普通列;为 1 时,返回超级表的普通列和标签列 |
|
| 补充说明 | 该参数设置为 0 时,last(\*)/last_row(\*)/first(\*) 只返回超级表的普通列;为 1 时,返回超级表的普通列和标签列 |
|
||||||
|
|
||||||
|
### maxTsmaCalcDelay

| Attribute | Description |
| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| Applies to | Client only |
| Meaning | The TSMA calculation delay the client can tolerate at query time; if a TSMA's calculation delay exceeds this value, that TSMA is not used |
| Value range | 600s - 86400s, i.e. 10 minutes to 1 day |
| Default | 600s |
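As a sketch (assuming this client parameter is adjustable at runtime like the other client options shown elsewhere in these docs), it could be set for the current connection with:

```sql
-- tolerate up to 20 minutes of TSMA calculation delay (assumed to be valid here)
ALTER LOCAL 'maxTsmaCalcDelay' '1200';
```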
## Locale Parameters

### timezone
@ -745,6 +755,15 @@ charset 的有效值是 UTF-8。
| Value range | 1-10000 |
| Default | 20 |
### maxTsmaNum

| Attribute | Description |
| -------- | --------------------------- |
| Applies to | Server only |
| Meaning | Number of TSMAs that can be created in the cluster |
| Value range | 0-12 |
| Default | 8 |
## Compression Parameters

### compressMsgSize
@ -28,6 +28,7 @@ measurement,tag_set field_set timestamp
- tag_set carries the tag data, in the format `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags are separated by ASCII commas. It is separated from field_set by one ASCII space.
- field_set carries the regular column data, in the format `<field_key>=<field_value>,<field_key>=<field_value>`, likewise separated by ASCII commas. It is separated from the timestamp by one ASCII space.
- timestamp is the primary key timestamp of the row.
- Schemaless writing does not support writing into tables that have a second primary key column.

All data in tag_set is automatically converted to the nchar data type and does not need double quotes (").
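A minimal line-protocol sketch following the `measurement,tag_set field_set timestamp` layout above (the measurement, tag, field names, and type suffixes are illustrative, not taken from this document):

```text
meters,location=California.LosAngeles,groupid=2 current=10.3,voltage=219,phase=0.31 1626006833639000000
```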
@ -201,7 +201,7 @@ TDengine 采用数据驱动的方式让缓存中的数据写入硬盘进行持
In addition, TDengine provides tiered data storage: data from different time ranges can be stored in directories mounted on different media, so that data of different "temperatures" lives on different storage media, making full use of storage and saving cost. For example, the most recently collected data is accessed frequently and demands high disk read performance, so it can be stored on SSD; data past a certain age, with lower query demand, can be stored on cheaper HDD.

Multi-level storage supports 3 levels, and each level can be configured with up to 128 mount points.

TDengine multi-level storage is configured as follows (in the configuration file /etc/taos/taos.cfg):
@ -0,0 +1,91 @@
---
title: Configurable Compression Algorithms
description: Configurable compression algorithms
---

# Configurable Storage Compression

Starting with version 3.3.0.0, TDengine provides more advanced compression: at table creation time, users can configure, per column, whether to compress at all, which compression algorithm to use, and at which compression level.
## Compression Terminology

### Compression Stages

- Stage-1 compression: encodes the data; encoding is itself a form of compression.
- Stage-2 compression: compresses the data blocks on top of the encoding.

### Compression Levels

In this document, "level" refers specifically to the internal level of a stage-2 compression algorithm. For example, zstd offers at least 8 selectable levels, each a different trade-off between compression ratio, compression speed, and decompression speed. To avoid difficult choices, these are simplified into three levels:

- high: highest compression ratio, with relatively the worst compression and decompression speed.
- low: best compression and decompression speed, with relatively the lowest compression ratio.
- medium: balances compression ratio, compression speed, and decompression speed.
### Compression Algorithms

- Encoding (stage-1) algorithms: simple8b, bit-packing, delta-i, delta-d, disabled

- Compression (stage-2) algorithms: lz4, zlib, zstd, tsz, xz, disabled

- Default algorithms and applicability per data type:

| Data type | Available encodings | Default encoding | Available compressors | Default compressor | Default level |
| :-----------: | :----------: | :-------: | :-------: | :----------: | :----: |
| tinyint/utinyint/smallint/usmallint/int/uint | simple8b | simple8b | lz4/zlib/zstd/xz | lz4 | medium |
| bigint/ubigint/timestamp | simple8b/delta-i | delta-i | lz4/zlib/zstd/xz | lz4 | medium |
| float/double | delta-d | delta-d | lz4/zlib/zstd/xz/tsz | tsz | medium |
| binary/nchar | disabled | disabled | lz4/zlib/zstd/xz | lz4 | medium |
| bool | bit-packing | bit-packing | lz4/zlib/zstd/xz | lz4 | medium |

Note: for floating-point types, if tsz is configured, its precision is determined by taosd's global configuration; if tsz is configured but the lossy-compression flag is not set, lz4 is used instead.
## SQL Syntax

### Specifying Compression at Table Creation

```sql
CREATE TABLE [dbname.]tabname (colName colType [ENCODE 'encode_type'] [COMPRESS 'compress_type' [LEVEL 'level']] [, other create_definition]...)
```

**Parameters**

- tabname: name of a supertable or regular table
- encode_type: stage-1 compression; see the list above
- compress_type: stage-2 compression; see the list above
- level: the stage-2 compression level; default is medium, with abbreviations 'h'/'l'/'m' supported

**Description**

- Specifies each column's compression method when creating the table.
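As a sketch (the table and column names are illustrative, not from this document), per-column compression might be specified like this:

```sql
-- delta-i suits timestamps, delta-d suits doubles, per the defaults table above
CREATE TABLE sensor_data (
  ts   TIMESTAMP   ENCODE 'delta-i'  COMPRESS 'lz4'  LEVEL 'medium',
  val  DOUBLE      ENCODE 'delta-d'  COMPRESS 'zstd' LEVEL 'high',
  note VARCHAR(64) ENCODE 'disabled' COMPRESS 'zstd' LEVEL 'l'
);
```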
### Changing a Column's Compression

```sql
ALTER TABLE [db_name.]tabName MODIFY COLUMN colName [ENCODE 'encode_type'] [COMPRESS 'compress_type'] [LEVEL 'level']
```

**Parameters**

- tabName: table name; may be a supertable or regular table
- colName: the column whose compression algorithm is to be changed; must be a regular column

**Description**

- Changes the column's compression method.
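As a sketch (assuming a table `sensor_data` with a DOUBLE column `val`, both names illustrative), switching that column's stage-2 compressor might look like:

```sql
-- move val to xz at the high level, using the 'h' abbreviation
ALTER TABLE sensor_data MODIFY COLUMN val COMPRESS 'xz' LEVEL 'h';
```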
### Viewing a Column's Compression

```sql
DESCRIBE [dbname.]tabName
```

**Description**

- Shows each column's basic information, including its type and compression method.

## Compatibility

- Fully compatible with existing data.
- After upgrading from an earlier version to 3.3.0.0, you cannot roll back.
@ -89,7 +89,7 @@ bool checkColumnLevelOrSetDefault(uint8_t type, char level[TSDB_CL_COMPRESS_OPTI
 void setColEncode(uint32_t* compress, uint8_t encode);
 void setColCompress(uint32_t* compress, uint16_t compressType);
 void setColLevel(uint32_t* compress, uint8_t level);
-int8_t setColCompressByOption(uint8_t type, uint8_t encode, uint16_t compressType, uint8_t level, bool check,
+int32_t setColCompressByOption(uint8_t type, uint8_t encode, uint16_t compressType, uint8_t level, bool check,
                               uint32_t* compress);

 int8_t validColCompressLevel(uint8_t type, uint8_t level);
@ -97,5 +97,5 @@ int8_t validColCompress(uint8_t type, uint8_t l2);
 int8_t validColEncode(uint8_t type, uint8_t l1);

 uint32_t createDefaultColCmprByType(uint8_t type);
-bool validColCmprByType(uint8_t type, uint32_t cmpr);
+int32_t validColCmprByType(uint8_t type, uint32_t cmpr);
 #endif /*_TD_TCOL_H_*/
@ -308,6 +308,13 @@ typedef struct SUpdateInfo {
   SScalableBf* pCloseWinSBF;
   SHashObj*    pMap;
   uint64_t     maxDataVersion;
+  int8_t       pkColType;
+  int32_t      pkColLen;
+  char*        pKeyBuff;
+  char*        pValueBuff;
+
+  int (*comparePkRowFn)(void* pValue1, void* pTs, void* pPkVal, __compar_fn_t cmpPkFn);
+  __compar_fn_t comparePkCol;
 } SUpdateInfo;

 typedef struct {
@ -375,17 +382,17 @@ typedef struct SStateStore {
                                     void** ppVal, int32_t* pVLen);
   int32_t (*streamStateCountWinAdd)(SStreamState* pState, SSessionKey* pKey, void** pVal, int32_t* pVLen);

-  SUpdateInfo* (*updateInfoInit)(int64_t interval, int32_t precision, int64_t watermark, bool igUp);
+  SUpdateInfo* (*updateInfoInit)(int64_t interval, int32_t precision, int64_t watermark, bool igUp, int8_t pkType, int32_t pkLen);
-  TSKEY (*updateInfoFillBlockData)(SUpdateInfo* pInfo, SSDataBlock* pBlock, int32_t primaryTsCol);
+  TSKEY (*updateInfoFillBlockData)(SUpdateInfo* pInfo, SSDataBlock* pBlock, int32_t primaryTsCol, int32_t primaryKeyCol);
-  bool (*updateInfoIsUpdated)(SUpdateInfo* pInfo, uint64_t tableId, TSKEY ts);
+  bool (*updateInfoIsUpdated)(SUpdateInfo* pInfo, uint64_t tableId, TSKEY ts, void* pPkVal, int32_t len);
   bool (*updateInfoIsTableInserted)(SUpdateInfo* pInfo, int64_t tbUid);
-  bool (*isIncrementalTimeStamp)(SUpdateInfo* pInfo, uint64_t tableId, TSKEY ts);
+  bool (*isIncrementalTimeStamp)(SUpdateInfo* pInfo, uint64_t tableId, TSKEY ts, void* pPkVal, int32_t len);

   void (*updateInfoDestroy)(SUpdateInfo* pInfo);
   void (*windowSBfDelete)(SUpdateInfo* pInfo, uint64_t count);
   void (*windowSBfAdd)(SUpdateInfo* pInfo, uint64_t count);

-  SUpdateInfo* (*updateInfoInitP)(SInterval* pInterval, int64_t watermark, bool igUp);
+  SUpdateInfo* (*updateInfoInitP)(SInterval* pInterval, int64_t watermark, bool igUp, int8_t pkType, int32_t pkLen);
   void (*updateInfoAddCloseWindowSBF)(SUpdateInfo* pInfo);
   void (*updateInfoDestoryColseWinSBF)(SUpdateInfo* pInfo);
   int32_t (*updateInfoSerialize)(void* buf, int32_t bufLen, const SUpdateInfo* pInfo);
@ -270,7 +270,6 @@ typedef struct SJoinTableNode {
   SNode* addPrimCond;
   bool   hasSubQuery;
   bool   isLowLevelJoin;
-  SNode* pParent;
   SNode* pLeft;
   SNode* pRight;
   SNode* pOnCond;
@ -488,7 +488,7 @@ struct SStreamTask {
   SSHashObj*      pNameMap;
   void*           pBackend;
   int8_t          subtableWithoutMd5;
-  char            reserve[255];
+  char            reserve[256];
 };

 typedef int32_t (*startComplete_fn_t)(struct SStreamMeta*);
@ -25,28 +25,10 @@
 extern "C" {
 #endif

-typedef struct SUpdateKey {
-  int64_t tbUid;
-  TSKEY   ts;
-} SUpdateKey;
-
-//typedef struct SUpdateInfo {
-//  SArray      *pTsBuckets;
-//  uint64_t     numBuckets;
-//  SArray      *pTsSBFs;
-//  uint64_t     numSBFs;
-//  int64_t      interval;
-//  int64_t      watermark;
-//  TSKEY        minTS;
-//  SScalableBf *pCloseWinSBF;
-//  SHashObj    *pMap;
-//  uint64_t     maxDataVersion;
-//} SUpdateInfo;
-
-SUpdateInfo *updateInfoInitP(SInterval *pInterval, int64_t watermark, bool igUp);
+SUpdateInfo *updateInfoInitP(SInterval *pInterval, int64_t watermark, bool igUp, int8_t pkType, int32_t pkLen);
-SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t watermark, bool igUp);
+SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t watermark, bool igUp, int8_t pkType, int32_t pkLen);
-TSKEY        updateInfoFillBlockData(SUpdateInfo *pInfo, SSDataBlock *pBlock, int32_t primaryTsCol);
+TSKEY        updateInfoFillBlockData(SUpdateInfo *pInfo, SSDataBlock *pBlock, int32_t primaryTsCol, int32_t primaryKeyCol);
-bool         updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts);
+bool         updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts, void* pPkVal, int32_t len);
 bool         updateInfoIsTableInserted(SUpdateInfo *pInfo, int64_t tbUid);
 void         updateInfoDestroy(SUpdateInfo *pInfo);
 void         updateInfoAddCloseWindowSBF(SUpdateInfo *pInfo);
@ -55,7 +37,7 @@ int32_t updateInfoSerialize(void *buf, int32_t bufLen, const SUpdateInfo *p
 int32_t updateInfoDeserialize(void *buf, int32_t bufLen, SUpdateInfo *pInfo);
 void    windowSBfDelete(SUpdateInfo *pInfo, uint64_t count);
 void    windowSBfAdd(SUpdateInfo *pInfo, uint64_t count);
-bool    isIncrementalTimeStamp(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts);
+bool    isIncrementalTimeStamp(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts, void* pPkVal, int32_t len);

 #ifdef __cplusplus
 }
@ -54,6 +54,12 @@ typedef int32_t (*__ext_compar_fn_t)(const void *p1, const void *p2, const void
  */
 void taosqsort(void *src, int64_t numOfElem, int64_t size, const void *param, __ext_compar_fn_t comparFn);

+/**
+ * Non-recursive quick sort.
+ *
+ */
+void taosqsort_r(void *src, int64_t nelem, int64_t size, const void *arg, __ext_compar_fn_t cmp);
+
 /**
  * merge sort, with the compare function requiring additional parameters support
  *
@ -182,6 +182,8 @@ int32_t* taosGetErrno();
 #define TSDB_CODE_TSC_STMT_CACHE_ERROR        TAOS_DEF_ERROR_CODE(0, 0X0230)
 #define TSDB_CODE_TSC_ENCODE_PARAM_ERROR      TAOS_DEF_ERROR_CODE(0, 0X0231)
 #define TSDB_CODE_TSC_ENCODE_PARAM_NULL       TAOS_DEF_ERROR_CODE(0, 0X0232)
+#define TSDB_CODE_TSC_COMPRESS_PARAM_ERROR    TAOS_DEF_ERROR_CODE(0, 0X0233)
+#define TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR    TAOS_DEF_ERROR_CODE(0, 0X0234)
 #define TSDB_CODE_TSC_INTERNAL_ERROR          TAOS_DEF_ERROR_CODE(0, 0X02FF)

 // mnode-common
@ -283,7 +285,6 @@ int32_t* taosGetErrno();
 #define TSDB_CODE_MND_INVALID_STB_OPTION             TAOS_DEF_ERROR_CODE(0, 0x036E)
 #define TSDB_CODE_MND_INVALID_ROW_BYTES              TAOS_DEF_ERROR_CODE(0, 0x036F)
 #define TSDB_CODE_MND_FIELD_VALUE_OVERFLOW           TAOS_DEF_ERROR_CODE(0, 0x0370)
-#define TSDB_CODE_MND_COLUMN_COMPRESS_ALREADY_EXIST  TAOS_DEF_ERROR_CODE(0, 0x0371)

 // mnode-func
@@ -406,6 +407,7 @@ int32_t* taosGetErrno();
 #define TSDB_CODE_MND_INVALID_TARGET_TABLE TAOS_DEF_ERROR_CODE(0, 0x03F7)
+#define TSDB_CODE_MND_COLUMN_COMPRESS_ALREADY_EXIST TAOS_DEF_ERROR_CODE(0, 0x03F8)

 // dnode
 // #define TSDB_CODE_DND_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0400) // 2.x
@@ -236,35 +236,25 @@ typedef struct {
   __data_compress_init      initFn;
   __data_compress_l1_fn_t   comprFn;
   __data_decompress_l1_fn_t decomprFn;
-} TCompressL1FnSet;
+} TCmprL1FnSet;

 typedef struct {
   char                     *name;
   __data_compress_init      initFn;
   __data_compress_l2_fn_t   comprFn;
   __data_decompress_l2_fn_t decomprFn;
-} TCompressL2FnSet;
+} TCmprL2FnSet;

-typedef struct {
-  int8_t type;
-  int8_t level;
-  __data_compress_init      initFn;
-  __data_compress_l1_fn_t   l1CmprFn;
-  __data_decompress_l1_fn_t l1DecmprFn;
-  __data_compress_l2_fn_t   l2CmprFn;
-  __data_decompress_l2_fn_t l2DecmprFn;
-} TCompressPara;
-
-typedef enum L1Compress {
+typedef enum {
   L1_UNKNOWN = 0,
   L1_SIMPLE_8B,
   L1_XOR,
   L1_RLE,
   L1_DELTAD,
   L1_DISABLED = 0xFF,
-} EL1CompressFuncType;
+} TCmprL1Type;

-typedef enum L2Compress {
+typedef enum {
   L2_UNKNOWN = 0,
   L2_LZ4,
   L2_ZLIB,
@@ -272,7 +262,20 @@ typedef enum L2Compress {
   L2_TSZ,
   L2_XZ,
   L2_DISABLED = 0xFF,
-} EL2ComressFuncType;
+} TCmprL2Type;
+
+typedef enum {
+  L2_LVL_NOCHANGE = 0,
+  L2_LVL_LOW,
+  L2_LVL_MEDIUM,
+  L2_LVL_HIGH,
+  L2_LVL_DISABLED = 0xFF,
+} TCmprLvlType;
+
+typedef struct {
+  char   *name;
+  uint8_t lvl[3];  // l[0] = 'low', l[1] = 'mid', l[2] = 'high'
+} TCmprLvlSet;

 int32_t tcompressDebug(uint32_t cmprAlg, uint8_t *l1Alg, uint8_t *l2Alg, uint8_t *level);
@@ -188,7 +188,7 @@ typedef enum ELogicConditionType {
   LOGIC_COND_TYPE_NOT,
 } ELogicConditionType;

-#define ENCRYPTED_LEN(len) (len/16) * 16 + (len%16?1:0) * 16
+#define ENCRYPTED_LEN(len) (len / 16) * 16 + (len % 16 ? 1 : 0) * 16
 #define ENCRYPT_KEY_LEN 16
 #define ENCRYPT_KEY_LEN_MIN 8
@@ -525,7 +525,7 @@ typedef enum ELogicConditionType {
 #define TSDB_ARB_DUMMY_TIME 4765104000000  // 2121-01-01 00:00:00.000, :P

 #define TFS_MAX_TIERS 3
-#define TFS_MAX_DISKS_PER_TIER 16
+#define TFS_MAX_DISKS_PER_TIER 128
 #define TFS_MAX_DISKS (TFS_MAX_TIERS * TFS_MAX_DISKS_PER_TIER)
 #define TFS_MIN_LEVEL 0
 #define TFS_MAX_LEVEL (TFS_MAX_TIERS - 1)
@@ -535,7 +535,7 @@ typedef enum ELogicConditionType {

 enum { TRANS_STAT_INIT = 0, TRANS_STAT_EXECUTING, TRANS_STAT_EXECUTED, TRANS_STAT_ROLLBACKING, TRANS_STAT_ROLLBACKED };
 enum { TRANS_OPER_INIT = 0, TRANS_OPER_EXECUTE, TRANS_OPER_ROLLBACK };
-enum { ENCRYPT_KEY_STAT_UNKNOWN = 0, ENCRYPT_KEY_STAT_UNSET, ENCRYPT_KEY_STAT_SET, ENCRYPT_KEY_STAT_LOADED};
+enum { ENCRYPT_KEY_STAT_UNKNOWN = 0, ENCRYPT_KEY_STAT_UNSET, ENCRYPT_KEY_STAT_SET, ENCRYPT_KEY_STAT_LOADED };

 typedef struct {
   char dir[TSDB_FILENAME_LEN];
@@ -180,6 +180,13 @@ void taosHashCancelIterate(SHashObj *pHashObj, void *p);
  */
 void *taosHashGetKey(void *data, size_t *keyLen);

+/**
+ * Get the corresponding value length for a given data in hash table
+ * @param data
+ * @return
+ */
+int32_t taosHashGetValueSize(void *data);
+
 /**
  * return the payload data with the specified key(reference number added)
  *
@@ -57,9 +57,9 @@ else
   arch=$cpuType
 fi

-echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper"
+echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper -t ver-${tdengine_ver}"
 echo "$top_dir=${top_dir}"
-taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper`
+taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper -t ver-${tdengine_ver}`
 echo "taoskeeper_binary: ${taoskeeper_binary}"

 # copy config files
@@ -76,6 +76,13 @@ if [ -f "${compile_dir}/test/cfg/taosadapter.service" ]; then
   cp ${compile_dir}/test/cfg/taosadapter.service ${pkg_dir}${install_home_path}/cfg || :
 fi

+if [ -f "%{_compiledir}/../../../explorer/target/taos-explorer.service" ]; then
+  cp %{_compiledir}/../../../explorer/target/taos-explorer.service ${pkg_dir}${install_home_path}/cfg || :
+fi
+if [ -f "%{_compiledir}/../../../explorer/server/example/explorer.toml" ]; then
+  cp %{_compiledir}/../../../explorer/server/example/explorer.toml ${pkg_dir}${install_home_path}/cfg || :
+fi
+
 cp ${taoskeeper_binary} ${pkg_dir}${install_home_path}/bin
 #cp ${compile_dir}/../packaging/deb/taosd ${pkg_dir}${install_home_path}/init.d
 cp ${compile_dir}/../packaging/tools/post.sh ${pkg_dir}${install_home_path}/script
@@ -93,6 +100,10 @@ if [ -f "${compile_dir}/build/bin/taosadapter" ]; then
   cp ${compile_dir}/build/bin/taosadapter ${pkg_dir}${install_home_path}/bin ||:
 fi

+if [ -f "${compile_dir}/../../../explorer/target/release/taos-explorer" ]; then
+  cp ${compile_dir}/../../../explorer/target/release/taos-explorer ${pkg_dir}${install_home_path}/bin ||:
+fi
+
 cp ${compile_dir}/build/bin/taos ${pkg_dir}${install_home_path}/bin
 cp ${compile_dir}/build/lib/${libfile} ${pkg_dir}${install_home_path}/driver
 [ -f ${compile_dir}/build/lib/${wslibfile} ] && cp ${compile_dir}/build/lib/${wslibfile} ${pkg_dir}${install_home_path}/driver ||:
@@ -72,6 +72,14 @@ if [ -f %{_compiledir}/../build-taoskeeper/taoskeeper.service ]; then
   cp %{_compiledir}/../build-taoskeeper/taoskeeper.service %{buildroot}%{homepath}/cfg ||:
 fi

+if [ -f %{_compiledir}/../../../explorer/target/taos-explorer.service ]; then
+  cp %{_compiledir}/../../../explorer/target/taos-explorer.service %{buildroot}%{homepath}/cfg ||:
+fi
+
+if [ -f %{_compiledir}/../../../explorer/server/example/explorer.toml ]; then
+  cp %{_compiledir}/../../../explorer/server/example/explorer.toml %{buildroot}%{homepath}/cfg ||:
+fi
+
 #cp %{_compiledir}/../packaging/rpm/taosd %{buildroot}%{homepath}/init.d
 cp %{_compiledir}/../packaging/tools/post.sh %{buildroot}%{homepath}/script
 cp %{_compiledir}/../packaging/tools/preun.sh %{buildroot}%{homepath}/script
@@ -84,6 +92,10 @@ cp %{_compiledir}/build/bin/udfd %{buildroot}%{homepath}/bin
 cp %{_compiledir}/build/bin/taosBenchmark %{buildroot}%{homepath}/bin
 cp %{_compiledir}/build/bin/taosdump %{buildroot}%{homepath}/bin

+if [ -f %{_compiledir}/../../../explorer/target/release/taos-explorer ]; then
+  cp %{_compiledir}/../../../explorer/target/release/taos-explorer %{buildroot}%{homepath}/bin
+fi
+
 if [ -f %{_compiledir}/../build-taoskeeper/taoskeeper ]; then
   cp %{_compiledir}/../build-taoskeeper/taoskeeper %{buildroot}%{homepath}/bin
 fi
@@ -16,49 +16,27 @@ serverFqdn=""
 script_dir=$(dirname $(readlink -f "$0"))
 # Dynamic directory

-clientName="taos"
-serverName="taosd"
+PREFIX="taos"
+clientName="${PREFIX}"
+serverName="${PREFIX}d"
 udfdName="udfd"
-configFile="taos.cfg"
+configFile="${PREFIX}.cfg"
 productName="TDengine"
 emailName="taosdata.com"
-uninstallScript="rmtaos"
-historyFile="taos_history"
+uninstallScript="rm${PREFIX}"
+historyFile="${PREFIX}_history"
 tarName="package.tar.gz"
-dataDir="/var/lib/taos"
-logDir="/var/log/taos"
-configDir="/etc/taos"
-installDir="/usr/local/taos"
-adapterName="taosadapter"
-benchmarkName="taosBenchmark"
-dumpName="taosdump"
-demoName="taosdemo"
-xname="taosx"
-keeperName="taoskeeper"
-
-clientName2="taos"
-serverName2="${clientName2}d"
-configFile2="${clientName2}.cfg"
-productName2="TDengine"
-emailName2="taosdata.com"
-xname2="${clientName2}x"
-adapterName2="${clientName2}adapter"
-keeperName2="${clientName2}keeper"
-
-explorerName="${clientName2}-explorer"
-benchmarkName2="${clientName2}Benchmark"
-demoName2="${clientName2}demo"
-dumpName2="${clientName2}dump"
-uninstallScript2="rm${clientName2}"
-
-historyFile="${clientName2}_history"
-logDir="/var/log/${clientName2}"
-configDir="/etc/${clientName2}"
-installDir="/usr/local/${clientName2}"
-
-data_dir=${dataDir}
-log_dir=${logDir}
-cfg_install_dir=${configDir}
+dataDir="/var/lib/${PREFIX}"
+logDir="/var/log/${PREFIX}"
+configDir="/etc/${PREFIX}"
+installDir="/usr/local/${PREFIX}"
+adapterName="${PREFIX}adapter"
+benchmarkName="${PREFIX}Benchmark"
+dumpName="${PREFIX}dump"
+demoName="${PREFIX}demo"
+xname="${PREFIX}x"
+explorerName="${PREFIX}-explorer"
+keeperName="${PREFIX}keeper"

 bin_link_dir="/usr/bin"
 lib_link_dir="/usr/lib"
@@ -71,7 +49,6 @@ install_main_dir=${installDir}
 bin_dir="${installDir}/bin"

 service_config_dir="/etc/systemd/system"
-web_port=6041

 # Color setting
 RED='\033[0;31m'
@@ -179,6 +156,26 @@ done

 #echo "verType=${verType} interactiveFqdn=${interactiveFqdn}"

+tools=(${clientName} ${benchmarkName} ${dumpName} ${demoName} remove.sh udfd set_core.sh TDinsight.sh start_pre.sh)
+if [ "${verMode}" == "cluster" ]; then
+  services=(${serverName} ${adapterName} ${xname} ${explorerName} ${keeperName})
+elif [ "${verMode}" == "edge" ]; then
+  if [ "${pagMode}" == "full" ]; then
+    services=(${serverName} ${adapterName} ${keeperName} ${explorerName})
+  else
+    services=(${serverName})
+    tools=(${clientName} ${benchmarkName} remove.sh start_pre.sh)
+  fi
+else
+  services=(${serverName} ${adapterName} ${xname} ${explorerName} ${keeperName})
+fi
+
+function install_services() {
+  for service in "${services[@]}"; do
+    install_service ${service}
+  done
+}
+
 function kill_process() {
   pid=$(ps -ef | grep "$1" | grep -v "grep" | awk '{print $2}')
   if [ -n "$pid" ]; then
@@ -196,6 +193,7 @@ function install_main_path() {
   ${csudo}mkdir -p ${install_main_dir}/driver
   ${csudo}mkdir -p ${install_main_dir}/examples
   ${csudo}mkdir -p ${install_main_dir}/include
+  ${csudo}mkdir -p ${configDir}
   # ${csudo}mkdir -p ${install_main_dir}/init.d
   if [ "$verMode" == "cluster" ]; then
     ${csudo}mkdir -p ${install_main_dir}/share
@@ -208,44 +206,48 @@ function install_main_path() {

 function install_bin() {
   # Remove links
-  ${csudo}rm -f ${bin_link_dir}/${clientName2} || :
-  ${csudo}rm -f ${bin_link_dir}/${serverName2} || :
-  ${csudo}rm -f ${bin_link_dir}/${udfdName} || :
-  ${csudo}rm -f ${bin_link_dir}/${adapterName} || :
-  ${csudo}rm -f ${bin_link_dir}/${uninstallScript2} || :
-  ${csudo}rm -f ${bin_link_dir}/${demoName2} || :
-  ${csudo}rm -f ${bin_link_dir}/${benchmarkName2} || :
-  ${csudo}rm -f ${bin_link_dir}/${dumpName2} || :
-  ${csudo}rm -f ${bin_link_dir}/${keeperName2} || :
-  ${csudo}rm -f ${bin_link_dir}/set_core || :
-  ${csudo}rm -f ${bin_link_dir}/TDinsight.sh || :
+  for tool in "${tools[@]}"; do
+    ${csudo}rm -f ${bin_link_dir}/${tool} || :
+  done

-  ${csudo}cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo}chmod 0555 ${install_main_dir}/bin/*
+  for service in "${services[@]}"; do
+    ${csudo}rm -f ${bin_link_dir}/${service} || :
+  done
+
+  if [ "${verType}" == "client" ]; then
+    ${csudo}cp -r ${script_dir}/bin/${clientName} ${install_main_dir}/bin
+    ${csudo}cp -r ${script_dir}/bin/${benchmarkName} ${install_main_dir}/bin
+    ${csudo}cp -r ${script_dir}/bin/${dumpName} ${install_main_dir}/bin
+    ${csudo}cp -r ${script_dir}/bin/remove.sh ${install_main_dir}/bin
+  else
+    ${csudo}cp -r ${script_dir}/bin/* ${install_main_dir}/bin
+  fi
+
+  if [[ "${verMode}" == "cluster" && "${verType}" != "client" ]]; then
+    if [ -d ${script_dir}/${xname}/bin ]; then
+      ${csudo}cp -r ${script_dir}/${xname}/bin/* ${install_main_dir}/bin
+    fi
+  fi
+
+  if [ -f ${script_dir}/bin/quick_deploy.sh ]; then
+    ${csudo}cp -r ${script_dir}/bin/quick_deploy.sh ${install_main_dir}/bin
+  fi
+
+  ${csudo}chmod 0555 ${install_main_dir}/bin/*
+  [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}mv ${install_main_dir}/bin/remove.sh ${install_main_dir}/uninstall.sh || :

   #Make link
-  [ -x ${install_main_dir}/bin/${clientName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${clientName2} ${bin_link_dir}/${clientName2} || :
-  [ -x ${install_main_dir}/bin/${serverName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${serverName2} ${bin_link_dir}/${serverName2} || :
-  [ -x ${install_main_dir}/bin/${udfdName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${udfdName} ${bin_link_dir}/${udfdName} || :
-  [ -x ${install_main_dir}/bin/${adapterName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${adapterName2} ${bin_link_dir}/${adapterName2} || :
-  [ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName2} ${bin_link_dir}/${demoName2} || :
-  [ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName2} ${bin_link_dir}/${benchmarkName2} || :
-  [ -x ${install_main_dir}/bin/${dumpName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${dumpName2} ${bin_link_dir}/${dumpName2} || :
-  [ -x ${install_main_dir}/bin/${keeperName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${keeperName2} ${bin_link_dir}/${keeperName2} || :
-  [ -x ${install_main_dir}/bin/TDinsight.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/TDinsight.sh ${bin_link_dir}/TDinsight.sh || :
-  if [ "$clientName2" == "${clientName}" ]; then
-    [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || :
-  fi
-  [ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo}ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
+  for tool in "${tools[@]}"; do
+    if [ "${tool}" == "remove.sh" ]; then
+      [ -x ${install_main_dir}/uninstall.sh ] && ${csudo}ln -sf ${install_main_dir}/uninstall.sh ${bin_link_dir}/${uninstallScript} || :
+    else
+      [ -x ${install_main_dir}/bin/${tool} ] && ${csudo}ln -sf ${install_main_dir}/bin/${tool} ${bin_link_dir}/${tool} || :
+    fi
+  done

-  if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then
-    ${csudo}rm -f ${bin_link_dir}/${xname2} || :
-    ${csudo}rm -f ${bin_link_dir}/${explorerName} || :
-
-    #Make link
-    [ -x ${install_main_dir}/bin/${xname2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${xname2} ${bin_link_dir}/${xname2} || :
-    [ -x ${install_main_dir}/bin/${explorerName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${explorerName} ${bin_link_dir}/${explorerName} || :
-    [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript2} || :
-  fi
+  for service in "${services[@]}"; do
+    [ -x ${install_main_dir}/bin/${service} ] && ${csudo}ln -sf ${install_main_dir}/bin/${service} ${bin_link_dir}/${service} || :
+  done
 }

 function install_lib() {
@@ -415,10 +417,10 @@ function set_hostname() {
   # ${csudo}sed -i -r "s/#*\s*(HOSTNAME=\s*).*/\1$newHostname/" /etc/sysconfig/network || :
   # fi

-  if [ -f ${cfg_install_dir}/${configFile2} ]; then
-    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${cfg_install_dir}/${configFile2}
+  if [ -f ${configDir}/${configFile} ]; then
+    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${configDir}/${configFile}
   else
-    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${script_dir}/cfg/${configFile2}
+    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${script_dir}/cfg/${configFile}
   fi
   serverFqdn=$newHostname
@@ -454,10 +456,10 @@ function set_ipAsFqdn() {
   localFqdn="127.0.0.1"
   # Write the local FQDN to configuration file

-  if [ -f ${cfg_install_dir}/${configFile2} ]; then
-    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile2}
+  if [ -f ${configDir}/${configFile} ]; then
+    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${configDir}/${configFile}
   else
-    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile2}
+    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile}
   fi
   serverFqdn=$localFqdn
   echo
@@ -480,10 +482,10 @@ function set_ipAsFqdn() {
       read -p "Please choose an IP from local IP list:" localFqdn
     else
       # Write the local FQDN to configuration file
-      if [ -f ${cfg_install_dir}/${configFile2} ]; then
-        ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile2}
+      if [ -f ${configDir}/${configFile} ]; then
+        ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${configDir}/${configFile}
       else
-        ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile2}
+        ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile}
       fi
       serverFqdn=$localFqdn
       break
@@ -502,88 +504,118 @@ function local_fqdn_check() {
   set_hostname
 }

-function install_adapter_config() {
-  if [ -f ${script_dir}/cfg/${adapterName}.toml ]; then
-    ${csudo}sed -i -r "s/localhost/${serverFqdn}/g" ${script_dir}/cfg/${adapterName}.toml
-  fi
-  if [ ! -f "${cfg_install_dir}/${adapterName}.toml" ]; then
-    ${csudo}mkdir -p ${cfg_install_dir}
-    [ -f ${script_dir}/cfg/${adapterName}.toml ] && ${csudo}cp ${script_dir}/cfg/${adapterName}.toml ${cfg_install_dir}
-    [ -f ${cfg_install_dir}/${adapterName}.toml ] && ${csudo}chmod 644 ${cfg_install_dir}/${adapterName}.toml
-  else
-    [ -f ${script_dir}/cfg/${adapterName}.toml ] &&
-      ${csudo}cp -f ${script_dir}/cfg/${adapterName}.toml ${cfg_install_dir}/${adapterName}.toml.new
-  fi
-
-  [ -f ${cfg_install_dir}/${adapterName}.toml ] &&
-    ${csudo}ln -sf ${cfg_install_dir}/${adapterName}.toml ${install_main_dir}/cfg/${adapterName}.toml
-
+function install_taosx_config() {
   [ ! -z $1 ] && return 0 || : # only install client
+
+  fileName="${script_dir}/${xname}/etc/${PREFIX}/${xname}.toml"
+  if [ -f ${fileName} ]; then
+    ${csudo}sed -i -r "s/#*\s*(fqdn\s*=\s*).*/\1\"${serverFqdn}\"/" ${fileName}
+
+    if [ -f "${configDir}/${xname}.toml" ]; then
+      ${csudo}cp ${fileName} ${configDir}/${xname}.toml.new
+    else
+      ${csudo}cp ${fileName} ${configDir}/${xname}.toml
+    fi
+  fi
 }

+function install_explorer_config() {
+  [ ! -z $1 ] && return 0 || : # only install client
+
+  if [ "$verMode" == "cluster" ]; then
+    fileName="${script_dir}/${xname}/etc/${PREFIX}/explorer.toml"
+  else
+    fileName="${script_dir}/cfg/explorer.toml"
+  fi
+
+  if [ -f ${fileName} ]; then
+    ${csudo}sed -i "s/localhost/${serverFqdn}/g" ${fileName}
+
+    if [ -f "${configDir}/explorer.toml" ]; then
+      ${csudo}cp ${fileName} ${configDir}/explorer.toml.new
+    else
+      ${csudo}cp ${fileName} ${configDir}/explorer.toml
+    fi
+  fi
+}
+
+function install_adapter_config() {
+  [ ! -z $1 ] && return 0 || : # only install client
+
+  fileName="${script_dir}/cfg/${adapterName}.toml"
+  if [ -f ${fileName} ]; then
+    ${csudo}sed -i -r "s/localhost/${serverFqdn}/g" ${fileName}
+
+    if [ -f "${configDir}/${adapterName}.toml" ]; then
+      ${csudo}cp ${fileName} ${configDir}/${adapterName}.toml.new
+    else
+      ${csudo}cp ${fileName} ${configDir}/${adapterName}.toml
+    fi
+  fi
+}
+
 function install_keeper_config() {
-  if [ -f ${script_dir}/cfg/${keeperName2}.toml ]; then
-    ${csudo}sed -i -r "s/127.0.0.1/${serverFqdn}/g" ${script_dir}/cfg/${keeperName2}.toml
-  fi
-  if [ -f "${configDir}/keeper.toml" ]; then
-    echo "The file keeper.toml will be renamed to ${keeperName2}.toml"
-    ${csudo}cp ${script_dir}/cfg/${keeperName2}.toml ${configDir}/${keeperName2}.toml.new
-    ${csudo}mv ${configDir}/keeper.toml ${configDir}/${keeperName2}.toml
-  elif [ -f "${configDir}/${keeperName2}.toml" ]; then
-    # "taoskeeper.toml exists,new config is taoskeeper.toml.new"
-    ${csudo}cp ${script_dir}/cfg/${keeperName2}.toml ${configDir}/${keeperName2}.toml.new
-  else
-    ${csudo}cp ${script_dir}/cfg/${keeperName2}.toml ${configDir}/${keeperName2}.toml
+  [ ! -z $1 ] && return 0 || : # only install client
+
+  fileName="${script_dir}/cfg/${keeperName}.toml"
+  if [ -f ${fileName} ]; then
+    ${csudo}sed -i -r "s/127.0.0.1/${serverFqdn}/g" ${fileName}
+
+    if [ -f "${configDir}/${keeperName}.toml" ]; then
+      ${csudo}cp ${fileName} ${configDir}/${keeperName}.toml.new
+    else
+      ${csudo}cp ${fileName} ${configDir}/${keeperName}.toml
+    fi
   fi
-  command -v systemctl >/dev/null 2>&1 && ${csudo}systemctl daemon-reload >/dev/null 2>&1 || true
 }

+function install_taosd_config() {
+  fileName="${script_dir}/cfg/${configFile}"
+  if [ -f ${fileName} ]; then
+    ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$serverFqdn/" ${script_dir}/cfg/${configFile}
+    ${csudo}echo "monitor 1" >>${script_dir}/cfg/${configFile}
+    ${csudo}echo "monitorFQDN ${serverFqdn}" >>${script_dir}/cfg/${configFile}
+    ${csudo}echo "audit 1" >>${script_dir}/cfg/${configFile}
+
+    if [ -f "${configDir}/${configFile}" ]; then
+      ${csudo}cp ${fileName} ${configDir}/${configFile}.new
+    else
+      ${csudo}cp ${fileName} ${configDir}/${configFile}
+    fi
+  fi
+
+  ${csudo}ln -sf ${configDir}/${configFile} ${install_main_dir}/cfg
+}
+
 function install_config() {
-
-  if [ ! -f "${cfg_install_dir}/${configFile2}" ]; then
-    ${csudo}mkdir -p ${cfg_install_dir}
-    if [ -f ${script_dir}/cfg/${configFile2} ]; then
-      ${csudo} echo "monitor 1" >> ${script_dir}/cfg/${configFile2}
-      ${csudo} echo "monitorFQDN ${serverFqdn}" >> ${script_dir}/cfg/${configFile2}
-      ${csudo} echo "audit 1" >> ${script_dir}/cfg/${configFile2}
-      ${csudo}cp ${script_dir}/cfg/${configFile2} ${cfg_install_dir}
-    fi
-    ${csudo}chmod 644 ${cfg_install_dir}/*
-  else
-    ${csudo} echo "monitor 1" >> ${script_dir}/cfg/${configFile2}
-    ${csudo} echo "monitorFQDN ${serverFqdn}" >> ${script_dir}/cfg/${configFile2}
-    ${csudo} echo "audit 1" >> ${script_dir}/cfg/${configFile2}
-    ${csudo}cp -f ${script_dir}/cfg/${configFile2} ${cfg_install_dir}/${configFile2}.new
-  fi
-
-  ${csudo}ln -sf ${cfg_install_dir}/${configFile2} ${install_main_dir}/cfg
-
   [ ! -z $1 ] && return 0 || : # only install client

   if ((${update_flag} == 1)); then
     return 0
   fi

   if [ "$interactiveFqdn" == "no" ]; then
+    install_taosd_config
     return 0
   fi

   local_fqdn_check
+  install_taosd_config

   echo
-  echo -e -n "${GREEN}Enter FQDN:port (like h1.${emailName2}:6030) of an existing ${productName2} cluster node to join${NC}"
+  echo -e -n "${GREEN}Enter FQDN:port (like h1.${emailName}:6030) of an existing ${productName} cluster node to join${NC}"
   echo
   echo -e -n "${GREEN}OR leave it blank to build one${NC}:"
   read firstEp
   while true; do
     if [ ! -z "$firstEp" ]; then
-      if [ -f ${cfg_install_dir}/${configFile2} ]; then
-        ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${cfg_install_dir}/${configFile2}
+      if [ -f ${configDir}/${configFile} ]; then
+        ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${configDir}/${configFile}
       else
-        ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${script_dir}/cfg/${configFile2}
+        ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${script_dir}/cfg/${configFile}
       fi
       break
     else
@@ -605,32 +637,16 @@ function install_config() {
   done
 }
 
-function install_share_etc() {
-  [ ! -d ${script_dir}/share/etc ] && return
-  for c in `ls ${script_dir}/share/etc/`; do
-    if [ -e /etc/${clientName2}/$c ]; then
-      out=/etc/${clientName2}/$c.new.`date +%F`
-      ${csudo}cp -f ${script_dir}/share/etc/$c $out ||:
-    else
-      ${csudo}mkdir -p /etc/${clientName2} >/dev/null 2>/dev/null ||:
-      ${csudo}cp -f ${script_dir}/share/etc/$c /etc/${clientName2}/$c ||:
-    fi
-  done
-
-  [ ! -d ${script_dir}/share/srv ] && return
-  ${csudo} cp ${script_dir}/share/srv/* ${service_config_dir} ||:
-}
-
 function install_log() {
-  ${csudo}mkdir -p ${log_dir} && ${csudo}chmod 777 ${log_dir}
+  ${csudo}mkdir -p ${logDir} && ${csudo}chmod 777 ${logDir}
 
-  ${csudo}ln -sf ${log_dir} ${install_main_dir}/log
+  ${csudo}ln -sf ${logDir} ${install_main_dir}/log
 }
 
 function install_data() {
-  ${csudo}mkdir -p ${data_dir}
+  ${csudo}mkdir -p ${dataDir}
 
-  ${csudo}ln -sf ${data_dir} ${install_main_dir}/data
+  ${csudo}ln -sf ${dataDir} ${install_main_dir}/data
 }
 
 function install_connector() {
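`install_log` and `install_data` follow the same pattern: create the real directory, then expose it inside the install tree through a symlink. A self-contained replay on temporary paths (no `csudo`, since nothing here needs privileges):

```shell
#!/bin/sh
# The mkdir/chmod/symlink layout from install_log, replayed on temp dirs:
# the install tree gets a "log" entry that resolves to the real log dir.
logDir=$(mktemp -d)/log
install_main_dir=$(mktemp -d)
mkdir -p "${logDir}" && chmod 777 "${logDir}"
ln -sf "${logDir}" "${install_main_dir}/log"
# The symlink stores the real path verbatim.
target=$(readlink "${install_main_dir}/log")
echo "$target"
```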
@@ -644,59 +660,36 @@ function install_connector() {
 
 function install_examples() {
   if [ -d ${script_dir}/examples ]; then
-    ${csudo}cp -rf ${script_dir}/examples/* ${install_main_dir}/examples || echo "failed to copy examples"
+    ${csudo}cp -rf ${script_dir}/examples ${install_main_dir}/ || echo "failed to copy examples"
   fi
 }
 
-function install_web() {
-  if [ -d "${script_dir}/share" ]; then
-    ${csudo}cp -rf ${script_dir}/share/* ${install_main_dir}/share > /dev/null 2>&1 ||:
-  fi
-}
-
-function install_taosx() {
-  if [ -f "${script_dir}/taosx/install_taosx.sh" ]; then
-    cd ${script_dir}/taosx
-    chmod a+x install_taosx.sh
-    bash install_taosx.sh -e $serverFqdn
+function install_plugins() {
+  if [ -d ${script_dir}/${xname}/plugins ]; then
+    ${csudo}cp -rf ${script_dir}/${xname}/plugins/ ${install_main_dir}/ || echo "failed to copy ${PREFIX}x plugins"
   fi
 }
 
 function clean_service_on_sysvinit() {
-  if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then
-    ${csudo}service ${serverName2} stop || :
-  fi
-
-  if ps aux | grep -v grep | grep tarbitrator &>/dev/null; then
-    ${csudo}service tarbitratord stop || :
+  if ps aux | grep -v grep | grep $1 &>/dev/null; then
+    ${csudo}service $1 stop || :
   fi
 
   if ((${initd_mod} == 1)); then
-    if [ -e ${service_config_dir}/${serverName2} ]; then
-      ${csudo}chkconfig --del ${serverName2} || :
-    fi
-
-    if [ -e ${service_config_dir}/tarbitratord ]; then
-      ${csudo}chkconfig --del tarbitratord || :
+    if [ -e ${service_config_dir}/$1 ]; then
+      ${csudo}chkconfig --del $1 || :
     fi
   elif ((${initd_mod} == 2)); then
-    if [ -e ${service_config_dir}/${serverName2} ]; then
-      ${csudo}insserv -r ${serverName2} || :
-    fi
-    if [ -e ${service_config_dir}/tarbitratord ]; then
-      ${csudo}insserv -r tarbitratord || :
+    if [ -e ${service_config_dir}/$1 ]; then
+      ${csudo}insserv -r $1 || :
     fi
   elif ((${initd_mod} == 3)); then
-    if [ -e ${service_config_dir}/${serverName2} ]; then
-      ${csudo}update-rc.d -f ${serverName2} remove || :
-    fi
-    if [ -e ${service_config_dir}/tarbitratord ]; then
-      ${csudo}update-rc.d -f tarbitratord remove || :
+    if [ -e ${service_config_dir}/$1 ]; then
+      ${csudo}update-rc.d -f $1 remove || :
     fi
   fi
 
-  ${csudo}rm -f ${service_config_dir}/${serverName2} || :
-  ${csudo}rm -f ${service_config_dir}/tarbitratord || :
+  ${csudo}rm -f ${service_config_dir}/$1 || :
 
   if $(which init &>/dev/null); then
     ${csudo}init q || :
@@ -704,96 +697,68 @@ function clean_service_on_sysvinit() {
 }
 
 function install_service_on_sysvinit() {
-  clean_service_on_sysvinit
+  if [ "$1" != "${serverName}" ]; then
+    return
+  fi
+
+  clean_service_on_sysvinit $1
   sleep 1
 
   if ((${os_type} == 1)); then
-    # ${csudo}cp -f ${script_dir}/init.d/${serverName}.deb ${install_main_dir}/init.d/${serverName}
     ${csudo}cp ${script_dir}/init.d/${serverName}.deb ${service_config_dir}/${serverName} && ${csudo}chmod a+x ${service_config_dir}/${serverName}
   elif ((${os_type} == 2)); then
-    # ${csudo}cp -f ${script_dir}/init.d/${serverName}.rpm ${install_main_dir}/init.d/${serverName}
     ${csudo}cp ${script_dir}/init.d/${serverName}.rpm ${service_config_dir}/${serverName} && ${csudo}chmod a+x ${service_config_dir}/${serverName}
   fi
 
   if ((${initd_mod} == 1)); then
-    ${csudo}chkconfig --add ${serverName2} || :
-    ${csudo}chkconfig --level 2345 ${serverName2} on || :
+    ${csudo}chkconfig --add $1 || :
+    ${csudo}chkconfig --level 2345 $1 on || :
   elif ((${initd_mod} == 2)); then
-    ${csudo}insserv ${serverName2} || :
-    ${csudo}insserv -d ${serverName2} || :
+    ${csudo}insserv $1 || :
+    ${csudo}insserv -d $1 || :
   elif ((${initd_mod} == 3)); then
-    ${csudo}update-rc.d ${serverName2} defaults || :
+    ${csudo}update-rc.d $1 defaults || :
   fi
 }
 
 function clean_service_on_systemd() {
-  service_config="${service_config_dir}/${serverName2}.service"
-  if systemctl is-active --quiet ${serverName2}; then
-    echo "${productName} is running, stopping it..."
-    ${csudo}systemctl stop ${serverName2} &>/dev/null || echo &>/dev/null
-  fi
-  ${csudo}systemctl disable ${serverName2} &>/dev/null || echo &>/dev/null
-  ${csudo}rm -f ${service_config}
-
-  tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
-  if systemctl is-active --quiet tarbitratord; then
-    echo "tarbitrator is running, stopping it..."
-    ${csudo}systemctl stop tarbitratord &>/dev/null || echo &>/dev/null
+  service_config="${service_config_dir}/$1.service"
+  if systemctl is-active --quiet $1; then
+    echo "$1 is running, stopping it..."
+    ${csudo}systemctl stop $1 &>/dev/null || echo &>/dev/null
   fi
-  ${csudo}systemctl disable tarbitratord &>/dev/null || echo &>/dev/null
-  ${csudo}rm -f ${tarbitratord_service_config}
+  ${csudo}systemctl disable $1 &>/dev/null || echo &>/dev/null
+  ${csudo}rm -f ${service_config}
 }
 
 function install_service_on_systemd() {
-  clean_service_on_systemd
-
-  install_share_etc
-
-  [ -f ${script_dir}/cfg/${serverName2}.service ] &&
-    ${csudo}cp ${script_dir}/cfg/${serverName2}.service \
-      ${service_config_dir}/ || :
-
-  # if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then
-  #   [ -f ${script_dir}/cfg/${serverName2}.service ] &&
-  #     ${csudo}cp ${script_dir}/cfg/${serverName2}.service \
-  #       ${service_config_dir}/${serverName2}.service || :
-  # fi
-
-  ${csudo}systemctl daemon-reload
-
-  ${csudo}systemctl enable ${serverName2}
-  ${csudo}systemctl daemon-reload
-}
-
-function install_adapter_service() {
-  if ((${service_mod} == 0)); then
-    [ -f ${script_dir}/cfg/${adapterName2}.service ] &&
-      ${csudo}cp ${script_dir}/cfg/${adapterName2}.service \
-        ${service_config_dir}/ || :
-
-    ${csudo}systemctl enable ${adapterName2}
-    ${csudo}systemctl daemon-reload
-  fi
-}
-
-function install_keeper_service() {
-  if ((${service_mod} == 0)); then
-    [ -f ${script_dir}/cfg/${clientName2}keeper.service ] &&
-      ${csudo}cp ${script_dir}/cfg/${clientName2}keeper.service \
-        ${service_config_dir}/ || :
-
-    ${csudo}systemctl enable ${clientName2}keeper
-    ${csudo}systemctl daemon-reload
-  fi
-}
+  clean_service_on_systemd $1
+
+  cfg_source_dir=${script_dir}/cfg
+  if [[ "$1" == "${xname}" || "$1" == "${explorerName}" ]]; then
+    if [ "$verMode" == "cluster" ]; then
+      cfg_source_dir=${script_dir}/${xname}/etc/systemd/system
+    else
+      cfg_source_dir=${script_dir}/cfg
+    fi
+  fi
+
+  if [ -f ${cfg_source_dir}/$1.service ]; then
+    ${csudo}cp ${cfg_source_dir}/$1.service ${service_config_dir}/ || :
+  fi
+
+  ${csudo}systemctl enable $1
+  ${csudo}systemctl daemon-reload
+}
 
 function install_service() {
   if ((${service_mod} == 0)); then
-    install_service_on_systemd
+    install_service_on_systemd $1
   elif ((${service_mod} == 1)); then
-    install_service_on_sysvinit
+    install_service_on_sysvinit $1
   else
-    kill_process ${serverName2}
+    kill_process $1
   fi
 }
 
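The refactor threads the service name through as `$1`, so one set of install/clean functions can manage `taosd`, the adapter, the keeper, and so on, instead of one hard-coded function per service. A minimal sketch of that dispatch shape (the echoed messages are illustrative, not the installer's; `service_mod` values mirror the installer: 0 = systemd, 1 = sysvinit, anything else = no init system):

```shell
#!/bin/sh
# Sketch of the parameterized dispatch: one function, the managed
# service name passed in as $1.
service_mod=2   # pretend neither systemd nor sysvinit is available

install_service() {
  if [ "${service_mod}" -eq 0 ]; then
    echo "systemd: enable $1"
  elif [ "${service_mod}" -eq 1 ]; then
    echo "sysvinit: register $1"
  else
    echo "no init system: would kill stale $1 process"
  fi
}

install_service taosd
install_service taosadapter
```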
@@ -830,10 +795,10 @@ function is_version_compatible() {
   if [ -f ${script_dir}/driver/vercomp.txt ]; then
     min_compatible_version=$(cat ${script_dir}/driver/vercomp.txt)
   else
-    min_compatible_version=$(${script_dir}/bin/${serverName2} -V | head -1 | cut -d ' ' -f 5)
+    min_compatible_version=$(${script_dir}/bin/${serverName} -V | head -1 | cut -d ' ' -f 5)
   fi
 
-  exist_version=$(${installDir}/bin/${serverName2} -V | head -1 | cut -d ' ' -f 3)
+  exist_version=$(${installDir}/bin/${serverName} -V | head -1 | cut -d ' ' -f 3)
   vercomp $exist_version "3.0.0.0"
   case $? in
     2)
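`is_version_compatible` pulls version fields out of the first line of `taosd -V` with `head`/`cut`. The banner below is a fake stand-in for that output, chosen only so that fields 3 and 5 land where the script expects them:

```shell
#!/bin/sh
# Replay of the field extraction, against a faked "taosd -V" first line.
banner="taosd version: 3.0.4.0 compatible_version: 3.0.0.0"
exist_version=$(echo "$banner" | head -1 | cut -d ' ' -f 3)
min_compatible_version=$(echo "$banner" | head -1 | cut -d ' ' -f 5)
echo "$exist_version / $min_compatible_version"
```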
@@ -857,7 +822,7 @@ deb_erase() {
     echo -e -n "${RED}Existing TDengine deb is detected, do you want to remove it? [yes|no] ${NC}:"
     read confirm
     if [ "yes" == "$confirm" ]; then
-      ${csudo}dpkg --remove tdengine ||:
+      ${csudo}dpkg --remove tdengine || :
       break
     elif [ "no" == "$confirm" ]; then
       break
@@ -871,7 +836,7 @@ rpm_erase() {
     echo -e -n "${RED}Existing TDengine rpm is detected, do you want to remove it? [yes|no] ${NC}:"
     read confirm
     if [ "yes" == "$confirm" ]; then
-      ${csudo}rpm -e tdengine ||:
+      ${csudo}rpm -e tdengine || :
       break
     elif [ "no" == "$confirm" ]; then
       break
@@ -893,23 +858,23 @@ function updateProduct() {
   fi
 
   if echo $osinfo | grep -qwi "centos"; then
-    rpm -q tdengine 2>&1 > /dev/null && rpm_erase tdengine ||:
+    rpm -q tdengine 2>&1 >/dev/null && rpm_erase tdengine || :
   elif echo $osinfo | grep -qwi "ubuntu"; then
-    dpkg -l tdengine 2>&1 | grep ii > /dev/null && deb_erase tdengine ||:
+    dpkg -l tdengine 2>&1 | grep ii >/dev/null && deb_erase tdengine || :
   fi
 
   tar -zxf ${tarName}
   install_jemalloc
 
-  echo "Start to update ${productName2}..."
+  echo "Start to update ${productName}..."
   # Stop the service if running
-  if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then
+  if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then
     if ((${service_mod} == 0)); then
-      ${csudo}systemctl stop ${serverName2} || :
+      ${csudo}systemctl stop ${serverName} || :
     elif ((${service_mod} == 1)); then
-      ${csudo}service ${serverName2} stop || :
+      ${csudo}service ${serverName} stop || :
     else
-      kill_process ${serverName2}
+      kill_process ${serverName}
     fi
     sleep 1
   fi
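The `|| :` tail on these probes (normalized in this diff from `||:`) makes a failed command count as success, so a missing package cannot abort the installer even under `set -e`. A minimal demonstration:

```shell
#!/bin/sh
set -e
# Under "set -e" the failing grep below would normally kill the script;
# the ":" no-op fallback absorbs the failure and execution continues.
osinfo="CentOS Linux release 7.9"
echo "$osinfo" | grep -qwi "ubuntu" || :
status=survived
echo "$status"
```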
@@ -923,69 +888,60 @@ function updateProduct() {
 
   if [ "$verMode" == "cluster" ]; then
     install_connector
-    install_taosx
+    install_plugins
   fi
 
   install_examples
-  install_web
   if [ -z $1 ]; then
     install_bin
-    install_service
-    install_adapter_service
-    install_adapter_config
-    install_keeper_service
-    if [ "${verMode}" != "cloud" ]; then
-      install_keeper_config
-    fi
+    install_services
+    if [ "${pagMode}" != "lite" ]; then
+      install_adapter_config
+      install_taosx_config
+      install_explorer_config
+      if [ "${verMode}" != "cloud" ]; then
+        install_keeper_config
+      fi
+    fi
 
     openresty_work=false
 
     echo
-    echo -e "${GREEN_DARK}To configure ${productName2} ${NC}\t\t: edit ${cfg_install_dir}/${configFile2}"
-    [ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
-      echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}\t: edit ${configDir}/${clientName2}adapter.toml"
+    echo -e "${GREEN_DARK}To configure ${productName} ${NC}\t\t: edit ${configDir}/${configFile}"
+    [ -f ${configDir}/${adapterName}.toml ] && [ -f ${installDir}/bin/${adapterName} ] &&
+      echo -e "${GREEN_DARK}To configure ${adapterName} ${NC}\t: edit ${configDir}/${adapterName}.toml"
     if [ "$verMode" == "cluster" ]; then
-      echo -e "${GREEN_DARK}To configure ${clientName2}-explorer ${NC}\t: edit ${configDir}/explorer.toml"
+      echo -e "${GREEN_DARK}To configure ${explorerName} ${NC}\t: edit ${configDir}/explorer.toml"
     fi
     if ((${service_mod} == 0)); then
-      echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}systemctl start ${serverName2}${NC}"
-      [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
-        echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName2}adapter ${NC}"
+      echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}systemctl start ${serverName}${NC}"
+      [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] &&
+        echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName}adapter ${NC}"
     elif ((${service_mod} == 1)); then
-      echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}service ${serverName2} start${NC}"
-      [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
-        echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}service ${clientName2}adapter start${NC}"
+      echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}service ${serverName} start${NC}"
+      [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] &&
+        echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}service ${clientName}adapter start${NC}"
     else
-      echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ./${serverName2}${NC}"
-      [ -f ${installDir}/bin/${clientName2}adapter ] && \
-        echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${clientName2}adapter ${NC}"
+      echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ./${serverName}${NC}"
+      [ -f ${installDir}/bin/${clientName}adapter ] &&
+        echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${clientName}adapter ${NC}"
     fi
 
-    echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}\t\t: sudo systemctl enable ${clientName2}keeper ${NC}"
-    if [ "$verMode" == "cluster" ];then
-      echo -e "${GREEN_DARK}To start ${clientName2}x ${NC}\t\t\t: sudo systemctl start ${clientName2}x ${NC}"
-      echo -e "${GREEN_DARK}To start ${clientName2}-explorer ${NC}\t\t: sudo systemctl start ${clientName2}-explorer ${NC}"
+    echo -e "${GREEN_DARK}To enable ${clientName}keeper ${NC}\t\t: sudo systemctl enable ${clientName}keeper ${NC}"
+    if [ "$verMode" == "cluster" ]; then
+      echo -e "${GREEN_DARK}To start ${clientName}x ${NC}\t\t\t: sudo systemctl start ${clientName}x ${NC}"
+      echo -e "${GREEN_DARK}To start ${clientName}-explorer ${NC}\t\t: sudo systemctl start ${clientName}-explorer ${NC}"
     fi
 
-    # if [ ${openresty_work} = 'true' ]; then
-    #   echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell OR from ${GREEN_UNDERLINE}http://127.0.0.1:${web_port}${NC}"
-    # else
-    #   echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell${NC}"
-    # fi
-
-    # if ((${prompt_force} == 1)); then
-    #   echo ""
-    #   echo -e "${RED}Please run '${serverName2} --force-keep-file' at first time for the exist ${productName2} $exist_version!${NC}"
-    # fi
-
     echo
-    echo "${productName2} is updated successfully!"
+    echo "${productName} is updated successfully!"
     echo
-    if [ "$verMode" == "cluster" ];then
+    if [ "$verMode" == "cluster" ]; then
       echo -e "\033[44;32;1mTo start all the components : ./start-all.sh${NC}"
     fi
-    echo -e "\033[44;32;1mTo access ${productName2} : ${clientName2} -h $serverFqdn${NC}"
-    if [ "$verMode" == "cluster" ];then
+    echo -e "\033[44;32;1mTo access ${productName} : ${clientName} -h $serverFqdn${NC}"
+    if [ "$verMode" == "cluster" ]; then
       echo -e "\033[44;32;1mTo access the management system : http://$serverFqdn:6060${NC}"
      echo -e "\033[44;32;1mTo read the user manual : http://$serverFqdn:6060/docs${NC}"
     fi
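The hint lines print only when both the config file and the binary exist, via a chained `[ -f … ] && [ -f … ] &&` guard rather than an `if` block; and since a line ending in `&&` already continues onto the next line, the trailing backslashes dropped in this diff were redundant. A replay with throwaway files:

```shell
#!/bin/sh
# The "print a hint only when both files exist" pattern used above.
cfg=$(mktemp)
bin=$(mktemp)
hint=""
[ -f "$cfg" ] && [ -f "$bin" ] && hint="would print the adapter hint"
rm -f "$bin"
missing=""
# With one file gone, the chain short-circuits and the assignment never runs
# (the guard's nonzero status is harmless without "set -e").
[ -f "$cfg" ] && [ -f "$bin" ] && missing="never set"
rm -f "$cfg"
echo "${hint:-none} / ${missing:-none}"
```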
@@ -993,7 +949,7 @@ function updateProduct() {
     install_bin
 
     echo
-    echo -e "\033[44;32;1m${productName2} client is updated successfully!${NC}"
+    echo -e "\033[44;32;1m${productName} client is updated successfully!${NC}"
   fi
 
   cd $script_dir
@@ -1008,7 +964,7 @@ function installProduct() {
   fi
   tar -zxf ${tarName}
 
-  echo "Start to install ${productName2}..."
+  echo "Start to install ${productName}..."
 
   install_main_path
 
@@ -1026,79 +982,63 @@ function installProduct() {
 
   if [ "$verMode" == "cluster" ]; then
     install_connector
-    install_taosx
+    install_plugins
   fi
   install_examples
-  install_web
   if [ -z $1 ]; then # install service and client
     # For installing new
     install_bin
-    install_service
-    install_adapter_service
-    install_adapter_config
-    install_keeper_service
-    if [ "${verMode}" != "cloud" ]; then
-      install_keeper_config
-    fi
-    openresty_work=false
+    install_services
+    if [ "${pagMode}" != "lite" ]; then
+      install_adapter_config
+      install_taosx_config
+      install_explorer_config
+      if [ "${verMode}" != "cloud" ]; then
+        install_keeper_config
+      fi
+    fi
 
+    openresty_work=false
 
     # Ask if to start the service
     echo
-    echo -e "${GREEN_DARK}To configure ${productName2} ${NC}\t\t: edit ${cfg_install_dir}/${configFile2}"
-    [ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
-      echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}\t: edit ${configDir}/${clientName2}adapter.toml"
+    echo -e "${GREEN_DARK}To configure ${productName} ${NC}\t\t: edit ${configDir}/${configFile}"
+    [ -f ${configDir}/${clientName}adapter.toml ] && [ -f ${installDir}/bin/${clientName}adapter ] &&
+      echo -e "${GREEN_DARK}To configure ${clientName}Adapter ${NC}\t: edit ${configDir}/${clientName}adapter.toml"
     if [ "$verMode" == "cluster" ]; then
-      echo -e "${GREEN_DARK}To configure ${clientName2}-explorer ${NC}\t: edit ${configDir}/explorer.toml"
+      echo -e "${GREEN_DARK}To configure ${clientName}-explorer ${NC}\t: edit ${configDir}/explorer.toml"
     fi
     if ((${service_mod} == 0)); then
-      echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}systemctl start ${serverName2}${NC}"
-      [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
-        echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName2}adapter ${NC}"
+      echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}systemctl start ${serverName}${NC}"
+      [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] &&
+        echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName}adapter ${NC}"
     elif ((${service_mod} == 1)); then
-      echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}service ${serverName2} start${NC}"
-      [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
-        echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}service ${clientName2}adapter start${NC}"
+      echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}service ${serverName} start${NC}"
+      [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] &&
+        echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}service ${clientName}adapter start${NC}"
     else
-      echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${serverName2}${NC}"
-      [ -f ${installDir}/bin/${clientName2}adapter ] && \
-        echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${clientName2}adapter ${NC}"
+      echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${serverName}${NC}"
+      [ -f ${installDir}/bin/${clientName}adapter ] &&
+        echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${clientName}adapter ${NC}"
     fi
 
-    echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}\t\t: sudo systemctl enable ${clientName2}keeper ${NC}"
+    echo -e "${GREEN_DARK}To enable ${clientName}keeper ${NC}\t\t: sudo systemctl enable ${clientName}keeper ${NC}"
 
-    if [ "$verMode" == "cluster" ];then
-      echo -e "${GREEN_DARK}To start ${clientName2}x ${NC}\t\t\t: sudo systemctl start ${clientName2}x ${NC}"
-      echo -e "${GREEN_DARK}To start ${clientName2}-explorer ${NC}\t\t: sudo systemctl start ${clientName2}-explorer ${NC}"
+    if [ "$verMode" == "cluster" ]; then
+      echo -e "${GREEN_DARK}To start ${clientName}x ${NC}\t\t\t: sudo systemctl start ${clientName}x ${NC}"
+      echo -e "${GREEN_DARK}To start ${clientName}-explorer ${NC}\t\t: sudo systemctl start ${clientName}-explorer ${NC}"
     fi
 
-    # if [ ! -z "$firstEp" ]; then
-    #   tmpFqdn=${firstEp%%:*}
-    #   substr=":"
-    #   if [[ $firstEp =~ $substr ]]; then
-    #     tmpPort=${firstEp#*:}
-    #   else
-    #     tmpPort=""
-    #   fi
-    #   if [[ "$tmpPort" != "" ]]; then
-    #     echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: ${clientName2} -h $tmpFqdn -P $tmpPort${GREEN_DARK} to login into cluster, then${NC}"
-    #   else
-    #     echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: ${clientName2} -h $tmpFqdn${GREEN_DARK} to login into cluster, then${NC}"
-    #   fi
-    #   echo -e "${GREEN_DARK}execute ${NC}: create dnode 'newDnodeFQDN:port'; ${GREEN_DARK}to add this new node${NC}"
-    #   echo
-    # elif [ ! -z "$serverFqdn" ]; then
-    #   echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: ${clientName2} -h $serverFqdn${GREEN_DARK} to login into ${productName2} server${NC}"
-    #   echo
-    # fi
-
     echo
-    echo "${productName2} is installed successfully!"
+    echo "${productName} is installed successfully!"
     echo
-    if [ "$verMode" == "cluster" ];then
+    if [ "$verMode" == "cluster" ]; then
      echo -e "\033[44;32;1mTo start all the components : sudo ./start-all.sh${NC}"
     fi
-    echo -e "\033[44;32;1mTo access ${productName2} : ${clientName2} -h $serverFqdn${NC}"
-    if [ "$verMode" == "cluster" ];then
+    echo -e "\033[44;32;1mTo access ${productName} : ${clientName} -h $serverFqdn${NC}"
+    if [ "$verMode" == "cluster" ]; then
      echo -e "\033[44;32;1mTo access the management system : http://$serverFqdn:6060${NC}"
      echo -e "\033[44;32;1mTo read the user manual : http://$serverFqdn:6060/docs-en${NC}"
     fi
|
||||||
install_bin
|
install_bin
|
||||||
|
|
||||||
echo
|
echo
|
||||||
echo -e "\033[44;32;1m${productName2} client is installed successfully!${NC}"
|
echo -e "\033[44;32;1m${productName} client is installed successfully!${NC}"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
cd $script_dir
|
cd $script_dir
|
||||||
|
@@ -1115,15 +1055,40 @@ function installProduct() {
   rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/")
 }
 
+check_java_env() {
+  if ! command -v java &> /dev/null
+  then
+    echo -e "\033[31mWarning: Java command not found. Version 1.8+ is required.\033[0m"
+    return
+  fi
+
+  java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
+  java_version_ok=false
+  if [[ $(echo "$java_version" | cut -d"." -f1) -gt 1 ]]; then
+    java_version_ok=true
+  elif [[ $(echo "$java_version" | cut -d"." -f1) -eq 1 && $(echo "$java_version" | cut -d"." -f2) -ge 8 ]]; then
+    java_version_ok=true
+  fi
+
+  if $java_version_ok; then
+    echo -e "\033[32mJava ${java_version} has been found.\033[0m"
+  else
+    echo -e "\033[31mWarning: Java Version 1.8+ is required, but version ${java_version} has been found.\033[0m"
+  fi
+}
+
 ## ==============================Main program starts from here============================
 serverFqdn=$(hostname)
 if [ "$verType" == "server" ]; then
+  if [ -x ${script_dir}/${xname}/bin/${xname} ]; then
+    check_java_env
+  fi
   # Check default 2.x data file.
-  if [ -x ${data_dir}/dnode/dnodeCfg.json ]; then
-    echo -e "\033[44;31;5mThe default data directory ${data_dir} contains old data of ${productName2} 2.x, please clear it before installing!\033[0m"
+  if [ -x ${dataDir}/dnode/dnodeCfg.json ]; then
+    echo -e "\033[44;31;5mThe default data directory ${dataDir} contains old data of ${productName} 2.x, please clear it before installing!\033[0m"
   else
     # Install server and client
-    if [ -x ${bin_dir}/${serverName2} ]; then
+    if [ -x ${bin_dir}/${serverName} ]; then
       update_flag=1
       updateProduct
     else
|
||||||
elif [ "$verType" == "client" ]; then
|
elif [ "$verType" == "client" ]; then
|
||||||
interactiveFqdn=no
|
interactiveFqdn=no
|
||||||
# Only install client
|
# Only install client
|
||||||
if [ -x ${bin_dir}/${clientName2} ]; then
|
if [ -x ${bin_dir}/${clientName} ]; then
|
||||||
update_flag=1
|
update_flag=1
|
||||||
updateProduct client
|
updateProduct client
|
||||||
else
|
else
|
||||||
|
@@ -1142,5 +1107,3 @@ elif [ "$verType" == "client" ]; then
 else
   echo "please input correct verType"
 fi
-
-
@@ -231,12 +231,8 @@ fi
 
 if [ "$verMode" == "cluster" ]; then
   sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/bin/remove.sh >>remove_temp.sh
-  sed -i "s/serverName2=\"taosd\"/serverName2=\"${serverName2}\"/g" remove_temp.sh
-  sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" remove_temp.sh
-  sed -i "s/configFile2=\"taos.cfg\"/configFile2=\"${clientName2}.cfg\"/g" remove_temp.sh
-  sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" remove_temp.sh
-  cusDomain=`echo "${cusEmail2}" | sed 's/^[^@]*@//'`
-  sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusDomain}\"/g" remove_temp.sh
+  sed -i "s/PREFIX=\"taos\"/PREFIX=\"${serverName2}\"/g" remove_temp.sh
+  sed -i "s/productName=\"TDengine\"/productName=\"${productName2}\"/g" remove_temp.sh
   mv remove_temp.sh ${install_dir}/bin/remove.sh
 fi
 if [ "$verMode" == "cloud" ]; then
@ -262,12 +258,10 @@ cp ${install_files} ${install_dir}
|
||||||
cp ${install_dir}/install.sh install_temp.sh
|
cp ${install_dir}/install.sh install_temp.sh
|
||||||
if [ "$verMode" == "cluster" ]; then
|
if [ "$verMode" == "cluster" ]; then
|
||||||
sed -i 's/verMode=edge/verMode=cluster/g' install_temp.sh
|
sed -i 's/verMode=edge/verMode=cluster/g' install_temp.sh
|
||||||
sed -i "s/serverName2=\"taosd\"/serverName2=\"${serverName2}\"/g" install_temp.sh
|
sed -i "s/PREFIX=\"taos\"/PREFIX=\"${serverName2}\"/g" install_temp.sh
|
||||||
sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" install_temp.sh
|
sed -i "s/productName=\"TDengine\"/productName=\"${productName2}\"/g" install_temp.sh
|
||||||
sed -i "s/configFile2=\"taos.cfg\"/configFile2=\"${clientName2}.cfg\"/g" install_temp.sh
|
|
||||||
sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" install_temp.sh
|
|
||||||
cusDomain=`echo "${cusEmail2}" | sed 's/^[^@]*@//'`
|
cusDomain=`echo "${cusEmail2}" | sed 's/^[^@]*@//'`
|
||||||
sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusDomain}\"/g" install_temp.sh
|
sed -i "s/emailName=\"taosdata.com\"/emailName=\"${cusDomain}\"/g" install_temp.sh
|
||||||
mv install_temp.sh ${install_dir}/install.sh
|
mv install_temp.sh ${install_dir}/install.sh
|
||||||
fi
|
fi
|
||||||
if [ "$verMode" == "cloud" ]; then
|
if [ "$verMode" == "cloud" ]; then
|
||||||
|
@ -368,7 +362,6 @@ if [ "$verMode" == "cluster" ]; then
|
||||||
# copy taosx
|
# copy taosx
|
||||||
if [ -d ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ]; then
|
if [ -d ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ]; then
|
||||||
cp -r ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ${install_dir}
|
cp -r ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ${install_dir}
|
||||||
cp ${top_dir}/../enterprise/packaging/install_taosx.sh ${install_dir}/taosx
|
|
||||||
cp ${top_dir}/../enterprise/src/plugins/taosx/packaging/uninstall.sh ${install_dir}/taosx
|
cp ${top_dir}/../enterprise/src/plugins/taosx/packaging/uninstall.sh ${install_dir}/taosx
|
||||||
sed -i 's/target=\"\"/target=\"taosx\"/g' ${install_dir}/taosx/uninstall.sh
|
sed -i 's/target=\"\"/target=\"taosx\"/g' ${install_dir}/taosx/uninstall.sh
|
||||||
fi
|
fi
|
||||||
|
|
|
@@ -16,6 +16,8 @@
 #include "cJSON.h"
 #include "clientInt.h"
 #include "parser.h"
+#include "tcol.h"
+#include "tcompression.h"
 #include "tdatablock.h"
 #include "tdef.h"
 #include "tglobal.h"
@@ -27,7 +29,7 @@
 static tb_uid_t processSuid(tb_uid_t suid, char* db) { return suid + MurmurHash3_32(db, strlen(db)); }

 static char* buildCreateTableJson(SSchemaWrapper* schemaRow, SSchemaWrapper* schemaTag, char* name, int64_t id,
-                                  int8_t t) {
+                                  int8_t t, SColCmprWrapper* pColCmprRow) {
 char* string = NULL;
 cJSON* json = cJSON_CreateObject();
 if (json == NULL) {
@@ -67,6 +69,23 @@ static char* buildCreateTableJson(SSchemaWrapper* schemaRow, SSchemaWrapper* sch
 cJSON* isPk = cJSON_CreateBool(s->flags & COL_IS_KEY);
 cJSON_AddItemToObject(column, "isPrimarykey", isPk);
 cJSON_AddItemToArray(columns, column);

+if (pColCmprRow == NULL || pColCmprRow->nCols <= i) {
+continue;
+}
+SColCmpr* pColCmpr = pColCmprRow->pColCmpr + i;
+const char* encode = columnEncodeStr(COMPRESS_L1_TYPE_U32(pColCmpr->alg));
+const char* compress = columnCompressStr(COMPRESS_L2_TYPE_U32(pColCmpr->alg));
+const char* level = columnLevelStr(COMPRESS_L2_TYPE_LEVEL_U32(pColCmpr->alg));
+
+cJSON* encodeJson = cJSON_CreateString(encode);
+cJSON_AddItemToObject(column, "encode", encodeJson);
+
+cJSON* compressJson = cJSON_CreateString(compress);
+cJSON_AddItemToObject(column, "compress", compressJson);
+
+cJSON* levelJson = cJSON_CreateString(level);
+cJSON_AddItemToObject(column, "level", levelJson);
 }
 cJSON_AddItemToObject(json, "columns", columns);

@@ -96,6 +115,30 @@ static char* buildCreateTableJson(SSchemaWrapper* schemaRow, SSchemaWrapper* sch
 return string;
 }

+static int32_t setCompressOption(cJSON* json, uint32_t para) {
+uint8_t encode = COMPRESS_L1_TYPE_U32(para);
+if (encode != 0) {
+const char* encodeStr = columnEncodeStr(encode);
+cJSON* encodeJson = cJSON_CreateString(encodeStr);
+cJSON_AddItemToObject(json, "encode", encodeJson);
+return 0;
+}
+uint8_t compress = COMPRESS_L2_TYPE_U32(para);
+if (compress != 0) {
+const char* compressStr = columnCompressStr(compress);
+cJSON* compressJson = cJSON_CreateString(compressStr);
+cJSON_AddItemToObject(json, "compress", compressJson);
+return 0;
+}
+uint8_t level = COMPRESS_L2_TYPE_LEVEL_U32(para);
+if (level != 0) {
+const char* levelStr = columnLevelStr(level);
+cJSON* levelJson = cJSON_CreateString(levelStr);
+cJSON_AddItemToObject(json, "level", levelJson);
+return 0;
+}
+return 0;
+}
 static char* buildAlterSTableJson(void* alterData, int32_t alterDataLen) {
 SMAlterStbReq req = {0};
 cJSON* json = NULL;
@@ -180,6 +223,13 @@ static char* buildAlterSTableJson(void* alterData, int32_t alterDataLen) {
 cJSON_AddItemToObject(json, "colNewName", colNewName);
 break;
 }
+case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS: {
+TAOS_FIELD* field = taosArrayGet(req.pFields, 0);
+cJSON* colName = cJSON_CreateString(field->name);
+cJSON_AddItemToObject(json, "colName", colName);
+setCompressOption(json, field->bytes);
+break;
+}
 default:
 break;
 }
@@ -205,7 +255,7 @@ static char* processCreateStb(SMqMetaRsp* metaRsp) {
 if (tDecodeSVCreateStbReq(&coder, &req) < 0) {
 goto _err;
 }
-string = buildCreateTableJson(&req.schemaRow, &req.schemaTag, req.name, req.suid, TSDB_SUPER_TABLE);
+string = buildCreateTableJson(&req.schemaRow, &req.schemaTag, req.name, req.suid, TSDB_SUPER_TABLE, &req.colCmpr);
 _err:
 uDebug("create stable return, sql json:%s", string);
 tDecoderClear(&coder);
@@ -373,8 +423,8 @@ static char* processCreateTable(SMqMetaRsp* metaRsp) {
 if (pCreateReq->type == TSDB_CHILD_TABLE) {
 string = buildCreateCTableJson(req.pReqs, req.nReqs);
 } else if (pCreateReq->type == TSDB_NORMAL_TABLE) {
-string =
-    buildCreateTableJson(&pCreateReq->ntb.schemaRow, NULL, pCreateReq->name, pCreateReq->uid, TSDB_NORMAL_TABLE);
+string = buildCreateTableJson(&pCreateReq->ntb.schemaRow, NULL, pCreateReq->name, pCreateReq->uid,
+                              TSDB_NORMAL_TABLE, &pCreateReq->colCmpr);
 }
 }

@@ -549,6 +599,12 @@ static char* processAlterTable(SMqMetaRsp* metaRsp) {
 cJSON_AddItemToObject(json, "colValueNull", isNullCJson);
 break;
 }
+case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS: {
+cJSON* colName = cJSON_CreateString(vAlterTbReq.colName);
+cJSON_AddItemToObject(json, "colName", colName);
+setCompressOption(json, vAlterTbReq.compress);
+break;
+}
 default:
 break;
 }
@@ -717,8 +773,8 @@ static int32_t taosCreateStb(TAOS* taos, void* meta, int32_t metaLen) {
 SSchema* pSchema = req.schemaRow.pSchema + i;
 SFieldWithOptions field = {.type = pSchema->type, .flags = pSchema->flags, .bytes = pSchema->bytes};
 strcpy(field.name, pSchema->name);
-// todo get active compress param
-setDefaultOptionsForField(&field);
+SColCmpr *p = &req.colCmpr.pColCmpr[i];
+field.compress = p->alg;
 taosArrayPush(pReq.pColumns, &field);
 }
 pReq.pTags = taosArrayInit(req.schemaTag.nCols, sizeof(SField));
@@ -1349,7 +1405,7 @@ static int32_t taosAlterTable(TAOS* taos, void* meta, int32_t metaLen) {
 int tlen = 0;
 req.source = TD_REQ_FROM_TAOX;
 tEncodeSize(tEncodeSVAlterTbReq, &req, tlen, code);
-if(code != 0){
+if (code != 0) {
 code = TSDB_CODE_OUT_OF_MEMORY;
 goto end;
 }
@@ -1365,7 +1421,7 @@ static int32_t taosAlterTable(TAOS* taos, void* meta, int32_t metaLen) {
 SEncoder coder = {0};
 tEncoderInit(&coder, pBuf, tlen - sizeof(SMsgHead));
 code = tEncodeSVAlterTbReq(&coder, &req);
-if(code != 0){
+if (code != 0) {
 tEncoderClear(&coder);
 code = TSDB_CODE_OUT_OF_MEMORY;
 goto end;
@@ -1631,7 +1687,7 @@ static int32_t tmqWriteRawDataImpl(TAOS* taos, void* data, int32_t dataLen) {
 rspObj.common.resType = RES_TYPE__TMQ;

 int8_t dataVersion = *(int8_t*)data;
-if (dataVersion >= MQ_DATA_RSP_VERSION){
+if (dataVersion >= MQ_DATA_RSP_VERSION) {
 data = POINTER_SHIFT(data, sizeof(int8_t) + sizeof(int32_t));
 dataLen -= sizeof(int8_t) + sizeof(int32_t);
 }
@@ -1777,7 +1833,7 @@ static int32_t tmqWriteRawMetaDataImpl(TAOS* taos, void* data, int32_t dataLen)
 rspObj.common.resType = RES_TYPE__TMQ_METADATA;

 int8_t dataVersion = *(int8_t*)data;
-if (dataVersion >= MQ_DATA_RSP_VERSION){
+if (dataVersion >= MQ_DATA_RSP_VERSION) {
 data = POINTER_SHIFT(data, sizeof(int8_t) + sizeof(int32_t));
 dataLen -= sizeof(int8_t) + sizeof(int32_t);
 }
@@ -1982,8 +2038,8 @@ char* tmq_get_json_meta(TAOS_RES* res) {

 void tmq_free_json_meta(char* jsonMeta) { taosMemoryFreeClear(jsonMeta); }

-static int32_t getOffSetLen(const void *rsp){
-const SMqDataRspCommon *pRsp = rsp;
+static int32_t getOffSetLen(const void* rsp) {
+const SMqDataRspCommon* pRsp = rsp;
 SEncoder coder = {0};
 tEncoderInit(&coder, NULL, 0);
 if (tEncodeSTqOffsetVal(&coder, &pRsp->reqOffset) < 0) return -1;
@@ -1993,9 +2049,9 @@ static int32_t getOffSetLen(const void *rsp){
 return pos;
 }

-typedef int32_t __encode_func__(SEncoder *pEncoder, const void *pRsp);
+typedef int32_t __encode_func__(SEncoder* pEncoder, const void* pRsp);

-static int32_t encodeMqDataRsp(__encode_func__* encodeFunc, void* rspObj, tmq_raw_data* raw){
+static int32_t encodeMqDataRsp(__encode_func__* encodeFunc, void* rspObj, tmq_raw_data* raw) {
 int32_t len = 0;
 int32_t code = 0;
 SEncoder encoder = {0};
@@ -2007,7 +2063,7 @@ static int32_t encodeMqDataRsp(__encode_func__* encodeFunc, void* rspObj, tmq_ra
 }
 len += sizeof(int8_t) + sizeof(int32_t);
 buf = taosMemoryCalloc(1, len);
-if(buf == NULL){
+if (buf == NULL) {
 terrno = TSDB_CODE_OUT_OF_MEMORY;
 goto FAILED;
 }
@@ -2017,7 +2073,7 @@ static int32_t encodeMqDataRsp(__encode_func__* encodeFunc, void* rspObj, tmq_ra
 goto FAILED;
 }
 int32_t offsetLen = getOffSetLen(rspObj);
-if(offsetLen <= 0){
+if (offsetLen <= 0) {
 terrno = TSDB_CODE_INVALID_MSG;
 goto FAILED;
 }
@@ -2025,7 +2081,7 @@ static int32_t encodeMqDataRsp(__encode_func__* encodeFunc, void* rspObj, tmq_ra
 terrno = TSDB_CODE_INVALID_MSG;
 goto FAILED;
 }
-if(encodeFunc(&encoder, rspObj) < 0){
+if (encodeFunc(&encoder, rspObj) < 0) {
 terrno = TSDB_CODE_INVALID_MSG;
 goto FAILED;
 }
@@ -2053,7 +2109,7 @@ int32_t tmq_get_raw(TAOS_RES* res, tmq_raw_data* raw) {
 uDebug("tmq get raw type meta:%p", raw);
 } else if (TD_RES_TMQ(res)) {
 SMqRspObj* rspObj = ((SMqRspObj*)res);
-if(encodeMqDataRsp(tEncodeMqDataRsp, &rspObj->rsp, raw) != 0){
+if (encodeMqDataRsp(tEncodeMqDataRsp, &rspObj->rsp, raw) != 0) {
 uError("tmq get raw type error:%d", terrno);
 return terrno;
 }
@@ -2062,7 +2118,7 @@ int32_t tmq_get_raw(TAOS_RES* res, tmq_raw_data* raw) {
 } else if (TD_RES_TMQ_METADATA(res)) {
 SMqTaosxRspObj* rspObj = ((SMqTaosxRspObj*)res);

-if(encodeMqDataRsp(tEncodeSTaosxRsp, &rspObj->rsp, raw) != 0){
+if (encodeMqDataRsp(tEncodeSTaosxRsp, &rspObj->rsp, raw) != 0) {
 uError("tmq get raw type error:%d", terrno);
 return terrno;
 }
@@ -832,7 +832,7 @@ static int32_t smlFindNearestPowerOf2(int32_t length, uint8_t type) {
 return result;
 }

-static int32_t smlProcessSchemaAction(SSmlHandle *info, SSchema *schemaField, SHashObj *schemaHash, SArray *cols,
+static int32_t smlProcessSchemaAction(SSmlHandle *info, SSchema *schemaField, SHashObj *schemaHash, SArray *cols, SArray *checkDumplicateCols,
 ESchemaAction *action, bool isTag) {
 int32_t code = TSDB_CODE_SUCCESS;
 for (int j = 0; j < taosArrayGetSize(cols); ++j) {
@@ -843,6 +843,13 @@ static int32_t smlProcessSchemaAction(SSmlHandle *info, SSchema *schemaField, SH
 return code;
 }
 }

+for (int j = 0; j < taosArrayGetSize(checkDumplicateCols); ++j) {
+SSmlKv *kv = (SSmlKv *)taosArrayGet(checkDumplicateCols, j);
+if(taosHashGet(schemaHash, kv->key, kv->keyLen) != NULL){
+return TSDB_CODE_PAR_DUPLICATED_COLUMN;
+}
+}
 return TSDB_CODE_SUCCESS;
 }

@@ -1106,7 +1113,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
 }

 ESchemaAction action = SCHEMA_ACTION_NULL;
-code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->tags, &action, true);
+code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->tags, sTableData->cols, &action, true);
 if (code != TSDB_CODE_SUCCESS) {
 goto end;
 }
@@ -1181,7 +1188,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
 taosHashPut(hashTmp, pTableMeta->schema[i].name, strlen(pTableMeta->schema[i].name), &i, SHORT_BYTES);
 }
 action = SCHEMA_ACTION_NULL;
-code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->cols, &action, false);
+code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->cols, sTableData->tags, &action, false);
 if (code != TSDB_CODE_SUCCESS) {
 goto end;
 }
@@ -1290,17 +1297,24 @@ end:
 return code;
 }

-static void smlInsertMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols) {
+static int32_t smlInsertMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols, SHashObj *checkDuplicate) {
+terrno = 0;
 for (int16_t i = 0; i < taosArrayGetSize(cols); ++i) {
 SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, i);
 int ret = taosHashPut(metaHash, kv->key, kv->keyLen, &i, SHORT_BYTES);
 if (ret == 0) {
 taosArrayPush(metaArray, kv);
+if(taosHashGet(checkDuplicate, kv->key, kv->keyLen) != NULL) {
+return TSDB_CODE_PAR_DUPLICATED_COLUMN;
+}
+}else if(terrno == TSDB_CODE_DUP_KEY){
+return TSDB_CODE_PAR_DUPLICATED_COLUMN;
 }
 }
+return TSDB_CODE_SUCCESS;
 }

-static int32_t smlUpdateMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols, bool isTag, SSmlMsgBuf *msg) {
+static int32_t smlUpdateMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols, bool isTag, SSmlMsgBuf *msg, SHashObj* checkDuplicate) {
 for (int i = 0; i < taosArrayGetSize(cols); ++i) {
 SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, i);

@@ -1332,6 +1346,11 @@ static int32_t smlUpdateMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols
 int ret = taosHashPut(metaHash, kv->key, kv->keyLen, &size, SHORT_BYTES);
 if (ret == 0) {
 taosArrayPush(metaArray, kv);
+if(taosHashGet(checkDuplicate, kv->key, kv->keyLen) != NULL) {
+return TSDB_CODE_PAR_DUPLICATED_COLUMN;
+}
+}else{
+return ret;
 }
 }
 }
@@ -1456,7 +1475,7 @@ static int32_t smlPushCols(SArray *colsArray, SArray *cols) {
 taosHashPut(kvHash, kv->key, kv->keyLen, &kv, POINTER_BYTES);
 if (terrno == TSDB_CODE_DUP_KEY) {
 taosHashCleanup(kvHash);
-return terrno;
+return TSDB_CODE_PAR_DUPLICATED_COLUMN;
 }
 }

@@ -1512,12 +1531,12 @@ static int32_t smlParseLineBottom(SSmlHandle *info) {
 if (tableMeta) { // update meta
 uDebug("SML:0x%" PRIx64 " smlParseLineBottom update meta, format:%d, linenum:%d", info->id, info->dataFormat,
 info->lineNum);
-ret = smlUpdateMeta((*tableMeta)->colHash, (*tableMeta)->cols, elements->colArray, false, &info->msgBuf);
+ret = smlUpdateMeta((*tableMeta)->colHash, (*tableMeta)->cols, elements->colArray, false, &info->msgBuf, (*tableMeta)->tagHash);
 if (ret == TSDB_CODE_SUCCESS) {
-ret = smlUpdateMeta((*tableMeta)->tagHash, (*tableMeta)->tags, tinfo->tags, true, &info->msgBuf);
+ret = smlUpdateMeta((*tableMeta)->tagHash, (*tableMeta)->tags, tinfo->tags, true, &info->msgBuf, (*tableMeta)->colHash);
 }
 if (ret != TSDB_CODE_SUCCESS) {
-uError("SML:0x%" PRIx64 " smlUpdateMeta failed", info->id);
+uError("SML:0x%" PRIx64 " smlUpdateMeta failed, ret:%d", info->id, ret);
 return ret;
 }
 } else {
@@ -1527,13 +1546,19 @@ static int32_t smlParseLineBottom(SSmlHandle *info) {
 if (meta == NULL) {
 return TSDB_CODE_OUT_OF_MEMORY;
 }
-taosHashPut(info->superTables, elements->measure, elements->measureLen, &meta, POINTER_BYTES);
-terrno = 0;
-smlInsertMeta(meta->tagHash, meta->tags, tinfo->tags);
-if (terrno == TSDB_CODE_DUP_KEY) {
-return terrno;
-}
-smlInsertMeta(meta->colHash, meta->cols, elements->colArray);
+ret = taosHashPut(info->superTables, elements->measure, elements->measureLen, &meta, POINTER_BYTES);
+if (ret != TSDB_CODE_SUCCESS) {
+uError("SML:0x%" PRIx64 " put measuer to hash failed", info->id);
+return ret;
+}
+ret = smlInsertMeta(meta->tagHash, meta->tags, tinfo->tags, NULL);
+if (ret == TSDB_CODE_SUCCESS) {
+ret = smlInsertMeta(meta->colHash, meta->cols, elements->colArray, meta->tagHash);
+}
+if (ret != TSDB_CODE_SUCCESS) {
+uError("SML:0x%" PRIx64 " insert meta failed:%s", info->id, tstrerror(ret));
+return ret;
+}
 }
 }
 uDebug("SML:0x%" PRIx64 " smlParseLineBottom end, format:%d, linenum:%d", info->id, info->dataFormat, info->lineNum);
@@ -660,13 +660,13 @@ static void asyncCommitAllOffsets(tmq_t* tmq, tmq_commit_cb* pCommitFp, void* us

 taosRLockLatch(&tmq->lock);
 int32_t numOfTopics = taosArrayGetSize(tmq->clientTopics);
-tscInfo("consumer:0x%" PRIx64 " start to commit offset for %d topics", tmq->consumerId, numOfTopics);
+tscDebug("consumer:0x%" PRIx64 " start to commit offset for %d topics", tmq->consumerId, numOfTopics);

 for (int32_t i = 0; i < numOfTopics; i++) {
 SMqClientTopic* pTopic = taosArrayGet(tmq->clientTopics, i);
 int32_t numOfVgroups = taosArrayGetSize(pTopic->vgs);

-tscInfo("consumer:0x%" PRIx64 " commit offset for topics:%s, numOfVgs:%d", tmq->consumerId, pTopic->topicName,
+tscDebug("consumer:0x%" PRIx64 " commit offset for topics:%s, numOfVgs:%d", tmq->consumerId, pTopic->topicName,
 numOfVgroups);
 for (int32_t j = 0; j < numOfVgroups; j++) {
 SMqClientVg* pVg = taosArrayGet(pTopic->vgs, j);
@@ -688,19 +688,19 @@ static void asyncCommitAllOffsets(tmq_t* tmq, tmq_commit_cb* pCommitFp, void* us
 continue;
 }

-tscInfo("consumer:0x%" PRIx64
+tscDebug("consumer:0x%" PRIx64
 " topic:%s on vgId:%d send commit msg success, send offset:%s committed:%s, ordinal:%d/%d",
 tmq->consumerId, pTopic->topicName, pVg->vgId, offsetBuf, commitBuf, j + 1, numOfVgroups);
 tOffsetCopy(&pVg->offsetInfo.committedOffset, &pVg->offsetInfo.endOffset);
 } else {
-tscInfo("consumer:0x%" PRIx64 " topic:%s vgId:%d, no commit, current:%" PRId64 ", ordinal:%d/%d",
+tscDebug("consumer:0x%" PRIx64 " topic:%s vgId:%d, no commit, current:%" PRId64 ", ordinal:%d/%d",
 tmq->consumerId, pTopic->topicName, pVg->vgId, pVg->offsetInfo.endOffset.version, j + 1, numOfVgroups);
 }
 }
 }
 taosRUnLockLatch(&tmq->lock);

-tscInfo("consumer:0x%" PRIx64 " total commit:%d for %d topics", tmq->consumerId, pParamSet->waitingRspNum - 1,
+tscDebug("consumer:0x%" PRIx64 " total commit:%d for %d topics", tmq->consumerId, pParamSet->waitingRspNum - 1,
 numOfTopics);

 // request is sent
@@ -815,7 +815,7 @@ void tmqSendHbReq(void* param, void* tmrId) {
 offRows->ever = pVg->offsetInfo.walVerEnd;
 char buf[TSDB_OFFSET_LEN] = {0};
 tFormatOffset(buf, TSDB_OFFSET_LEN, &offRows->offset);
-tscInfo("consumer:0x%" PRIx64 ",report offset, group:%s vgId:%d, offset:%s/%" PRId64 ", rows:%" PRId64,
+tscDebug("consumer:0x%" PRIx64 ",report offset, group:%s vgId:%d, offset:%s/%" PRId64 ", rows:%" PRId64,
 tmq->consumerId, tmq->groupId, offRows->vgId, buf, offRows->ever, offRows->rows);
 }
 }
@@ -1058,6 +1058,7 @@ static void tmqMgmtInit(void) {

 #define SET_ERROR_MSG_TMQ(MSG) \
 if (errstr != NULL) snprintf(errstr, errstrLen, MSG);

 tmq_t* tmq_consumer_new(tmq_conf_t* conf, char* errstr, int32_t errstrLen) {
 if (conf == NULL) {
 SET_ERROR_MSG_TMQ("configure is null")
@@ -1504,7 +1505,7 @@ static bool doUpdateLocalEp(tmq_t* tmq, int32_t epoch, const SMqAskEpRsp* pRsp)

 int32_t topicNumGet = taosArrayGetSize(pRsp->topics);
 if (epoch < tmq->epoch || (epoch == tmq->epoch && topicNumGet == 0)) {
-tscInfo("consumer:0x%" PRIx64 " no update ep epoch from %d to epoch %d, incoming topics:%d", tmq->consumerId,
+tscDebug("consumer:0x%" PRIx64 " no update ep epoch from %d to epoch %d, incoming topics:%d", tmq->consumerId,
 tmq->epoch, epoch, topicNumGet);
 if (atomic_load_8(&tmq->status) == TMQ_CONSUMER_STATUS__RECOVER) {
 atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__READY);
@@ -1800,14 +1801,14 @@ static int32_t tmqPollImpl(tmq_t* tmq, int64_t timeout) {
 for (int j = 0; j < numOfVg; j++) {
 SMqClientVg* pVg = taosArrayGet(pTopic->vgs, j);
 if (taosGetTimestampMs() - pVg->emptyBlockReceiveTs < EMPTY_BLOCK_POLL_IDLE_DURATION) {  // less than 10ms
-tscTrace("consumer:0x%" PRIx64 " epoch %d, vgId:%d idle for 10ms before start next poll", tmq->consumerId,
+tscDebug("consumer:0x%" PRIx64 " epoch %d, vgId:%d idle for 10ms before start next poll", tmq->consumerId,
 tmq->epoch, pVg->vgId);
 continue;
 }

 if (tmq->replayEnable &&
 taosGetTimestampMs() - pVg->blockReceiveTs < pVg->blockSleepForReplay) {  // less than 10ms
-tscTrace("consumer:0x%" PRIx64 " epoch %d, vgId:%d idle for %" PRId64 "ms before start next poll when replay",
+tscDebug("consumer:0x%" PRIx64 " epoch %d, vgId:%d idle for %" PRId64 "ms before start next poll when replay",
 tmq->consumerId, tmq->epoch, pVg->vgId, pVg->blockSleepForReplay);
 continue;
 }
@@ -1815,7 +1816,7 @@ static int32_t tmqPollImpl(tmq_t* tmq, int64_t timeout) {
 int32_t vgStatus = atomic_val_compare_exchange_32(&pVg->vgStatus, TMQ_VG_STATUS__IDLE, TMQ_VG_STATUS__WAIT);
 if (vgStatus == TMQ_VG_STATUS__WAIT) {
 int32_t vgSkipCnt = atomic_add_fetch_32(&pVg->vgSkipCnt, 1);
-tscTrace("consumer:0x%" PRIx64 " epoch %d wait poll-rsp, skip vgId:%d skip cnt %d", tmq->consumerId, tmq->epoch,
+tscDebug("consumer:0x%" PRIx64 " epoch %d wait poll-rsp, skip vgId:%d skip cnt %d", tmq->consumerId, tmq->epoch,
 pVg->vgId, vgSkipCnt);
 continue;
 }
@@ -1875,7 +1876,7 @@ static void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout) {
 SMqPollRspWrapper* pollRspWrapper = (SMqPollRspWrapper*)pRspWrapper;
 if (pRspWrapper->code == TSDB_CODE_TMQ_CONSUMER_MISMATCH) {
 atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__RECOVER);
-tscDebug("consumer:0x%" PRIx64 " wait for the re-balance, set status to be RECOVER", tmq->consumerId);
+tscDebug("consumer:0x%" PRIx64 " wait for the rebalance, set status to be RECOVER", tmq->consumerId);
 } else if (pRspWrapper->code == TSDB_CODE_TQ_NO_COMMITTED_OFFSET) {
 terrno = pRspWrapper->code;
 tscError("consumer:0x%" PRIx64 " unexpected rsp from poll, code:%s", tmq->consumerId,
@@ -2476,7 +2477,7 @@ int32_t askEpCb(void* param, SDataBuf* pMsg, int32_t code) {

 SMqRspHead* head = pMsg->pData;
 int32_t epoch = atomic_load_32(&tmq->epoch);
 tscInfo("consumer:0x%" PRIx64 ", recv ep, msg epoch %d, current epoch %d", tmq->consumerId, head->epoch, epoch);
|
tscDebug("consumer:0x%" PRIx64 ", recv ep, msg epoch %d, current epoch %d", tmq->consumerId, head->epoch, epoch);
|
||||||
if (pParam->sync) {
|
if (pParam->sync) {
|
||||||
SMqAskEpRsp rsp = {0};
|
SMqAskEpRsp rsp = {0};
|
||||||
tDecodeSMqAskEpRsp(POINTER_SHIFT(pMsg->pData, sizeof(SMqRspHead)), &rsp);
|
tDecodeSMqAskEpRsp(POINTER_SHIFT(pMsg->pData, sizeof(SMqRspHead)), &rsp);
|
||||||
|
@ -2581,7 +2582,7 @@ void askEp(tmq_t* pTmq, void* param, bool sync, bool updateEpSet) {
|
||||||
sendInfo->msgType = TDMT_MND_TMQ_ASK_EP;
|
sendInfo->msgType = TDMT_MND_TMQ_ASK_EP;
|
||||||
|
|
||||||
SEpSet epSet = getEpSet_s(&pTmq->pTscObj->pAppInfo->mgmtEp);
|
SEpSet epSet = getEpSet_s(&pTmq->pTscObj->pAppInfo->mgmtEp);
|
||||||
tscInfo("consumer:0x%" PRIx64 " ask ep from mnode, reqId:0x%" PRIx64, pTmq->consumerId, sendInfo->requestId);
|
tscDebug("consumer:0x%" PRIx64 " ask ep from mnode, reqId:0x%" PRIx64, pTmq->consumerId, sendInfo->requestId);
|
||||||
|
|
||||||
int64_t transporterId = 0;
|
int64_t transporterId = 0;
|
||||||
code = asyncSendMsgToServer(pTmq->pTscObj->pAppInfo->pTransporter, &epSet, &transporterId, sendInfo);
|
code = asyncSendMsgToServer(pTmq->pTscObj->pAppInfo->pTransporter, &epSet, &transporterId, sendInfo);
|
||||||
|
|
|
@@ -306,22 +306,22 @@ void setColLevel(uint32_t* compress, uint8_t level) {
   return;
 }

-int8_t setColCompressByOption(uint8_t type, uint8_t encode, uint16_t compressType, uint8_t level, bool check,
+int32_t setColCompressByOption(uint8_t type, uint8_t encode, uint16_t compressType, uint8_t level, bool check,
                                uint32_t* compress) {
-  if (check && !validColEncode(type, encode)) return 0;
+  if (check && !validColEncode(type, encode)) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
   setColEncode(compress, encode);

   if (compressType == TSDB_COLVAL_COMPRESS_DISABLED) {
     setColCompress(compress, compressType);
     setColLevel(compress, TSDB_COLVAL_LEVEL_DISABLED);
   } else {
-    if (check && !validColCompress(type, compressType)) return 0;
+    if (check && !validColCompress(type, compressType)) return TSDB_CODE_TSC_COMPRESS_PARAM_ERROR;
     setColCompress(compress, compressType);

-    if (check && !validColCompressLevel(type, level)) return 0;
+    if (check && !validColCompressLevel(type, level)) return TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR;
     setColLevel(compress, level);
   }
-  return 1;
+  return TSDB_CODE_SUCCESS;
 }

 bool useCompress(uint8_t tableType) { return TSDB_SUPER_TABLE == tableType || TSDB_NORMAL_TABLE == tableType; }

@@ -397,10 +397,17 @@ uint32_t createDefaultColCmprByType(uint8_t type) {
   SET_COMPRESS(encode, compress, lvl, ret);
   return ret;
 }
-bool validColCmprByType(uint8_t type, uint32_t cmpr) {
+int32_t validColCmprByType(uint8_t type, uint32_t cmpr) {
   DEFINE_VAR(cmpr);
-  if (validColEncode(type, l1) && validColCompress(type, l2) && validColCompressLevel(type, lvl)) {
-    return true;
+  if (!validColEncode(type, l1)) {
+    return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
   }
-  return false;
+  if (!validColCompress(type, l2)) {
+    return TSDB_CODE_TSC_COMPRESS_PARAM_ERROR;
+  }
+
+  if (!validColCompressLevel(type, lvl)) {
+    return TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR;
+  }
+  return TSDB_CODE_SUCCESS;
 }
@@ -193,15 +193,15 @@ static int32_t doCopyNItems(struct SColumnInfoData* pColumnInfoData, int32_t cur
   size_t  start = 1;
   int32_t t = 0;
   int32_t count = log(numOfRows) / log(2);
-  uint32_t startOffset = (IS_VAR_DATA_TYPE(pColumnInfoData->info.type)) ? pColumnInfoData->varmeta.length : (currentRow * itemLen);
+  uint32_t startOffset =
+      (IS_VAR_DATA_TYPE(pColumnInfoData->info.type)) ? pColumnInfoData->varmeta.length : (currentRow * itemLen);

   // the first item
   memcpy(pColumnInfoData->pData + startOffset, pData, itemLen);

   while (t < count) {
     int32_t xlen = 1 << t;
-    memcpy(pColumnInfoData->pData + start * itemLen + startOffset,
-           pColumnInfoData->pData + startOffset,
+    memcpy(pColumnInfoData->pData + start * itemLen + startOffset, pColumnInfoData->pData + startOffset,
            xlen * itemLen);
     t += 1;
     start += xlen;
@@ -209,8 +209,7 @@ static int32_t doCopyNItems(struct SColumnInfoData* pColumnInfoData, int32_t cur

   // the tail part
   if (numOfRows > start) {
-    memcpy(pColumnInfoData->pData + start * itemLen + startOffset,
-           pColumnInfoData->pData + startOffset,
+    memcpy(pColumnInfoData->pData + start * itemLen + startOffset, pColumnInfoData->pData + startOffset,
            (numOfRows - start) * itemLen);
   }

@@ -491,7 +490,8 @@ int32_t colDataAssign(SColumnInfoData* pColumnInfoData, const SColumnInfoData* p
   return 0;
 }

-int32_t colDataAssignNRows(SColumnInfoData* pDst, int32_t dstIdx, const SColumnInfoData* pSrc, int32_t srcIdx, int32_t numOfRows) {
+int32_t colDataAssignNRows(SColumnInfoData* pDst, int32_t dstIdx, const SColumnInfoData* pSrc, int32_t srcIdx,
+                           int32_t numOfRows) {
   if (pDst->info.type != pSrc->info.type || pDst->info.bytes != pSrc->info.bytes || pSrc->reassigned) {
     return TSDB_CODE_FAILED;
   }
@@ -588,14 +588,14 @@ int32_t colDataAssignNRows(SColumnInfoData* pDst, int32_t dstIdx, const SColumnI
     }

     if (pSrc->pData != NULL) {
-      memcpy(pDst->pData + pDst->info.bytes * dstIdx, pSrc->pData + pSrc->info.bytes * srcIdx, pDst->info.bytes * numOfRows);
+      memcpy(pDst->pData + pDst->info.bytes * dstIdx, pSrc->pData + pSrc->info.bytes * srcIdx,
+             pDst->info.bytes * numOfRows);
     }
   }

   return 0;
 }

 size_t blockDataGetNumOfCols(const SSDataBlock* pBlock) { return taosArrayGetSize(pBlock->pDataBlock); }

 size_t blockDataGetNumOfRows(const SSDataBlock* pBlock) { return pBlock->info.rows; }

@@ -742,7 +742,6 @@ void blockDataShrinkNRows(SSDataBlock* pBlock, int32_t numOfRows) {
   pBlock->info.rows -= numOfRows;
 }

-
 size_t blockDataGetSize(const SSDataBlock* pBlock) {
   size_t total = 0;
   size_t numOfCols = taosArrayGetSize(pBlock->pDataBlock);
@@ -827,19 +826,16 @@ SSDataBlock* blockDataExtractBlock(SSDataBlock* pBlock, int32_t startIndex, int3
     return NULL;
   }

-
   blockDataEnsureCapacity(pDst, rowCount);
-
   /* may have disorder varchar data, TODO
   for (int32_t i = 0; i < numOfCols; ++i) {
     SColumnInfoData* pColData = taosArrayGet(pBlock->pDataBlock, i);
     SColumnInfoData* pDstCol = taosArrayGet(pDst->pDataBlock, i);

     colDataAssignNRows(pDstCol, 0, pColData, startIndex, rowCount);
   }
   */
-

   size_t numOfCols = taosArrayGetSize(pBlock->pDataBlock);
   for (int32_t i = 0; i < numOfCols; ++i) {
@@ -1322,7 +1318,7 @@ int32_t blockDataSort(SSDataBlock* pDataBlock, SArray* pOrderInfo) {
   }

   terrno = 0;
-  taosqsort(index, rows, sizeof(int32_t), &helper, dataBlockCompar);
+  taosqsort_r(index, rows, sizeof(int32_t), &helper, dataBlockCompar);
   if (terrno) return terrno;

   int64_t p1 = taosGetTimestampUs();
@@ -1400,7 +1396,6 @@ void blockDataReset(SSDataBlock* pDataBlock) {
   pInfo->id.groupId = 0;
 }

-
 /*
  * NOTE: the type of the input column may be TSDB_DATA_TYPE_NULL, which is used to denote
  * the all NULL value in this column. It is an internal representation of all NULL value column, and no visible to
@@ -2402,12 +2397,12 @@ _end:
   return TSDB_CODE_SUCCESS;
 }

-void buildCtbNameAddGroupId(const char* stbName, char* ctbName, uint64_t groupId){
+void buildCtbNameAddGroupId(const char* stbName, char* ctbName, uint64_t groupId) {
   char tmp[TSDB_TABLE_NAME_LEN] = {0};
-  if (stbName == NULL){
-    snprintf(tmp, TSDB_TABLE_NAME_LEN, "_%"PRIu64, groupId);
-  }else{
-    snprintf(tmp, TSDB_TABLE_NAME_LEN, "_%s_%"PRIu64, stbName, groupId);
+  if (stbName == NULL) {
+    snprintf(tmp, TSDB_TABLE_NAME_LEN, "_%" PRIu64, groupId);
+  } else {
+    snprintf(tmp, TSDB_TABLE_NAME_LEN, "_%s_%" PRIu64, stbName, groupId);
   }
   ctbName[TSDB_TABLE_NAME_LEN - strlen(tmp) - 1] = 0;  // put stbname + groupId to the end
   strcat(ctbName, tmp);
@@ -69,7 +69,7 @@
 static int32_t tDecodeSVAlterTbReqCommon(SDecoder *pDecoder, SVAlterTbReq *pReq);
 static int32_t tDecodeSBatchDeleteReqCommon(SDecoder *pDecoder, SBatchDeleteReq *pReq);
 static int32_t tEncodeTableTSMAInfoRsp(SEncoder *pEncoder, const STableTSMAInfoRsp *pRsp);
-static int32_t tDecodeTableTSMAInfoRsp(SDecoder* pDecoder, STableTSMAInfoRsp* pRsp);
+static int32_t tDecodeTableTSMAInfoRsp(SDecoder *pDecoder, STableTSMAInfoRsp *pRsp);

 int32_t tInitSubmitMsgIter(const SSubmitReq *pMsg, SSubmitMsgIter *pIter) {
   if (pMsg == NULL) {
@@ -895,8 +895,8 @@ int32_t tSerializeSMCreateSmaReq(void *buf, int32_t bufLen, SMCreateSmaReq *pReq
   if (tEncodeI64(&encoder, pReq->normSourceTbUid) < 0) return -1;
   if (tEncodeI32(&encoder, taosArrayGetSize(pReq->pVgroupVerList)) < 0) return -1;

-  for(int32_t i = 0; i < taosArrayGetSize(pReq->pVgroupVerList); ++i) {
-    SVgroupVer* p = taosArrayGet(pReq->pVgroupVerList, i);
+  for (int32_t i = 0; i < taosArrayGetSize(pReq->pVgroupVerList); ++i) {
+    SVgroupVer *p = taosArrayGet(pReq->pVgroupVerList, i);
     if (tEncodeI32(&encoder, p->vgId) < 0) return -1;
     if (tEncodeI64(&encoder, p->ver) < 0) return -1;
   }
@@ -8000,7 +8000,7 @@ int32_t tDeserializeSCMCreateStreamReq(void *buf, int32_t bufLen, SCMCreateStrea
     }
   }
   if (!tDecodeIsEnd(&decoder)) {
-    if (tDecodeI64(&decoder, &pReq->smaId)< 0) return -1;
+    if (tDecodeI64(&decoder, &pReq->smaId) < 0) return -1;
   }

   tEndDecode(&decoder);
@@ -8709,6 +8709,7 @@ int32_t tEncodeSVAlterTbReq(SEncoder *pEncoder, const SVAlterTbReq *pReq) {
       }
       break;
     case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS:
+      if (tEncodeCStr(pEncoder, pReq->colName) < 0) return -1;
       if (tEncodeU32(pEncoder, pReq->compress) < 0) return -1;
       break;
     default:
@@ -8763,6 +8764,7 @@ static int32_t tDecodeSVAlterTbReqCommon(SDecoder *pDecoder, SVAlterTbReq *pReq)
       }
       break;
     case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS:
+      if (tDecodeCStr(pDecoder, &pReq->colName) < 0) return -1;
       if (tDecodeU32(pDecoder, &pReq->compress) < 0) return -1;
       break;
     default:
@@ -9200,7 +9202,7 @@ int32_t tEncodeMqDataRspCommon(SEncoder *pEncoder, const SMqDataRspCommon *pRsp)

 int32_t tEncodeMqDataRsp(SEncoder *pEncoder, const void *pRsp) {
   if (tEncodeMqDataRspCommon(pEncoder, pRsp) < 0) return -1;
-  if (tEncodeI64(pEncoder, ((SMqDataRsp*)pRsp)->sleepTime) < 0) return -1;
+  if (tEncodeI64(pEncoder, ((SMqDataRsp *)pRsp)->sleepTime) < 0) return -1;
   return 0;
 }

@@ -9253,7 +9255,7 @@ int32_t tDecodeMqDataRspCommon(SDecoder *pDecoder, SMqDataRspCommon *pRsp) {
 int32_t tDecodeMqDataRsp(SDecoder *pDecoder, void *pRsp) {
   if (tDecodeMqDataRspCommon(pDecoder, pRsp) < 0) return -1;
   if (!tDecodeIsEnd(pDecoder)) {
-    if (tDecodeI64(pDecoder, &((SMqDataRsp*)pRsp)->sleepTime) < 0) return -1;
+    if (tDecodeI64(pDecoder, &((SMqDataRsp *)pRsp)->sleepTime) < 0) return -1;
   }

   return 0;
@@ -9272,9 +9274,7 @@ static void tDeleteMqDataRspCommon(void *rsp) {
   tOffsetDestroy(&pRsp->rspOffset);
 }

-void tDeleteMqDataRsp(void *rsp) {
-  tDeleteMqDataRspCommon(rsp);
-}
+void tDeleteMqDataRsp(void *rsp) { tDeleteMqDataRspCommon(rsp); }

 int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const void *rsp) {
   if (tEncodeMqDataRspCommon(pEncoder, rsp) < 0) return -1;
@@ -9300,7 +9300,7 @@ int32_t tDecodeSTaosxRsp(SDecoder *pDecoder, void *rsp) {
     pRsp->createTableLen = taosArrayInit(pRsp->createTableNum, sizeof(int32_t));
     pRsp->createTableReq = taosArrayInit(pRsp->createTableNum, sizeof(void *));
     for (int32_t i = 0; i < pRsp->createTableNum; i++) {
-      void * pCreate = NULL;
+      void *pCreate = NULL;
       uint64_t len = 0;
       if (tDecodeBinaryAlloc(pDecoder, &pCreate, &len) < 0) return -1;
       int32_t l = (int32_t)len;
@@ -10114,7 +10114,7 @@ void setFieldWithOptions(SFieldWithOptions *fieldWithOptions, SField *field) {
   fieldWithOptions->type = field->type;
   strncpy(fieldWithOptions->name, field->name, TSDB_COL_NAME_LEN);
 }
-int32_t tSerializeTableTSMAInfoReq(void* buf, int32_t bufLen, const STableTSMAInfoReq* pReq) {
+int32_t tSerializeTableTSMAInfoReq(void *buf, int32_t bufLen, const STableTSMAInfoReq *pReq) {
   SEncoder encoder = {0};
   tEncoderInit(&encoder, buf, bufLen);

@@ -10129,13 +10129,13 @@ int32_t tSerializeTableTSMAInfoReq(void* buf, int32_t bufLen, const STableTSMAIn
   return tlen;
 }

-int32_t tDeserializeTableTSMAInfoReq(void* buf, int32_t bufLen, STableTSMAInfoReq* pReq) {
+int32_t tDeserializeTableTSMAInfoReq(void *buf, int32_t bufLen, STableTSMAInfoReq *pReq) {
   SDecoder decoder = {0};
   tDecoderInit(&decoder, buf, bufLen);

   if (tStartDecode(&decoder) < 0) return -1;
   if (tDecodeCStrTo(&decoder, pReq->name) < 0) return -1;
-  if (tDecodeI8(&decoder, (uint8_t*)&pReq->fetchingWithTsmaName) < 0) return -1;
+  if (tDecodeI8(&decoder, (uint8_t *)&pReq->fetchingWithTsmaName) < 0) return -1;

   tEndDecode(&decoder);

@@ -10143,7 +10143,7 @@ int32_t tDeserializeTableTSMAInfoReq(void* buf, int32_t bufLen, STableTSMAInfoRe
   return 0;
 }

-static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pTsmaInfo) {
+static int32_t tEncodeTableTSMAInfo(SEncoder *pEncoder, const STableTSMAInfo *pTsmaInfo) {
   if (tEncodeCStr(pEncoder, pTsmaInfo->name) < 0) return -1;
   if (tEncodeU64(pEncoder, pTsmaInfo->tsmaId) < 0) return -1;
   if (tEncodeCStr(pEncoder, pTsmaInfo->tb) < 0) return -1;
@@ -10160,7 +10160,7 @@ static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pT
   int32_t size = pTsmaInfo->pFuncs ? pTsmaInfo->pFuncs->size : 0;
   if (tEncodeI32(pEncoder, size) < 0) return -1;
   for (int32_t i = 0; i < size; ++i) {
-    STableTSMAFuncInfo* pFuncInfo = taosArrayGet(pTsmaInfo->pFuncs, i);
+    STableTSMAFuncInfo *pFuncInfo = taosArrayGet(pTsmaInfo->pFuncs, i);
     if (tEncodeI32(pEncoder, pFuncInfo->funcId) < 0) return -1;
     if (tEncodeI16(pEncoder, pFuncInfo->colId) < 0) return -1;
   }
@@ -10168,13 +10168,13 @@ static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pT
   size = pTsmaInfo->pTags ? pTsmaInfo->pTags->size : 0;
   if (tEncodeI32(pEncoder, size) < 0) return -1;
   for (int32_t i = 0; i < size; ++i) {
-    const SSchema* pSchema = taosArrayGet(pTsmaInfo->pTags, i);
+    const SSchema *pSchema = taosArrayGet(pTsmaInfo->pTags, i);
     if (tEncodeSSchema(pEncoder, pSchema) < 0) return -1;
   }
   size = pTsmaInfo->pUsedCols ? pTsmaInfo->pUsedCols->size : 0;
   if (tEncodeI32(pEncoder, size) < 0) return -1;
   for (int32_t i = 0; i < size; ++i) {
-    const SSchema* pSchema = taosArrayGet(pTsmaInfo->pUsedCols, i);
+    const SSchema *pSchema = taosArrayGet(pTsmaInfo->pUsedCols, i);
     if (tEncodeSSchema(pEncoder, pSchema) < 0) return -1;
   }

@@ -10187,7 +10187,7 @@ static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pT
   return 0;
 }

-static int32_t tDecodeTableTSMAInfo(SDecoder* pDecoder, STableTSMAInfo* pTsmaInfo) {
+static int32_t tDecodeTableTSMAInfo(SDecoder *pDecoder, STableTSMAInfo *pTsmaInfo) {
   if (tDecodeCStrTo(pDecoder, pTsmaInfo->name) < 0) return -1;
   if (tDecodeU64(pDecoder, &pTsmaInfo->tsmaId) < 0) return -1;
   if (tDecodeCStrTo(pDecoder, pTsmaInfo->tb) < 0) return -1;
@@ -10219,7 +10219,7 @@ static int32_t tDecodeTableTSMAInfo(SDecoder* pDecoder, STableTSMAInfo* pTsmaInf
     if (!pTsmaInfo->pTags) return -1;
     for (int32_t i = 0; i < size; ++i) {
       SSchema schema = {0};
-      if(tDecodeSSchema(pDecoder, &schema) < 0) return -1;
+      if (tDecodeSSchema(pDecoder, &schema) < 0) return -1;
       taosArrayPush(pTsmaInfo->pTags, &schema);
     }
   }
@@ -10239,7 +10239,7 @@ static int32_t tDecodeTableTSMAInfo(SDecoder* pDecoder, STableTSMAInfo* pTsmaInf
   if (tDecodeI64(pDecoder, &pTsmaInfo->reqTs) < 0) return -1;
   if (tDecodeI64(pDecoder, &pTsmaInfo->rspTs) < 0) return -1;
   if (tDecodeI64(pDecoder, &pTsmaInfo->delayDuration) < 0) return -1;
-  if (tDecodeI8(pDecoder, (int8_t*)&pTsmaInfo->fillHistoryFinished) < 0) return -1;
+  if (tDecodeI8(pDecoder, (int8_t *)&pTsmaInfo->fillHistoryFinished) < 0) return -1;
   return 0;
 }

@@ -10247,13 +10247,13 @@ static int32_t tEncodeTableTSMAInfoRsp(SEncoder *pEncoder, const STableTSMAInfoR
   int32_t size = pRsp->pTsmas ? pRsp->pTsmas->size : 0;
   if (tEncodeI32(pEncoder, size) < 0) return -1;
   for (int32_t i = 0; i < size; ++i) {
-    STableTSMAInfo* pInfo = taosArrayGetP(pRsp->pTsmas, i);
+    STableTSMAInfo *pInfo = taosArrayGetP(pRsp->pTsmas, i);
     if (tEncodeTableTSMAInfo(pEncoder, pInfo) < 0) return -1;
   }
   return 0;
 }

-static int32_t tDecodeTableTSMAInfoRsp(SDecoder* pDecoder, STableTSMAInfoRsp* pRsp) {
+static int32_t tDecodeTableTSMAInfoRsp(SDecoder *pDecoder, STableTSMAInfoRsp *pRsp) {
   int32_t size = 0;
   if (tDecodeI32(pDecoder, &size) < 0) return -1;
   if (size <= 0) return 0;
@@ -10268,7 +10268,7 @@ static int32_t tDecodeTableTSMAInfoRsp(SDecoder* pDecoder, STableTSMAInfoRsp* pR
   return 0;
 }

-int32_t tSerializeTableTSMAInfoRsp(void* buf, int32_t bufLen, const STableTSMAInfoRsp* pRsp) {
+int32_t tSerializeTableTSMAInfoRsp(void *buf, int32_t bufLen, const STableTSMAInfoRsp *pRsp) {
   SEncoder encoder = {0};
   tEncoderInit(&encoder, buf, bufLen);

@@ -10282,7 +10282,7 @@ int32_t tSerializeTableTSMAInfoRsp(void* buf, int32_t bufLen, const STableTSMAIn
   return tlen;
 }

-int32_t tDeserializeTableTSMAInfoRsp(void* buf, int32_t bufLen, STableTSMAInfoRsp* pRsp) {
+int32_t tDeserializeTableTSMAInfoRsp(void *buf, int32_t bufLen, STableTSMAInfoRsp *pRsp) {
   SDecoder decoder = {0};
   tDecoderInit(&decoder, buf, bufLen);

@@ -10295,7 +10295,7 @@ int32_t tDeserializeTableTSMAInfoRsp(void* buf, int32_t bufLen, STableTSMAInfoRs
   return 0;
 }

-void tFreeTableTSMAInfo(void* p) {
+void tFreeTableTSMAInfo(void *p) {
   STableTSMAInfo *pTsmaInfo = p;
   if (pTsmaInfo) {
     taosArrayDestroy(pTsmaInfo->pFuncs);
@@ -10305,20 +10305,20 @@ void tFreeTableTSMAInfo(void* p) {
   }
 }

-void tFreeAndClearTableTSMAInfo(void* p) {
-  STableTSMAInfo* pTsmaInfo = (STableTSMAInfo*)p;
+void tFreeAndClearTableTSMAInfo(void *p) {
+  STableTSMAInfo *pTsmaInfo = (STableTSMAInfo *)p;
   if (pTsmaInfo) {
     tFreeTableTSMAInfo(pTsmaInfo);
     taosMemoryFree(pTsmaInfo);
   }
 }

-int32_t tCloneTbTSMAInfo(STableTSMAInfo* pInfo, STableTSMAInfo** pRes) {
+int32_t tCloneTbTSMAInfo(STableTSMAInfo *pInfo, STableTSMAInfo **pRes) {
   int32_t code = TSDB_CODE_SUCCESS;
   if (NULL == pInfo) {
     return TSDB_CODE_SUCCESS;
   }
-  STableTSMAInfo* pRet = taosMemoryCalloc(1, sizeof(STableTSMAInfo));
+  STableTSMAInfo *pRet = taosMemoryCalloc(1, sizeof(STableTSMAInfo));
   if (!pRet) return TSDB_CODE_OUT_OF_MEMORY;

   *pRet = *pInfo;
@@ -10357,7 +10357,7 @@ static int32_t tEncodeStreamProgressReq(SEncoder *pEncoder, const SStreamProgres
   return 0;
 }

-int32_t tSerializeStreamProgressReq(void* buf, int32_t bufLen, const SStreamProgressReq* pReq) {
+int32_t tSerializeStreamProgressReq(void *buf, int32_t bufLen, const SStreamProgressReq *pReq) {
   SEncoder encoder = {0};
   tEncoderInit(&encoder, buf, bufLen);

@@ -10371,7 +10371,7 @@ int32_t tSerializeStreamProgressReq(void* buf, int32_t bufLen, const SStreamProg
   return tlen;
 }

-static int32_t tDecodeStreamProgressReq(SDecoder* pDecoder, SStreamProgressReq* pReq) {
+static int32_t tDecodeStreamProgressReq(SDecoder *pDecoder, SStreamProgressReq *pReq) {
   if (tDecodeI64(pDecoder, &pReq->streamId) < 0) return -1;
   if (tDecodeI32(pDecoder, &pReq->vgId) < 0) return -1;
   if (tDecodeI32(pDecoder, &pReq->fetchIdx) < 0) return -1;
@@ -10379,7 +10379,7 @@ static int32_t tDecodeStreamProgressReq(SDecoder* pDecoder, SStreamProgressReq*
   return 0;
 }

-int32_t tDeserializeStreamProgressReq(void* buf, int32_t bufLen, SStreamProgressReq* pReq) {
+int32_t tDeserializeStreamProgressReq(void *buf, int32_t bufLen, SStreamProgressReq *pReq) {
   SDecoder decoder = {0};
   tDecoderInit(&decoder, (char *)buf, bufLen);

@@ -10392,7 +10392,7 @@ int32_t tDeserializeStreamProgressReq(void* buf, int32_t bufLen, SStreamProgress
   return 0;
 }

-static int32_t tEncodeStreamProgressRsp(SEncoder* pEncoder, const SStreamProgressRsp* pRsp) {
+static int32_t tEncodeStreamProgressRsp(SEncoder *pEncoder, const SStreamProgressRsp *pRsp) {
   if (tEncodeI64(pEncoder, pRsp->streamId) < 0) return -1;
   if (tEncodeI32(pEncoder, pRsp->vgId) < 0) return -1;
   if (tEncodeI8(pEncoder, pRsp->fillHisFinished) < 0) return -1;
@@ -10402,7 +10402,7 @@ static int32_t tEncodeStreamProgressRsp(SEncoder* pEncoder, const SStreamProgres
   return 0;
 }

-int32_t tSerializeStreamProgressRsp(void* buf, int32_t bufLen, const SStreamProgressRsp* pRsp) {
+int32_t tSerializeStreamProgressRsp(void *buf, int32_t bufLen, const SStreamProgressRsp *pRsp) {
   SEncoder encoder = {0};
   tEncoderInit(&encoder, buf, bufLen);

@@ -10416,17 +10416,17 @@ int32_t tSerializeStreamProgressRsp(void* buf, int32_t bufLen, const SStreamProg
   return tlen;
 }

-static int32_t tDecodeStreamProgressRsp(SDecoder* pDecoder, SStreamProgressRsp* pRsp) {
+static int32_t tDecodeStreamProgressRsp(SDecoder *pDecoder, SStreamProgressRsp *pRsp) {
   if (tDecodeI64(pDecoder, &pRsp->streamId) < 0) return -1;
   if (tDecodeI32(pDecoder, &pRsp->vgId) < 0) return -1;
-  if (tDecodeI8(pDecoder, (int8_t*)&pRsp->fillHisFinished) < 0) return -1;
+  if (tDecodeI8(pDecoder, (int8_t *)&pRsp->fillHisFinished) < 0) return -1;
   if (tDecodeI64(pDecoder, &pRsp->progressDelay) < 0) return -1;
   if (tDecodeI32(pDecoder, &pRsp->fetchIdx) < 0) return -1;
|
||||||
if (tDecodeI32(pDecoder, &pRsp->subFetchIdx) < 0) return -1;
|
if (tDecodeI32(pDecoder, &pRsp->subFetchIdx) < 0) return -1;
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t tDeserializeSStreamProgressRsp(void* buf, int32_t bufLen, SStreamProgressRsp* pRsp) {
|
int32_t tDeserializeSStreamProgressRsp(void *buf, int32_t bufLen, SStreamProgressRsp *pRsp) {
|
||||||
SDecoder decoder = {0};
|
SDecoder decoder = {0};
|
||||||
tDecoderInit(&decoder, buf, bufLen);
|
tDecoderInit(&decoder, buf, bufLen);
|
||||||
|
|
||||||
|
@ -10440,22 +10440,22 @@ int32_t tDeserializeSStreamProgressRsp(void* buf, int32_t bufLen, SStreamProgres
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t tEncodeSMDropTbReqOnSingleVg(SEncoder *pEncoder, const SMDropTbReqsOnSingleVg *pReq) {
|
int32_t tEncodeSMDropTbReqOnSingleVg(SEncoder *pEncoder, const SMDropTbReqsOnSingleVg *pReq) {
|
||||||
const SVgroupInfo* pVgInfo = &pReq->vgInfo;
|
const SVgroupInfo *pVgInfo = &pReq->vgInfo;
|
||||||
if (tEncodeI32(pEncoder, pVgInfo->vgId) < 0) return -1;
|
if (tEncodeI32(pEncoder, pVgInfo->vgId) < 0) return -1;
|
||||||
if (tEncodeU32(pEncoder, pVgInfo->hashBegin) < 0) return -1;
|
if (tEncodeU32(pEncoder, pVgInfo->hashBegin) < 0) return -1;
|
||||||
if (tEncodeU32(pEncoder, pVgInfo->hashEnd) < 0) return -1;
|
if (tEncodeU32(pEncoder, pVgInfo->hashEnd) < 0) return -1;
|
||||||
if (tEncodeSEpSet(pEncoder, &pVgInfo->epSet) < 0) return -1;
|
if (tEncodeSEpSet(pEncoder, &pVgInfo->epSet) < 0) return -1;
|
||||||
if (tEncodeI32(pEncoder, pVgInfo->numOfTable) < 0) return -1;
|
if (tEncodeI32(pEncoder, pVgInfo->numOfTable) < 0) return -1;
|
||||||
int32_t size = pReq->pTbs ? pReq->pTbs->size: 0;
|
int32_t size = pReq->pTbs ? pReq->pTbs->size : 0;
|
||||||
if (tEncodeI32(pEncoder, size) < 0) return -1;
|
if (tEncodeI32(pEncoder, size) < 0) return -1;
|
||||||
for (int32_t i = 0; i < size; ++i) {
|
for (int32_t i = 0; i < size; ++i) {
|
||||||
const SVDropTbReq* pInfo = taosArrayGet(pReq->pTbs, i);
|
const SVDropTbReq *pInfo = taosArrayGet(pReq->pTbs, i);
|
||||||
if (tEncodeSVDropTbReq(pEncoder, pInfo) < 0) return -1;
|
if (tEncodeSVDropTbReq(pEncoder, pInfo) < 0) return -1;
|
||||||
}
|
}
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t tDecodeSMDropTbReqOnSingleVg(SDecoder* pDecoder, SMDropTbReqsOnSingleVg* pReq) {
|
int32_t tDecodeSMDropTbReqOnSingleVg(SDecoder *pDecoder, SMDropTbReqsOnSingleVg *pReq) {
|
||||||
if (tDecodeI32(pDecoder, &pReq->vgInfo.vgId) < 0) return -1;
|
if (tDecodeI32(pDecoder, &pReq->vgInfo.vgId) < 0) return -1;
|
||||||
if (tDecodeU32(pDecoder, &pReq->vgInfo.hashBegin) < 0) return -1;
|
if (tDecodeU32(pDecoder, &pReq->vgInfo.hashBegin) < 0) return -1;
|
||||||
if (tDecodeU32(pDecoder, &pReq->vgInfo.hashEnd) < 0) return -1;
|
if (tDecodeU32(pDecoder, &pReq->vgInfo.hashEnd) < 0) return -1;
|
||||||
|
@ -10477,18 +10477,18 @@ int32_t tDecodeSMDropTbReqOnSingleVg(SDecoder* pDecoder, SMDropTbReqsOnSingleVg*
|
||||||
}
|
}
|
||||||
|
|
||||||
void tFreeSMDropTbReqOnSingleVg(void *p) {
|
void tFreeSMDropTbReqOnSingleVg(void *p) {
|
||||||
SMDropTbReqsOnSingleVg* pReq = p;
|
SMDropTbReqsOnSingleVg *pReq = p;
|
||||||
taosArrayDestroy(pReq->pTbs);
|
taosArrayDestroy(pReq->pTbs);
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t tSerializeSMDropTbsReq(void* buf, int32_t bufLen, const SMDropTbsReq* pReq){
|
int32_t tSerializeSMDropTbsReq(void *buf, int32_t bufLen, const SMDropTbsReq *pReq) {
|
||||||
SEncoder encoder = {0};
|
SEncoder encoder = {0};
|
||||||
tEncoderInit(&encoder, buf, bufLen);
|
tEncoderInit(&encoder, buf, bufLen);
|
||||||
tStartEncode(&encoder);
|
tStartEncode(&encoder);
|
||||||
int32_t size = pReq->pVgReqs ? pReq->pVgReqs->size : 0;
|
int32_t size = pReq->pVgReqs ? pReq->pVgReqs->size : 0;
|
||||||
if (tEncodeI32(&encoder, size) < 0) return -1;
|
if (tEncodeI32(&encoder, size) < 0) return -1;
|
||||||
for (int32_t i = 0; i < size; ++i) {
|
for (int32_t i = 0; i < size; ++i) {
|
||||||
SMDropTbReqsOnSingleVg* pVgReq = taosArrayGet(pReq->pVgReqs, i);
|
SMDropTbReqsOnSingleVg *pVgReq = taosArrayGet(pReq->pVgReqs, i);
|
||||||
if (tEncodeSMDropTbReqOnSingleVg(&encoder, pVgReq) < 0) return -1;
|
if (tEncodeSMDropTbReqOnSingleVg(&encoder, pVgReq) < 0) return -1;
|
||||||
}
|
}
|
||||||
tEndEncode(&encoder);
|
tEndEncode(&encoder);
|
||||||
|
@ -10497,7 +10497,7 @@ int32_t tSerializeSMDropTbsReq(void* buf, int32_t bufLen, const SMDropTbsReq* pR
|
||||||
return tlen;
|
return tlen;
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t tDeserializeSMDropTbsReq(void* buf, int32_t bufLen, SMDropTbsReq* pReq) {
|
int32_t tDeserializeSMDropTbsReq(void *buf, int32_t bufLen, SMDropTbsReq *pReq) {
|
||||||
SDecoder decoder = {0};
|
SDecoder decoder = {0};
|
||||||
tDecoderInit(&decoder, buf, bufLen);
|
tDecoderInit(&decoder, buf, bufLen);
|
||||||
tStartDecode(&decoder);
|
tStartDecode(&decoder);
|
||||||
|
@ -10518,12 +10518,12 @@ int32_t tDeserializeSMDropTbsReq(void* buf, int32_t bufLen, SMDropTbsReq* pReq)
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
void tFreeSMDropTbsReq(void* p) {
|
void tFreeSMDropTbsReq(void *p) {
|
||||||
SMDropTbsReq* pReq = p;
|
SMDropTbsReq *pReq = p;
|
||||||
taosArrayDestroyEx(pReq->pVgReqs, tFreeSMDropTbReqOnSingleVg);
|
taosArrayDestroyEx(pReq->pVgReqs, tFreeSMDropTbReqOnSingleVg);
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t tEncodeVFetchTtlExpiredTbsRsp(SEncoder* pCoder, const SVFetchTtlExpiredTbsRsp* pRsp) {
|
int32_t tEncodeVFetchTtlExpiredTbsRsp(SEncoder *pCoder, const SVFetchTtlExpiredTbsRsp *pRsp) {
|
||||||
if (tEncodeI32(pCoder, pRsp->vgId) < 0) return -1;
|
if (tEncodeI32(pCoder, pRsp->vgId) < 0) return -1;
|
||||||
int32_t size = pRsp->pExpiredTbs ? pRsp->pExpiredTbs->size : 0;
|
int32_t size = pRsp->pExpiredTbs ? pRsp->pExpiredTbs->size : 0;
|
||||||
if (tEncodeI32(pCoder, size) < 0) return -1;
|
if (tEncodeI32(pCoder, size) < 0) return -1;
|
||||||
|
@ -10533,7 +10533,7 @@ int32_t tEncodeVFetchTtlExpiredTbsRsp(SEncoder* pCoder, const SVFetchTtlExpiredT
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t tDecodeVFetchTtlExpiredTbsRsp(SDecoder* pCoder, SVFetchTtlExpiredTbsRsp* pRsp) {
|
int32_t tDecodeVFetchTtlExpiredTbsRsp(SDecoder *pCoder, SVFetchTtlExpiredTbsRsp *pRsp) {
|
||||||
if (tDecodeI32(pCoder, &pRsp->vgId) < 0) return -1;
|
if (tDecodeI32(pCoder, &pRsp->vgId) < 0) return -1;
|
||||||
int32_t size = 0;
|
int32_t size = 0;
|
||||||
if (tDecodeI32(pCoder, &size) < 0) return -1;
|
if (tDecodeI32(pCoder, &size) < 0) return -1;
|
||||||
|
@ -10549,7 +10549,7 @@ int32_t tDecodeVFetchTtlExpiredTbsRsp(SDecoder* pCoder, SVFetchTtlExpiredTbsRsp*
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
void tFreeFetchTtlExpiredTbsRsp(void* p) {
|
void tFreeFetchTtlExpiredTbsRsp(void *p) {
|
||||||
SVFetchTtlExpiredTbsRsp* pRsp = p;
|
SVFetchTtlExpiredTbsRsp *pRsp = p;
|
||||||
taosArrayDestroy(pRsp->pExpiredTbs);
|
taosArrayDestroy(pRsp->pExpiredTbs);
|
||||||
}
|
}
|
||||||
|
|
|
@@ -16,6 +16,8 @@
 #define _DEFAULT_SOURCE
 #include "mmInt.h"
 
+#define PROCESS_THRESHOLD (2000 * 1000)
+
 static inline int32_t mmAcquire(SMnodeMgmt *pMgmt) {
   int32_t code = 0;
   taosThreadRwlockRdlock(&pMgmt->lock);
@@ -53,6 +55,14 @@ static void mmProcessRpcMsg(SQueueInfo *pInfo, SRpcMsg *pMsg) {
 
   int32_t code = mndProcessRpcMsg(pMsg);
 
+  if (pInfo->timestamp != 0) {
+    int64_t cost = taosGetTimestampUs() - pInfo->timestamp;
+    if (cost > PROCESS_THRESHOLD) {
+      dGWarn("worker:%d,message has been processed for too long, type:%s, cost: %" PRId64 "s", pInfo->threadNum,
+             TMSG_INFO(pMsg->msgType), cost / (1000 * 1000));
+    }
+  }
+
   if (IsReq(pMsg) && pMsg->info.handle != NULL && code != TSDB_CODE_ACTION_IN_PROGRESS) {
     if (code != 0 && terrno != 0) code = terrno;
     mmSendRsp(pMsg, code);
@@ -166,7 +166,7 @@ typedef struct {
   int32_t failedTimes;
   void*   rpcRsp;
   int32_t rpcRspLen;
-  int32_t redoActionPos;
+  int32_t actionPos;
   SArray* prepareActions;
   SArray* redoActions;
   SArray* undoActions;
@@ -27,6 +27,8 @@
 #define ARBGROUP_VER_NUMBER   1
 #define ARBGROUP_RESERVE_SIZE 64
 
+static SHashObj *arbUpdateHash = NULL;
+
 static int32_t mndArbGroupActionInsert(SSdb *pSdb, SArbGroup *pGroup);
 static int32_t mndArbGroupActionUpdate(SSdb *pSdb, SArbGroup *pOld, SArbGroup *pNew);
 static int32_t mndArbGroupActionDelete(SSdb *pSdb, SArbGroup *pGroup);
@@ -74,10 +76,14 @@ int32_t mndInitArbGroup(SMnode *pMnode) {
   mndAddShowRetrieveHandle(pMnode, TSDB_MGMT_TABLE_ARBGROUP, mndRetrieveArbGroups);
   mndAddShowFreeIterHandle(pMnode, TSDB_MGMT_TABLE_ARBGROUP, mndCancelGetNextArbGroup);
 
+  arbUpdateHash = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), false, HASH_ENTRY_LOCK);
+
   return sdbSetTable(pMnode->pSdb, table);
 }
 
-void mndCleanupArbGroup(SMnode *pMnode) {}
+void mndCleanupArbGroup(SMnode *pMnode) {
+  taosHashCleanup(arbUpdateHash);
+}
 
 SArbGroup *mndAcquireArbGroup(SMnode *pMnode, int32_t vgId) {
   SArbGroup *pGroup = sdbAcquire(pMnode->pSdb, SDB_ARBGROUP, &vgId);
@@ -221,8 +227,7 @@ static int32_t mndArbGroupActionUpdate(SSdb *pSdb, SArbGroup *pOld, SArbGroup *p
     mInfo("arbgroup:%d, skip to perform update action, old row:%p new row:%p, old version:%" PRId64
           " new version:%" PRId64,
           pOld->vgId, pOld, pNew, pOld->version, pNew->version);
-    taosThreadMutexUnlock(&pOld->mutex);
-    return 0;
+    goto _OVER;
   }
 
   for (int i = 0; i < TSDB_ARB_GROUP_MEMBER_NUM; i++) {
@@ -232,7 +237,11 @@ static int32_t mndArbGroupActionUpdate(SSdb *pSdb, SArbGroup *pOld, SArbGroup *p
   pOld->assignedLeader.dnodeId = pNew->assignedLeader.dnodeId;
   memcpy(pOld->assignedLeader.token, pNew->assignedLeader.token, TSDB_ARB_TOKEN_SIZE);
   pOld->version++;
+
+_OVER:
   taosThreadMutexUnlock(&pOld->mutex);
+
+  taosHashRemove(arbUpdateHash, &pOld->vgId, sizeof(int32_t));
   return 0;
 }
 
@@ -645,6 +654,11 @@ static void *mndBuildArbUpdateGroupReq(int32_t *pContLen, SArbGroup *pNewGroup)
 }
 
 static int32_t mndPullupArbUpdateGroup(SMnode *pMnode, SArbGroup *pNewGroup) {
+  if (taosHashGet(arbUpdateHash, &pNewGroup->vgId, sizeof(pNewGroup->vgId)) != NULL) {
+    mInfo("vgId:%d, arb skip to pullup arb-update-group request, since it is in process", pNewGroup->vgId);
+    return 0;
+  }
+
   int32_t contLen = 0;
   void *pHead = mndBuildArbUpdateGroupReq(&contLen, pNewGroup);
   if (!pHead) {
@@ -653,7 +667,11 @@ static int32_t mndPullupArbUpdateGroup(SMnode *pMnode, SArbGroup *pNewGroup) {
   }
   SRpcMsg rpcMsg = {.msgType = TDMT_MND_ARB_UPDATE_GROUP, .pCont = pHead, .contLen = contLen, .info.noResp = true};
 
-  return tmsgPutToQueue(&pMnode->msgCb, WRITE_QUEUE, &rpcMsg);
+  int32_t ret = tmsgPutToQueue(&pMnode->msgCb, WRITE_QUEUE, &rpcMsg);
+  if (ret == 0) {
+    taosHashPut(arbUpdateHash, &pNewGroup->vgId, sizeof(pNewGroup->vgId), NULL, 0);
+  }
+  return ret;
 }
 
 static int32_t mndProcessArbUpdateGroupReq(SRpcMsg *pReq) {
@@ -930,8 +948,12 @@ static int32_t mndProcessArbCheckSyncRsp(SRpcMsg *pRsp) {
 
   SVArbCheckSyncRsp syncRsp = {0};
   if (tDeserializeSVArbCheckSyncRsp(pRsp->pCont, pRsp->contLen, &syncRsp) != 0) {
-    terrno = TSDB_CODE_INVALID_MSG;
     mInfo("arb sync check failed, since:%s", tstrerror(pRsp->code));
+    if (pRsp->code == TSDB_CODE_MND_ARB_TOKEN_MISMATCH) {
+      terrno = TSDB_CODE_SUCCESS;
+      return 0;
+    }
+    terrno = TSDB_CODE_INVALID_MSG;
     return -1;
   }
 
@@ -860,11 +860,6 @@ static int32_t mndProcessCreateDbReq(SRpcMsg *pReq) {
   SUserObj *pUser = NULL;
   SCreateDbReq createReq = {0};
 
-  if ((terrno = grantCheck(TSDB_GRANT_DB)) != 0) {
-    code = terrno;
-    goto _OVER;
-  }
-
   if (tDeserializeSCreateDbReq(pReq->pCont, pReq->contLen, &createReq) != 0) {
     terrno = TSDB_CODE_INVALID_MSG;
     goto _OVER;
@@ -903,6 +898,11 @@ static int32_t mndProcessCreateDbReq(SRpcMsg *pReq) {
     }
   }
 
+  if ((terrno = grantCheck(TSDB_GRANT_DB)) != 0) {
+    code = terrno;
+    goto _OVER;
+  }
+
   if ((code = mndCheckDbEncryptKey(pMnode, &createReq)) != 0) {
     terrno = code;
     goto _OVER;
@@ -1457,8 +1457,8 @@ static void mndCreateTSMABuildCreateStreamReq(SCreateTSMACxt *pCxt) {
   pCxt->pCreateStreamReq->igUpdate = 0;
   pCxt->pCreateStreamReq->lastTs = pCxt->pCreateSmaReq->lastTs;
   pCxt->pCreateStreamReq->smaId = pCxt->pSma->uid;
-  pCxt->pCreateStreamReq->ast = strdup(pCxt->pCreateSmaReq->ast);
-  pCxt->pCreateStreamReq->sql = strdup(pCxt->pCreateSmaReq->sql);
+  pCxt->pCreateStreamReq->ast = taosStrdup(pCxt->pCreateSmaReq->ast);
+  pCxt->pCreateStreamReq->sql = taosStrdup(pCxt->pCreateSmaReq->sql);
 
   // construct tags
   pCxt->pCreateStreamReq->pTags = taosArrayInit(pCxt->pCreateStreamReq->numOfTags, sizeof(SField));
@@ -1494,7 +1494,7 @@ static void mndCreateTSMABuildCreateStreamReq(SCreateTSMACxt *pCxt) {
 static void mndCreateTSMABuildDropStreamReq(SCreateTSMACxt* pCxt) {
   tstrncpy(pCxt->pDropStreamReq->name, pCxt->streamName, TSDB_STREAM_FNAME_LEN);
   pCxt->pDropStreamReq->igNotExists = false;
-  pCxt->pDropStreamReq->sql = strdup(pCxt->pDropSmaReq->name);
+  pCxt->pDropStreamReq->sql = taosStrdup(pCxt->pDropSmaReq->name);
   pCxt->pDropStreamReq->sqlLen = strlen(pCxt->pDropStreamReq->sql);
 }
 
@@ -63,7 +63,7 @@ static int32_t mndProcessCreateIndexReq(SRpcMsg *pReq);
 static int32_t mndProcessDropIndexReq(SRpcMsg *pReq);
 
 static int32_t mndProcessDropStbReqFromMNode(SRpcMsg *pReq);
-static int32_t mndProcessDropTbWithTsma(SRpcMsg* pReq);
+static int32_t mndProcessDropTbWithTsma(SRpcMsg *pReq);
 static int32_t mndProcessFetchTtlExpiredTbs(SRpcMsg *pReq);
 
 int32_t mndInitStb(SMnode *pMnode) {
@@ -1006,7 +1006,8 @@ static int32_t mndProcessTtlTimer(SRpcMsg *pReq) {
     pHead->vgId = htonl(pVgroup->vgId);
     tSerializeSVDropTtlTableReq((char *)pHead + sizeof(SMsgHead), reqLen, &ttlReq);
 
-    SRpcMsg rpcMsg = {.msgType = TDMT_VND_FETCH_TTL_EXPIRED_TBS, .pCont = pHead, .contLen = contLen, .info = pReq->info};
+    SRpcMsg rpcMsg = {
+        .msgType = TDMT_VND_FETCH_TTL_EXPIRED_TBS, .pCont = pHead, .contLen = contLen, .info = pReq->info};
     SEpSet epSet = mndGetVgroupEpset(pMnode, pVgroup);
     int32_t code = tmsgSendReq(&epSet, &rpcMsg);
     if (code != 0) {
@@ -1752,9 +1753,10 @@ static int32_t mndUpdateSuperTableColumnCompress(SMnode *pMnode, const SStbObj *
     if (mndAllocStbSchemas(pOld, pNew) != 0) {
       return -1;
     }
-    if (!validColCmprByType(pTarget->type, p->bytes)) {
-      terrno = TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-      return -1;
+    code = validColCmprByType(pTarget->type, p->bytes);
+    if (code != TSDB_CODE_SUCCESS) {
+      terrno = code;
+      return code;
     }
 
     int8_t updated = 0;
@@ -3892,25 +3894,26 @@ typedef struct SMDropTbTsmaInfo {
 } SMDropTbTsmaInfo;
 
 typedef struct SMDropTbTsmaInfos {
-  SArray* pTsmaInfos;  // SMDropTbTsmaInfo
+  SArray *pTsmaInfos;  // SMDropTbTsmaInfo
 } SMDropTbTsmaInfos;
 
 typedef struct SMndDropTbsWithTsmaCtx {
-  SHashObj* pTsmaMap;  // <suid, SMDropTbTsmaInfos>
-  SHashObj* pDbMap;    // <dbuid, SMDropTbDbInfo>
-  SHashObj* pVgMap;    // <vgId, SVDropTbVgReqs>
-  SArray* pResTbNames; // SArray<char*>
+  SHashObj *pTsmaMap;  // <suid, SMDropTbTsmaInfos>
+  SHashObj *pDbMap;    // <dbuid, SMDropTbDbInfo>
+  SHashObj *pVgMap;    // <vgId, SVDropTbVgReqs>
+  SArray *pResTbNames; // SArray<char*>
 } SMndDropTbsWithTsmaCtx;
 
-static int32_t mndDropTbAddTsmaResTbsForSingleVg(SMnode* pMnode, SMndDropTbsWithTsmaCtx* pCtx, SArray* pTbs, int32_t vgId);
+static int32_t mndDropTbAddTsmaResTbsForSingleVg(SMnode *pMnode, SMndDropTbsWithTsmaCtx *pCtx, SArray *pTbs,
+                                                 int32_t vgId);
 
-static void mndDestroyDropTbsWithTsmaCtx(SMndDropTbsWithTsmaCtx* p) {
+static void mndDestroyDropTbsWithTsmaCtx(SMndDropTbsWithTsmaCtx *p) {
   if (!p) return;
 
   if (p->pDbMap) {
-    void* pIter = taosHashIterate(p->pDbMap, NULL);
+    void *pIter = taosHashIterate(p->pDbMap, NULL);
     while (pIter) {
-      SMDropTbDbInfo* pInfo = pIter;
+      SMDropTbDbInfo *pInfo = pIter;
       taosArrayDestroy(pInfo->dbVgInfos);
       pIter = taosHashIterate(p->pDbMap, pIter);
     }
@@ -3920,9 +3923,9 @@ static void mndDestroyDropTbsWithTsmaCtx(SMndDropTbsWithTsmaCtx* p) {
     taosArrayDestroyP(p->pResTbNames, taosMemoryFree);
   }
   if (p->pTsmaMap) {
-    void* pIter = taosHashIterate(p->pTsmaMap, NULL);
+    void *pIter = taosHashIterate(p->pTsmaMap, NULL);
     while (pIter) {
-      SMDropTbTsmaInfos* pInfos = pIter;
+      SMDropTbTsmaInfos *pInfos = pIter;
       taosArrayDestroy(pInfos->pTsmaInfos);
       pIter = taosHashIterate(p->pTsmaMap, pIter);
     }
@@ -3930,7 +3933,7 @@ static void mndDestroyDropTbsWithTsmaCtx(SMndDropTbsWithTsmaCtx* p) {
   }
 
   if (p->pVgMap) {
-    void* pIter = taosHashIterate(p->pVgMap, NULL);
+    void *pIter = taosHashIterate(p->pVgMap, NULL);
     while (pIter) {
       SVDropTbVgReqs *pReqs = pIter;
       taosArrayDestroy(pReqs->req.pArray);
@@ -3941,9 +3944,9 @@ static void mndDestroyDropTbsWithTsmaCtx(SMndDropTbsWithTsmaCtx* p) {
   taosMemoryFree(p);
 }
 
-static int32_t mndInitDropTbsWithTsmaCtx(SMndDropTbsWithTsmaCtx** ppCtx) {
+static int32_t mndInitDropTbsWithTsmaCtx(SMndDropTbsWithTsmaCtx **ppCtx) {
  int32_t code = 0;
-  SMndDropTbsWithTsmaCtx* pCtx = taosMemoryCalloc(1, sizeof(SMndDropTbsWithTsmaCtx));
+  SMndDropTbsWithTsmaCtx *pCtx = taosMemoryCalloc(1, sizeof(SMndDropTbsWithTsmaCtx));
   if (!pCtx) return TSDB_CODE_OUT_OF_MEMORY;
   pCtx->pTsmaMap = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
   if (!pCtx->pTsmaMap) {
@@ -3969,8 +3972,8 @@ _end:
   return code;
 }
 
-static void* mndBuildVDropTbsReq(SMnode* pMnode, const SVgroupInfo* pVgInfo, const SVDropTbBatchReq* pReq, int32_t *len) {
+static void *mndBuildVDropTbsReq(SMnode *pMnode, const SVgroupInfo *pVgInfo, const SVDropTbBatchReq *pReq,
+                                 int32_t *len) {
   int32_t contLen = 0;
   int32_t ret = 0;
   SMsgHead *pHead = NULL;
@@ -3999,7 +4002,8 @@ static void* mndBuildVDropTbsReq(SMnode* pMnode, const SVgroupInfo* pVgInfo, con
   return pHead;
 }
 
-static int32_t mndSetDropTbsRedoActions(SMnode* pMnode, STrans* pTrans, const SVDropTbVgReqs* pVgReqs, void* pCont, int32_t contLen) {
+static int32_t mndSetDropTbsRedoActions(SMnode *pMnode, STrans *pTrans, const SVDropTbVgReqs *pVgReqs, void *pCont,
+                                        int32_t contLen) {
   STransAction action = {0};
   action.epSet = pVgReqs->info.epSet;
   action.pCont = pCont;
@@ -4009,7 +4013,7 @@ static int32_t mndSetDropTbsRedoActions(SMnode* pMnode, STrans* pTrans, const SV
   return mndTransAppendRedoAction(pTrans, &action);
 }
 
-static int32_t mndCreateDropTbsTxnPrepare(SRpcMsg* pRsp, SMndDropTbsWithTsmaCtx* pCtx) {
+static int32_t mndCreateDropTbsTxnPrepare(SRpcMsg *pRsp, SMndDropTbsWithTsmaCtx *pCtx) {
   SMnode *pMnode = pRsp->info.node;
   STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, pRsp, "drop-tbs");
   mndTransSetChangeless(pTrans);
@@ -4017,11 +4021,11 @@ static int32_t mndCreateDropTbsTxnPrepare(SRpcMsg* pRsp, SMndDropTbsWithTsmaCtx*
 
   if (mndTransCheckConflict(pMnode, pTrans) != 0) goto _OVER;
 
-  void* pIter = taosHashIterate(pCtx->pVgMap, NULL);
+  void *pIter = taosHashIterate(pCtx->pVgMap, NULL);
   while (pIter) {
-    const SVDropTbVgReqs* pVgReqs = pIter;
+    const SVDropTbVgReqs *pVgReqs = pIter;
     int32_t len = 0;
-    void* p = mndBuildVDropTbsReq(pMnode, &pVgReqs->info, &pVgReqs->req, &len);
+    void *p = mndBuildVDropTbsReq(pMnode, &pVgReqs->info, &pVgReqs->req, &len);
     if (!p || mndSetDropTbsRedoActions(pMnode, pTrans, pVgReqs, p, len) != 0) {
       taosHashCancelIterate(pCtx->pVgMap, pIter);
       goto _OVER;
@@ -4035,7 +4039,7 @@ _OVER:
   return terrno;
 }
 
-static int32_t mndProcessDropTbWithTsma(SRpcMsg* pReq) {
+static int32_t mndProcessDropTbWithTsma(SRpcMsg *pReq) {
   int32_t code = -1;
   SMnode *pMnode = pReq->info.node;
   SDbObj *pDb = NULL;
@@ -4047,16 +4051,15 @@ static int32_t mndProcessDropTbWithTsma(SRpcMsg* pReq) {
     goto _OVER;
   }
 
-  SMndDropTbsWithTsmaCtx* pCtx = NULL;
+  SMndDropTbsWithTsmaCtx *pCtx = NULL;
   terrno = mndInitDropTbsWithTsmaCtx(&pCtx);
   if (terrno) goto _OVER;
   for (int32_t i = 0; i < dropReq.pVgReqs->size; ++i) {
-    SMDropTbReqsOnSingleVg* pReq = taosArrayGet(dropReq.pVgReqs, i);
+    SMDropTbReqsOnSingleVg *pReq = taosArrayGet(dropReq.pVgReqs, i);
     terrno = mndDropTbAddTsmaResTbsForSingleVg(pMnode, pCtx, pReq->pTbs, pReq->vgInfo.vgId);
     if (terrno) goto _OVER;
   }
-  if (mndCreateDropTbsTxnPrepare(pReq, pCtx) == 0)
-    code = 0;
+  if (mndCreateDropTbsTxnPrepare(pReq, pCtx) == 0) code = 0;
 _OVER:
   tFreeSMDropTbsReq(&dropReq);
   if (pCtx) mndDestroyDropTbsWithTsmaCtx(pCtx);
@@ -4067,7 +4070,7 @@ static int32_t mndDropTbAdd(SMnode *pMnode, SHashObj *pVgHashMap, const SVgroupI
                             bool ignoreNotExists) {
   SVDropTbReq req = {.name = name, .suid = suid, .igNotExists = ignoreNotExists};
 
-  SVDropTbVgReqs * pReqs = taosHashGet(pVgHashMap, &pVgInfo->vgId, sizeof(pVgInfo->vgId));
+  SVDropTbVgReqs *pReqs = taosHashGet(pVgHashMap, &pVgInfo->vgId, sizeof(pVgInfo->vgId));
   SVDropTbVgReqs reqs = {0};
   if (pReqs == NULL) {
     reqs.info = *pVgInfo;
@@ -4080,16 +4083,16 @@ static int32_t mndDropTbAdd(SMnode *pMnode, SHashObj *pVgHashMap, const SVgroupI
   return 0;
 }
 
-static int32_t mndGetDbVgInfoForTsma(SMnode* pMnode, const char* dbname, SMDropTbTsmaInfo* pInfo) {
+static int32_t mndGetDbVgInfoForTsma(SMnode *pMnode, const char *dbname, SMDropTbTsmaInfo *pInfo) {
   int32_t code = 0;
-  SDbObj* pDb = mndAcquireDb(pMnode, dbname);
+  SDbObj *pDb = mndAcquireDb(pMnode, dbname);
   if (!pDb) {
     code = TSDB_CODE_MND_DB_NOT_EXIST;
     goto _end;
   }
 
   pInfo->dbInfo.dbVgInfos = taosArrayInit(pDb->cfg.numOfVgroups, sizeof(SVgroupInfo));
-  if ( !pInfo->dbInfo.dbVgInfos) {
+  if (!pInfo->dbInfo.dbVgInfos) {
     code = TSDB_CODE_OUT_OF_MEMORY;
     goto _end;
   }
@@ -4108,9 +4111,9 @@ _end:
   return code;
 }
 
-int32_t vgHashValCmp(const void* lp, const void* rp) {
-  uint32_t* key = (uint32_t*)lp;
-  SVgroupInfo* pVg = (SVgroupInfo*)rp;
+int32_t vgHashValCmp(const void *lp, const void *rp) {
+  uint32_t *key = (uint32_t *)lp;
+  SVgroupInfo *pVg = (SVgroupInfo *)rp;
 
   if (*key < pVg->hashBegin) {
     return -1;
@@ -4121,23 +4124,26 @@ int32_t vgHashValCmp(const void* lp, const void* rp) {
   return 0;
 }
 
-static int32_t mndDropTbAddTsmaResTbsForSingleVg(SMnode* pMnode, SMndDropTbsWithTsmaCtx* pCtx, SArray* pTbs, int32_t vgId) {
+static int32_t mndDropTbAddTsmaResTbsForSingleVg(SMnode *pMnode, SMndDropTbsWithTsmaCtx *pCtx, SArray *pTbs,
+                                                 int32_t vgId) {
   int32_t code = 0;
 
-  SVgObj* pVgObj = mndAcquireVgroup(pMnode, vgId);
+  SVgObj *pVgObj = mndAcquireVgroup(pMnode, vgId);
   if (!pVgObj) {
     code = 0;
     goto _end;
   }
-  SVgroupInfo vgInfo = {.hashBegin = pVgObj->hashBegin, .hashEnd = pVgObj->hashEnd, .numOfTable = pVgObj->numOfTables, .vgId = pVgObj->vgId};
+  SVgroupInfo vgInfo = {.hashBegin = pVgObj->hashBegin,
+                        .hashEnd = pVgObj->hashEnd,
+                        .numOfTable = pVgObj->numOfTables,
+                        .vgId = pVgObj->vgId};
   vgInfo.epSet = mndGetVgroupEpset(pMnode, pVgObj);
   mndReleaseVgroup(pMnode, pVgObj);
 
   // get all stb uids
   for (int32_t i = 0; i < pTbs->size; ++i) {
     const SVDropTbReq* pTb = taosArrayGet(pTbs, i);
|
const SVDropTbReq *pTb = taosArrayGet(pTbs, i);
|
||||||
if (taosHashGet(pCtx->pTsmaMap, &pTb->suid, sizeof(pTb->suid))) {
|
if (taosHashGet(pCtx->pTsmaMap, &pTb->suid, sizeof(pTb->suid))) {
|
||||||
|
|
||||||
} else {
|
} else {
|
||||||
SMDropTbTsmaInfos infos = {0};
|
SMDropTbTsmaInfos infos = {0};
|
||||||
infos.pTsmaInfos = taosArrayInit(2, sizeof(SMDropTbTsmaInfo));
|
infos.pTsmaInfos = taosArrayInit(2, sizeof(SMDropTbTsmaInfo));
|
||||||
|
@ -4156,14 +4162,14 @@ static int32_t mndDropTbAddTsmaResTbsForSingleVg(SMnode* pMnode, SMndDropTbsWith
|
||||||
while (1) {
|
while (1) {
|
||||||
pIter = sdbFetch(pMnode->pSdb, SDB_SMA, pIter, (void **)&pSma);
|
pIter = sdbFetch(pMnode->pSdb, SDB_SMA, pIter, (void **)&pSma);
|
||||||
if (!pIter) break;
|
if (!pIter) break;
|
||||||
SMDropTbTsmaInfos* pInfos = taosHashGet(pCtx->pTsmaMap, &pSma->stbUid, sizeof(pSma->stbUid));
|
SMDropTbTsmaInfos *pInfos = taosHashGet(pCtx->pTsmaMap, &pSma->stbUid, sizeof(pSma->stbUid));
|
||||||
if (pInfos) {
|
if (pInfos) {
|
||||||
SMDropTbTsmaInfo info = {0};
|
SMDropTbTsmaInfo info = {0};
|
||||||
int32_t len = sprintf(buf, "%s", pSma->name);
|
int32_t len = sprintf(buf, "%s", pSma->name);
|
||||||
len = taosCreateMD5Hash(buf, len);
|
len = taosCreateMD5Hash(buf, len);
|
||||||
sprintf(info.tsmaResTbDbFName, "%s", pSma->db);
|
sprintf(info.tsmaResTbDbFName, "%s", pSma->db);
|
||||||
snprintf(info.tsmaResTbNamePrefix, TSDB_TABLE_NAME_LEN, "%s", buf);
|
snprintf(info.tsmaResTbNamePrefix, TSDB_TABLE_NAME_LEN, "%s", buf);
|
||||||
SMDropTbDbInfo* pDbInfo = taosHashGet(pCtx->pDbMap, pSma->db, TSDB_DB_FNAME_LEN);
|
SMDropTbDbInfo *pDbInfo = taosHashGet(pCtx->pDbMap, pSma->db, TSDB_DB_FNAME_LEN);
|
||||||
info.suid = pSma->dstTbUid;
|
info.suid = pSma->dstTbUid;
|
||||||
if (!pDbInfo) {
|
if (!pDbInfo) {
|
||||||
code = mndGetDbVgInfoForTsma(pMnode, pSma->db, &info);
|
code = mndGetDbVgInfoForTsma(pMnode, pSma->db, &info);
|
||||||
|
@ -4183,7 +4189,7 @@ static int32_t mndDropTbAddTsmaResTbsForSingleVg(SMnode* pMnode, SMndDropTbsWith
|
||||||
|
|
||||||
// generate vg req map
|
// generate vg req map
|
||||||
for (int32_t i = 0; i < pTbs->size; ++i) {
|
for (int32_t i = 0; i < pTbs->size; ++i) {
|
||||||
SVDropTbReq* pTb = taosArrayGet(pTbs, i);
|
SVDropTbReq *pTb = taosArrayGet(pTbs, i);
|
||||||
mndDropTbAdd(pMnode, pCtx->pVgMap, &vgInfo, pTb->name, pTb->suid, pTb->igNotExists);
|
mndDropTbAdd(pMnode, pCtx->pVgMap, &vgInfo, pTb->name, pTb->suid, pTb->igNotExists);
|
||||||
|
|
||||||
SMDropTbTsmaInfos *pInfos = taosHashGet(pCtx->pTsmaMap, &pTb->suid, sizeof(pTb->suid));
|
SMDropTbTsmaInfos *pInfos = taosHashGet(pCtx->pTsmaMap, &pTb->suid, sizeof(pTb->suid));
|
||||||
|
@ -4195,7 +4201,7 @@ static int32_t mndDropTbAddTsmaResTbsForSingleVg(SMnode* pMnode, SMndDropTbsWith
|
||||||
uint32_t hashVal =
|
uint32_t hashVal =
|
||||||
taosGetTbHashVal(buf, len, pInfo->dbInfo.hashMethod, pInfo->dbInfo.hashPrefix, pInfo->dbInfo.hashSuffix);
|
taosGetTbHashVal(buf, len, pInfo->dbInfo.hashMethod, pInfo->dbInfo.hashPrefix, pInfo->dbInfo.hashSuffix);
|
||||||
const SVgroupInfo *pVgInfo = taosArraySearch(pInfo->dbInfo.dbVgInfos, &hashVal, vgHashValCmp, TD_EQ);
|
const SVgroupInfo *pVgInfo = taosArraySearch(pInfo->dbInfo.dbVgInfos, &hashVal, vgHashValCmp, TD_EQ);
|
||||||
void* p = taosStrdup(buf + strlen(pInfo->tsmaResTbDbFName) + TSDB_NAME_DELIMITER_LEN);
|
void *p = taosStrdup(buf + strlen(pInfo->tsmaResTbDbFName) + TSDB_NAME_DELIMITER_LEN);
|
||||||
taosArrayPush(pCtx->pResTbNames, &p);
|
taosArrayPush(pCtx->pResTbNames, &p);
|
||||||
mndDropTbAdd(pMnode, pCtx->pVgMap, pVgInfo, p, pInfo->suid, true);
|
mndDropTbAdd(pMnode, pCtx->pVgMap, pVgInfo, p, pInfo->suid, true);
|
||||||
}
|
}
|
||||||
|
@ -4225,8 +4231,7 @@ static int32_t mndProcessFetchTtlExpiredTbs(SRpcMsg *pRsp) {
|
||||||
|
|
||||||
terrno = mndDropTbAddTsmaResTbsForSingleVg(pMnode, pCtx, rsp.pExpiredTbs, rsp.vgId);
|
terrno = mndDropTbAddTsmaResTbsForSingleVg(pMnode, pCtx, rsp.pExpiredTbs, rsp.vgId);
|
||||||
if (terrno) goto _end;
|
if (terrno) goto _end;
|
||||||
if (mndCreateDropTbsTxnPrepare(pRsp, pCtx) == 0)
|
if (mndCreateDropTbsTxnPrepare(pRsp, pCtx) == 0) code = 0;
|
||||||
code = 0;
|
|
||||||
_end:
|
_end:
|
||||||
if (pCtx) mndDestroyDropTbsWithTsmaCtx(pCtx);
|
if (pCtx) mndDestroyDropTbsWithTsmaCtx(pCtx);
|
||||||
tDecoderClear(&decoder);
|
tDecoderClear(&decoder);
|
||||||
|
|
|
@@ -325,7 +325,7 @@ static int32_t createSchemaByFields(const SArray* pFields, SSchemaWrapper* pWrap
   return TSDB_CODE_SUCCESS;
 }

-static bool hasPrimaryKey(SSchemaWrapper* pWrapper) {
+static bool hasDestPrimaryKey(SSchemaWrapper* pWrapper) {
   if (pWrapper->nCols < 2) {
     return false;
   }
@@ -442,7 +442,7 @@ static int32_t mndBuildStreamObjFromCreateReq(SMnode *pMnode, SStreamObj *pObj,
     pObj->outputSchema.pSchema = pFullSchema;
   }

-  bool hasKey = hasPrimaryKey(&pObj->outputSchema);
+  bool hasKey = hasDestPrimaryKey(&pObj->outputSchema);
   SPlanContext cxt = {
       .pAstRoot = pAst,
       .topicQuery = false,
@@ -699,10 +699,6 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
   int32_t sqlLen = 0;
   terrno = TSDB_CODE_SUCCESS;

-  if ((terrno = grantCheck(TSDB_GRANT_STREAMS)) < 0) {
-    return terrno;
-  }
-
   SCMCreateStreamReq createReq = {0};
   if (tDeserializeSCMCreateStreamReq(pReq->pCont, pReq->contLen, &createReq) != 0) {
     terrno = TSDB_CODE_INVALID_MSG;
@@ -733,6 +729,10 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
     goto _OVER;
   }

+  if ((terrno = grantCheck(TSDB_GRANT_STREAMS)) < 0) {
+    goto _OVER;
+  }
+
   if (createReq.sql != NULL) {
     sqlLen = strlen(createReq.sql);
     sql = taosMemoryMalloc(sqlLen + 1);

@@ -438,10 +438,10 @@ static void processSubOffsetRows(SMnode *pMnode, const SMqRebInputObj *pInput, S
 }

 static void printRebalanceLog(SMqRebOutputObj *pOutput){
-  mInfo("sub:%s mq re-balance calculation completed, re-balanced vg", pOutput->pSub->key);
+  mInfo("sub:%s mq rebalance calculation completed, re-balanced vg", pOutput->pSub->key);
   for (int32_t i = 0; i < taosArrayGetSize(pOutput->rebVgs); i++) {
     SMqRebOutputVg *pOutputRebVg = taosArrayGet(pOutput->rebVgs, i);
-    mInfo("sub:%s mq re-balance vgId:%d, moved from consumer:0x%" PRIx64 ", to consumer:0x%" PRIx64, pOutput->pSub->key,
+    mInfo("sub:%s mq rebalance vgId:%d, moved from consumer:0x%" PRIx64 ", to consumer:0x%" PRIx64, pOutput->pSub->key,
           pOutputRebVg->pVgEp->vgId, pOutputRebVg->oldConsumerId, pOutputRebVg->newConsumerId);
   }

@@ -451,10 +451,10 @@ static void printRebalanceLog(SMqRebOutputObj *pOutput){
     if (pIter == NULL) break;
     SMqConsumerEp *pConsumerEp = (SMqConsumerEp *)pIter;
     int32_t sz = taosArrayGetSize(pConsumerEp->vgs);
-    mInfo("sub:%s mq re-balance final cfg: consumer:0x%" PRIx64 " has %d vg", pOutput->pSub->key, pConsumerEp->consumerId, sz);
+    mInfo("sub:%s mq rebalance final cfg: consumer:0x%" PRIx64 " has %d vg", pOutput->pSub->key, pConsumerEp->consumerId, sz);
     for (int32_t i = 0; i < sz; i++) {
       SMqVgEp *pVgEp = taosArrayGetP(pConsumerEp->vgs, i);
-      mInfo("sub:%s mq re-balance final cfg: vg %d to consumer:0x%" PRIx64, pOutput->pSub->key, pVgEp->vgId,
+      mInfo("sub:%s mq rebalance final cfg: vg %d to consumer:0x%" PRIx64, pOutput->pSub->key, pVgEp->vgId,
             pConsumerEp->consumerId);
     }
   }
@@ -762,18 +762,18 @@ static void mndCheckConsumer(SRpcMsg *pMsg, SHashObj* rebSubHash) {

 bool mndRebTryStart() {
   int32_t old = atomic_val_compare_exchange_32(&mqRebInExecCnt, 0, 1);
-  mInfo("rebalance counter old val:%d", old);
+  if (old > 0) mInfo("[rebalance] counter old val:%d", old)
   return old == 0;
 }

 void mndRebCntInc() {
   int32_t val = atomic_add_fetch_32(&mqRebInExecCnt, 1);
-  mInfo("rebalance cnt inc, value:%d", val);
+  if (val > 0) mInfo("[rebalance] cnt inc, value:%d", val)
 }

 void mndRebCntDec() {
   int32_t val = atomic_sub_fetch_32(&mqRebInExecCnt, 1);
-  mInfo("rebalance cnt sub, value:%d", val);
+  if (val > 0) mInfo("[rebalance] cnt sub, value:%d", val)
 }

 static void clearRebOutput(SMqRebOutputObj *rebOutput){
@@ -848,10 +848,10 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
   int code = 0;
   void *pIter = NULL;
   SMnode *pMnode = pMsg->info.node;
-  mInfo("[rebalance] start to process mq timer");
+  mDebug("[rebalance] start to process mq timer")

   if (!mndRebTryStart()) {
-    mInfo("[rebalance] mq rebalance already in progress, do nothing");
+    mInfo("[rebalance] mq rebalance already in progress, do nothing")
     return code;
   }

@@ -863,7 +863,9 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
   taosHashSetFreeFp(rebSubHash, freeRebalanceItem);

   mndCheckConsumer(pMsg, rebSubHash);
-  mInfo("[rebalance] mq re-balance start, total required re-balanced trans:%d", taosHashGetSize(rebSubHash));
+  if (taosHashGetSize(rebSubHash) > 0) {
+    mInfo("[rebalance] mq rebalance start, total required re-balanced trans:%d", taosHashGetSize(rebSubHash))
+  }

   while (1) {
     pIter = taosHashIterate(rebSubHash, pIter);
@@ -887,13 +889,15 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
     mndDoRebalance(pMnode, &rebInput, &rebOutput);

     if (mndPersistRebResult(pMnode, pMsg, &rebOutput) != 0) {
-      mError("mq re-balance persist output error, possibly vnode splitted or dropped,msg:%s", terrstr());
+      mError("mq rebalance persist output error, possibly vnode splitted or dropped,msg:%s", terrstr())
     }

     clearRebOutput(&rebOutput);
   }

-  mInfo("[rebalance] mq re-balance completed successfully, wait trans finish");
+  if (taosHashGetSize(rebSubHash) > 0) {
+    mInfo("[rebalance] mq rebalance completed successfully, wait trans finish")
+  }

 END:
   taosHashCancelIterate(rebSubHash, pIter);

@@ -561,15 +561,6 @@ static int32_t mndProcessCreateTopicReq(SRpcMsg *pReq) {
   SMqTopicObj *pTopic = NULL;
   SDbObj *pDb = NULL;
   SCMCreateTopicReq createTopicReq = {0};
-  if (sdbGetSize(pMnode->pSdb, SDB_TOPIC) >= tmqMaxTopicNum){
-    terrno = TSDB_CODE_TMQ_TOPIC_OUT_OF_RANGE;
-    mError("topic num out of range");
-    return code;
-  }
-
-  if ((terrno = grantCheck(TSDB_GRANT_SUBSCRIPTION)) < 0) {
-    return code;
-  }

   if (tDeserializeSCMCreateTopicReq(pReq->pCont, pReq->contLen, &createTopicReq) != 0) {
     terrno = TSDB_CODE_INVALID_MSG;
@@ -609,6 +600,16 @@ static int32_t mndProcessCreateTopicReq(SRpcMsg *pReq) {
     goto _OVER;
   }

+  if (sdbGetSize(pMnode->pSdb, SDB_TOPIC) >= tmqMaxTopicNum){
+    terrno = TSDB_CODE_TMQ_TOPIC_OUT_OF_RANGE;
+    mError("topic num out of range");
+    goto _OVER;
+  }
+
+  if ((terrno = grantCheck(TSDB_GRANT_SUBSCRIPTION)) < 0) {
+    goto _OVER;
+  }
+
   code = mndCreateTopic(pMnode, pReq, &createTopicReq, pDb, pReq->info.conn.user);
   if (code == 0) {
     code = TSDB_CODE_ACTION_IN_PROGRESS;

@@ -169,7 +169,7 @@ SSdbRaw *mndTransEncode(STrans *pTrans) {
   SDB_SET_INT64(pRaw, dataPos, pTrans->createdTime, _OVER)
   SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname, TSDB_TABLE_FNAME_LEN, _OVER)
   SDB_SET_BINARY(pRaw, dataPos, pTrans->stbname, TSDB_TABLE_FNAME_LEN, _OVER)
-  SDB_SET_INT32(pRaw, dataPos, pTrans->redoActionPos, _OVER)
+  SDB_SET_INT32(pRaw, dataPos, pTrans->actionPos, _OVER)

   int32_t prepareActionNum = taosArrayGetSize(pTrans->prepareActions);
   int32_t redoActionNum = taosArrayGetSize(pTrans->redoActions);
@@ -317,7 +317,7 @@ SSdbRow *mndTransDecode(SSdbRaw *pRaw) {
   SDB_GET_INT64(pRaw, dataPos, &pTrans->createdTime, _OVER)
   SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname, TSDB_TABLE_FNAME_LEN, _OVER)
   SDB_GET_BINARY(pRaw, dataPos, pTrans->stbname, TSDB_TABLE_FNAME_LEN, _OVER)
-  SDB_GET_INT32(pRaw, dataPos, &pTrans->redoActionPos, _OVER)
+  SDB_GET_INT32(pRaw, dataPos, &pTrans->actionPos, _OVER)

   if (sver > TRANS_VER1_NUMBER) {
     SDB_GET_INT32(pRaw, dataPos, &prepareActionNum, _OVER)
@@ -525,7 +525,7 @@ static int32_t mndTransActionUpdate(SSdb *pSdb, STrans *pOld, STrans *pNew) {
   mndTransUpdateActions(pOld->undoActions, pNew->undoActions);
   mndTransUpdateActions(pOld->commitActions, pNew->commitActions);
   pOld->stage = pNew->stage;
-  pOld->redoActionPos = pNew->redoActionPos;
+  pOld->actionPos = pNew->actionPos;

   if (pOld->stage == TRN_STAGE_COMMIT) {
     pOld->stage = TRN_STAGE_COMMIT_ACTION;
@@ -1360,22 +1360,19 @@ static int32_t mndTransExecuteCommitActions(SMnode *pMnode, STrans *pTrans, bool
   return code;
 }

-static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans, bool topHalf) {
+static int32_t mndTransExecuteActionsSerial(SMnode *pMnode, STrans *pTrans, SArray *pActions, bool topHalf) {
   int32_t code = 0;
-  int32_t numOfActions = taosArrayGetSize(pTrans->redoActions);
+  int32_t numOfActions = taosArrayGetSize(pActions);
   if (numOfActions == 0) return code;

-  taosThreadMutexLock(&pTrans->mutex);
-
-  if (pTrans->redoActionPos >= numOfActions) {
-    taosThreadMutexUnlock(&pTrans->mutex);
+  if (pTrans->actionPos >= numOfActions) {
     return code;
   }

-  mInfo("trans:%d, execute %d actions serial, current redoAction:%d", pTrans->id, numOfActions, pTrans->redoActionPos);
+  mInfo("trans:%d, execute %d actions serial, current redoAction:%d", pTrans->id, numOfActions, pTrans->actionPos);

-  for (int32_t action = pTrans->redoActionPos; action < numOfActions; ++action) {
-    STransAction *pAction = taosArrayGet(pTrans->redoActions, pTrans->redoActionPos);
+  for (int32_t action = pTrans->actionPos; action < numOfActions; ++action) {
+    STransAction *pAction = taosArrayGet(pActions, pTrans->actionPos);

     code = mndTransExecSingleAction(pMnode, pTrans, pAction, topHalf);
     if (code == 0) {
@@ -1409,14 +1406,14 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans,

     if (code == 0) {
       pTrans->code = 0;
-      pTrans->redoActionPos++;
+      pTrans->actionPos++;
       mInfo("trans:%d, %s:%d is executed and need sync to other mnodes", pTrans->id, mndTransStr(pAction->stage),
             pAction->id);
       taosThreadMutexUnlock(&pTrans->mutex);
       code = mndTransSync(pMnode, pTrans);
       taosThreadMutexLock(&pTrans->mutex);
       if (code != 0) {
-        pTrans->redoActionPos--;
+        pTrans->actionPos--;
         pTrans->code = terrno;
         mError("trans:%d, %s:%d is executed and failed to sync to other mnodes since %s", pTrans->id,
                mndTransStr(pAction->stage), pAction->id, terrstr());
@@ -1442,8 +1439,26 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans,
     }
   }

-  taosThreadMutexUnlock(&pTrans->mutex);
+  return code;
+}
+
+static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans, bool topHalf) {
+  int32_t code = TSDB_CODE_ACTION_IN_PROGRESS;
+  taosThreadMutexLock(&pTrans->mutex);
+  if (pTrans->stage == TRN_STAGE_REDO_ACTION) {
+    code = mndTransExecuteActionsSerial(pMnode, pTrans, pTrans->redoActions, topHalf);
+  }
+  taosThreadMutexUnlock(&pTrans->mutex);
+  return code;
+}
+
+static int32_t mndTransExecuteUndoActionsSerial(SMnode *pMnode, STrans *pTrans, bool topHalf) {
+  int32_t code = TSDB_CODE_ACTION_IN_PROGRESS;
+  taosThreadMutexLock(&pTrans->mutex);
+  if (pTrans->stage == TRN_STAGE_UNDO_ACTION) {
+    code = mndTransExecuteActionsSerial(pMnode, pTrans, pTrans->undoActions, topHalf);
+  }
+  taosThreadMutexUnlock(&pTrans->mutex);
   return code;
 }

@@ -1563,13 +1578,22 @@ static bool mndTransPerformCommitActionStage(SMnode *pMnode, STrans *pTrans, boo

 static bool mndTransPerformUndoActionStage(SMnode *pMnode, STrans *pTrans, bool topHalf) {
   bool continueExec = true;
-  int32_t code = mndTransExecuteUndoActions(pMnode, pTrans, topHalf);
+  int32_t code = 0;
+
+  if (pTrans->exec == TRN_EXEC_SERIAL) {
+    code = mndTransExecuteUndoActionsSerial(pMnode, pTrans, topHalf);
+  } else {
+    code = mndTransExecuteUndoActions(pMnode, pTrans, topHalf);
+  }
+
+  if (mndCannotExecuteTransAction(pMnode, topHalf)) return false;
+  terrno = code;

   if (code == 0) {
     pTrans->stage = TRN_STAGE_PRE_FINISH;
     mInfo("trans:%d, stage from undoAction to pre-finish", pTrans->id);
     continueExec = true;
-  } else if (code == TSDB_CODE_ACTION_IN_PROGRESS) {
+  } else if (code == TSDB_CODE_ACTION_IN_PROGRESS || code == TSDB_CODE_MND_TRANS_CTX_SWITCH) {
     mInfo("trans:%d, stage keep on undoAction since %s", pTrans->id, tstrerror(code));
     continueExec = false;
   } else {

@@ -887,6 +887,8 @@ int metaCreateTable(SMeta *pMeta, int64_t ver, SVCreateTbReq *pReq, STableMetaRs

   bool sysTbl = (pReq->type == TSDB_CHILD_TABLE) && metaTbInFilterCache(pMeta, pReq->ctb.stbName, 1);

+  if (!sysTbl && ((terrno = grantCheck(TSDB_GRANT_TIMESERIES)) < 0)) goto _err;
+
   // build SMetaEntry
   SVnodeStats *pStats = &pMeta->pVnode->config.vndStats;
   me.version = ver;
@@ -2659,6 +2661,8 @@ int32_t metaGetColCmpr(SMeta *pMeta, tb_uid_t uid, SHashObj **ppColCmprObj) {
   SMetaEntry e = {0};
   SDecoder dc = {0};

+  *ppColCmprObj = NULL;
+
   metaRLock(pMeta);
   rc = tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData);
   if (rc < 0) {

@@ -367,7 +367,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
     } while (0);
   }

-  // 2. check re-balance status
+  // 2. check rebalance status
   if (pHandle->consumerId != consumerId) {
     tqError("ERROR tmq poll: consumer:0x%" PRIx64
             " vgId:%d, subkey %s, mismatch for saved handle consumer:0x%" PRIx64,
@@ -485,7 +485,7 @@ int32_t tqProcessVgWalInfoReq(STQ* pTq, SRpcMsg* pMsg) {
     return -1;
   }

-  // 2. check re-balance status
+  // 2. check rebalance status
   if (pHandle->consumerId != consumerId) {
     tqDebug("ERROR consumer:0x%" PRIx64 " vgId:%d, subkey %s, mismatch for saved handle consumer:0x%" PRIx64,
             consumerId, vgId, req.subKey, pHandle->consumerId);
@@ -666,7 +666,7 @@ int32_t tqProcessSubscribeReq(STQ* pTq, int64_t sversion, char* msg, int32_t msg
             req.vgId, req.subKey, req.newConsumerId, req.oldConsumerId);
   }
   if (req.newConsumerId == -1) {
-    tqError("vgId:%d, tq invalid re-balance request, new consumerId %" PRId64 "", req.vgId, req.newConsumerId);
+    tqError("vgId:%d, tq invalid rebalance request, new consumerId %" PRId64 "", req.vgId, req.newConsumerId);
     goto end;
   }
   STqHandle handle = {0};

@@ -48,9 +48,24 @@ static int32_t tsdbDataFileReadHeadFooter(SDataFileReader *reader) {
     if (reader->fd[ftype]) {
       int32_t encryptAlgorithm = reader->config->tsdb->pVnode->config.tsdbCfg.encryptAlgorithm;
       char* encryptKey = reader->config->tsdb->pVnode->config.tsdbCfg.encryptKey;
+#if 1
       code = tsdbReadFile(reader->fd[ftype], reader->config->files[ftype].file.size - sizeof(SHeadFooter),
                           (uint8_t *)reader->headFooter, sizeof(SHeadFooter), 0, encryptAlgorithm, encryptKey);
       TSDB_CHECK_CODE(code, lino, _exit);
+#else
+      int64_t size = reader->config->files[ftype].file.size;
+      for (; size > TSDB_FHDR_SIZE; size--) {
+        code = tsdbReadFile(reader->fd[ftype], size - sizeof(SHeadFooter), (uint8_t *)reader->headFooter,
+                            sizeof(SHeadFooter), 0, encryptAlgorithm, encryptKey);
+        if (code) continue;
+        if (reader->headFooter->brinBlkPtr->offset + reader->headFooter->brinBlkPtr->size + sizeof(SHeadFooter) == size) {
+          break;
+        }
+      }
+      if (size <= TSDB_FHDR_SIZE) {
+        TSDB_CHECK_CODE(code = TSDB_CODE_FILE_CORRUPTED, lino, _exit);
+      }
+#endif
     }

     reader->ctx->headFooterLoaded = true;

@@ -897,6 +897,7 @@ int32_t tsdbFSEditCommit(STFileSystem *fs) {

   // commit
   code = commit_edit(fs);
+  ASSERT(code == 0);
   TSDB_CHECK_CODE(code, lino, _exit);

   // schedule merge
@@ -973,11 +974,11 @@ int32_t tsdbFSEditCommit(STFileSystem *fs) {

 _exit:
   if (code) {
-    TSDB_ERROR_LOG(TD_VID(fs->tsdb->pVnode), lino, code);
+    tsdbError("vgId:%d %s failed at line %d since %s", TD_VID(fs->tsdb->pVnode), __func__, lino, tstrerror(code));
   } else {
-    tsdbDebug("vgId:%d %s done, etype:%d", TD_VID(fs->tsdb->pVnode), __func__, fs->etype);
-    tsem_post(&fs->canEdit);
+    tsdbInfo("vgId:%d %s done, etype:%d", TD_VID(fs->tsdb->pVnode), __func__, fs->etype);
   }
+  tsem_post(&fs->canEdit);
   return code;
 }

@@ -46,7 +46,7 @@ static int32_t tsdbFSetWriteTableDataBegin(SFSetWriter *writer, const TABLEID *t
   code = tsdbUpdateSkmTb(writer->config->tsdb, writer->ctx->tbid, writer->skmTb);

   code = metaGetColCmpr(writer->config->tsdb->pVnode->pMeta, tbid->suid ? tbid->suid : tbid->uid, &writer->pColCmprObj);
-  TSDB_CHECK_CODE(code, lino, _exit);
+  // TSDB_CHECK_CODE(code, lino, _exit);

   writer->blockDataIdx = 0;
   for (int32_t i = 0; i < ARRAY_SIZE(writer->blockData); i++) {
@@ -301,15 +301,3 @@ _exit:
   }
   return code;
 }
-// int32_t tsdbGetCompressByUid(SFSetWriter *writer, tb_uid_t uid, struct SColCompressInfo *info) {
-//   SHashObj *p = NULL;
-//   int32_t code = metaGetColCmpr(writer->config->tsdb->pVnode->pMeta, uid, &p);
-//   if (code < 0) {
-//     ASSERT(0);
-//     taosHashCleanup(p);
-//     p = NULL;
-//   } else {
-//   }
-//   info->pColCmpr = p;
-//   return code;
-// }

@@ -580,9 +580,9 @@ int32_t tsdbMerge(void *arg) {
   }
   */
   // do merge
-  tsdbDebug("vgId:%d merge begin, fid:%d", TD_VID(tsdb->pVnode), merger->fid);
+  tsdbInfo("vgId:%d merge begin, fid:%d", TD_VID(tsdb->pVnode), merger->fid);
   code = tsdbDoMerge(merger);
-  tsdbDebug("vgId:%d merge done, fid:%d", TD_VID(tsdb->pVnode), mergeArg->fid);
+  tsdbInfo("vgId:%d merge done, fid:%d", TD_VID(tsdb->pVnode), mergeArg->fid);
   TSDB_CHECK_CODE(code, lino, _exit);

 _exit:

@@ -14,8 +14,8 @@
  */

 #include "cos.h"
-#include "tsdb.h"
 #include "crypt.h"
+#include "tsdb.h"
 #include "vnd.h"

 static int32_t tsdbOpenFileImpl(STsdbFD *pFD) {
@@ -61,6 +61,7 @@ static int32_t tsdbOpenFileImpl(STsdbFD *pFD) {
     // taosMemoryFree(pFD);
     goto _exit;
   }
+  pFD->s3File = 1;
   /*
   const char *object_name = taosDirEntryBaseName((char *)path);
   long s3_size = 0;
@@ -86,7 +87,6 @@ static int32_t tsdbOpenFileImpl(STsdbFD *pFD) {
     goto _exit;
   }
 #else
-  pFD->s3File = 1;
   pFD->pFD = (TdFilePtr)&pFD->s3File;
   int32_t vid = 0;
   sscanf(object_name, "v%df%dver%" PRId64 ".data", &vid, &pFD->fid, &pFD->cid);
@@ -170,7 +170,7 @@ void tsdbCloseFile(STsdbFD **ppFD) {
   }
 }

-static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* encryptKey) {
+static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char *encryptKey) {
   int32_t code = 0;

   if (!pFD->pFD) {
@@ -182,7 +182,7 @@ static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* e

   if (pFD->pgno > 0) {
     int64_t offset = PAGE_OFFSET(pFD->pgno, pFD->szPage);
-    if (pFD->lcn > 1) {
+    if (pFD->s3File && pFD->lcn > 1) {
       SVnodeCfg *pCfg = &pFD->pTsdb->pVnode->config;
       int64_t chunksize = (int64_t)pCfg->tsdbPageSize * pCfg->s3ChunkSize;
       int64_t chunkoffset = chunksize * (pFD->lcn - 1);
@@ -199,8 +199,8 @@ static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* e

     taosCalcChecksumAppend(0, pFD->pBuf, pFD->szPage);

-    if(encryptAlgorithm == DND_CA_SM4){
-      //if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){
+    if (encryptAlgorithm == DND_CA_SM4) {
+      // if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){
       unsigned char PacketData[128];
       int NewLen;
       int32_t count = 0;
@@ -210,7 +210,7 @@ static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* e
       opts.source = pFD->pBuf + count;
       opts.result = PacketData;
       opts.unitLen = 128;
-      //strncpy(opts.key, tsEncryptKey, 16);
+      // strncpy(opts.key, tsEncryptKey, 16);
       strncpy(opts.key, encryptKey, ENCRYPT_KEY_LEN);

       NewLen = CBC_Encrypt(&opts);
@@ -218,7 +218,7 @@ static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* e
       memcpy(pFD->pBuf + count, PacketData, NewLen);
       count += NewLen;
     }
-    //tsdbDebug("CBC_Encrypt count:%d %s", count, __FUNCTION__);
+    // tsdbDebug("CBC_Encrypt count:%d %s", count, __FUNCTION__);
   }

   n = taosWriteFile(pFD->pFD, pFD->pBuf, pFD->szPage);
@@ -237,7 +237,7 @@ _exit:
   return code;
 }

-static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgorithm, char* encryptKey) {
+static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgorithm, char *encryptKey) {
   int32_t code = 0;

   // ASSERT(pgno <= pFD->szFile);
@@ -297,20 +297,19 @@ static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgor
   }
   //}

-  if(encryptAlgorithm == DND_CA_SM4){
-    //if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){
+  if (encryptAlgorithm == DND_CA_SM4) {
+    // if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){
     unsigned char PacketData[128];
     int NewLen;

     int32_t count = 0;
-    while(count < pFD->szPage)
-    {
+    while (count < pFD->szPage) {
       SCryptOpts opts = {0};
       opts.len = 128;
       opts.source = pFD->pBuf + count;
       opts.result = PacketData;
       opts.unitLen = 128;
-      //strncpy(opts.key, tsEncryptKey, 16);
+      // strncpy(opts.key, tsEncryptKey, 16);
       strncpy(opts.key, encryptKey, ENCRYPT_KEY_LEN);

       NewLen = CBC_Decrypt(&opts);
@@ -318,7 +317,7 @@ static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgor
       memcpy(pFD->pBuf + count, PacketData, NewLen);
       count += NewLen;
     }
-    //tsdbDebug("CBC_Decrypt count:%d %s", count, __FUNCTION__);
+    // tsdbDebug("CBC_Decrypt count:%d %s", count, __FUNCTION__);
   }

   // check
@@ -334,7 +333,7 @@ _exit:
 }

 int32_t tsdbWriteFile(STsdbFD *pFD, int64_t offset, const uint8_t *pBuf, int64_t size, int32_t encryptAlgorithm,
-                      char* encryptKey) {
+                      char *encryptKey) {
   int32_t code = 0;
   int64_t fOffset = LOGIC_TO_FILE_OFFSET(offset, pFD->szPage);
   int64_t pgno = OFFSET_PGNO(fOffset, pFD->szPage);
@@ -367,7 +366,7 @@ _exit:
 }

 static int32_t tsdbReadFileImp(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size, int32_t encryptAlgorithm,
-                               char* encryptKey) {
+                               char *encryptKey) {
   int32_t code = 0;
   int64_t n = 0;
   int64_t fOffset = LOGIC_TO_FILE_OFFSET(offset, pFD->szPage);
@@ -573,7 +572,7 @@ _exit:
 }

 int32_t tsdbReadFile(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size, int64_t szHint,
-                     int32_t encryptAlgorithm, char* encryptKey) {
+                     int32_t encryptAlgorithm, char *encryptKey) {
   int32_t code = 0;
   if (!pFD->pFD) {
     code = tsdbOpenFileImpl(pFD);
@@ -582,7 +581,7 @@ int32_t tsdbReadFile(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size,
     }
   }

-  if (pFD->lcn > 1 /*pFD->s3File && tsS3BlockSize < 0*/) {
+  if (pFD->s3File && pFD->lcn > 1 /* && tsS3BlockSize < 0*/) {
     return tsdbReadFileS3(pFD, offset, pBuf, size, szHint);
   } else {
     return tsdbReadFileImp(pFD, offset, pBuf, size, encryptAlgorithm, encryptKey);
@@ -593,20 +592,19 @@ _exit:
 }

 int32_t tsdbReadFileToBuffer(STsdbFD *pFD, int64_t offset, int64_t size, SBuffer *buffer, int64_t szHint,
-                             int32_t encryptAlgorithm, char* encryptKey) {
+                             int32_t encryptAlgorithm, char *encryptKey) {
   int32_t code;

   code = tBufferEnsureCapacity(buffer, buffer->size + size);
   if (code) return code;
-  code = tsdbReadFile(pFD, offset, (uint8_t *)tBufferGetDataEnd(buffer), size, szHint,
-                      encryptAlgorithm, encryptKey);
+  code = tsdbReadFile(pFD, offset, (uint8_t *)tBufferGetDataEnd(buffer), size, szHint, encryptAlgorithm, encryptKey);
   if (code) return code;
   buffer->size += size;

   return code;
 }

-int32_t tsdbFsyncFile(STsdbFD *pFD, int32_t encryptAlgorithm, char* encryptKey) {
+int32_t tsdbFsyncFile(STsdbFD *pFD, int32_t encryptAlgorithm, char *encryptKey) {
   int32_t code = 0;
   /*
   if (pFD->s3File) {
@@ -726,7 +724,7 @@ int32_t tsdbReadBlockIdx(SDataFReader *pReader, SArray *aBlockIdx) {

   // read
   int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm;
-  char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
+  char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
   code = tsdbReadFile(pReader->pHeadFD, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey);
   if (code) goto _err;

@@ -765,7 +763,7 @@ int32_t tsdbReadSttBlk(SDataFReader *pReader, int32_t iStt, SArray *aSttBlk) {

   // read
   int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm;
-  char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
+  char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
   code = tsdbReadFile(pReader->aSttFD[iStt], offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey);
   if (code) goto _err;

@@ -800,7 +798,7 @@ int32_t tsdbReadDataBlk(SDataFReader *pReader, SBlockIdx *pBlockIdx, SMapData *m

   // read
   int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm;
-  char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
+  char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
   code = tsdbReadFile(pReader->pHeadFD, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey);
   if (code) goto _err;

@@ -895,7 +893,7 @@ int32_t tsdbReadDelDatav1(SDelFReader *pReader, SDelIdx *pDelIdx, SArray *aDelDa

   // read
   int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm;
-  char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
+  char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
   code = tsdbReadFile(pReader->pReadH, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey);
   if (code) goto _err;

@@ -937,7 +935,7 @@ int32_t tsdbReadDelIdx(SDelFReader *pReader, SArray *aDelIdx) {

   // read
   int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm;
-  char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
+  char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey;
   code = tsdbReadFile(pReader->pReadH, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey);
   if (code) goto _err;
@@ -528,7 +528,7 @@ static int32_t tsdbMigrateDataFileLCS3(SRTNer *rtner, const STFileObj *fobj, int
   if (fdFrom == NULL) code = terrno;
   TSDB_CHECK_CODE(code, lino, _exit);

-  tsdbInfo("vgId: %d, open lcfile: %s size: %" PRId64, TD_VID(rtner->tsdb->pVnode), fname, lc_size);
+  tsdbInfo("vgId:%d, open lcfile: %s size: %" PRId64, TD_VID(rtner->tsdb->pVnode), fname, lc_size);

   snprintf(dot2 + 1, TSDB_FQDN_LEN - (dot2 + 1 - object_name), "%d.data", lcn);
   fdTo = taosOpenFile(fname, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC);
@@ -557,6 +557,7 @@ static int32_t tsdbMigrateDataFileS3(SRTNer *rtner, const STFileObj *fobj, int64
   int32_t   lino = 0;
   STFileOp  op = {0};
   int32_t   lcn = (size - 1) / chunksize + 1;
+  TdFilePtr fdFrom = NULL, fdTo = NULL;

   // remove old
   op = (STFileOp){
@@ -615,7 +616,6 @@ static int32_t tsdbMigrateDataFileS3(SRTNer *rtner, const STFileObj *fobj, int64
   }

   // copy last chunk
-  TdFilePtr fdFrom = NULL, fdTo = NULL;
   int64_t lc_offset = (int64_t)(lcn - 1) * chunksize;
   int64_t lc_size = size - lc_offset;

@@ -671,7 +671,7 @@ static int32_t tsdbDoS3MigrateOnFileSet(SRTNer *rtner, STFileSet *fset) {
   int64_t chunksize = (int64_t)pCfg->tsdbPageSize * pCfg->s3ChunkSize;
   int32_t lcn = fobj->f->lcn;

-  if (lcn < 1 && taosCheckExistFile(fobj->fname)) {
+  if (/*lcn < 1 && */ taosCheckExistFile(fobj->fname)) {
     int32_t mtime = 0;
     int64_t size = 0;
     taosStatFile(fobj->fname, &size, &mtime, NULL);
@@ -65,9 +65,27 @@ int32_t tsdbSttFileReaderOpen(const char *fname, const SSttFileReaderConfig *con

   int32_t encryptAlgoirthm = config->tsdb->pVnode->config.tsdbCfg.encryptAlgorithm;
   char* encryptKey = config->tsdb->pVnode->config.tsdbCfg.encryptKey;
+#if 1
   code = tsdbReadFile(reader[0]->fd, offset, (uint8_t *)(reader[0]->footer), sizeof(SSttFooter), 0, encryptAlgoirthm,
                       encryptKey);
   TSDB_CHECK_CODE(code, lino, _exit);
+#else
+  int64_t size = config->file->size;
+
+  for (; size > TSDB_FHDR_SIZE; size--) {
+    code = tsdbReadFile(reader[0]->fd, size - sizeof(SSttFooter), (uint8_t *)(reader[0]->footer), sizeof(SSttFooter), 0, encryptAlgoirthm,
+                        encryptKey);
+    if (code) continue;
+    if ((*reader)->footer->sttBlkPtr->offset + (*reader)->footer->sttBlkPtr->size + sizeof(SSttFooter) == size ||
+        (*reader)->footer->statisBlkPtr->offset + (*reader)->footer->statisBlkPtr->size + sizeof(SSttFooter) == size ||
+        (*reader)->footer->tombBlkPtr->offset + (*reader)->footer->tombBlkPtr->size + sizeof(SSttFooter) == size) {
+      break;
+    }
+  }
+  if (size <= TSDB_FHDR_SIZE) {
+    TSDB_CHECK_CODE(code = TSDB_CODE_FILE_CORRUPTED, lino, _exit);
+  }
+#endif

 _exit:
   if (code) {
@@ -486,8 +486,6 @@ SVnode *vnodeOpen(const char *path, int32_t diskPrimary, STfs *pTfs, SMsgCb msgC

   if (tsEnableMonitor && pVnode->monitor.insertCounter == NULL) {
     taos_counter_t *counter = NULL;
-    counter = taos_collector_registry_get_metric(VNODE_METRIC_SQL_COUNT);
-    if(counter == NULL){
     int32_t label_count = 7;
     const char *sample_labels[] = {VNODE_METRIC_TAG_NAME_SQL_TYPE, VNODE_METRIC_TAG_NAME_CLUSTER_ID,
                                    VNODE_METRIC_TAG_NAME_DNODE_ID, VNODE_METRIC_TAG_NAME_DNODE_EP,
@@ -501,7 +499,6 @@ SVnode *vnodeOpen(const char *path, int32_t diskPrimary, STfs *pTfs, SMsgCb msgC
       counter = taos_collector_registry_get_metric(VNODE_METRIC_SQL_COUNT);
       vInfo("vgId:%d, get metric from registry:%p", TD_VID(pVnode), counter);
     }
-    }
     pVnode->monitor.insertCounter = counter;
     vInfo("vgId:%d, succeed to set metric:%p", TD_VID(pVnode), counter);
   }
@@ -1076,16 +1076,6 @@ static int32_t vnodeProcessCreateTbReq(SVnode *pVnode, int64_t ver, void *pReq,
     pCreateReq = req.pReqs + iReq;
     memset(&cRsp, 0, sizeof(cRsp));

-    if ((terrno = grantCheck(TSDB_GRANT_TIMESERIES)) < 0) {
-      rcode = -1;
-      goto _exit;
-    }
-
-    if ((terrno = grantCheck(TSDB_GRANT_TABLE)) < 0) {
-      rcode = -1;
-      goto _exit;
-    }
-
     if (tsEnableAudit && tsEnableAuditCreateTable) {
      char *str = taosMemoryCalloc(1, TSDB_TABLE_FNAME_LEN);
      if (str == NULL) {
@@ -1778,13 +1768,6 @@ static int32_t vnodeProcessSubmitReq(SVnode *pVnode, int64_t ver, void *pReq, in

     // create table
     if (pSubmitTbData->pCreateTbReq) {
-      // check (TODO: move check to create table)
-      code = grantCheck(TSDB_GRANT_TIMESERIES);
-      if (code) goto _exit;
-
-      code = grantCheck(TSDB_GRANT_TABLE);
-      if (code) goto _exit;
-
       // alloc if need
       if (pSubmitRsp->aCreateTbRsp == NULL &&
           (pSubmitRsp->aCreateTbRsp = taosArrayInit(TARRAY_SIZE(pSubmitReq->aSubmitTbData), sizeof(SVCreateTbRsp))) ==
@@ -596,6 +596,7 @@ int32_t ctgCopyTbMeta(SCatalog *pCtg, SCtgTbMetaCtx *ctx, SCtgDBCache **pDb, SCt
   }

   memcpy(&(*pTableMeta)->sversion, &stbMeta->sversion, metaSize - sizeof(SCTableMeta));
+  (*pTableMeta)->schemaExt = NULL;

   return TSDB_CODE_SUCCESS;
 }
@@ -2883,14 +2884,24 @@ int32_t ctgGetTbMetasFromCache(SCatalog *pCtg, SRequestConnInfo *pConn, SCtgTbMe
     SMetaRes    res = {0};
     STableMeta *pTableMeta = NULL;
     if (tbMeta->tableType != TSDB_CHILD_TABLE) {
+      int32_t schemaExtSize = 0;
       int32_t metaSize = CTG_META_SIZE(tbMeta);
-      pTableMeta = taosMemoryCalloc(1, metaSize);
+      if (tbMeta->schemaExt != NULL) {
+        schemaExtSize = tbMeta->tableInfo.numOfColumns * sizeof(SSchemaExt);
+      }
+      pTableMeta = taosMemoryCalloc(1, metaSize + schemaExtSize);
       if (NULL == pTableMeta) {
         ctgReleaseTbMetaToCache(pCtg, dbCache, pCache);
         CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
       }

       memcpy(pTableMeta, tbMeta, metaSize);
+      if (tbMeta->schemaExt != NULL) {
+        pTableMeta->schemaExt = (SSchemaExt *)((char *)pTableMeta + metaSize);
+        memcpy(pTableMeta->schemaExt, tbMeta->schemaExt, schemaExtSize);
+      } else {
+        pTableMeta->schemaExt = NULL;
+      }

       CTG_UNLOCK(CTG_READ, &pCache->metaLock);
       taosHashRelease(dbCache->tbCache, pCache);
@@ -2999,6 +3010,7 @@ int32_t ctgGetTbMetasFromCache(SCatalog *pCtg, SRequestConnInfo *pConn, SCtgTbMe
     }

     memcpy(&pTableMeta->sversion, &stbMeta->sversion, metaSize - sizeof(SCTableMeta));
+    pTableMeta->schemaExt = NULL;

     CTG_UNLOCK(CTG_READ, &pCache->metaLock);
     taosHashRelease(dbCache->tbCache, pCache);
@@ -18,10 +18,10 @@
 #include "commandInt.h"
 #include "scheduler.h"
 #include "systable.h"
+#include "taosdef.h"
 #include "tdatablock.h"
 #include "tglobal.h"
 #include "tgrant.h"
-#include "taosdef.h"

 extern SConfig* tsCfg;

@@ -126,6 +126,7 @@ static int32_t setDescResultIntoDataBlock(bool sysInfoUser, SSDataBlock* pBlock,
     pCol7 = taosArrayGet(pBlock->pDataBlock, 6);
   }

+  int32_t fillTagCol = 0;
   char buf[DESCRIBE_RESULT_FIELD_LEN] = {0};
   for (int32_t i = 0; i < numOfRows; ++i) {
     if (invisibleColumn(sysInfoUser, pMeta->tableType, pMeta->schema[i].flags)) {
@@ -140,6 +141,7 @@ static int32_t setDescResultIntoDataBlock(bool sysInfoUser, SSDataBlock* pBlock,
     if (TSDB_VIEW_TABLE != pMeta->tableType) {
       if (i >= pMeta->tableInfo.numOfColumns) {
         STR_TO_VARSTR(buf, "TAG");
+        fillTagCol = 1;
       } else if (i == 1 && pMeta->schema[i].flags & COL_IS_KEY) {
         STR_TO_VARSTR(buf, "PRIMARY KEY")
       } else {
@@ -158,15 +160,17 @@ static int32_t setDescResultIntoDataBlock(bool sysInfoUser, SSDataBlock* pBlock,
         STR_TO_VARSTR(buf, columnLevelStr(COMPRESS_L2_TYPE_LEVEL_U32(pMeta->schemaExt[i].compress)));
         colDataSetVal(pCol7, pBlock->info.rows, buf, false);
       } else {
-        STR_TO_VARSTR(buf, "");
+        STR_TO_VARSTR(buf, fillTagCol == 0 ? "" : "disabled");
         colDataSetVal(pCol5, pBlock->info.rows, buf, false);
-        STR_TO_VARSTR(buf, "");
+        STR_TO_VARSTR(buf, fillTagCol == 0 ? "" : "disabled");
         colDataSetVal(pCol6, pBlock->info.rows, buf, false);
-        STR_TO_VARSTR(buf, "");
+        STR_TO_VARSTR(buf, fillTagCol == 0 ? "" : "disabled");
         colDataSetVal(pCol7, pBlock->info.rows, buf, false);
       }
     }

+    fillTagCol = 0;
+
     ++(pBlock->info.rows);
   }
   if (pMeta->tableType == TSDB_SUPER_TABLE && biMode != 0) {
@@ -367,17 +371,20 @@ static void setCreateDBResultIntoDataBlock(SSDataBlock* pBlock, char* dbName, ch
   if (IS_SYS_DBNAME(dbName)) {
     len += sprintf(buf2 + VARSTR_HEADER_SIZE, "CREATE DATABASE `%s`", dbName);
   } else {
-    len += sprintf(
-        buf2 + VARSTR_HEADER_SIZE,
-        "CREATE DATABASE `%s` BUFFER %d CACHESIZE %d CACHEMODEL '%s' COMP %d DURATION %dm "
-        "WAL_FSYNC_PERIOD %d MAXROWS %d MINROWS %d STT_TRIGGER %d KEEP %dm,%dm,%dm PAGES %d PAGESIZE %d PRECISION '%s' REPLICA %d "
-        "WAL_LEVEL %d VGROUPS %d SINGLE_STABLE %d TABLE_PREFIX %d TABLE_SUFFIX %d TSDB_PAGESIZE %d "
-        "WAL_RETENTION_PERIOD %d WAL_RETENTION_SIZE %" PRId64 " KEEP_TIME_OFFSET %d ENCRYPT_ALGORITHM '%s' S3_CHUNKSIZE %d S3_KEEPLOCAL %dm S3_COMPACT %d",
-        dbName, pCfg->buffer, pCfg->cacheSize, cacheModelStr(pCfg->cacheLast), pCfg->compression, pCfg->daysPerFile,
-        pCfg->walFsyncPeriod, pCfg->maxRows, pCfg->minRows, pCfg->sstTrigger, pCfg->daysToKeep0, pCfg->daysToKeep1, pCfg->daysToKeep2,
-        pCfg->pages, pCfg->pageSize, prec, pCfg->replications, pCfg->walLevel, pCfg->numOfVgroups,
-        1 == pCfg->numOfStables, hashPrefix, pCfg->hashSuffix, pCfg->tsdbPageSize, pCfg->walRetentionPeriod, pCfg->walRetentionSize,
-        pCfg->keepTimeOffset, encryptAlgorithmStr(pCfg->encryptAlgorithm), pCfg->s3ChunkSize, pCfg->s3KeepLocal, pCfg->s3Compact);
+    len += sprintf(buf2 + VARSTR_HEADER_SIZE,
+                   "CREATE DATABASE `%s` BUFFER %d CACHESIZE %d CACHEMODEL '%s' COMP %d DURATION %dm "
+                   "WAL_FSYNC_PERIOD %d MAXROWS %d MINROWS %d STT_TRIGGER %d KEEP %dm,%dm,%dm PAGES %d PAGESIZE %d "
+                   "PRECISION '%s' REPLICA %d "
+                   "WAL_LEVEL %d VGROUPS %d SINGLE_STABLE %d TABLE_PREFIX %d TABLE_SUFFIX %d TSDB_PAGESIZE %d "
+                   "WAL_RETENTION_PERIOD %d WAL_RETENTION_SIZE %" PRId64
+                   " KEEP_TIME_OFFSET %d ENCRYPT_ALGORITHM '%s' S3_CHUNKSIZE %d S3_KEEPLOCAL %dm S3_COMPACT %d",
+                   dbName, pCfg->buffer, pCfg->cacheSize, cacheModelStr(pCfg->cacheLast), pCfg->compression,
+                   pCfg->daysPerFile, pCfg->walFsyncPeriod, pCfg->maxRows, pCfg->minRows, pCfg->sstTrigger,
+                   pCfg->daysToKeep0, pCfg->daysToKeep1, pCfg->daysToKeep2, pCfg->pages, pCfg->pageSize, prec,
+                   pCfg->replications, pCfg->walLevel, pCfg->numOfVgroups, 1 == pCfg->numOfStables, hashPrefix,
+                   pCfg->hashSuffix, pCfg->tsdbPageSize, pCfg->walRetentionPeriod, pCfg->walRetentionSize,
+                   pCfg->keepTimeOffset, encryptAlgorithmStr(pCfg->encryptAlgorithm), pCfg->s3ChunkSize,
+                   pCfg->s3KeepLocal, pCfg->s3Compact);

   if (retentions) {
     len += sprintf(buf2 + VARSTR_HEADER_SIZE + len, " RETENTIONS %s", retentions);
@@ -391,7 +398,9 @@ static void setCreateDBResultIntoDataBlock(SSDataBlock* pBlock, char* dbName, ch
     colDataSetVal(pCol2, 0, buf2, false);
 }

-#define CHECK_LEADER(n) (row[n] && (fields[n].type == TSDB_DATA_TYPE_VARCHAR && strncasecmp(row[n], "leader", varDataLen((char *)row[n] - VARSTR_HEADER_SIZE)) == 0))
+#define CHECK_LEADER(n) \
+  (row[n] && (fields[n].type == TSDB_DATA_TYPE_VARCHAR && \
+              strncasecmp(row[n], "leader", varDataLen((char*)row[n] - VARSTR_HEADER_SIZE)) == 0))
 // on this row, if have leader return true else return false
 bool existLeaderRole(TAOS_ROW row, TAOS_FIELD* fields, int nFields) {
   // vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode |
@@ -548,23 +557,25 @@ static int32_t buildCreateViewResultDataBlock(SSDataBlock** pOutput) {
   return code;
 }
 
 void appendColumnFields(char* buf, int32_t* len, STableCfg* pCfg) {
   for (int32_t i = 0; i < pCfg->numOfColumns; ++i) {
     SSchema* pSchema = pCfg->pSchemas + i;
     char type[32 + 60];  // 60 byte for compress info
     sprintf(type, "%s", tDataTypes[pSchema->type].name);
-    if (TSDB_DATA_TYPE_VARCHAR == pSchema->type || TSDB_DATA_TYPE_VARBINARY == pSchema->type || TSDB_DATA_TYPE_GEOMETRY == pSchema->type) {
+    if (TSDB_DATA_TYPE_VARCHAR == pSchema->type || TSDB_DATA_TYPE_VARBINARY == pSchema->type ||
+        TSDB_DATA_TYPE_GEOMETRY == pSchema->type) {
       sprintf(type + strlen(type), "(%d)", (int32_t)(pSchema->bytes - VARSTR_HEADER_SIZE));
     } else if (TSDB_DATA_TYPE_NCHAR == pSchema->type) {
       sprintf(type + strlen(type), "(%d)", (int32_t)((pSchema->bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
     }
 
     if (useCompress(pCfg->tableType)) {
-      sprintf(type + strlen(type), " ENCODE \'%s\'", columnEncodeStr(COMPRESS_L1_TYPE_U32(pCfg->pSchemaExt[i].compress)));
-      sprintf(type + strlen(type), " COMPRESS \'%s\'", columnCompressStr(COMPRESS_L2_TYPE_U32(pCfg->pSchemaExt[i].compress)));
-      sprintf(type + strlen(type), " LEVEL \'%s\'", columnLevelStr(COMPRESS_L2_TYPE_LEVEL_U32(pCfg->pSchemaExt[i].compress)));
+      sprintf(type + strlen(type), " ENCODE \'%s\'",
+              columnEncodeStr(COMPRESS_L1_TYPE_U32(pCfg->pSchemaExt[i].compress)));
+      sprintf(type + strlen(type), " COMPRESS \'%s\'",
+              columnCompressStr(COMPRESS_L2_TYPE_U32(pCfg->pSchemaExt[i].compress)));
+      sprintf(type + strlen(type), " LEVEL \'%s\'",
+              columnLevelStr(COMPRESS_L2_TYPE_LEVEL_U32(pCfg->pSchemaExt[i].compress)));
     }
     if (!(pSchema->flags & COL_IS_KEY)) {
       *len += sprintf(buf + VARSTR_HEADER_SIZE + *len, "%s`%s` %s", ((i > 0) ? ", " : ""), pSchema->name, type);
@@ -580,7 +591,8 @@ void appendTagFields(char* buf, int32_t* len, STableCfg* pCfg) {
     SSchema* pSchema = pCfg->pSchemas + pCfg->numOfColumns + i;
     char type[32];
     sprintf(type, "%s", tDataTypes[pSchema->type].name);
-    if (TSDB_DATA_TYPE_VARCHAR == pSchema->type || TSDB_DATA_TYPE_VARBINARY == pSchema->type || TSDB_DATA_TYPE_GEOMETRY == pSchema->type) {
+    if (TSDB_DATA_TYPE_VARCHAR == pSchema->type || TSDB_DATA_TYPE_VARBINARY == pSchema->type ||
+        TSDB_DATA_TYPE_GEOMETRY == pSchema->type) {
       sprintf(type + strlen(type), "(%d)", (int32_t)(pSchema->bytes - VARSTR_HEADER_SIZE));
     } else if (TSDB_DATA_TYPE_NCHAR == pSchema->type) {
       sprintf(type + strlen(type), "(%d)", (int32_t)((pSchema->bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
@@ -823,7 +835,8 @@ static int32_t setCreateViewResultIntoDataBlock(SSDataBlock* pBlock, SShowCreate
 
   SViewMeta* pMeta = pStmt->pViewMeta;
   ASSERT(pMeta);
-  snprintf(varDataVal(buf2), SHOW_CREATE_VIEW_RESULT_FIELD2_LEN - VARSTR_HEADER_SIZE, "CREATE VIEW `%s`.`%s` AS %s", pStmt->dbName, pStmt->viewName, pMeta->querySql);
+  snprintf(varDataVal(buf2), SHOW_CREATE_VIEW_RESULT_FIELD2_LEN - VARSTR_HEADER_SIZE, "CREATE VIEW `%s`.`%s` AS %s",
+           pStmt->dbName, pStmt->viewName, pMeta->querySql);
   int32_t len = strlen(varDataVal(buf2));
   varDataLen(buf2) = (len > 65535) ? 65535 : len;
   colDataSetVal(pCol2, 0, buf2, false);
@@ -833,7 +846,6 @@ static int32_t setCreateViewResultIntoDataBlock(SSDataBlock* pBlock, SShowCreate
   return TSDB_CODE_SUCCESS;
 }
 
 static int32_t execShowCreateTable(SShowCreateTableStmt* pStmt, SRetrieveTableRsp** pRsp) {
   SSDataBlock* pBlock = NULL;
   int32_t code = buildCreateTbResultDataBlock(&pBlock);
@@ -500,6 +500,8 @@ typedef struct SStreamScanInfo {
   SStoreTqReader readerFn;
   SStateStore stateStore;
   SSDataBlock* pCheckpointRes;
+  int8_t pkColType;
+  int32_t pkColLen;
 } SStreamScanInfo;
 
 typedef struct {
@@ -566,8 +568,13 @@ typedef struct SOpCheckPointInfo {
   SHashObj* children;  // key:child id
 } SOpCheckPointInfo;
 
+typedef struct SSteamOpBasicInfo {
+  int32_t primaryPkIndex;
+} SSteamOpBasicInfo;
+
 typedef struct SStreamIntervalOperatorInfo {
   SOptrBasicInfo binfo;  // basic info
+  SSteamOpBasicInfo basic;
   SAggSupporter aggSup;  // aggregate supporter
   SExprSupp scalarSupp;  // supporter for perform scalar function
   SGroupResInfo groupResInfo;  // multiple results build supporter
@@ -633,6 +640,7 @@ typedef struct SResultWindowInfo {
 
 typedef struct SStreamSessionAggOperatorInfo {
   SOptrBasicInfo binfo;
+  SSteamOpBasicInfo basic;
   SStreamAggSupporter streamAggSup;
   SExprSupp scalarSupp;  // supporter for perform scalar function
   SGroupResInfo groupResInfo;
@@ -665,6 +673,7 @@ typedef struct SStreamSessionAggOperatorInfo {
 
 typedef struct SStreamStateAggOperatorInfo {
   SOptrBasicInfo binfo;
+  SSteamOpBasicInfo basic;
   SStreamAggSupporter streamAggSup;
   SExprSupp scalarSupp;  // supporter for perform scalar function
   SGroupResInfo groupResInfo;
@@ -691,6 +700,7 @@ typedef struct SStreamStateAggOperatorInfo {
 
 typedef struct SStreamEventAggOperatorInfo {
   SOptrBasicInfo binfo;
+  SSteamOpBasicInfo basic;
   SStreamAggSupporter streamAggSup;
   SExprSupp scalarSupp;  // supporter for perform scalar function
   SGroupResInfo groupResInfo;
@@ -719,6 +729,7 @@ typedef struct SStreamEventAggOperatorInfo {
 
 typedef struct SStreamCountAggOperatorInfo {
   SOptrBasicInfo binfo;
+  SSteamOpBasicInfo basic;
   SStreamAggSupporter streamAggSup;
   SExprSupp scalarSupp;  // supporter for perform scalar function
   SGroupResInfo groupResInfo;
@@ -742,6 +753,7 @@ typedef struct SStreamCountAggOperatorInfo {
 
 typedef struct SStreamPartitionOperatorInfo {
   SOptrBasicInfo binfo;
+  SSteamOpBasicInfo basic;
   SPartitionBySupporter partitionSup;
   SExprSupp scalarSup;
   SExprSupp tbnameCalSup;
@@ -775,6 +787,7 @@ typedef struct SStreamFillSupporter {
 } SStreamFillSupporter;
 
 typedef struct SStreamFillOperatorInfo {
+  SSteamOpBasicInfo basic;
   SStreamFillSupporter* pFillSup;
   SSDataBlock* pRes;
   SSDataBlock* pSrcBlock;
@@ -911,7 +924,7 @@ int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, SExprSupp* pExpSup, i
                               SReadHandle* pHandle, STimeWindowAggSupp* pTwAggSup, const char* taskIdStr,
                               SStorageAPI* pApi, int32_t tsIndex);
 void initDownStream(struct SOperatorInfo* downstream, SStreamAggSupporter* pAggSup, uint16_t type, int32_t tsColIndex,
-                    STimeWindowAggSupp* pTwSup);
+                    STimeWindowAggSupp* pTwSup, struct SSteamOpBasicInfo* pBasic);
 void getMaxTsWins(const SArray* pAllWins, SArray* pMaxWins);
 void initGroupResInfoFromArrayList(SGroupResInfo* pGroupResInfo, SArray* pArrayList);
 void getSessionHashKey(const SSessionKey* pKey, SSessionKey* pHashKey);
@@ -939,11 +952,12 @@ void compactTimeWindow(SExprSupp* pSup, SStreamAggSupporter* pAggSup, STimeW
                        SSHashObj* pStUpdated, SSHashObj* pStDeleted, bool addGap);
 int32_t releaseOutputBuf(void* pState, SRowBuffPos* pPos, SStateStore* pAPI);
 void resetWinRange(STimeWindow* winRange);
-bool checkExpiredData(SStateStore* pAPI, SUpdateInfo* pUpdateInfo, STimeWindowAggSupp* pTwSup, uint64_t tableId, TSKEY ts);
+bool checkExpiredData(SStateStore* pAPI, SUpdateInfo* pUpdateInfo, STimeWindowAggSupp* pTwSup, uint64_t tableId, TSKEY ts, void* pPkVal, int32_t len);
 int64_t getDeleteMark(SWindowPhysiNode* pWinPhyNode, int64_t interval);
 void resetUnCloseSessionWinInfo(SSHashObj* winMap);
 void setStreamOperatorCompleted(struct SOperatorInfo* pOperator);
 void reloadAggSupFromDownStream(struct SOperatorInfo* downstream, SStreamAggSupporter* pAggSup);
+void destroyFlusedPos(void* pRes);
 
 int32_t encodeSSessionKey(void** buf, SSessionKey* key);
 void* decodeSSessionKey(void* buf, SSessionKey* key);
@@ -1267,7 +1267,7 @@ void initParDownStream(SOperatorInfo* downstream, SPartitionBySupporter* pParSup
   pScanInfo->pPartScalarSup = pExpr;
   pScanInfo->pPartTbnameSup = pTbnameExpr;
   if (!pScanInfo->pUpdateInfo) {
-    pScanInfo->pUpdateInfo = pAPI->stateStore.updateInfoInit(60000, TSDB_TIME_PRECISION_MILLI, 0, pScanInfo->igCheckUpdate);
+    pScanInfo->pUpdateInfo = pAPI->stateStore.updateInfoInit(60000, TSDB_TIME_PRECISION_MILLI, 0, pScanInfo->igCheckUpdate, pScanInfo->pkColType, pScanInfo->pkColLen);
   }
 }
@@ -1380,7 +1380,7 @@ bool comparePrimaryKey(SColumnInfoData* pCol, int32_t rowId, void* pVal) {
   return false;
 }
 
-bool hasPrimaryKey(SStreamScanInfo* pInfo) {
+bool hasPrimaryKeyCol(SStreamScanInfo* pInfo) {
   return pInfo->primaryKeyIndex != -1;
 }
@@ -1391,7 +1391,7 @@ static uint64_t getGroupIdByCol(SStreamScanInfo* pInfo, uint64_t uid, TSKEY ts, 
   }
 
   int32_t rowId = 0;
-  if (hasPrimaryKey(pInfo)) {
+  if (hasPrimaryKeyCol(pInfo)) {
     SColumnInfoData* pPkCol = taosArrayGet(pPreRes->pDataBlock, pInfo->primaryKeyIndex);
     for (; rowId < pPreRes->info.rows; rowId++) {
       if (comparePrimaryKey(pPkCol, rowId, pVal)) {
@@ -1630,7 +1630,7 @@ static void getPreVersionDataBlock(uint64_t uid, TSKEY startTs, TSKEY endTs, int
 
   SColumnInfoData* pTsCol = (SColumnInfoData*)taosArrayGet(pPreRes->pDataBlock, pInfo->primaryTsIndex);
   SColumnInfoData* pPkCol = NULL;
-  if (hasPrimaryKey(pInfo)) {
+  if (hasPrimaryKeyCol(pInfo)) {
     pPkCol = (SColumnInfoData*)taosArrayGet(pPreRes->pDataBlock, pInfo->primaryKeyIndex);
   }
   for (int32_t i = 0; i < pPreRes->info.rows; i++) {
@@ -1659,7 +1659,7 @@ static int32_t generateSessionScanRange(SStreamScanInfo* pInfo, SSDataBlock* pSr
   }
   int64_t ver = pSrcBlock->info.version - 1;
 
-  if (pInfo->partitionSup.needCalc && ( startData[0] != endData[0] || (hasPrimaryKey(pInfo) && mode == STREAM_DELETE_DATA) )) {
+  if (pInfo->partitionSup.needCalc && ( startData[0] != endData[0] || (hasPrimaryKeyCol(pInfo) && mode == STREAM_DELETE_DATA) )) {
     getPreVersionDataBlock(uidCol[0], startData[0], endData[0], ver, GET_TASKID(pTaskInfo), pInfo, pSrcBlock);
     startData = (TSKEY*)pStartTsCol->pData;
     endData = (TSKEY*)pEndTsCol->pData;
@@ -1682,7 +1682,7 @@ static int32_t generateSessionScanRange(SStreamScanInfo* pInfo, SSDataBlock* pSr
       uint64_t groupId = pSrcGp[i];
       if (groupId == 0) {
         void* pVal = NULL;
-        if (hasPrimaryKey(pInfo) && pSrcPkCol) {
+        if (hasPrimaryKeyCol(pInfo) && pSrcPkCol) {
           pVal = colDataGetData(pSrcPkCol, i);
         }
         groupId = getGroupIdByData(pInfo, uidCol[i], startData[i], ver, pVal);
@@ -1736,7 +1736,7 @@ static int32_t generateCountScanRange(SStreamScanInfo* pInfo, SSDataBlock* pSrcB
   }
   int64_t ver = pSrcBlock->info.version - 1;
 
-  if (pInfo->partitionSup.needCalc && ( startData[0] != endData[0] || (hasPrimaryKey(pInfo) && mode == STREAM_DELETE_DATA) )) {
+  if (pInfo->partitionSup.needCalc && ( startData[0] != endData[0] || (hasPrimaryKeyCol(pInfo) && mode == STREAM_DELETE_DATA) )) {
     getPreVersionDataBlock(uidCol[0], startData[0], endData[0], ver, GET_TASKID(pTaskInfo), pInfo, pSrcBlock);
     startData = (TSKEY*)pStartTsCol->pData;
     endData = (TSKEY*)pEndTsCol->pData;
@@ -1759,7 +1759,7 @@ static int32_t generateCountScanRange(SStreamScanInfo* pInfo, SSDataBlock* pSrcB
       uint64_t groupId = pSrcGp[i];
       if (groupId == 0) {
         void* pVal = NULL;
-        if (hasPrimaryKey(pInfo) && pSrcPkCol) {
+        if (hasPrimaryKeyCol(pInfo) && pSrcPkCol) {
           pVal = colDataGetData(pSrcPkCol, i);
         }
         groupId = getGroupIdByData(pInfo, uidCol[i], startData[i], ver, pVal);
@@ -1800,7 +1800,7 @@ static int32_t generateIntervalScanRange(SStreamScanInfo* pInfo, SSDataBlock* pS
   TSKEY* srcEndTsCol = (TSKEY*)pSrcEndTsCol->pData;
   int64_t ver = pSrcBlock->info.version - 1;
 
-  if (pInfo->partitionSup.needCalc && ( srcStartTsCol[0] != srcEndTsCol[0] || (hasPrimaryKey(pInfo) && mode == STREAM_DELETE_DATA) )) {
+  if (pInfo->partitionSup.needCalc && ( srcStartTsCol[0] != srcEndTsCol[0] || (hasPrimaryKeyCol(pInfo) && mode == STREAM_DELETE_DATA) )) {
     getPreVersionDataBlock(srcUidData[0], srcStartTsCol[0], srcEndTsCol[0], ver, GET_TASKID(pTaskInfo), pInfo, pSrcBlock);
     srcStartTsCol = (TSKEY*)pSrcStartTsCol->pData;
     srcEndTsCol = (TSKEY*)pSrcEndTsCol->pData;
@@ -1824,7 +1824,7 @@ static int32_t generateIntervalScanRange(SStreamScanInfo* pInfo, SSDataBlock* pS
       uint64_t groupId = srcGp[i];
       if (groupId == 0) {
         void* pVal = NULL;
-        if (hasPrimaryKey(pInfo) && pSrcPkCol) {
+        if (hasPrimaryKeyCol(pInfo) && pSrcPkCol) {
           pVal = colDataGetData(pSrcPkCol, i);
         }
         groupId = getGroupIdByData(pInfo, srcUid, srcStartTsCol[i], ver, pVal);
@@ -1915,7 +1915,7 @@ static int32_t generateDeleteResultBlockImpl(SStreamScanInfo* pInfo, SSDataBlock
     char tbname[VARSTR_HEADER_SIZE + TSDB_TABLE_NAME_LEN] = {0};
     if (groupId == 0) {
       void* pVal = NULL;
-      if (hasPrimaryKey(pInfo) && pSrcPkCol) {
+      if (hasPrimaryKeyCol(pInfo) && pSrcPkCol) {
         pVal = colDataGetData(pSrcPkCol, i);
       }
       groupId = getGroupIdByData(pInfo, srcUid, srcStartTsCol[i], ver, pVal);
@@ -1979,9 +1979,9 @@ void appendDataToSpecialBlock(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndT
   appendOneRowToSpecialBlockImpl(pBlock, pStartTs, pEndTs, pStartTs, pEndTs, pUid, pGp, pTbName, NULL);
 }
 
-bool checkExpiredData(SStateStore* pAPI, SUpdateInfo* pUpdateInfo, STimeWindowAggSupp* pTwSup, uint64_t tableId, TSKEY ts) {
+bool checkExpiredData(SStateStore* pAPI, SUpdateInfo* pUpdateInfo, STimeWindowAggSupp* pTwSup, uint64_t tableId, TSKEY ts, void* pPkVal, int32_t len) {
   bool isExpired = false;
-  bool isInc = pAPI->isIncrementalTimeStamp(pUpdateInfo, tableId, ts);
+  bool isInc = pAPI->isIncrementalTimeStamp(pUpdateInfo, tableId, ts, pPkVal, len);
   if (!isInc) {
     isExpired = isOverdue(ts, pTwSup);
   }
@@ -1997,7 +1997,7 @@ static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock
   ASSERT(pColDataInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP);
   TSKEY* tsCol = (TSKEY*)pColDataInfo->pData;
   SColumnInfoData* pPkColDataInfo = NULL;
-  if (hasPrimaryKey(pInfo)) {
+  if (hasPrimaryKeyCol(pInfo)) {
     pPkColDataInfo = taosArrayGet(pBlock->pDataBlock, pInfo->primaryKeyIndex);
   }
@@ -2017,7 +2017,13 @@ static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock
       isClosed = isCloseWindow(&win, &pInfo->twAggSup);
     }
     // must check update info first.
-    bool update = pInfo->stateStore.updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.id.uid, tsCol[rowId]);
+    void* pPkVal = NULL;
+    int32_t pkLen = 0;
+    if (hasPrimaryKeyCol(pInfo)) {
+      pPkVal = colDataGetData(pPkColDataInfo, rowId);
+      pkLen = colDataGetRowLength(pPkColDataInfo, rowId);
+    }
+    bool update = pInfo->stateStore.updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.id.uid, tsCol[rowId], pPkVal, pkLen);
     bool isDeleted = isClosed && isSignleIntervalWindow(pInfo) &&
                      isDeletedStreamWindow(&win, pBlock->info.id.groupId, pInfo->pState, &pInfo->twAggSup, &pInfo->stateStore);
     if ((update || isDeleted) && out) {
@@ -2517,7 +2523,7 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
     if (pInfo->pRecoverRes != NULL) {
       calBlockTbName(pInfo, pInfo->pRecoverRes, 0);
       if (!pInfo->igCheckUpdate && pInfo->pUpdateInfo) {
-        TSKEY maxTs = pAPI->stateStore.updateInfoFillBlockData(pInfo->pUpdateInfo, pInfo->pRecoverRes, pInfo->primaryTsIndex);
+        TSKEY maxTs = pAPI->stateStore.updateInfoFillBlockData(pInfo->pUpdateInfo, pInfo->pRecoverRes, pInfo->primaryTsIndex, pInfo->primaryKeyIndex);
         pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, maxTs);
       }
       if (pInfo->pCreateTbRes->info.rows > 0) {
@@ -3202,8 +3208,10 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
   pInfo->pDeleteDataRes = createSpecialDataBlock(STREAM_DELETE_DATA);
   pInfo->updateWin = (STimeWindow){.skey = INT64_MAX, .ekey = INT64_MAX};
   pInfo->pUpdateDataRes = createSpecialDataBlock(STREAM_CLEAR);
-  if (hasPrimaryKey(pInfo)) {
+  if (hasPrimaryKeyCol(pInfo)) {
     addPrimaryKeyCol(pInfo->pUpdateDataRes, pkType.type, pkType.bytes);
+    pInfo->pkColType = pkType.type;
+    pInfo->pkColLen = pkType.bytes;
   }
   pInfo->assignBlockUid = pTableScanNode->assignBlockUid;
   pInfo->partitionSup.needCalc = false;
@@ -50,12 +50,13 @@ void destroyStreamCountAggOperatorInfo(void* param) {
   destroyStreamAggSupporter(&pInfo->streamAggSup);
   cleanupExprSupp(&pInfo->scalarSupp);
   clearGroupResInfo(&pInfo->groupResInfo);
+  taosArrayDestroyP(pInfo->pUpdated, destroyFlusedPos);
+  pInfo->pUpdated = NULL;
+
   colDataDestroy(&pInfo->twAggSup.timeWindowData);
   blockDataDestroy(pInfo->pDelRes);
   tSimpleHashCleanup(pInfo->pStUpdated);
   tSimpleHashCleanup(pInfo->pStDeleted);
-  pInfo->pUpdated = taosArrayDestroy(pInfo->pUpdated);
   cleanupGroupResInfo(&pInfo->groupResInfo);
 
   taosArrayDestroy(pInfo->historyWins);
@@ -242,7 +243,7 @@ static void doStreamCountAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBl
   for (int32_t i = 0; i < rows;) {
     if (pInfo->ignoreExpiredData &&
         checkExpiredData(&pInfo->streamAggSup.stateStore, pInfo->streamAggSup.pUpdateInfo, &pInfo->twAggSup,
-                         pSDataBlock->info.id.uid, startTsCols[i])) {
+                         pSDataBlock->info.id.uid, startTsCols[i], NULL, 0)) {
       i++;
       continue;
     }
@@ -728,7 +729,7 @@ SOperatorInfo* createStreamCountAggOperatorInfo(SOperatorInfo* downstream, SPhys
   setOperatorStreamStateFn(pOperator, streamCountReleaseState, streamCountReloadState);
 
   if (downstream) {
-    initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup);
+    initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup, &pInfo->basic);
     code = appendDownstream(pOperator, &downstream, 1);
   }
   return pOperator;
@@ -46,6 +46,9 @@ void destroyStreamEventOperatorInfo(void* param) {
   cleanupBasicInfo(&pInfo->binfo);
   destroyStreamAggSupporter(&pInfo->streamAggSup);
   clearGroupResInfo(&pInfo->groupResInfo);
+  taosArrayDestroyP(pInfo->pUpdated, destroyFlusedPos);
+  pInfo->pUpdated = NULL;
+
   cleanupExprSupp(&pInfo->scalarSupp);
   if (pInfo->pChildren != NULL) {
     int32_t size = taosArrayGetSize(pInfo->pChildren);
@@ -60,7 +63,6 @@ void destroyStreamEventOperatorInfo(void* param) {
   tSimpleHashCleanup(pInfo->pSeUpdated);
   tSimpleHashCleanup(pInfo->pAllUpdated);
   tSimpleHashCleanup(pInfo->pSeDeleted);
-  pInfo->pUpdated = taosArrayDestroy(pInfo->pUpdated);
   cleanupGroupResInfo(&pInfo->groupResInfo);
 
   taosArrayDestroy(pInfo->historyWins);
@@ -310,7 +312,7 @@ static void doStreamEventAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBl
   blockDataEnsureCapacity(pAggSup->pScanBlock, rows);
   for (int32_t i = 0; i < rows; i += winRows) {
     if (pInfo->ignoreExpiredData && checkExpiredData(&pInfo->streamAggSup.stateStore, pInfo->streamAggSup.pUpdateInfo,
-                                                     &pInfo->twAggSup, pSDataBlock->info.id.uid, tsCols[i])) {
+                                                     &pInfo->twAggSup, pSDataBlock->info.id.uid, tsCols[i], NULL, 0)) {
       i++;
       continue;
     }
@@ -776,7 +778,7 @@ SOperatorInfo* createStreamEventAggOperatorInfo(SOperatorInfo* downstream, SPhys
   pOperator->fpSet = createOperatorFpSet(optrDummyOpenFn, doStreamEventAgg, NULL, destroyStreamEventOperatorInfo,
                                          optrDefaultBufFn, NULL, optrDefaultGetNextExtFn, NULL);
   setOperatorStreamStateFn(pOperator, streamEventReleaseState, streamEventReloadState);
-  initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup);
+  initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup, &pInfo->basic);
   code = appendDownstream(pOperator, &downstream, 1);
   if (code != TSDB_CODE_SUCCESS) {
     goto _error;
@@ -388,15 +388,20 @@ static void doBuildDeleteResult(SStreamIntervalOperatorInfo* pInfo, SArray* pWin
   }
 }
 
-void clearGroupResInfo(SGroupResInfo* pGroupResInfo) {
-  if (pGroupResInfo->freeItem) {
-    int32_t size = taosArrayGetSize(pGroupResInfo->pRows);
-    for (int32_t i = pGroupResInfo->index; i < size; i++) {
-      SRowBuffPos* pPos = taosArrayGetP(pGroupResInfo->pRows, i);
-      if (!pPos->needFree && !pPos->pRowBuff) {
-        taosMemoryFreeClear(pPos->pKey);
-        taosMemoryFree(pPos);
-      }
+void destroyFlusedPos(void* pRes) {
+  SRowBuffPos* pPos = (SRowBuffPos*) pRes;
+  if (!pPos->needFree && !pPos->pRowBuff) {
+    taosMemoryFreeClear(pPos->pKey);
+    taosMemoryFree(pPos);
+  }
+}
+
+void clearGroupResInfo(SGroupResInfo* pGroupResInfo) {
+  if (pGroupResInfo->freeItem) {
+    int32_t size = taosArrayGetSize(pGroupResInfo->pRows);
+    for (int32_t i = pGroupResInfo->index; i < size; i++) {
+      void* pPos = taosArrayGetP(pGroupResInfo->pRows, i);
+      destroyFlusedPos(pPos);
     }
     pGroupResInfo->freeItem = false;
   }
 }
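The hunk above extracts the per-element cleanup into `destroyFlusedPos` so the same destructor can be reused both by `clearGroupResInfo` and as the callback passed to `taosArrayDestroyP` in the destroy paths below. A minimal sketch of that destructor-callback pattern, with hypothetical stand-ins (`Array`, `arrayDestroyP`, `destroyItem`) for the TDengine types:

```c
#include <assert.h>
#include <stdlib.h>

// Hypothetical stand-in for an SArray holding owned pointers.
typedef struct {
  void** items;
  int    size;
} Array;

static int freedCount = 0;  // test instrumentation only

// Per-element destructor, analogous to destroyFlusedPos().
static void destroyItem(void* p) {
  free(p);
  freedCount++;
}

// Analogous to taosArrayDestroyP(pArr, destroyFlusedPos): destroy every
// element through the caller-supplied callback, then the array storage.
static void arrayDestroyP(Array* arr, void (*fp)(void*)) {
  for (int i = 0; i < arr->size; i++) fp(arr->items[i]);
  free(arr->items);
  arr->items = NULL;
  arr->size = 0;
}
```

Factoring the destructor out this way lets `pInfo->pUpdated` be torn down with one call instead of an open-coded loop in each operator's destroy function.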
@@ -409,6 +414,8 @@ void destroyStreamFinalIntervalOperatorInfo(void* param) {
   cleanupBasicInfo(&pInfo->binfo);
   cleanupAggSup(&pInfo->aggSup);
   clearGroupResInfo(&pInfo->groupResInfo);
+  taosArrayDestroyP(pInfo->pUpdated, destroyFlusedPos);
+  pInfo->pUpdated = NULL;
 
   // it should be empty.
   void* pIte = NULL;
@@ -437,7 +444,6 @@ void destroyStreamFinalIntervalOperatorInfo(void* param) {
   cleanupExprSupp(&pInfo->scalarSupp);
   tSimpleHashCleanup(pInfo->pUpdatedMap);
   pInfo->pUpdatedMap = NULL;
-  pInfo->pUpdated = taosArrayDestroy(pInfo->pUpdated);
   tSimpleHashCleanup(pInfo->pDeletedMap);
 
   blockDataDestroy(pInfo->pCheckpointRes);
@@ -481,13 +487,14 @@ void initIntervalDownStream(SOperatorInfo* downstream, uint16_t type, SStreamInt
   pScanInfo->windowSup.pIntervalAggSup = &pInfo->aggSup;
   if (!pScanInfo->pUpdateInfo) {
     pScanInfo->pUpdateInfo =
-        pAPI->updateInfoInitP(&pInfo->interval, pInfo->twAggSup.waterMark, pScanInfo->igCheckUpdate);
+        pAPI->updateInfoInitP(&pInfo->interval, pInfo->twAggSup.waterMark, pScanInfo->igCheckUpdate, pScanInfo->pkColType, pScanInfo->pkColLen);
   }
 
   pScanInfo->interval = pInfo->interval;
   pScanInfo->twAggSup = pInfo->twAggSup;
   pScanInfo->pState = pInfo->pState;
   pInfo->pUpdateInfo = pScanInfo->pUpdateInfo;
+  pInfo->basic.primaryPkIndex = pScanInfo->primaryKeyIndex;
 }
 
 void compactFunctions(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int32_t numOfOutput,
@@ -820,6 +827,10 @@ static int32_t getNextQualifiedFinalWindow(SInterval* pInterval, STimeWindow* pN
   return startPos;
 }
 
+bool hasSrcPrimaryKeyCol(SSteamOpBasicInfo* pInfo) {
+  return pInfo->primaryPkIndex != -1;
+}
+
 static void doStreamIntervalAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBlock, uint64_t groupId,
                                     SSHashObj* pUpdatedMap, SSHashObj* pDeletedMap) {
   SStreamIntervalOperatorInfo* pInfo = (SStreamIntervalOperatorInfo*)pOperator->info;
@@ -839,6 +850,13 @@ static void doStreamIntervalAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDat
   SColumnInfoData* pColDataInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
   tsCols = (int64_t*)pColDataInfo->pData;
 
+  void* pPkVal = NULL;
+  int32_t pkLen = 0;
+  SColumnInfoData* pPkColDataInfo = NULL;
+  if (hasSrcPrimaryKeyCol(&pInfo->basic)) {
+    pPkColDataInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
+  }
+
   if (pSDataBlock->info.window.skey != tsCols[0] || pSDataBlock->info.window.ekey != tsCols[endRowId]) {
     qError("table uid %" PRIu64 " data block timestamp range may not be calculated! minKey %" PRId64
            ",maxKey %" PRId64,
@@ -862,9 +880,15 @@ static void doStreamIntervalAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDat
   }
   while (1) {
     bool isClosed = isCloseWindow(&nextWin, &pInfo->twAggSup);
+    if (hasSrcPrimaryKeyCol(&pInfo->basic) && !IS_FINAL_INTERVAL_OP(pOperator) && pInfo->ignoreExpiredData &&
+        pSDataBlock->info.type != STREAM_PULL_DATA) {
+      pPkVal = colDataGetData(pPkColDataInfo, startPos);
+      pkLen = colDataGetRowLength(pPkColDataInfo, startPos);
+    }
+
     if ((!IS_FINAL_INTERVAL_OP(pOperator) && pInfo->ignoreExpiredData && pSDataBlock->info.type != STREAM_PULL_DATA &&
          checkExpiredData(&pInfo->stateStore, pInfo->pUpdateInfo, &pInfo->twAggSup, pSDataBlock->info.id.uid,
-                          nextWin.ekey)) ||
+                          nextWin.ekey, pPkVal, pkLen)) ||
         !inSlidingWindow(&pInfo->interval, &nextWin, &pSDataBlock->info)) {
       startPos = getNexWindowPos(&pInfo->interval, &pSDataBlock->info, tsCols, startPos, nextWin.ekey, &nextWin);
       if (startPos < 0) {
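The new `hasSrcPrimaryKeyCol` helper above uses `-1` as a sentinel meaning "the scan node published no additional primary-key column". A minimal sketch of that sentinel-index check, with `BasicInfo` standing in for `SSteamOpBasicInfo`:

```c
#include <assert.h>
#include <stdbool.h>

// Sketch of SSteamOpBasicInfo, reduced to the one field the check needs.
// primaryPkIndex == -1 means no primary-key column was found downstream.
typedef struct {
  int primaryPkIndex;
} BasicInfo;

static bool hasSrcPrimaryKeyCol(const BasicInfo* p) {
  return p->primaryPkIndex != -1;
}
```

Guarding the per-row `colDataGetData`/`colDataGetRowLength` fetches behind this predicate keeps the hot aggregation loop unchanged for tables without a composite primary key.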
@@ -1664,6 +1688,8 @@ void destroyStreamSessionAggOperatorInfo(void* param) {
   destroyStreamAggSupporter(&pInfo->streamAggSup);
   cleanupExprSupp(&pInfo->scalarSupp);
   clearGroupResInfo(&pInfo->groupResInfo);
+  taosArrayDestroyP(pInfo->pUpdated, destroyFlusedPos);
+  pInfo->pUpdated = NULL;
 
   if (pInfo->pChildren != NULL) {
     int32_t size = taosArrayGetSize(pInfo->pChildren);
@@ -1679,7 +1705,6 @@ void destroyStreamSessionAggOperatorInfo(void* param) {
   blockDataDestroy(pInfo->pWinBlock);
   tSimpleHashCleanup(pInfo->pStUpdated);
   tSimpleHashCleanup(pInfo->pStDeleted);
-  pInfo->pUpdated = taosArrayDestroy(pInfo->pUpdated);
   cleanupGroupResInfo(&pInfo->groupResInfo);
 
   taosArrayDestroy(pInfo->historyWins);
@@ -1715,14 +1740,14 @@ void initDummyFunction(SqlFunctionCtx* pDummy, SqlFunctionCtx* pCtx, int32_t num
 }
 
 void initDownStream(SOperatorInfo* downstream, SStreamAggSupporter* pAggSup, uint16_t type, int32_t tsColIndex,
-                    STimeWindowAggSupp* pTwSup) {
+                    STimeWindowAggSupp* pTwSup, struct SSteamOpBasicInfo* pBasic) {
   if (downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_PARTITION) {
     SStreamPartitionOperatorInfo* pScanInfo = downstream->info;
     pScanInfo->tsColIndex = tsColIndex;
   }
 
   if (downstream->operatorType != QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
-    initDownStream(downstream->pDownstream[0], pAggSup, type, tsColIndex, pTwSup);
+    initDownStream(downstream->pDownstream[0], pAggSup, type, tsColIndex, pTwSup, pBasic);
     return;
   }
   SStreamScanInfo* pScanInfo = downstream->info;
@@ -1730,10 +1755,11 @@ void initDownStream(SOperatorInfo* downstream, SStreamAggSupporter* pAggSup, uin
   pScanInfo->pState = pAggSup->pState;
   if (!pScanInfo->pUpdateInfo) {
     pScanInfo->pUpdateInfo = pAggSup->stateStore.updateInfoInit(60000, TSDB_TIME_PRECISION_MILLI, pTwSup->waterMark,
                                                                 pScanInfo->igCheckUpdate, pScanInfo->pkColType, pScanInfo->pkColLen);
   }
   pScanInfo->twAggSup = *pTwSup;
   pAggSup->pUpdateInfo = pScanInfo->pUpdateInfo;
+  pBasic->primaryPkIndex = pScanInfo->primaryKeyIndex;
 }
 
 static TSKEY sesionTs(void* pKey) {
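The `initDownStream` change above threads a new `pBasic` out-parameter through the recursive downstream walk so that the scan node at the leaf can publish its primary-key column index back to the window operator. A minimal sketch of that pattern, with hypothetical `Node`/`BasicInfo` types in place of the operator structs:

```c
#include <assert.h>
#include <stddef.h>

// Hypothetical operator node: 'down' is the downstream operator,
// NULL at the scan leaf; pkIndex is meaningful only at the leaf.
typedef struct Node {
  struct Node* down;
  int          pkIndex;
} Node;

// Stand-in for SSteamOpBasicInfo.
typedef struct {
  int primaryPkIndex;
} BasicInfo;

// Walk downstream until the scan leaf, then publish its pk column
// index into the caller-owned BasicInfo (as initDownStream now does).
static void initDownStreamSketch(Node* n, BasicInfo* pBasic) {
  if (n->down != NULL) {
    initDownStreamSketch(n->down, pBasic);  // recurse toward the scan node
    return;
  }
  pBasic->primaryPkIndex = n->pkIndex;  // leaf publishes its pk column
}
```

Passing the struct down once at operator-creation time avoids having every window operator re-discover the scan node's primary-key layout at runtime.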
@@ -2106,10 +2132,22 @@ static void doStreamSessionAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSData
   }
 
   TSKEY* endTsCols = (int64_t*)pEndTsCol->pData;
 
+  void* pPkVal = NULL;
+  int32_t pkLen = 0;
+  SColumnInfoData* pPkColDataInfo = NULL;
+  if (hasSrcPrimaryKeyCol(&pInfo->basic)) {
+    pPkColDataInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
+  }
+
   for (int32_t i = 0; i < rows;) {
+    if (hasSrcPrimaryKeyCol(&pInfo->basic) && !IS_FINAL_SESSION_OP(pOperator) && pInfo->ignoreExpiredData) {
+      pPkVal = colDataGetData(pPkColDataInfo, i);
+      pkLen = colDataGetRowLength(pPkColDataInfo, i);
+    }
     if (!IS_FINAL_SESSION_OP(pOperator) && pInfo->ignoreExpiredData &&
         checkExpiredData(&pInfo->streamAggSup.stateStore, pInfo->streamAggSup.pUpdateInfo, &pInfo->twAggSup,
-                         pSDataBlock->info.id.uid, endTsCols[i])) {
+                         pSDataBlock->info.id.uid, endTsCols[i], pPkVal, pkLen)) {
       i++;
       continue;
     }
@@ -3051,7 +3089,7 @@ SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, SPh
   setOperatorStreamStateFn(pOperator, streamSessionReleaseState, streamSessionReloadState);
 
   if (downstream) {
-    initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup);
+    initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup, &pInfo->basic);
     code = appendDownstream(pOperator, &downstream, 1);
   }
   return pOperator;
@@ -3250,6 +3288,9 @@ void destroyStreamStateOperatorInfo(void* param) {
   cleanupBasicInfo(&pInfo->binfo);
   destroyStreamAggSupporter(&pInfo->streamAggSup);
   clearGroupResInfo(&pInfo->groupResInfo);
+  taosArrayDestroyP(pInfo->pUpdated, destroyFlusedPos);
+  pInfo->pUpdated = NULL;
+
   cleanupExprSupp(&pInfo->scalarSupp);
   if (pInfo->pChildren != NULL) {
     int32_t size = taosArrayGetSize(pInfo->pChildren);
@@ -3263,7 +3304,6 @@ void destroyStreamStateOperatorInfo(void* param) {
   blockDataDestroy(pInfo->pDelRes);
   tSimpleHashCleanup(pInfo->pSeUpdated);
   tSimpleHashCleanup(pInfo->pSeDeleted);
-  pInfo->pUpdated = taosArrayDestroy(pInfo->pUpdated);
   cleanupGroupResInfo(&pInfo->groupResInfo);
 
   taosArrayDestroy(pInfo->historyWins);
@@ -3481,7 +3521,7 @@ static void doStreamStateAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBl
   SColumnInfoData* pKeyColInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->stateCol.slotId);
   for (int32_t i = 0; i < rows; i += winRows) {
     if (pInfo->ignoreExpiredData && checkExpiredData(&pInfo->streamAggSup.stateStore, pInfo->streamAggSup.pUpdateInfo,
-                                                     &pInfo->twAggSup, pSDataBlock->info.id.uid, tsCols[i]) ||
+                                                     &pInfo->twAggSup, pSDataBlock->info.id.uid, tsCols[i], NULL, 0) ||
         colDataIsNull_s(pKeyColInfo, i)) {
       i++;
       continue;
@@ -3948,7 +3988,7 @@ SOperatorInfo* createStreamStateAggOperatorInfo(SOperatorInfo* downstream, SPhys
   pOperator->fpSet = createOperatorFpSet(optrDummyOpenFn, doStreamStateAgg, NULL, destroyStreamStateOperatorInfo,
                                          optrDefaultBufFn, NULL, optrDefaultGetNextExtFn, NULL);
   setOperatorStreamStateFn(pOperator, streamStateReleaseState, streamStateReloadState);
-  initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup);
+  initDownStream(downstream, &pInfo->streamAggSup, pOperator->operatorType, pInfo->primaryTsIndex, &pInfo->twAggSup, &pInfo->basic);
   code = appendDownstream(pOperator, &downstream, 1);
   if (code != TSDB_CODE_SUCCESS) {
     goto _error;
@@ -720,7 +720,6 @@ static SSDataBlock* sysTableScanUserTags(SOperatorInfo* pOperator) {
     pAPI->metaFn.resumeTableMetaCursor(pInfo->pCur, 0, 0);
   }
 
-  bool blockFull = false;
   while ((ret = pAPI->metaFn.cursorNext(pInfo->pCur, TSDB_SUPER_TABLE)) == 0) {
     if (pInfo->pCur->mr.me.type != TSDB_CHILD_TABLE) {
       continue;
@@ -743,25 +742,19 @@ static SSDataBlock* sysTableScanUserTags(SOperatorInfo* pOperator) {
     }
 
     if ((smrSuperTable.me.stbEntry.schemaTag.nCols + numOfRows) > pOperator->resultInfo.capacity) {
-      blockFull = true;
-    } else {
-      sysTableUserTagsFillOneTableTags(pInfo, &smrSuperTable, &pInfo->pCur->mr, dbname, tableName, &numOfRows,
-                                       dataBlock);
-    }
-
-    pAPI->metaReaderFn.clearReader(&smrSuperTable);
-
-    if (blockFull || numOfRows >= pOperator->resultInfo.capacity) {
       relocateAndFilterSysTagsScanResult(pInfo, numOfRows, dataBlock, pOperator->exprSupp.pFilterInfo);
       numOfRows = 0;
 
       if (pInfo->pRes->info.rows > 0) {
         pAPI->metaFn.pauseTableMetaCursor(pInfo->pCur);
+        pAPI->metaReaderFn.clearReader(&smrSuperTable);
         break;
       }
-      blockFull = false;
+    } else {
+      sysTableUserTagsFillOneTableTags(pInfo, &smrSuperTable, &pInfo->pCur->mr, dbname, tableName, &numOfRows,
+                                       dataBlock);
     }
+    pAPI->metaReaderFn.clearReader(&smrSuperTable);
   }
 
   if (numOfRows > 0) {
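The rework above drops the `blockFull` flag: when the next table's tags would overflow the result buffer, the loop now flushes first and retries that table on the next iteration, instead of tracking fullness across iterations. A minimal sketch of that flush-then-retry control flow, with hypothetical `flushRows`/`scanAll` helpers and a fixed capacity in place of `resultInfo.capacity`:

```c
#include <assert.h>

#define CAPACITY 4

static int flushed = 0;  // total rows emitted by flushRows(), for inspection

// Stand-in for relocateAndFilterSysTagsScanResult(): emit and reset the buffer.
static void flushRows(int* buf, int* n) {
  (void)buf;
  flushed += *n;
  *n = 0;
}

// Restructured loop: if the next item would overflow the buffer, flush
// first and retry the same item; otherwise fill its rows and advance.
static int scanAll(int totalItems, int rowsPerItem) {
  int buf[CAPACITY * 2];
  int n = 0;
  for (int i = 0; i < totalItems;) {
    if (n + rowsPerItem > CAPACITY) {
      flushRows(buf, &n);  // flush, then retry item i on the next pass
    } else {
      for (int r = 0; r < rowsPerItem; r++) buf[n++] = i;
      i++;
    }
  }
  flushRows(buf, &n);  // final partial batch
  return flushed;
}
```

Checking capacity before filling (rather than after, with a carried flag) means no row is ever written past the buffer and no state leaks between loop iterations.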
@@ -300,6 +300,9 @@ static int32_t joinTableNodeCopy(const SJoinTableNode* pSrc, SJoinTableNode* pDs
   COPY_BASE_OBJECT_FIELD(table, tableNodeCopy);
   COPY_SCALAR_FIELD(joinType);
   COPY_SCALAR_FIELD(subType);
+  CLONE_NODE_FIELD(pWindowOffset);
+  CLONE_NODE_FIELD(pJLimit);
+  CLONE_NODE_FIELD(addPrimCond);
   COPY_SCALAR_FIELD(hasSubQuery);
   COPY_SCALAR_FIELD(isLowLevelJoin);
   CLONE_NODE_FIELD(pLeft);
@@ -745,7 +745,6 @@ SNode* createTimeOffsetValueNode(SAstCreateContext* pCxt, const SToken* pLiteral
   return (SNode*)val;
 }
 
-
 SNode* createDefaultDatabaseCondValue(SAstCreateContext* pCxt) {
   CHECK_PARSER_STATUS(pCxt);
   if (NULL == pCxt->pQueryCxt->db) {
@@ -965,7 +964,8 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, const STok
   return (SNode*)tempTable;
 }
 
-SNode* createJoinTableNode(SAstCreateContext* pCxt, EJoinType type, EJoinSubType stype, SNode* pLeft, SNode* pRight, SNode* pJoinCond) {
+SNode* createJoinTableNode(SAstCreateContext* pCxt, EJoinType type, EJoinSubType stype, SNode* pLeft, SNode* pRight,
+                           SNode* pJoinCond) {
   CHECK_PARSER_STATUS(pCxt);
   SJoinTableNode* joinTable = (SJoinTableNode*)nodesMakeNode(QUERY_NODE_JOIN_TABLE);
   CHECK_OUT_OF_MEM(joinTable);
@@ -1264,7 +1264,6 @@ SNode* addFillClause(SAstCreateContext* pCxt, SNode* pStmt, SNode* pFill) {
   return pStmt;
 }
 
-
 SNode* addJLimitClause(SAstCreateContext* pCxt, SNode* pJoin, SNode* pJLimit) {
   CHECK_PARSER_STATUS(pCxt);
   if (NULL == pJLimit) {
@@ -1276,7 +1275,6 @@ SNode* addJLimitClause(SAstCreateContext* pCxt, SNode* pJoin, SNode* pJLimit) {
   return pJoin;
 }
 
-
 SNode* addWindowOffsetClause(SAstCreateContext* pCxt, SNode* pJoin, SNode* pWinOffset) {
   CHECK_PARSER_STATUS(pCxt);
   if (NULL == pWinOffset) {
@@ -1288,7 +1286,6 @@ SNode* addWindowOffsetClause(SAstCreateContext* pCxt, SNode* pJoin, SNode* pWinO
   return pJoin;
 }
 
-
 SNode* createSelectStmt(SAstCreateContext* pCxt, bool isDistinct, SNodeList* pProjectionList, SNode* pTable,
                         SNodeList* pHint) {
   CHECK_PARSER_STATUS(pCxt);
@@ -1744,14 +1741,14 @@ SNode* setColumnOptions(SAstCreateContext* pCxt, SNode* pOptions, EColumnOptionT
       memset(((SColumnOptions*)pOptions)->compress, 0, TSDB_CL_COMPRESS_OPTION_LEN);
       COPY_STRING_FORM_STR_TOKEN(((SColumnOptions*)pOptions)->compress, (SToken*)pVal);
       if (0 == strlen(((SColumnOptions*)pOptions)->compress)) {
-        pCxt->errCode = TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
+        pCxt->errCode = TSDB_CODE_TSC_COMPRESS_PARAM_ERROR;
       }
       break;
     case COLUMN_OPTION_LEVEL:
       memset(((SColumnOptions*)pOptions)->compressLevel, 0, TSDB_CL_COMPRESS_OPTION_LEN);
       COPY_STRING_FORM_STR_TOKEN(((SColumnOptions*)pOptions)->compressLevel, (SToken*)pVal);
       if (0 == strlen(((SColumnOptions*)pOptions)->compressLevel)) {
-        pCxt->errCode = TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
+        pCxt->errCode = TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR;
       }
       break;
     case COLUMN_OPTION_PRIMARYKEY:
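The `setColumnOptions` hunk above replaces the catch-all `TSDB_CODE_TSC_ENCODE_PARAM_ERROR` with option-specific codes, so a bad `COMPRESS` or `LEVEL` value reports as such. A minimal sketch of that kind-to-code mapping, with illustrative enum values (the real codes live in TDengine's error-code header):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Illustrative codes only; the real values are defined by taoserror.
enum { OK = 0, ERR_ENCODE = 1, ERR_COMPRESS = 2, ERR_COMPRESS_LEVEL = 3 };
enum { OPT_ENCODE, OPT_COMPRESS, OPT_LEVEL };

// An empty option value is rejected with a code that names the option
// that failed, rather than one shared "encode param" code for all three.
static int validateColumnOption(int kind, const char* value) {
  if (value != NULL && strlen(value) > 0) return OK;
  switch (kind) {
    case OPT_ENCODE:   return ERR_ENCODE;
    case OPT_COMPRESS: return ERR_COMPRESS;
    case OPT_LEVEL:    return ERR_COMPRESS_LEVEL;
  }
  return ERR_ENCODE;
}
```

Distinct codes let the client report which of `ENCODE`, `COMPRESS`, or `LEVEL` was invalid instead of a generic encode-parameter error.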
@@ -1789,7 +1786,7 @@ SDataType createDataType(uint8_t type) {
 SDataType createVarLenDataType(uint8_t type, const SToken* pLen) {
   int32_t len = TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE;
   if (type == TSDB_DATA_TYPE_NCHAR) len /= TSDB_NCHAR_SIZE;
-  if(pLen) len = taosStr2Int32(pLen->z, NULL, 10);
+  if (pLen) len = taosStr2Int32(pLen->z, NULL, 10);
   SDataType dt = {.type = type, .precision = 0, .scale = 0, .bytes = len};
   return dt;
 }
@@ -1895,8 +1892,8 @@ SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable,
   return createAlterTableStmtFinalize(pRealTable, pStmt);
 }
 
-SNode* createAlterTableAddModifyColOptions(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName,
-                                           SNode* pOptions) {
+SNode* createAlterTableAddModifyColOptions(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
+                                           SToken* pColName, SNode* pOptions) {
   CHECK_PARSER_STATUS(pCxt);
   if (!checkColumnName(pCxt, pColName)) {
     return NULL;
@@ -2965,7 +2962,7 @@ SNode* createTSMAOptions(SAstCreateContext* pCxt, SNodeList* pFuncs) {
   CHECK_PARSER_STATUS(pCxt);
   STSMAOptions* pOptions = (STSMAOptions*)nodesMakeNode(QUERY_NODE_TSMA_OPTIONS);
   if (!pOptions) {
-    //nodesDestroyList(pTSMAFuncs);
+    // nodesDestroyList(pTSMAFuncs);
     pCxt->errCode = TSDB_CODE_OUT_OF_MEMORY;
     snprintf(pCxt->pQueryCxt->pMsg, pCxt->pQueryCxt->msgLen, "Out of memory");
     return NULL;
@@ -4394,7 +4394,6 @@ int32_t translateTable(STranslateContext* pCxt, SNode** pTable, SNode* pJoinPare
     }
     case QUERY_NODE_JOIN_TABLE: {
       SJoinTableNode* pJoinTable = (SJoinTableNode*)*pTable;
-      pJoinTable->pParent = pJoinParent;
       code = translateJoinTable(pCxt, pJoinTable);
       if (TSDB_CODE_SUCCESS == code) {
         code = translateTable(pCxt, &pJoinTable->pLeft, (SNode*)pJoinTable);
@@ -5714,7 +5713,7 @@ static int32_t setEqualTbnameTableVgroups(STranslateContext* pCxt, SSelectStmt*
 
     for (int32_t i = 0; i < pInfo->pRealTable->pTsmas->size; ++i) {
       STableTSMAInfo* pTsma = taosArrayGetP(pInfo->pRealTable->pTsmas, i);
-      SArray *pTbNames = taosArrayInit(pInfo->aTbnames->size, POINTER_BYTES);
+      SArray* pTbNames = taosArrayInit(pInfo->aTbnames->size, POINTER_BYTES);
       if (!pTbNames) return TSDB_CODE_OUT_OF_MEMORY;
 
       for (int32_t k = 0; k < pInfo->aTbnames->size; ++k) {
@@ -7225,9 +7224,9 @@ static int32_t checkColumnOptions(SNodeList* pList) {
     if (!checkColumnEncodeOrSetDefault(pCol->dataType.type, ((SColumnOptions*)pCol->pOptions)->encode))
       return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
     if (!checkColumnCompressOrSetDefault(pCol->dataType.type, ((SColumnOptions*)pCol->pOptions)->compress))
-      return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
+      return TSDB_CODE_TSC_COMPRESS_PARAM_ERROR;
     if (!checkColumnLevelOrSetDefault(pCol->dataType.type, ((SColumnOptions*)pCol->pOptions)->compressLevel))
-      return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
+      return TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR;
   }
   return TSDB_CODE_SUCCESS;
 }
@@ -7742,7 +7741,7 @@ static int32_t addWdurationToSampleProjects(SNodeList* pProjectionList) {
   return nodesListAppend(pProjectionList, (SNode*)pFunc);
 }
 
-static int32_t buildProjectsForSampleAst(SSampleAstInfo* pInfo, SNodeList** pList, int32_t *pProjectionTotalLen) {
+static int32_t buildProjectsForSampleAst(SSampleAstInfo* pInfo, SNodeList** pList, int32_t* pProjectionTotalLen) {
   SNodeList* pProjectionList = pInfo->pFuncs;
   pInfo->pFuncs = NULL;
 
@@ -8118,13 +8117,15 @@ static int32_t buildAlterSuperTableReq(STranslateContext* pCxt, SAlterTableStmt*
       TAOS_FIELD field = {0};
       strcpy(field.name, pStmt->colName);
       if (!checkColumnEncode(pStmt->pColOptions->encode)) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-      if (!checkColumnCompress(pStmt->pColOptions->compress)) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-      if (!checkColumnLevel(pStmt->pColOptions->compressLevel)) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-      int8_t valid =
+      if (!checkColumnCompress(pStmt->pColOptions->compress)) return TSDB_CODE_TSC_COMPRESS_PARAM_ERROR;
+      if (!checkColumnLevel(pStmt->pColOptions->compressLevel)) return TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR;
+      int32_t code =
           setColCompressByOption(pStmt->dataType.type, columnEncodeVal(pStmt->pColOptions->encode),
                                  columnCompressVal(pStmt->pColOptions->compress),
                                  columnLevelVal(pStmt->pColOptions->compressLevel), false, (uint32_t*)&field.bytes);
-      if (!valid) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
+      if (code != TSDB_CODE_SUCCESS) {
+        return code;
+      }
       taosArrayPush(pAlterReq->pFields, &field);
       break;
     }
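The `buildAlterSuperTableReq` hunk above switches `setColCompressByOption` from an `int8_t` success flag to a propagated `int32_t` error code, so the caller returns the validator's specific failure instead of substituting a generic one. A minimal sketch of that boolean-to-error-code refactor, with hypothetical names and illustrative code values:

```c
#include <assert.h>
#include <stdint.h>

enum { OK = 0, ERR_BAD_LEVEL = 101 };  // illustrative codes only

// Instead of returning a success flag, the validator returns the exact
// code the caller should propagate (as in the setColCompressByOption fix).
static int32_t setCompressByOptionSketch(int level, uint32_t* bytesOut) {
  if (level < 0 || level > 2) return ERR_BAD_LEVEL;
  *bytesOut = 4;  // illustrative size computation
  return OK;
}

// Caller pattern: check the code and forward it unchanged.
static int32_t buildReqSketch(int level) {
  uint32_t bytes = 0;
  int32_t  code = setCompressByOptionSketch(level, &bytes);
  if (code != OK) {
    return code;  // propagate the specific failure, not a generic one
  }
  return OK;
}
```

Returning the code directly removes the information loss where every validator failure collapsed into `TSDB_CODE_TSC_ENCODE_PARAM_ERROR`.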
@ -10726,7 +10727,8 @@ static int32_t deduplicateTsmaFuncs(SNodeList* pFuncs) {
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t buildTSMAAstStreamSubTable(SCreateTSMAStmt* pStmt, SMCreateSmaReq* pReq, const SNode* pTbname, SNode** pSubTable) {
|
static int32_t buildTSMAAstStreamSubTable(SCreateTSMAStmt* pStmt, SMCreateSmaReq* pReq, const SNode* pTbname,
|
||||||
|
SNode** pSubTable) {
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
SFunctionNode* pMd5Func = (SFunctionNode*)nodesMakeNode(QUERY_NODE_FUNCTION);
|
SFunctionNode* pMd5Func = (SFunctionNode*)nodesMakeNode(QUERY_NODE_FUNCTION);
|
||||||
SFunctionNode* pConcatFunc = (SFunctionNode*)nodesMakeNode(QUERY_NODE_FUNCTION);
|
SFunctionNode* pConcatFunc = (SFunctionNode*)nodesMakeNode(QUERY_NODE_FUNCTION);
|
||||||
|
@ -10768,8 +10770,8 @@ _end:
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t buildTSMAAst(STranslateContext* pCxt, SCreateTSMAStmt* pStmt, SMCreateSmaReq* pReq,
|
static int32_t buildTSMAAst(STranslateContext* pCxt, SCreateTSMAStmt* pStmt, SMCreateSmaReq* pReq, const char* tbName,
|
||||||
const char* tbName, int32_t numOfTags, const SSchema* pTags) {
|
int32_t numOfTags, const SSchema* pTags) {
|
||||||
int32_t code = TSDB_CODE_SUCCESS;
|
int32_t code = TSDB_CODE_SUCCESS;
|
||||||
SSampleAstInfo info = {0};
|
SSampleAstInfo info = {0};
|
||||||
info.createSmaIndex = true;
|
info.createSmaIndex = true;
|
@@ -10813,16 +10815,17 @@ static int32_t buildTSMAAst(STranslateContext* pCxt, SCreateTSMAStmt* pStmt, SMC
         if (!pTagCol) code = TSDB_CODE_OUT_OF_MEMORY;
       }
       if (code == TSDB_CODE_SUCCESS) {
-        code = buildTSMAAstStreamSubTable(pStmt, pReq, pStmt->pOptions->recursiveTsma ? pTagCol : (SNode*)pTbnameFunc, (SNode**)&pSubTable);
+        code = buildTSMAAstStreamSubTable(pStmt, pReq, pStmt->pOptions->recursiveTsma ? pTagCol : (SNode*)pTbnameFunc,
+                                          (SNode**)&pSubTable);
         info.pSubTable = (SNode*)pSubTable;
       }
       if (code == TSDB_CODE_SUCCESS)
-        code = nodesListMakeStrictAppend(&info.pTags, pStmt->pOptions->recursiveTsma ? pTagCol : nodesCloneNode((SNode*)pTbnameFunc));
+        code = nodesListMakeStrictAppend(
+            &info.pTags, pStmt->pOptions->recursiveTsma ? pTagCol : nodesCloneNode((SNode*)pTbnameFunc));
     }
   }

-  if (code == TSDB_CODE_SUCCESS && !pStmt->pOptions->recursiveTsma)
-    code = fmCreateStateFuncs(info.pFuncs);
+  if (code == TSDB_CODE_SUCCESS && !pStmt->pOptions->recursiveTsma) code = fmCreateStateFuncs(info.pFuncs);

   if (code == TSDB_CODE_SUCCESS) {
     int32_t pProjectionTotalLen = 0;
@@ -10914,7 +10917,8 @@ static int32_t rewriteTSMAFuncs(STranslateContext* pCxt, SCreateTSMAStmt* pStmt,
   return code;
 }

-static int32_t buildCreateTSMAReq(STranslateContext* pCxt, SCreateTSMAStmt* pStmt, SMCreateSmaReq* pReq, SName* useTbName) {
+static int32_t buildCreateTSMAReq(STranslateContext* pCxt, SCreateTSMAStmt* pStmt, SMCreateSmaReq* pReq,
+                                  SName* useTbName) {
   SName name;
   tNameExtractFullName(toName(pCxt->pParseCxt->acctId, pStmt->dbName, pStmt->tsmaName, &name), pReq->name);
   memset(&name, 0, sizeof(SName));
@@ -11022,7 +11026,7 @@ static int32_t translateCreateTSMA(STranslateContext* pCxt, SCreateTSMAStmt* pSt
   if (code == TSDB_CODE_SUCCESS) {
     code = buildCreateTSMAReq(pCxt, pStmt, pStmt->pReq, &useTbName);
   }
-  if ( TSDB_CODE_SUCCESS == code) {
+  if (TSDB_CODE_SUCCESS == code) {
     code = collectUseTable(&useTbName, pCxt->pTargetTables);
   }
   if (TSDB_CODE_SUCCESS == code) {
@@ -11063,7 +11067,8 @@ int32_t translatePostCreateTSMA(SParseContext* pParseCxt, SQuery* pQuery, SSData

   if (TSDB_CODE_SUCCESS == code) {
     if (interval.interval > 0) {
-      pStmt->pReq->lastTs = taosTimeAdd(taosTimeTruncate(lastTs, &interval), interval.interval, interval.intervalUnit, interval.precision);
+      pStmt->pReq->lastTs = taosTimeAdd(taosTimeTruncate(lastTs, &interval), interval.interval, interval.intervalUnit,
+                                        interval.precision);
     } else {
       pStmt->pReq->lastTs = lastTs + 1;  // start key of the next time window
     }
@@ -11074,7 +11079,7 @@ int32_t translatePostCreateTSMA(SParseContext* pParseCxt, SQuery* pQuery, SSData
     code = setQuery(&cxt, pQuery);
   }

-  if ( TSDB_CODE_SUCCESS == code) {
+  if (TSDB_CODE_SUCCESS == code) {
     SName name = {0};
     toName(pParseCxt->acctId, pStmt->dbName, pStmt->originalTbName, &name);
     code = collectUseTable(&name, cxt.pTargetTables);
@@ -12033,13 +12038,13 @@ static int32_t buildNormalTableBatchReq(int32_t acctId, const SCreateTableStmt*
     toSchema(pColDef, index + 1, pScheam);
     if (pColDef->pOptions) {
       req.colCmpr.pColCmpr[index].id = index + 1;
-      int8_t valid = setColCompressByOption(
+      int32_t code = setColCompressByOption(
           pScheam->type, columnEncodeVal(((SColumnOptions*)pColDef->pOptions)->encode),
           columnCompressVal(((SColumnOptions*)pColDef->pOptions)->compress),
           columnLevelVal(((SColumnOptions*)pColDef->pOptions)->compressLevel), true, &req.colCmpr.pColCmpr[index].alg);
-      if (!valid) {
+      if (code != TSDB_CODE_SUCCESS) {
         tdDestroySVCreateTbReq(&req);
-        return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
+        return code;
       }
     }
     ++index;
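The change above replaces `setColCompressByOption`'s boolean validity flag with a propagated `int32_t` error code, so the caller no longer collapses every failure into `TSDB_CODE_TSC_ENCODE_PARAM_ERROR`. A minimal sketch of that error-propagation pattern (all names here are hypothetical stand-ins, not the TDengine API):

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_CODE_SUCCESS            0
#define DEMO_CODE_ENCODE_PARAM_ERROR 1

/* Hypothetical stand-in for setColCompressByOption: returns a specific
 * error code instead of a bool, and writes the result via out-parameter. */
static int32_t demoSetCompress(int8_t type, int8_t encode, uint32_t* alg) {
  if (encode < 0) {
    return DEMO_CODE_ENCODE_PARAM_ERROR; /* callers forward this code as-is */
  }
  *alg = (uint32_t)((type << 8) | encode);
  return DEMO_CODE_SUCCESS;
}

/* Caller mirrors the patched call site: propagate the code, don't remap it. */
static int32_t demoBuildReq(int8_t type, int8_t encode, uint32_t* alg) {
  int32_t code = demoSetCompress(type, encode, alg);
  if (code != DEMO_CODE_SUCCESS) {
    return code; /* previously: always return TSDB_CODE_TSC_ENCODE_PARAM_ERROR */
  }
  return DEMO_CODE_SUCCESS;
}
```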
@@ -12499,7 +12504,6 @@ static int32_t buildDropTableVgroupHashmap(STranslateContext* pCxt, SDropTableCl
     goto over;
   }

-
   SVgroupInfo info = {0};
   if (TSDB_CODE_SUCCESS == code) {
     code = getTableHashVgroup(pCxt, pClause->dbName, pClause->tableName, &info);
@@ -12879,14 +12883,12 @@ static int buildAlterTableColumnCompress(STranslateContext* pCxt, SAlterTableStm
   }

   if (!checkColumnEncode(pStmt->pColOptions->encode)) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-  if (!checkColumnCompress(pStmt->pColOptions->compress)) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-  if (!checkColumnLevel(pStmt->pColOptions->compressLevel)) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-  int8_t valid = setColCompressByOption(pSchema->type, columnEncodeVal(pStmt->pColOptions->encode),
+  if (!checkColumnCompress(pStmt->pColOptions->compress)) return TSDB_CODE_TSC_COMPRESS_PARAM_ERROR;
+  if (!checkColumnLevel(pStmt->pColOptions->compressLevel)) return TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR;
+  int8_t code = setColCompressByOption(pSchema->type, columnEncodeVal(pStmt->pColOptions->encode),
                                        columnCompressVal(pStmt->pColOptions->compress),
                                        columnLevelVal(pStmt->pColOptions->compressLevel), true, &pReq->compress);
-  if (!valid) return TSDB_CODE_TSC_ENCODE_PARAM_ERROR;
-
-  return TSDB_CODE_SUCCESS;
+  return code;
 }

 static int32_t buildAlterTbReq(STranslateContext* pCxt, SAlterTableStmt* pStmt, STableMeta* pTableMeta,

@@ -315,19 +315,15 @@ STableMeta* tableMetaDup(const STableMeta* pTableMeta) {
   size_t schemaExtSize = hasSchemaExt ? pTableMeta->tableInfo.numOfColumns * sizeof(SSchemaExt) : 0;

   size_t      size = sizeof(STableMeta) + numOfFields * sizeof(SSchema);
-  int32_t     cpSize = sizeof(STableMeta) - sizeof(void*);
   STableMeta* p = taosMemoryMalloc(size + schemaExtSize);

   if (NULL == p) return NULL;

-  memcpy(p, pTableMeta, cpSize);
+  memcpy(p, pTableMeta, schemaExtSize + size);
   if (hasSchemaExt) {
     p->schemaExt = (SSchemaExt*)(((char*)p) + size);
-    memcpy(p->schemaExt, pTableMeta->schemaExt, schemaExtSize);
   } else {
     p->schemaExt = NULL;
   }
-  memcpy(p->schema, pTableMeta->schema, numOfFields * sizeof(SSchema));

   return p;
 }

@@ -60,6 +60,7 @@ bool keysHasCol(SNodeList* pKeys);
 bool keysHasTbname(SNodeList* pKeys);
 SFunctionNode* createGroupKeyAggFunc(SColumnNode* pGroupCol);
 int32_t getTimeRangeFromNode(SNode** pPrimaryKeyCond, STimeWindow* pTimeRange, bool* pIsStrict);
+int32_t tagScanSetExecutionMode(SScanLogicNode* pScan);

 #define CLONE_LIMIT 1
 #define CLONE_SLIMIT 1 << 1

@@ -392,60 +392,6 @@ static int32_t makeScanLogicNode(SLogicPlanContext* pCxt, SRealTableNode* pRealT

 static bool needScanDefaultCol(EScanType scanType) { return SCAN_TYPE_TABLE_COUNT != scanType; }

-static EDealRes tagScanNodeHasTbnameFunc(SNode* pNode, void* pContext) {
-  if (QUERY_NODE_FUNCTION == nodeType(pNode) && FUNCTION_TYPE_TBNAME == ((SFunctionNode*)pNode)->funcType ||
-      (QUERY_NODE_COLUMN == nodeType(pNode) && COLUMN_TYPE_TBNAME == ((SColumnNode*)pNode)->colType)) {
-    *(bool*)pContext = true;
-    return DEAL_RES_END;
-  }
-  return DEAL_RES_CONTINUE;
-}
-
-static bool tagScanNodeListHasTbname(SNodeList* pCols) {
-  bool hasTbname = false;
-  nodesWalkExprs(pCols, tagScanNodeHasTbnameFunc, &hasTbname);
-  return hasTbname;
-}
-
-static bool tagScanNodeHasTbname(SNode* pKeys) {
-  bool hasTbname = false;
-  nodesWalkExpr(pKeys, tagScanNodeHasTbnameFunc, &hasTbname);
-  return hasTbname;
-}
-
-static int32_t tagScanSetExecutionMode(SScanLogicNode* pScan) {
-  pScan->onlyMetaCtbIdx = false;
-
-  if (pScan->tableType == TSDB_CHILD_TABLE) {
-    pScan->onlyMetaCtbIdx = false;
-    return TSDB_CODE_SUCCESS;
-  }
-
-  if (tagScanNodeListHasTbname(pScan->pScanPseudoCols)) {
-    pScan->onlyMetaCtbIdx = false;
-    return TSDB_CODE_SUCCESS;
-  }
-
-  if (pScan->node.pConditions == NULL) {
-    pScan->onlyMetaCtbIdx = true;
-    return TSDB_CODE_SUCCESS;
-  }
-
-  SNode* pCond = nodesCloneNode(pScan->node.pConditions);
-  SNode* pTagCond = NULL;
-  SNode* pTagIndexCond = NULL;
-  filterPartitionCond(&pCond, NULL, &pTagIndexCond, &pTagCond, NULL);
-  if (pTagIndexCond || tagScanNodeHasTbname(pTagCond)) {
-    pScan->onlyMetaCtbIdx = false;
-  } else {
-    pScan->onlyMetaCtbIdx = true;
-  }
-  nodesDestroyNode(pCond);
-  nodesDestroyNode(pTagIndexCond);
-  nodesDestroyNode(pTagCond);
-  return TSDB_CODE_SUCCESS;
-}
-
 static int32_t createScanLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect, SRealTableNode* pRealTable,
                                    SLogicNode** pLogicNode) {
   SScanLogicNode* pScan = NULL;

@@ -5142,7 +5142,6 @@ int32_t stbJoinOptRewriteToTagScan(SLogicNode* pJoin, SNode* pNode) {
   NODES_DESTORY_NODE(pScan->node.pConditions);
   pScan->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
   pScan->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
-  pScan->onlyMetaCtbIdx = true;

   SNodeList* pTags = nodesMakeList();
   int32_t    code = nodesCollectColumnsFromNode(pJoinNode->pTagEqCond, NULL, COLLECT_COL_TYPE_TAG, &pTags);

@@ -5177,6 +5176,8 @@ int32_t stbJoinOptRewriteToTagScan(SLogicNode* pJoin, SNode* pNode) {
     code = stbJoinOptAddFuncToScanNode("_vgid", pScan);
   }

+  tagScanSetExecutionMode(pScan);
+
   if (code) {
     nodesDestroyList(pTags);
   }

@@ -615,3 +615,61 @@ int32_t getTimeRangeFromNode(SNode** pPrimaryKeyCond, STimeWindow* pTimeRange, b
 }

+
+static EDealRes tagScanNodeHasTbnameFunc(SNode* pNode, void* pContext) {
+  if (QUERY_NODE_FUNCTION == nodeType(pNode) && FUNCTION_TYPE_TBNAME == ((SFunctionNode*)pNode)->funcType ||
+      (QUERY_NODE_COLUMN == nodeType(pNode) && COLUMN_TYPE_TBNAME == ((SColumnNode*)pNode)->colType)) {
+    *(bool*)pContext = true;
+    return DEAL_RES_END;
+  }
+  return DEAL_RES_CONTINUE;
+}
+
+static bool tagScanNodeListHasTbname(SNodeList* pCols) {
+  bool hasTbname = false;
+  nodesWalkExprs(pCols, tagScanNodeHasTbnameFunc, &hasTbname);
+  return hasTbname;
+}
+
+static bool tagScanNodeHasTbname(SNode* pKeys) {
+  bool hasTbname = false;
+  nodesWalkExpr(pKeys, tagScanNodeHasTbnameFunc, &hasTbname);
+  return hasTbname;
+}
+
+
+int32_t tagScanSetExecutionMode(SScanLogicNode* pScan) {
+  pScan->onlyMetaCtbIdx = false;
+
+  if (pScan->tableType == TSDB_CHILD_TABLE) {
+    pScan->onlyMetaCtbIdx = false;
+    return TSDB_CODE_SUCCESS;
+  }
+
+  if (tagScanNodeListHasTbname(pScan->pScanPseudoCols)) {
+    pScan->onlyMetaCtbIdx = false;
+    return TSDB_CODE_SUCCESS;
+  }
+
+  if (pScan->node.pConditions == NULL) {
+    pScan->onlyMetaCtbIdx = true;
+    return TSDB_CODE_SUCCESS;
+  }
+
+  SNode* pCond = nodesCloneNode(pScan->node.pConditions);
+  SNode* pTagCond = NULL;
+  SNode* pTagIndexCond = NULL;
+  filterPartitionCond(&pCond, NULL, &pTagIndexCond, &pTagCond, NULL);
+  if (pTagIndexCond || tagScanNodeHasTbname(pTagCond)) {
+    pScan->onlyMetaCtbIdx = false;
+  } else {
+    pScan->onlyMetaCtbIdx = true;
+  }
+  nodesDestroyNode(pCond);
+  nodesDestroyNode(pTagIndexCond);
+  nodesDestroyNode(pTagCond);
+  return TSDB_CODE_SUCCESS;
+}
+
+

@@ -475,6 +475,8 @@ int32_t schHandleDropCallback(void *param, SDataBuf *pMsg, int32_t code) {
   SSchTaskCallbackParam *pParam = (SSchTaskCallbackParam *)param;
   qDebug("QID:0x%" PRIx64 ",TID:0x%" PRIx64 " drop task rsp received, code:0x%x", pParam->queryId, pParam->taskId,
          code);
+  // called if drop task rsp received code
+  rpcReleaseHandle(pMsg->handle, TAOS_CONN_CLIENT);
   if (pMsg) {
     taosMemoryFree(pMsg->pData);
     taosMemoryFree(pMsg->pEpSet);

@@ -486,7 +488,6 @@ int32_t schHandleNotifyCallback(void *param, SDataBuf *pMsg, int32_t code) {
   SSchTaskCallbackParam *pParam = (SSchTaskCallbackParam *)param;
   qDebug("QID:0x%" PRIx64 ",TID:0x%" PRIx64 " task notify rsp received, code:0x%x", pParam->queryId, pParam->taskId,
          code);
-  rpcReleaseHandle(pMsg->handle, TAOS_CONN_CLIENT);
   if (pMsg) {
     taosMemoryFree(pMsg->pData);
     taosMemoryFree(pMsg->pEpSet);

@@ -2179,6 +2179,7 @@ int32_t copyDataAt(RocksdbCfInst* pSrc, STaskDbWrapper* pDst, int8_t i) {
   }

 _EXIT:
+  rocksdb_writebatch_destroy(wb);
   rocksdb_iter_destroy(pIter);
   rocksdb_readoptions_destroy(pRdOpt);
   taosMemoryFree(err);

@@ -1094,14 +1094,11 @@ _end:

 int32_t streamStatePutParName(SStreamState* pState, int64_t groupId, const char tbname[TSDB_TABLE_NAME_LEN]) {
 #ifdef USE_ROCKSDB
-  if (tSimpleHashGetSize(pState->parNameMap) > MAX_TABLE_NAME_NUM) {
-    if (tSimpleHashGet(pState->parNameMap, &groupId, sizeof(int64_t)) == NULL) {
-      streamStatePutParName_rocksdb(pState, groupId, tbname);
-    }
-    return TSDB_CODE_SUCCESS;
-  }
-  tSimpleHashPut(pState->parNameMap, &groupId, sizeof(int64_t), tbname, TSDB_TABLE_NAME_LEN);
-  return TSDB_CODE_SUCCESS;
+  if (tSimpleHashGet(pState->parNameMap, &groupId, sizeof(int64_t)) == NULL) {
+    tSimpleHashPut(pState->parNameMap, &groupId, sizeof(int64_t), tbname, TSDB_TABLE_NAME_LEN);
+    streamStatePutParName_rocksdb(pState, groupId, tbname);
+  }
+  return TSDB_CODE_SUCCESS;
 #else
   return tdbTbUpsert(pState->pTdbState->pParNameDb, &groupId, sizeof(int64_t), tbname, TSDB_TABLE_NAME_LEN,
                      pState->pTdbState->txn);
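The `streamStatePutParName`/`streamStateGetParName` changes above turn the partition-name map into a write-through cache over RocksDB: every new entry is written to both the in-memory hash and the backing store, and a memory miss falls back to the store and re-populates the map. A toy sketch of that pattern (all names here are illustrative, not the TDengine API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLOTS 8

/* Tiny direct-mapped table standing in for both the in-memory hash and
 * the persistent store; just enough to show the write-through flow. */
typedef struct {
  int64_t key[SLOTS];
  char    val[SLOTS][16];
  int     used[SLOTS];
} DemoMap;

static void demoPut(DemoMap* m, int64_t k, const char* v) {
  int i = (int)(k % SLOTS);
  m->key[i] = k;
  strncpy(m->val[i], v, 15);
  m->used[i] = 1;
}

static const char* demoGet(const DemoMap* m, int64_t k) {
  int i = (int)(k % SLOTS);
  return (m->used[i] && m->key[i] == k) ? m->val[i] : NULL;
}

/* Write-through put: on a memory miss, write BOTH layers, mirroring
 * tSimpleHashPut(...) + streamStatePutParName_rocksdb(...) in the patch. */
static void demoPutParName(DemoMap* mem, DemoMap* disk, int64_t groupId, const char* tbname) {
  if (demoGet(mem, groupId) == NULL) {
    demoPut(mem, groupId, tbname);
    demoPut(disk, groupId, tbname);
  }
}
```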

@@ -1112,10 +1109,11 @@ int32_t streamStateGetParName(SStreamState* pState, int64_t groupId, void** pVal
 #ifdef USE_ROCKSDB
   void* pStr = tSimpleHashGet(pState->parNameMap, &groupId, sizeof(int64_t));
   if (!pStr) {
-    if (tSimpleHashGetSize(pState->parNameMap) > MAX_TABLE_NAME_NUM) {
-      return streamStateGetParName_rocksdb(pState, groupId, pVal);
+    int32_t code = streamStateGetParName_rocksdb(pState, groupId, pVal);
+    if (code == TSDB_CODE_SUCCESS) {
+      tSimpleHashPut(pState->parNameMap, &groupId, sizeof(int64_t), *pVal, TSDB_TABLE_NAME_LEN);
     }
-    return TSDB_CODE_FAILED;
+    return code;
   }
   *pVal = taosMemoryCalloc(1, TSDB_TABLE_NAME_LEN);
   memcpy(*pVal, pStr, TSDB_TABLE_NAME_LEN);

@@ -13,6 +13,8 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */

+#include "tcompare.h"
+#include "tdatablock.h"
 #include "tencode.h"
 #include "tstreamUpdate.h"
 #include "ttime.h"

@@ -31,6 +33,39 @@

 static int64_t adjustExpEntries(int64_t entries) { return TMIN(DEFAULT_EXPECTED_ENTRIES, entries); }

+int compareKeyTs(void* pTs1, void* pTs2, void* pPkVal, __compar_fn_t cmpPkFn) {
+  return compareInt64Val(pTs1, pTs2);
+}
+
+int compareKeyTsAndPk(void* pValue1, void* pTs, void* pPkVal, __compar_fn_t cmpPkFn) {
+  int res = compareInt64Val(pValue1, pTs);
+  if (res != 0) {
+    return res;
+  } else {
+    void* pk1 = (char*)pValue1 + sizeof(TSKEY);
+    return cmpPkFn(pk1, pPkVal);
+  }
+}
+
+int32_t getKeyBuff(TSKEY ts, int64_t tbUid, void* pVal, int32_t len, char* buff) {
+  *(TSKEY*)buff = ts;
+  memcpy(buff + sizeof(TSKEY), &tbUid, sizeof(int64_t));
+  if (len == 0) {
+    return sizeof(TSKEY) + sizeof(int64_t);
+  }
+  memcpy(buff + sizeof(TSKEY) + sizeof(int64_t), pVal, len);
+  return sizeof(TSKEY) + sizeof(int64_t) + len;
+}
+
+int32_t getValueBuff(TSKEY ts, char* pVal, int32_t len, char* buff) {
+  *(TSKEY*)buff = ts;
+  if (len == 0) {
+    return sizeof(TSKEY);
+  }
+  memcpy(buff + sizeof(TSKEY), pVal, len);
+  return sizeof(TSKEY) + len;
+}
+
 void windowSBfAdd(SUpdateInfo *pInfo, uint64_t count) {
   if (pInfo->numSBFs < count) {
     count = pInfo->numSBFs;
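The helpers above pack the stream-update keys as flat byte buffers: the bloom-filter key is `[TSKEY ts][int64_t tbUid][optional pk bytes]` and the per-table max value is `[TSKEY ts][optional pk bytes]`, with `len == 0` meaning no composite primary-key column. A self-contained restatement of the key layout (hypothetical `demoKeyBuff`, mirroring `getKeyBuff`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef int64_t TSKEY;

/* Pack [ts][tbUid][optional pk bytes] into buff and return the used length.
 * len == 0 means no primary-key column is configured, so only ts+uid go in. */
static int32_t demoKeyBuff(TSKEY ts, int64_t tbUid, const void* pVal, int32_t len, char* buff) {
  memcpy(buff, &ts, sizeof(TSKEY));
  memcpy(buff + sizeof(TSKEY), &tbUid, sizeof(int64_t));
  if (len == 0) {
    return (int32_t)(sizeof(TSKEY) + sizeof(int64_t));
  }
  memcpy(buff + sizeof(TSKEY) + sizeof(int64_t), pVal, len);
  return (int32_t)(sizeof(TSKEY) + sizeof(int64_t) + len);
}
```

Because the timestamp is the buffer prefix, a plain `int64` comparison on the prefix orders entries by time, and the primary-key bytes after it break ties, which is exactly what `compareKeyTsAndPk` relies on.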

@@ -89,11 +124,11 @@ static int64_t adjustWatermark(int64_t adjInterval, int64_t originInt, int64_t w
   return watermark;
 }

-SUpdateInfo *updateInfoInitP(SInterval *pInterval, int64_t watermark, bool igUp) {
-  return updateInfoInit(pInterval->interval, pInterval->precision, watermark, igUp);
+SUpdateInfo *updateInfoInitP(SInterval *pInterval, int64_t watermark, bool igUp, int8_t pkType, int32_t pkLen) {
+  return updateInfoInit(pInterval->interval, pInterval->precision, watermark, igUp, pkType, pkLen);
 }

-SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t watermark, bool igUp) {
+SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t watermark, bool igUp, int8_t pkType, int32_t pkLen) {
   SUpdateInfo *pInfo = taosMemoryCalloc(1, sizeof(SUpdateInfo));
   if (pInfo == NULL) {
     return NULL;

@@ -133,6 +168,17 @@ SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t waterma
   _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_UBIGINT);
   pInfo->pMap = taosHashInit(DEFAULT_MAP_CAPACITY, hashFn, true, HASH_NO_LOCK);
   pInfo->maxDataVersion = 0;
+  pInfo->pkColLen = pkLen;
+  pInfo->pkColType = pkType;
+  pInfo->pKeyBuff = taosMemoryCalloc(1, sizeof(TSKEY) + sizeof(int64_t) + pkLen);
+  pInfo->pValueBuff = taosMemoryCalloc(1, sizeof(TSKEY) + pkLen);
+  if (pkLen != 0) {
+    pInfo->comparePkRowFn = compareKeyTsAndPk;
+    pInfo->comparePkCol = getKeyComparFunc(pkType, TSDB_ORDER_ASC);
+  } else {
+    pInfo->comparePkRowFn = compareKeyTs;
+    pInfo->comparePkCol = NULL;
+  }
   return pInfo;
 }

@@ -168,47 +214,60 @@ bool updateInfoIsTableInserted(SUpdateInfo *pInfo, int64_t tbUid) {
   return false;
 }

-TSKEY updateInfoFillBlockData(SUpdateInfo *pInfo, SSDataBlock *pBlock, int32_t primaryTsCol) {
+TSKEY updateInfoFillBlockData(SUpdateInfo *pInfo, SSDataBlock *pBlock, int32_t primaryTsCol, int32_t primaryKeyCol) {
   if (pBlock == NULL || pBlock->info.rows == 0) return INT64_MIN;
   TSKEY maxTs = INT64_MIN;
+  void*   pPkVal = NULL;
+  void*   pMaxPkVal = NULL;
+  int32_t maxLen = 0;
+  int32_t len = 0;
   int64_t tbUid = pBlock->info.id.uid;

   SColumnInfoData *pColDataInfo = taosArrayGet(pBlock->pDataBlock, primaryTsCol);
+  SColumnInfoData *pPkDataInfo = NULL;
+  if (primaryKeyCol >= 0) {
+    pPkDataInfo = taosArrayGet(pBlock->pDataBlock, primaryKeyCol);
+  }

   for (int32_t i = 0; i < pBlock->info.rows; i++) {
     TSKEY ts = ((TSKEY *)pColDataInfo->pData)[i];
-    maxTs = TMAX(maxTs, ts);
+    if (maxTs < ts) {
+      maxTs = ts;
+      if (primaryKeyCol >= 0) {
+        pMaxPkVal = colDataGetData(pPkDataInfo, i);
+        maxLen = colDataGetRowLength(pPkDataInfo, i);
+      }
+    }
     SScalableBf *pSBf = getSBf(pInfo, ts);
     if (pSBf) {
-      SUpdateKey updateKey = {
-          .tbUid = tbUid,
-          .ts = ts,
-      };
-      tScalableBfPut(pSBf, &updateKey, sizeof(SUpdateKey));
+      if (primaryKeyCol >= 0) {
+        pPkVal = colDataGetData(pPkDataInfo, i);
+        len = colDataGetRowLength(pPkDataInfo, i);
+      }
+      int32_t buffLen = getKeyBuff(ts, tbUid, pPkVal, len, pInfo->pKeyBuff);
+      tScalableBfPut(pSBf, pInfo->pKeyBuff, buffLen);
     }
   }
-  TSKEY *pMaxTs = taosHashGet(pInfo->pMap, &tbUid, sizeof(int64_t));
-  if (pMaxTs == NULL || *pMaxTs > maxTs) {
-    taosHashPut(pInfo->pMap, &tbUid, sizeof(int64_t), &maxTs, sizeof(TSKEY));
+  void *pMaxTs = taosHashGet(pInfo->pMap, &tbUid, sizeof(int64_t));
+  if (pMaxTs == NULL || pInfo->comparePkRowFn(pMaxTs, &maxTs, pMaxPkVal, pInfo->comparePkCol) == -1) {
+    int32_t valueLen = getValueBuff(maxTs, pMaxPkVal, maxLen, pInfo->pValueBuff);
+    taosHashPut(pInfo->pMap, &tbUid, sizeof(int64_t), pInfo->pValueBuff, valueLen);
   }
   return maxTs;
 }

-bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts) {
+bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts, void* pPkVal, int32_t len) {
   int32_t res = TSDB_CODE_FAILED;
+  int32_t buffLen = 0;

-  SUpdateKey updateKey = {
-      .tbUid = tableId,
-      .ts = ts,
-  };
-
-  TSKEY *pMapMaxTs = taosHashGet(pInfo->pMap, &tableId, sizeof(uint64_t));
+  buffLen = getKeyBuff(ts, tableId, pPkVal, len, pInfo->pKeyBuff);
+  void *pMapMaxTs = taosHashGet(pInfo->pMap, &tableId, sizeof(uint64_t));
   uint64_t index = ((uint64_t)tableId) % pInfo->numBuckets;
   TSKEY maxTs = *(TSKEY *)taosArrayGet(pInfo->pTsBuckets, index);
   if (ts < maxTs - pInfo->watermark) {
     // this window has been closed.
     if (pInfo->pCloseWinSBF) {
-      res = tScalableBfPut(pInfo->pCloseWinSBF, &updateKey, sizeof(SUpdateKey));
+      res = tScalableBfPut(pInfo->pCloseWinSBF, pInfo->pKeyBuff, buffLen);
       if (res == TSDB_CODE_SUCCESS) {
         return false;
       } else {

@@ -221,18 +280,19 @@ bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts) {
   SScalableBf *pSBf = getSBf(pInfo, ts);

   int32_t size = taosHashGetSize(pInfo->pMap);
-  if ((!pMapMaxTs && size < DEFAULT_MAP_SIZE) || (pMapMaxTs && *pMapMaxTs < ts)) {
-    taosHashPut(pInfo->pMap, &tableId, sizeof(uint64_t), &ts, sizeof(TSKEY));
+  if ((!pMapMaxTs && size < DEFAULT_MAP_SIZE) || (pMapMaxTs && pInfo->comparePkRowFn(pMapMaxTs, &ts, pPkVal, pInfo->comparePkCol) == -1)) {
+    int32_t valueLen = getValueBuff(ts, pPkVal, len, pInfo->pValueBuff);
+    taosHashPut(pInfo->pMap, &tableId, sizeof(uint64_t), pInfo->pValueBuff, valueLen);
     // pSBf may be a null pointer
     if (pSBf) {
-      res = tScalableBfPutNoCheck(pSBf, &updateKey, sizeof(SUpdateKey));
+      res = tScalableBfPutNoCheck(pSBf, pInfo->pKeyBuff, buffLen);
     }
     return false;
   }

   // pSBf may be a null pointer
   if (pSBf) {
-    res = tScalableBfPut(pSBf, &updateKey, sizeof(SUpdateKey));
+    res = tScalableBfPut(pSBf, pInfo->pKeyBuff, buffLen);
   }

   if (!pMapMaxTs && maxTs < ts) {

@@ -262,6 +322,8 @@ void updateInfoDestroy(SUpdateInfo *pInfo) {
   }

   taosArrayDestroy(pInfo->pTsSBFs);
+  taosMemoryFreeClear(pInfo->pKeyBuff);
+  taosMemoryFreeClear(pInfo->pValueBuff);
   taosHashCleanup(pInfo->pMap);
   updateInfoDestoryColseWinSBF(pInfo);
   taosMemoryFree(pInfo);

@@ -322,11 +384,15 @@ int32_t updateInfoSerialize(void *buf, int32_t bufLen, const SUpdateInfo *pInfo)
   while ((pIte = taosHashIterate(pInfo->pMap, pIte)) != NULL) {
     void *key = taosHashGetKey(pIte, &keyLen);
     if (tEncodeU64(&encoder, *(uint64_t *)key) < 0) return -1;
-    if (tEncodeI64(&encoder, *(TSKEY *)pIte) < 0) return -1;
+    int32_t valueSize = taosHashGetValueSize(pIte);
+    if (tEncodeBinary(&encoder, (const uint8_t *)pIte, valueSize) < 0) return -1;
   }

   if (tEncodeU64(&encoder, pInfo->maxDataVersion) < 0) return -1;

+  if (tEncodeI32(&encoder, pInfo->pkColLen) < 0) return -1;
+  if (tEncodeI8(&encoder, pInfo->pkColType) < 0) return -1;
+
   tEndEncode(&encoder);

   int32_t tlen = encoder.pos;
||||||
|
@@ -371,28 +437,43 @@ int32_t updateInfoDeserialize(void *buf, int32_t bufLen, SUpdateInfo *pInfo) {
   _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_UBIGINT);
   pInfo->pMap = taosHashInit(mapSize, hashFn, true, HASH_NO_LOCK);
   uint64_t uid = 0;
-  ts = INT64_MIN;
+  void   *pVal = NULL;
+  int32_t valSize = 0;
   for (int32_t i = 0; i < mapSize; i++) {
     if (tDecodeU64(&decoder, &uid) < 0) return -1;
-    if (tDecodeI64(&decoder, &ts) < 0) return -1;
-    taosHashPut(pInfo->pMap, &uid, sizeof(uint64_t), &ts, sizeof(TSKEY));
+    if (tDecodeBinary(&decoder, (uint8_t **)&pVal, &valSize) < 0) return -1;
+    taosHashPut(pInfo->pMap, &uid, sizeof(uint64_t), pVal, valSize);
   }
   ASSERT(mapSize == taosHashGetSize(pInfo->pMap));
   if (tDecodeU64(&decoder, &pInfo->maxDataVersion) < 0) return -1;

+  if (tDecodeI32(&decoder, &pInfo->pkColLen) < 0) return -1;
+  if (tDecodeI8(&decoder, &pInfo->pkColType) < 0) return -1;
+
+  pInfo->pKeyBuff = taosMemoryCalloc(1, sizeof(TSKEY) + sizeof(int64_t) + pInfo->pkColLen);
+  pInfo->pValueBuff = taosMemoryCalloc(1, sizeof(TSKEY) + pInfo->pkColLen);
+  if (pInfo->pkColLen != 0) {
+    pInfo->comparePkRowFn = compareKeyTsAndPk;
+    pInfo->comparePkCol = getKeyComparFunc(pInfo->pkColType, TSDB_ORDER_ASC);
+  } else {
+    pInfo->comparePkRowFn = compareKeyTs;
+    pInfo->comparePkCol = NULL;
+  }
+
   tEndDecode(&decoder);

   tDecoderClear(&decoder);
   return 0;
 }

-bool isIncrementalTimeStamp(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts) {
+bool isIncrementalTimeStamp(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts, void *pPkVal, int32_t len) {
   TSKEY *pMapMaxTs = taosHashGet(pInfo->pMap, &tableId, sizeof(uint64_t));
   bool   res = true;
-  if (pMapMaxTs && ts < *pMapMaxTs) {
+  if (pMapMaxTs && pInfo->comparePkRowFn(pMapMaxTs, &ts, pPkVal, pInfo->comparePkCol) == 1) {
     res = false;
   } else {
-    taosHashPut(pInfo->pMap, &tableId, sizeof(uint64_t), &ts, sizeof(TSKEY));
+    int32_t valueLen = getValueBuff(ts, pPkVal, len, pInfo->pValueBuff);
+    taosHashPut(pInfo->pMap, &tableId, sizeof(uint64_t), pInfo->pValueBuff, valueLen);
   }
   return res;
 }
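Reviewer note: `isIncrementalTimeStamp` now rejects a row as out-of-order based on the combined (timestamp, primary-key) ordering supplied by `comparePkRowFn`, instead of comparing timestamps alone. A pure-Python illustration of that rule, using tuple comparison in place of the C comparator pair (names here are illustrative):

```python
def is_incremental(state: dict, table_id: int, ts: int, pk) -> bool:
    stored = state.get(table_id)
    if stored is not None and stored > (ts, pk):
        return False              # strictly older than the max key seen: out of order
    state[table_id] = (ts, pk)    # otherwise remember the new per-table max
    return True

s = {}
is_incremental(s, 1, 100, 0)      # first row for table 1: accepted
```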
@@ -153,6 +153,130 @@ void taosqsort(void *src, int64_t numOfElem, int64_t size, const void *param, __
   taosMemoryFreeClear(buf);
 }

+#define DOSWAP(a, b, size)          \
+  do {                              \
+    size_t __size = (size);         \
+    char  *__a = (a), *__b = (b);   \
+    do {                            \
+      char __tmp = *__a;            \
+      *__a++ = *__b;                \
+      *__b++ = __tmp;               \
+    } while (--__size > 0);         \
+  } while (0)
+
+typedef struct {
+  char *lo;
+  char *hi;
+} stack_node;
+
+#define STACK_SIZE      (CHAR_BIT * sizeof(size_t))
+#define PUSH(low, high) ((void)((top->lo = (low)), (top->hi = (high)), ++top))
+#define POP(low, high)  ((void)(--top, (low = top->lo), (high = top->hi)))
+#define STACK_NOT_EMPTY (stack < top)
+
+void taosqsort_r(void *src, int64_t nelem, int64_t size, const void *arg, __ext_compar_fn_t cmp) {
+  const int32_t MAX_THRESH = 6;
+  char         *base_ptr = (char *)src;
+
+  const size_t max_thresh = MAX_THRESH * size;
+
+  if (nelem == 0) return;
+
+  if (nelem > MAX_THRESH) {
+    char       *lo = base_ptr;
+    char       *hi = &lo[size * (nelem - 1)];
+    stack_node  stack[STACK_SIZE];
+    stack_node *top = stack;
+
+    PUSH(NULL, NULL);
+
+    while (STACK_NOT_EMPTY) {
+      char *left_ptr;
+      char *right_ptr;
+
+      char *mid = lo + size * ((hi - lo) / size >> 1);
+
+      if ((*cmp)((void *)mid, (void *)lo, arg) < 0) DOSWAP(mid, lo, size);
+      if ((*cmp)((void *)hi, (void *)mid, arg) < 0)
+        DOSWAP(mid, hi, size);
+      else
+        goto jump_over;
+      if ((*cmp)((void *)mid, (void *)lo, arg) < 0) DOSWAP(mid, lo, size);
+    jump_over:;
+
+      left_ptr = lo + size;
+      right_ptr = hi - size;
+      do {
+        while ((*cmp)((void *)left_ptr, (void *)mid, arg) < 0) left_ptr += size;
+
+        while ((*cmp)((void *)mid, (void *)right_ptr, arg) < 0) right_ptr -= size;
+
+        if (left_ptr < right_ptr) {
+          DOSWAP(left_ptr, right_ptr, size);
+          if (mid == left_ptr)
+            mid = right_ptr;
+          else if (mid == right_ptr)
+            mid = left_ptr;
+          left_ptr += size;
+          right_ptr -= size;
+        } else if (left_ptr == right_ptr) {
+          left_ptr += size;
+          right_ptr -= size;
+          break;
+        }
+      } while (left_ptr <= right_ptr);
+
+      if ((size_t)(right_ptr - lo) <= max_thresh) {
+        if ((size_t)(hi - left_ptr) <= max_thresh)
+          POP(lo, hi);
+        else
+          lo = left_ptr;
+      } else if ((size_t)(hi - left_ptr) <= max_thresh)
+        hi = right_ptr;
+      else if ((right_ptr - lo) > (hi - left_ptr)) {
+        PUSH(lo, right_ptr);
+        lo = left_ptr;
+      } else {
+        PUSH(left_ptr, hi);
+        hi = right_ptr;
+      }
+    }
+  }
+#define min(x, y) ((x) < (y) ? (x) : (y))
+
+  {
+    char *const end_ptr = &base_ptr[size * (nelem - 1)];
+    char       *tmp_ptr = base_ptr;
+    char       *thresh = min(end_ptr, base_ptr + max_thresh);
+    char       *run_ptr;
+
+    for (run_ptr = tmp_ptr + size; run_ptr <= thresh; run_ptr += size)
+      if ((*cmp)((void *)run_ptr, (void *)tmp_ptr, arg) < 0) tmp_ptr = run_ptr;
+
+    if (tmp_ptr != base_ptr) DOSWAP(tmp_ptr, base_ptr, size);
+
+    run_ptr = base_ptr + size;
+    while ((run_ptr += size) <= end_ptr) {
+      tmp_ptr = run_ptr - size;
+      while ((*cmp)((void *)run_ptr, (void *)tmp_ptr, arg) < 0) tmp_ptr -= size;
+
+      tmp_ptr += size;
+      if (tmp_ptr != run_ptr) {
+        char *trav;
+
+        trav = run_ptr + size;
+        while (--trav >= run_ptr) {
+          char  c = *trav;
+          char *hi, *lo;
+
+          for (hi = lo = trav; (lo -= size) >= tmp_ptr; hi = lo) *hi = *lo;
+          *hi = c;
+        }
+      }
+    }
+  }
+}
+
 void *taosbsearch(const void *key, const void *base, int32_t nmemb, int32_t size, __compar_fn_t compar, int32_t flags) {
   uint8_t *p;
   int32_t  lidx;
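Reviewer note: `taosqsort_r` above is a `qsort_r`-style reentrant quicksort — the comparator takes an extra caller-supplied argument instead of reading global state. The same contract in Python, with a closure carrying the "arg" into `cmp_to_key` (a sketch, not the TDengine API):

```python
from functools import cmp_to_key

def sort_r(items, cmp, arg):
    # wrap the three-argument comparator so the user argument travels with it
    return sorted(items, key=cmp_to_key(lambda a, b: cmp(a, b, arg)))

def cmp_mod(a, b, m):
    # order by value modulo m first, then by value, to break ties
    return ((a % m) - (b % m)) or (a - b)

sort_r([10, 3, 7, 2], cmp_mod, 4)  # → [2, 10, 3, 7]
```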
@@ -351,7 +475,6 @@ int32_t msortHelper(const void *p1, const void *p2, const void *param) {
   return comparFn(p1, p2);
 }

-
 int32_t taosMergeSort(void *src, int64_t numOfElem, int64_t size, __compar_fn_t comparFn) {
   void *param = comparFn;
   return taosMergeSortHelper(src, numOfElem, size, param, msortHelper);
@@ -179,6 +179,7 @@ int32_t l2DecompressImpl_tsz(const char *const input, const int32_t inputSize, c
 #if defined(WINDOWS) || defined(_TD_DARWIN_64)
 // do nothing
 #else
+
 int32_t l2ComressInitImpl_zlib(char *lossyColumns, float fPrecision, double dPrecision, uint32_t maxIntervals,
                                uint32_t intervals, int32_t ifAdtFse, const char *compressor) {
   return 0;
@@ -187,7 +188,7 @@ int32_t l2ComressInitImpl_zlib(char *lossyColumns, float fPrecision, double dPre
 int32_t l2CompressImpl_zlib(const char *const input, const int32_t inputSize, char *const output, int32_t outputSize,
                             const char type, int8_t lvl) {
   uLongf  dstLen = outputSize - 1;
-  int32_t ret = compress2((Bytef *)(output + 1), (uLongf *)&dstLen, (Bytef *)input, (uLong)inputSize, 9);
+  int32_t ret = compress2((Bytef *)(output + 1), (uLongf *)&dstLen, (Bytef *)input, (uLong)inputSize, lvl);
   if (ret == Z_OK) {
     output[0] = 1;
     return dstLen + 1;
@@ -226,7 +227,7 @@ int32_t l2ComressInitImpl_zstd(char *lossyColumns, float fPrecision, double dPre

 int32_t l2CompressImpl_zstd(const char *const input, const int32_t inputSize, char *const output, int32_t outputSize,
                             const char type, int8_t lvl) {
-  size_t len = ZSTD_compress(output + 1, outputSize - 1, input, inputSize, ZSTD_CLEVEL_DEFAULT);
+  size_t len = ZSTD_compress(output + 1, outputSize - 1, input, inputSize, lvl);
   if (len > inputSize) {
     output[0] = 0;
     memcpy(output + 1, input, inputSize);
@@ -253,7 +254,7 @@ int32_t l2ComressInitImpl_xz(char *lossyColumns, float fPrecision, double dPreci
 }
 int32_t l2CompressImpl_xz(const char *const input, const int32_t inputSize, char *const output, int32_t outputSize,
                           const char type, int8_t lvl) {
-  size_t len = FL2_compress(output + 1, outputSize - 1, input, inputSize, 0);
+  size_t len = FL2_compress(output + 1, outputSize - 1, input, inputSize, lvl);
   if (len > inputSize) {
     output[0] = 0;
     memcpy(output + 1, input, inputSize);
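Reviewer note: the three `l2CompressImpl_*` hunks above share one convention — `output[0]` is a flag byte (1 = compressed, 0 = raw fallback when compression does not shrink the data), followed by the payload — and each now forwards the caller's level instead of a hard-coded constant. A sketch of that envelope using Python's `zlib` (illustrative, not the TDengine functions):

```python
import zlib

def l2_compress(data: bytes, level: int) -> bytes:
    comp = zlib.compress(data, level)
    if len(comp) >= len(data):
        return b"\x00" + data     # compression did not help: store raw
    return b"\x01" + comp         # flag byte marks a compressed payload

def l2_decompress(buf: bytes) -> bytes:
    return zlib.decompress(buf[1:]) if buf[0] == 1 else buf[1:]

payload = b"ab" * 100
round_tripped = l2_decompress(l2_compress(payload, 9))
```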
@@ -274,14 +275,19 @@ int32_t l2DecompressImpl_xz(const char *const input, const int32_t compressedSiz
 }
 #endif

-TCompressL1FnSet compressL1Dict[] = {{"PLAIN", NULL, tsCompressPlain2, tsDecompressPlain2},
+TCmprL1FnSet compressL1Dict[] = {{"PLAIN", NULL, tsCompressPlain2, tsDecompressPlain2},
                                  {"SIMPLE-8B", NULL, tsCompressINTImp2, tsDecompressINTImp2},
                                  {"DELTAI", NULL, tsCompressTimestampImp2, tsDecompressTimestampImp2},
                                  {"BIT-PACKING", NULL, tsCompressBoolImp2, tsDecompressBoolImp2},
                                  {"DELTAD", NULL, tsCompressDoubleImp2, tsDecompressDoubleImp2}};

+TCmprLvlSet compressL2LevelDict[] = {
+    {"unknown", .lvl = {1, 2, 3}}, {"lz4", .lvl = {1, 2, 3}}, {"zlib", .lvl = {1, 6, 9}},
+    {"zstd", .lvl = {1, 11, 22}}, {"tsz", .lvl = {1, 2, 3}}, {"xz", .lvl = {1, 6, 9}},
+};
+
 #if defined(WINDOWS) || defined(_TD_DARWIN_64)
-TCompressL2FnSet compressL2Dict[] = {
+TCmprL2FnSet compressL2Dict[] = {
     {"unknown", l2ComressInitImpl_disabled, l2CompressImpl_disabled, l2DecompressImpl_disabled},
     {"lz4", l2ComressInitImpl_lz4, l2CompressImpl_lz4, l2DecompressImpl_lz4},
     {"zlib", l2ComressInitImpl_lz4, l2CompressImpl_lz4, l2DecompressImpl_lz4},
@@ -289,7 +295,7 @@ TCompressL2FnSet compressL2Dict[] = {
     {"tsz", l2ComressInitImpl_tsz, l2CompressImpl_tsz, l2DecompressImpl_tsz},
     {"xz", l2ComressInitImpl_lz4, l2CompressImpl_lz4, l2DecompressImpl_lz4}};
 #else
-TCompressL2FnSet compressL2Dict[] = {
+TCmprL2FnSet compressL2Dict[] = {
     {"unknown", l2ComressInitImpl_disabled, l2CompressImpl_disabled, l2DecompressImpl_disabled},
     {"lz4", l2ComressInitImpl_lz4, l2CompressImpl_lz4, l2DecompressImpl_lz4},
     {"zlib", l2ComressInitImpl_zlib, l2CompressImpl_zlib, l2DecompressImpl_zlib},
@@ -299,6 +305,17 @@ TCompressL2FnSet compressL2Dict[] = {

 #endif

+int8_t tsGetCompressL2Level(uint8_t alg, uint8_t lvl) {
+  if (lvl == L2_LVL_LOW) {
+    return compressL2LevelDict[alg].lvl[0];
+  } else if (lvl == L2_LVL_MEDIUM) {
+    return compressL2LevelDict[alg].lvl[1];
+  } else if (lvl == L2_LVL_HIGH) {
+    return compressL2LevelDict[alg].lvl[2];
+  }
+  return 1;
+}
+
 static const int32_t TEST_NUMBER = 1;
 #define is_bigendian() ((*(char *)&TEST_NUMBER) == 0)
 #define SIMPLE8B_MAX_INT64 ((uint64_t)1152921504606846974LL)
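Reviewer note: `compressL2LevelDict` plus `tsGetCompressL2Level` translate the symbolic low/medium/high level into each algorithm's native numeric level (zlib 1/6/9, zstd 1/11/22, and so on), falling back to 1 for anything unrecognized. A table-lookup sketch of the same logic — the constant values here are assumptions for illustration:

```python
# assumed values for the L2_LVL_* constants; only their distinctness matters here
L2_LVL_LOW, L2_LVL_MEDIUM, L2_LVL_HIGH = 1, 2, 3

LEVEL_DICT = {
    "unknown": (1, 2, 3), "lz4": (1, 2, 3), "zlib": (1, 6, 9),
    "zstd": (1, 11, 22), "tsz": (1, 2, 3), "xz": (1, 6, 9),
}

def get_l2_level(alg: str, lvl: int) -> int:
    levels = LEVEL_DICT[alg]
    if lvl == L2_LVL_LOW:
        return levels[0]
    elif lvl == L2_LVL_MEDIUM:
        return levels[1]
    elif lvl == L2_LVL_HIGH:
        return levels[2]
    return 1  # unknown symbolic level: fall back to the lowest, as the C code does

get_l2_level("zstd", L2_LVL_HIGH)  # → 22
```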
@@ -2704,7 +2721,8 @@ int32_t tsDecompressBigint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int
       uTrace("encode:%s, compress:%s, level:%d, type:%s, l1:%d", compressL1Dict[l1].name, compressL2Dict[l2].name, \
              lvl, tDataTypes[type].name, l1); \
       int32_t len = compressL1Dict[l1].comprFn(pIn, nEle, pBuf, type); \
-      return compressL2Dict[l2].comprFn(pBuf, len, pOut, nOut, type, lvl); \
+      int8_t alvl = tsGetCompressL2Level(l2, lvl); \
+      return compressL2Dict[l2].comprFn(pBuf, len, pOut, nOut, type, alvl); \
     } else { \
       uTrace("dencode:%s, decompress:%s, level:%d, type:%s", compressL1Dict[l1].name, compressL2Dict[l2].name, lvl, \
              tDataTypes[type].name); \
@@ -2715,7 +2733,8 @@ int32_t tsDecompressBigint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int
     if (compress) { \
       uTrace("encode:%s, compress:%s, level:%d, type:%s", "disabled", compressL2Dict[l1].name, lvl, \
              tDataTypes[type].name); \
-      return compressL2Dict[l2].comprFn(pIn, nIn, pOut, nOut, type, lvl); \
+      int8_t alvl = tsGetCompressL2Level(l2, lvl); \
+      return compressL2Dict[l2].comprFn(pIn, nIn, pOut, nOut, type, alvl); \
     } else { \
       uTrace("dencode:%s, dcompress:%s, level:%d, type:%s", "disabled", compressL2Dict[l1].name, lvl, \
              tDataTypes[type].name); \
@@ -2913,127 +2932,6 @@ int32_t tsDecompressBigint2(void *pIn, int32_t nIn, int32_t nEle, void *pOut, in
   FUNC_COMPRESS_IMPL(pIn, nIn, nEle, pOut, nOut, cmprAlg, pBuf, nBuf, TSDB_DATA_TYPE_BIGINT, 0);
 }

-// int32_t tsFindCompressAlg(int8_t dataType, uint8_t compress, TCompressL1FnSet *l1Fn, TCompressL2FnSet *l2Fn);
-
-// int32_t tsCompressImpl(int8_t type, void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg,
-//                        void *pBuf, int32_t nBuf) {
-//   TCompressL1FnSet fn1;
-//   TCompressL2FnSet fn2;
-
-//   if (tsFindCompressAlg(type, cmprAlg, &fn1, &fn2)) return -1;
-
-//   int32_t len = 0;
-//   uint8_t l1 = COMPRESS_L1_TYPE_U8(cmprAlg);
-//   uint8_t l2 = COMPRESS_L2_TYPE_U8(cmprAlg);
-//   uint8_t lvl = COMPRESS_L2_TYPE_LEVEL_U8(cmprAlg);
-
-//   if (l2 == L2_DISABLED) {
-//     len = fn1.comprFn(pIn, nEle, pOut, type);
-//   } else {
-//     len = fn1.comprFn(pIn, nEle, pBuf, type);
-//     len = fn2.comprFn(pBuf, len, pOut, nOut, type, lvl);
-//   }
-//   return len;
-// }
-// int32_t tsDecompressImpl(int8_t type, void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t
-// cmprAlg,
-//                          void *pBuf, int32_t nBuf) {
-//   TCompressL1FnSet fn1;
-//   TCompressL2FnSet fn2;
-
-//   if (tsFindCompressAlg(type, cmprAlg, &fn1, &fn2) != 0) return -1;
-
-//   uint8_t l1 = COMPRESS_L1_TYPE_U8(cmprAlg);
-//   uint8_t l2 = COMPRESS_L2_TYPE_U8(cmprAlg);
-//   uint8_t lvl = COMPRESS_L2_TYPE_LEVEL_U8(cmprAlg);
-//   uint32_t len = 0;
-//   if (l2 == L2_DISABLED) {
-//     len = fn1.decomprFn(pIn, nEle, pOut, type);
-//   } else {
-//     len = fn2.decomprFn(pIn, nIn, pBuf, nBuf, type);
-//     if (len < 0) return -1;
-//     len = fn1.decomprFn(pBuf, nEle, pOut, type);
-//   }
-//   return len;
-// }
-
-// int32_t tsFindCompressAlg(int8_t dataType, uint8_t compress, TCompressL1FnSet *l1Fn, TCompressL2FnSet *l2Fn) {
-//   uint8_t l1 = COMPRESS_L1_TYPE_U8(compress);
-//   uint8_t l2 = COMPRESS_L2_TYPE_U8(compress);
-//   uint8_t lvl = COMPRESS_L2_TYPE_LEVEL_U8(compress);
-
-//   static int32_t l1Sz = sizeof(compressL1Dict) / sizeof(compressL1Dict[0]);
-//   if (l1 >= l1Sz) return -1;
-
-//   static int32_t l2Sz = sizeof(compressL2Dict) / sizeof(compressL2Dict[0]);
-//   if (l2 >= l2Sz) return -1;
-
-//   *l1Fn = compressL1Dict[l1];
-//   *l2Fn = compressL2Dict[l2];
-//   return 0;
-// }
-
-// typedef struct {
-//   int8_t dtype;
-//   SArray *l1Set;
-//   SArray *l2Set;
-// } TCompressCompatible;
-
-// SHashObj *algSet = NULL;
-
-// int32_t tsCompressSetInit() {
-//   algSet = taosHashInit(24, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), false, HASH_ENTRY_LOCK);
-//   for (int i = TSDB_DATA_TYPE_NULL; i < TSDB_DATA_TYPE_MAX; i++) {
-//     TCompressCompatible p;
-//     p.dtype = i;
-//     p.l1Set = taosArrayInit(4, sizeof(int8_t));
-//     p.l2Set = taosArrayInit(4, sizeof(int8_t));
-
-//     for (int8_t j = L1_DISABLED; j < L1_MAX; j++) {
-//       taosArrayPush(p.l1Set, &j);
-//     }
-
-//     for (int8_t j = L2_DISABLED; j < L2_MAX; j++) {
-//       taosArrayPush(p.l2Set, &j);
-//     }
-
-//     taosHashPut(algSet, &i, sizeof(i), &p, sizeof(TCompressCompatible));
-//   }
-//   return 0;
-// }
-// int32_t tsCompressSetDestroy() {
-//   void *p = taosHashIterate(algSet, NULL);
-//   while (p) {
-//     TCompressCompatible *v = p;
-//     taosArrayDestroy(v->l1Set);
-//     taosArrayDestroy(v->l2Set);
-
-//     taosHashIterate(algSet, p);
-//   }
-//   return 0;
-// }
-
-// int32_t tsValidCompressAlgByDataTypes(int8_t type, int8_t compress) {
-//   // compress alg
-//   int8_t l1 = COMPRESS_L1_TYPE_U8(compress);
-//   int8_t l2 = COMPRESS_L2_TYPE_U8(compress);
-//   int8_t lvl = COMPRESS_L2_TYPE_LEVEL_U8(compress);
-
-//   TCompressCompatible *p = taosHashGet(algSet, &type, sizeof(type));
-//   if (p == NULL) return -1;
-
-//   if (p->dtype != type) return -1;
-
-//   if (taosArraySearch(p->l1Set, &l1, compareInt8Val, 0) == NULL) {
-//     return -1;
-//   }
-
-//   if (taosArraySearch(p->l2Set, &l2, compareInt8Val, 0) == NULL) {
-//     return -1;
-//   }
-//   return 0;
-// }
-
 int32_t tcompressDebug(uint32_t cmprAlg, uint8_t *l1Alg, uint8_t *l2Alg, uint8_t *level) {
   DEFINE_VAR(cmprAlg)
   *l1Alg = l1;
@@ -151,8 +151,12 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TSC_QUERY_KILLED, "Query killed")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_NO_EXEC_NODE, "No available execution node in current query policy configuration")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_NOT_STABLE_ERROR, "Table is not a super table")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_STMT_CACHE_ERROR, "Stmt cache error")
-TAOS_DEFINE_ERROR(TSDB_CODE_TSC_ENCODE_PARAM_ERROR, "Invalid compress param")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_ENCODE_PARAM_ERROR, "Invalid encode param")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_ENCODE_PARAM_NULL, "Not found compress param")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_COMPRESS_PARAM_ERROR, "Invalid compress param")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR, "Invalid compress level param")
+
+
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INTERNAL_ERROR, "Internal error")

 // mnode-common
@@ -221,7 +225,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_COLUMN_NOT_EXIST, "Column does not exist
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_STB_OPTION, "Invalid stable options")
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_ROW_BYTES, "Invalid row bytes")
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_FIELD_VALUE_OVERFLOW, "out of range and overflow")
-TAOS_DEFINE_ERROR(TSDB_CODE_MND_COLUMN_COMPRESS_ALREADY_EXIST, "Column compress already exist")
+TAOS_DEFINE_ERROR(TSDB_CODE_MND_COLUMN_COMPRESS_ALREADY_EXIST, "Same with old param")


 // mnode-func
@@ -397,7 +401,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_VND_ALREADY_IS_VOTER, "Vnode already is a vo
 TAOS_DEFINE_ERROR(TSDB_CODE_VND_DIR_ALREADY_EXIST, "Vnode directory already exist")
 TAOS_DEFINE_ERROR(TSDB_CODE_VND_META_DATA_UNSAFE_DELETE, "Single replica vnode data will lost permanently after this operation, if you make sure this, please use drop dnode <id> unsafe to execute")
 TAOS_DEFINE_ERROR(TSDB_CODE_VND_ARB_NOT_SYNCED, "Vgroup peer is not synced")
-TAOS_DEFINE_ERROR(TSDB_CODE_VND_COLUMN_COMPRESS_ALREADY_EXIST,"Column compress already exist")
+TAOS_DEFINE_ERROR(TSDB_CODE_VND_COLUMN_COMPRESS_ALREADY_EXIST,"Same with old param")


 // tsdb
@@ -719,6 +719,11 @@ void *taosHashGetKey(void *data, size_t *keyLen) {
   return GET_HASH_NODE_KEY(node);
 }

+int32_t taosHashGetValueSize(void *data) {
+  SHashNode *node = GET_HASH_PNODE(data);
+  return node->dataLen;
+}
+
 // release the pNode, return next pNode, and lock the current entry
 static void *taosHashReleaseNode(SHashObj *pHashObj, void *p, int *slot) {
   SHashNode *pOld = (SHashNode *)GET_HASH_PNODE(p);
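Reviewer note: `taosHashGetValueSize` exposes the `dataLen` already stored in each hash-node header, so callers such as the serialize hunk above can handle variable-length values without tracking their sizes separately. A toy equivalent with a dict of byte strings (names illustrative):

```python
store = {}

def hash_put(key, value: bytes):
    store[key] = bytes(value)      # the container owns a copy of the value

def hash_get_value_size(key) -> int:
    return len(store[key])         # size comes from the stored entry itself

hash_put(7, b"\x01\x02\x03")
hash_get_value_size(7)             # → 3
```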
@@ -30,7 +30,7 @@ from frame.srvCtl import *

 class TDTestCase(TBase):
     updatecfgDict = {
-        "countAlwaysReturnValue" : "0",
+        "countAlwaysReturnValue" : "1",
         "lossyColumns" : "float,double",
         "fPrecision" : "0.000000001",
         "dPrecision" : "0.00000000000000001",
@@ -106,7 +106,7 @@ class TDTestCase(TBase):
         # check count always return value
         sql = f"select count(*) from {self.db}.ta"
         tdSql.query(sql)
-        tdSql.checkRows(0) # countAlwaysReturnValue is false
+        tdSql.checkRows(1) # countAlwaysReturnValue is true

     # run
     def run(self):
File diff suppressed because it is too large
@@ -118,18 +118,10 @@ class TDTestCase(TBase):
         sql = f"describe {self.db}.{self.stb}"
         tdSql.query(sql)

-        '''
         # see AutoGen.types
         defEncodes = [ "delta-i","delta-i","simple8b","simple8b","simple8b","simple8b","simple8b","simple8b",
                        "simple8b","simple8b","delta-d","delta-d","bit-packing",
-                       "disabled","disabled","disabled","disabled","disabled"]
-        '''
-
-        # pass-ci have error
-        defEncodes = [ "delta-i","delta-i","simple8b","simple8b","simple8b","simple8b","simple8b","simple8b",
-                       "simple8b","simple8b","delta-d","delta-d","bit-packing",
-                       "disabled","disabled","disabled","disabled","simple8b"]
-
+                       "disabled","disabled","disabled","disabled"]

         count = tdSql.getRows()
         for i in range(count):
@@ -32,7 +32,7 @@
             {
                 "name": "stb",
                 "child_table_exists": "no",
-                "childtable_count": 10,
+                "childtable_count": 6,
                 "insert_rows": 2000000,
                 "childtable_prefix": "d",
                 "insert_mode": "taosc",
@@ -38,6 +38,10 @@ s3EndPoint http://192.168.1.52:9000
 s3AccessKey 'zOgllR6bSnw2Ah3mCNel:cdO7oXAu3Cqdb1rUdevFgJMi0LtRwCXdWKQx4bhX'
 s3BucketName ci-bucket
 s3UploadDelaySec 60
+
+for test:
+"s3AccessKey" : "fGPPyYjzytw05nw44ViA:vK1VcwxgSOykicx6hk8fL1x15uEtyDSFU3w4hTaZ"
+"s3BucketName": "test-bucket"
 '''
@@ -63,7 +67,7 @@ class TDTestCase(TBase):

         tdSql.execute(f"use {self.db}")
         # come from s3_basic.json
-        self.childtable_count = 10
+        self.childtable_count = 6
         self.insert_rows = 2000000
         self.timestamp_step = 1000
@@ -85,7 +89,7 @@ class TDTestCase(TBase):
             fileName = cols[8]
             #print(f" filesize={fileSize} fileName={fileName} line={line}")
             if fileSize > maxFileSize:
-                tdLog.info(f"error, {fileSize} over max size({maxFileSize})\n")
+                tdLog.info(f"error, {fileSize} over max size({maxFileSize}) {fileName}\n")
                 overCnt += 1
             else:
                 tdLog.info(f"{fileName}({fileSize}) check size passed.")
|
||||||
loop = 0
|
loop = 0
|
||||||
rets = []
|
rets = []
|
||||||
overCnt = 0
|
overCnt = 0
|
||||||
while loop < 180:
|
while loop < 100:
|
||||||
time.sleep(3)
|
time.sleep(3)
|
||||||
|
|
||||||
# check upload to s3
|
# check upload to s3
|
||||||
|
@ -335,7 +339,7 @@ class TDTestCase(TBase):
|
||||||
self.snapshotAgg()
|
self.snapshotAgg()
|
||||||
self.doAction()
|
self.doAction()
|
||||||
self.checkAggCorrect()
|
self.checkAggCorrect()
|
||||||
self.checkInsertCorrect(difCnt=self.childtable_count*999999)
|
self.checkInsertCorrect(difCnt=self.childtable_count*1499999)
|
||||||
self.checkDelete()
|
self.checkDelete()
|
||||||
self.doAction()
|
self.doAction()
|
||||||
|
|
||||||
|
|
|
@@ -32,7 +32,7 @@
             {
                 "name": "stb",
                 "child_table_exists": "yes",
-                "childtable_count": 10,
+                "childtable_count": 6,
                 "insert_rows": 1000000,
                 "childtable_prefix": "d",
                 "insert_mode": "taosc",
@@ -140,7 +140,7 @@ class TBase:

         # check step
         sql = f"select count(*) from (select diff(ts) as dif from {self.stb} partition by tbname order by ts desc) where dif != {self.timestamp_step}"
-        #tdSql.checkAgg(sql, difCnt)
+        tdSql.checkAgg(sql, difCnt)

     # save agg result
     def snapshotAgg(self):
@@ -14,6 +14,7 @@
 ,,y,army,./pytest.sh python3 ./test.py -f enterprise/s3/s3Basic.py -N 3
 ,,y,army,./pytest.sh python3 ./test.py -f community/cluster/snapshot.py -N 3 -L 3 -D 2
 ,,y,army,./pytest.sh python3 ./test.py -f community/query/function/test_func_elapsed.py
+,,y,army,./pytest.sh python3 ./test.py -f community/query/test_join.py
 ,,y,army,./pytest.sh python3 ./test.py -f community/query/fill/fill_desc.py -N 3 -L 3 -D 2
 ,,y,army,./pytest.sh python3 ./test.py -f community/cluster/incSnapshot.py -N 3
 ,,y,army,./pytest.sh python3 ./test.py -f community/query/query_basic.py -N 3
@@ -128,8 +129,8 @@
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/subscribeDb0.py -N 3 -n 3
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/ins_topics_test.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqMaxTopic.py
-#,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqParamsTest.py
-#,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqParamsTest.py -R
+,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqParamsTest.py
+,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqParamsTest.py -R
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqClientConsLog.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqMaxGroupIds.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmqConsumeDiscontinuousData.py
@@ -194,4 +194,12 @@ if $rows != 144 then
 return -1
 endi

+sql select a.ts, b.ts from tba1 a join sta b on a.ts = b.ts and a.t1 = b.t1;
+if $rows != 4 then
+  return -1
+endi
+
+sql select a.ts, b.ts from sta a join sta b on a.ts = b.ts and a.t1 = b.t1;
+if $rows != 8 then
+  return -1
+endi
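The added test cases assert the row counts of two-key equi-joins: rows pair up only when both the timestamp and the tag value match. As a generic sketch of that matching rule (illustrative in-memory data, not the engine's actual join implementation):

```python
# Two-key equi-join: a pair qualifies only if both join keys agree,
# matching the "a.ts = b.ts and a.t1 = b.t1" condition in the new tests.
def equi_join(left, right):
    return [(l, r) for l in left for r in right
            if l["ts"] == r["ts"] and l["t1"] == r["t1"]]

a = [{"ts": 1, "t1": 1}, {"ts": 2, "t1": 1}]
b = [{"ts": 1, "t1": 1}, {"ts": 1, "t1": 2}]
print(len(equi_join(a, b)))  # 1
```

Note the self-join case (`sta a join sta b`) is expected to return more rows than the subtable-to-supertable case, since every row on the left can match rows from every subtable sharing its tag on the right.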
Some files were not shown because too many files have changed in this diff.