commit 9687a70bf2
update
@@ -7,8 +7,6 @@ description: "Syntax rules supported by TAOS SQL, major query features, supported SQL query functions, and common tips"
 
 TAOS SQL is the main tool for users to write data into and query data from TDengine. To help users get started quickly, TAOS SQL resembles standard SQL in style and usage to a certain extent. Strictly speaking, however, TAOS SQL is not, and does not attempt to provide, standard SQL syntax. In addition, since TDengine does not provide deletion for its time-series structured data, TAOS SQL provides no data deletion features.
 
 TAOS SQL does not support abbreviations of keywords; for example, DESCRIBE cannot be abbreviated as DESC.
 
 The SQL syntax in this chapter follows the conventions below:
 
 - Content within <\> is to be entered by the user, but do not enter <\> itself
@@ -37,4 +35,4 @@ import DocCardList from '@theme/DocCardList';
 
 import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
 
 <DocCardList items={useCurrentSidebarCategory().items}/>
 ```

@@ -9,7 +9,7 @@ Keyword `STable`, abbreviated for super table, is supported since version 2.0.15
 
 :::
 
-## Crate STable
+## Create STable
 
 ```
 CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
@@ -19,7 +19,7 @@ The SQL statement of creating a STable is similar to that of creating a table, b
 
 :::info
 
-1. The tag types specified in TAGS should NOT be timestamp. Since 2.1.3.0 timestamp type can be used in TAGS column, but its value must be fixed and arithmetic operation can't be applied on it.
+1. A tag can be of type timestamp, since version 2.1.3.0, but its value must be fixed and arithmetic operations cannot be performed on it. Prior to version 2.1.3.0, tag types specified in TAGS could not be of type timestamp.
 2. The tag names specified in TAGS should NOT be the same as other columns.
 3. The tag names specified in TAGS should NOT be the same as any reserved keywords. (Please refer to [keywords](/taos-sql/keywords/).)
 4. The maximum number of tags specified in TAGS is 128, there must be at least one tag, and the total length of all tag columns should NOT exceed 16KB.
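
To make the rules above concrete, here is an illustrative sketch in the spirit of the `meters` demo schema used elsewhere in these docs (the table and tag names are hypothetical):

```sql
-- One mandatory timestamp column first, then data columns, then 1 to 128 tag columns.
CREATE STable IF NOT EXISTS meters
  (ts TIMESTAMP, current FLOAT, voltage INT)
  TAGS (location BINARY(64), groupId INT);
```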
@@ -76,7 +76,7 @@ ALTER STable stb_name DROP COLUMN field_name;
 
 ALTER STable stb_name MODIFY COLUMN field_name data_type(length);
 ```
 
-This command can be used to change (or increase, more specifically) the length of a column of variable length types, like BINARY or NCHAR.
+This command can be used to change (or more specifically, increase) the length of a column of variable length types, like BINARY or NCHAR.
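
As an illustration, a sketch of widening a variable-length column (the STable and column names here are made up):

```sql
-- Grow a BINARY column; only increasing the length is supported.
ALTER STable sensors MODIFY COLUMN note BINARY(64);
```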
 
 ## Change Tags of A STable
 
@@ -94,7 +94,7 @@ This command is used to add a new tag for a STable and specify the tag type.
 
 ALTER STable stb_name DROP TAG tag_name;
 ```
 
-The tag will be removed automatically from all the subtables created using the super table as template once a tag is removed from a super table.
+The tag will be removed automatically from all the subtables, created using the super table as template, once a tag is removed from a super table.
 
 ### Change A Tag
@@ -102,7 +102,7 @@ The tag will be removed automatically from all the subtables created using the s
 
 ALTER STable stb_name CHANGE TAG old_tag_name new_tag_name;
 ```
 
-The tag name will be changed automatically for all the subtables created using the super table as template once a tag name is changed for a super table.
+The tag name will be changed automatically for all the subtables, created using the super table as template, once a tag name is changed for a super table.
 
 ### Change Tag Length
@@ -110,7 +110,7 @@ The tag name will be changed automatically for all the subtables created using t
 
 ALTER STable stb_name MODIFY TAG tag_name data_type(length);
 ```
 
-This command can be used to change (or increase, more specifically) the length of a tag of variable length types, like BINARY or NCHAR.
+This command can be used to change (or more specifically, increase) the length of a tag of variable length types, like BINARY or NCHAR.
 
 :::note
 Changing tag values can be applied only to subtables. All other tag operations, like adding or removing a tag, can be applied only to a STable. If a new tag is added for a STable, the tag will be added with a NULL value for all its subtables.
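
For example, a sketch of changing a tag value, which per the note above must be done on a subtable (assuming a subtable `d1001` carrying a `location` tag):

```sql
-- Tag values are changed on the subtable, not on the STable itself.
ALTER TABLE d1001 SET TAG location='California.SanFrancisco';
```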

@@ -21,7 +21,7 @@ SELECT select_expr [, select_expr ...]
 
 ## Wildcard
 
-Wilcard \* can be used to specify all columns. The result includes only data columns for normal tables.
+Wildcard \* can be used to specify all columns. The result includes only data columns for normal tables.
 
 ```
 taos> SELECT * FROM d1001;
@@ -51,14 +51,14 @@ taos> SELECT * FROM meters;
 
 Query OK, 9 row(s) in set (0.002022s)
 ```
 
-Wildcard can be used with table name as prefix, both below SQL statements have same effects and return all columns.
+Wildcard can be used with table name as prefix. Both SQL statements below have the same effect and return all columns.
 
 ```SQL
 SELECT * FROM d1001;
 SELECT d1001.* FROM d1001;
 ```
 
-In JOIN query, however, with or without table name prefix will return different results. \* without table prefix will return all the columns of both tables, but \* with table name as prefix will return only the columns of that table.
+In a JOIN query, however, the results are different with or without a table name prefix. \* without table prefix will return all the columns of both tables, but \* with table name as prefix will return only the columns of that table.
 
 ```
 taos> SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
@@ -76,7 +76,7 @@ taos> SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
 
 Query OK, 1 row(s) in set (0.020443s)
 ```
 
-Wilcard \* can be used with some functions, but the result may be different depending on the function being used. For example, `count(*)` returns only one column, i.e. the number of rows; `first`, `last` and `last_row` return all columns of the selected row.
+Wildcard \* can be used with some functions, but the result may be different depending on the function being used. For example, `count(*)` returns only one column, i.e. the number of rows; `first`, `last` and `last_row` return all columns of the selected row.
 
 ```
 taos> SELECT COUNT(*) FROM d1001;
@@ -96,7 +96,7 @@ Query OK, 1 row(s) in set (0.000849s)
 
 ## Tags
 
-Starting from version 2.0.14, tag columns can be selected together with data columns when querying sub tables. Please note that, however, wildcard \* doesn't represent any tag column, that means tag columns must be specified explicitly like the example below.
+Starting from version 2.0.14, tag columns can be selected together with data columns when querying sub tables. Please note, however, that wildcard \* cannot be used to represent any tag column. This means that tag columns must be specified explicitly like the example below.
 
 ```
 taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
@@ -109,7 +109,7 @@ Query OK, 2 row(s) in set (0.003112s)
 
 ## Get distinct values
 
-`DISTINCT` keyword can be used to get all the unique values of tag columns from a super table, it can also be used to get all the unique values of data columns from a table or subtable.
+`DISTINCT` keyword can be used to get all the unique values of tag columns from a super table. It can also be used to get all the unique values of data columns from a table or subtable.
 
 ```sql
 SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
@@ -118,15 +118,15 @@ SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
 
 :::info
 
-1. Configuration parameter `maxNumOfDistinctRes` in `taos.cfg` is used to control the number of rows to output. The minimum configurable value is 100,000, the maximum configurable value is 100,000,000, the default value is 1000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
-2. It can't be guaranteed that the results selected by using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique because of the precision nature of floating numbers.
+1. Configuration parameter `maxNumOfDistinctRes` in `taos.cfg` is used to control the number of rows to output. The minimum configurable value is 100,000, the maximum configurable value is 100,000,000, the default value is 1,000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
+2. It can't be guaranteed that the results selected by using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique because of the precision errors in floating point numbers.
 3. `DISTINCT` can't be used in the sub-query of a nested query statement, and can't be used together with aggregate functions, `GROUP BY` or `JOIN` in the same SQL statement.
 
 :::
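
A hypothetical pair of queries showing both forms, using the demo schema assumed in the examples above:

```sql
SELECT DISTINCT location FROM meters;   -- unique tag values across a super table
SELECT DISTINCT voltage FROM d1001;     -- unique data values from one subtable
```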
 
 ## Column Names of Result Set
 
-When using `SELECT`, the column names in the result set will be same as that in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example
+When using `SELECT`, the column names in the result set will be the same as that in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example
 
 ```
 taos> SELECT ts, ts AS primary_key_ts FROM d1001;
@@ -161,7 +161,7 @@ SELECT * FROM d1001;
 
 ## Special Query
 
-Some special query functionalities can be performed without `FORM` sub-clause. For example, below statement can be used to get the current database in use.
+Some special query functions can be invoked without `FROM` sub-clause. For example, the statement below can be used to get the current database in use.
 
 ```
 taos> SELECT DATABASE();
@@ -181,7 +181,7 @@ taos> SELECT DATABASE();
 
 Query OK, 1 row(s) in set (0.000184s)
 ```
 
-Below statement can be used to get the version of client or server.
+The statement below can be used to get the version of client or server.
 
 ```
 taos> SELECT CLIENT_VERSION();
@@ -197,7 +197,7 @@ taos> SELECT SERVER_VERSION();
 
 Query OK, 1 row(s) in set (0.000077s)
 ```
 
-Below statement is used to check the server status. One integer, like `1`, is returned if the server status is OK, otherwise an error code is returned. This is compatible with the status check for TDengine from connection pool or 3rd party tools, and can avoid the problem of losing the connection from a connection pool when using the wrong heartbeat checking SQL statement.
+The statement below is used to check the server status. An integer, like `1`, is returned if the server status is OK, otherwise an error code is returned. This is compatible with the status check for TDengine from connection pool or 3rd party tools, and can avoid the problem of losing the connection from a connection pool when using the wrong heartbeat checking SQL statement.
 
 ```
 taos> SELECT SERVER_STATUS();
@@ -284,7 +284,7 @@ taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
 
 Query OK, 1 row(s) in set (0.001091s)
 ```
 
-- Wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operation can be performed on columns of number types, columns can be renamed in the result set.
+- Wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operation can be performed on columns of numerical types, columns can be renamed in the result set.
 - Arithmetic operation on columns can't be used in where clause. For example, `where a*2>6;` is not allowed but `where a>6/2;` can be used instead for the same purpose.
 - Arithmetic operation on columns can't be used as the objectives of select statement. For example, `select min(2*a) from t;` is not allowed but `select 2*min(a) from t;` can be used instead.
 - Logical operation can be used in `WHERE` clause to filter numeric values, wildcard can be used to filter string values.
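
As a sketch of the arithmetic rules in the bullets above (table `t` and column `a` are the hypothetical names the bullets themselves use):

```sql
-- Allowed: keep arithmetic out of WHERE and out of the aggregate argument.
SELECT 2*MIN(a) FROM t WHERE a > 6/2;
```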
@@ -318,13 +318,13 @@ Logical operations in below table can be used in the `where` clause to filter the resulting rows.
 
 - Operator `like` is used together with wildcards to match strings
   - '%' matches 0 or any number of characters, '\_' matches any single ASCII character.
   - `\_` is used to match the \_ in the string.
-  - The maximum length of wildcard string is 100 bytes from version 2.1.6.1 (before that the maximum length is 20 bytes). `maxWildCardsLength` in `taos.cfg` can be used to control this threshold. Too long wildcard string may slowdown the execution performance of `LIKE` operator.
+  - The maximum length of wildcard string is 100 bytes from version 2.1.6.1 (before that the maximum length is 20 bytes). `maxWildCardsLength` in `taos.cfg` can be used to control this threshold. A very long wildcard string may slow down the execution performance of `LIKE` operator.
 - `AND` keyword can be used to filter multiple columns simultaneously. AND/OR operation can be performed on single or multiple columns from version 2.3.0.0. However, before 2.3.0.0 `OR` can't be used on multiple columns.
 - For timestamp column, only one condition can be used; for other columns or tags, `OR` keyword can be used to combine multiple logical operators. For example, `((value > 20 AND value < 30) OR (value < 12))`.
 - From version 2.3.0.0, multiple conditions can be used on timestamp column, but the result set can only contain single time range.
 - From version 2.0.17.0, operator `BETWEEN AND` can be used in where clause, for example `WHERE col2 BETWEEN 1.5 AND 3.25` means the filter condition is equal to "1.5 ≤ col2 ≤ 3.25".
-- From version 2.1.4.0, operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating precision, only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
-- From version 2.3.0.0, regular expression is supported in the where clause with keyword `match` or `nmatch`, the regular expression is case insensitive.
+- From version 2.1.4.0, operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating point precision errors. Only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
+- From version 2.3.0.0, regular expression is supported in the where clause with keyword `match` or `nmatch`. The regular expression is case insensitive.
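
A sketch combining several of the operators above on the demo schema, on versions that support them (the exact values are illustrative):

```sql
SELECT * FROM meters
  WHERE ts >= NOW - 10m
    AND voltage BETWEEN 215 AND 235
    AND location IN ('California.SanFrancisco', 'California.SanDiego');
```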
 
 ## Regular Expression
@@ -364,7 +364,7 @@ FROM temp_STable t1, temp_STable t2
 
 WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
 ```
 
-Similary, join operation can be performed on the result set of multiple sub queries.
+Similarly, join operations can be performed on the result set of multiple sub queries.
 
 :::note
 Restrictions on join operation:
@@ -380,7 +380,7 @@ Restrictions on join operation:
 
 ## Nested Query
 
-Nested query is also called sub query, that means in a single SQL statement the result of inner query can be used as the data source of the outer query.
+Nested query is also called sub query. This means that in a single SQL statement the result of inner query can be used as the data source of the outer query.
 
 From 2.2.0.0, unassociated sub query can be used in the `FROM` clause. Unassociated means the sub query doesn't use the parameters in the parent query. More specifically, in the `tb_name_list` of `SELECT` statement, an independent SELECT statement can be used. So a complete nested query looks like:
@@ -390,14 +390,14 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
 
 :::info
 
-- Only one layer of nesting is allowed, that means no sub query is allowed in a sub query
-- The result set returned by the inner query will be used as a "virtual table" by the outer query, the "virtual table" can be renamed using `AS` keyword for easy reference in the outer query.
+- Only one layer of nesting is allowed, that means no sub query is allowed within a sub query
+- The result set returned by the inner query will be used as a "virtual table" by the outer query. The "virtual table" can be renamed using `AS` keyword for easy reference in the outer query.
 - Sub query is not allowed in continuous query.
 - JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query.
 - UNION operation is not allowed in either inner query or outer query.
-- The functionalities that can be used in the inner query is same as non-nested query.
-- `ORDER BY` inside the inner query doesn't make any sense but will slow down the query performance significantly, so please avoid such usage.
-- Compared to the non-nested query, the functionalities that can be used in the outer query have such restrictions as:
+- The functions that can be used in the inner query are the same as those that can be used in a non-nested query.
+- `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query.
+- Compared to the non-nested query, the functionality that can be used in the outer query has the following restrictions:
 - Functions
   - If the result set returned by the inner query doesn't contain timestamp column, then functions relying on timestamp can't be used in the outer query, like `TOP`, `BOTTOM`, `FIRST`, `LAST`, `DIFF`.
   - Functions that need to scan the data twice can't be used in the outer query, like `STDDEV`, `PERCENTILE`.
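
To make the shape of a legal nested query concrete, a hypothetical one-level example (the inner result acts as the virtual table, renamed with `AS`; the inner query keeps `ts` so timestamp-dependent functions stay usable in the outer query):

```sql
SELECT AVG(v) FROM (SELECT ts, voltage AS v FROM meters WHERE groupId = 2) AS inner_t;
```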
@@ -442,8 +442,8 @@ The sum of col1 and col2 for rows later than 2018-06-01 08:00:00.000 and whose c
 
 SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5;
 ```
 
-The rows in the past 10 minutes and whose col2 is bigger than 3.14 are selected and output to the result file `/home/testoutpu.csv` with below SQL statement:
+The rows in the past 10 minutes and whose col2 is bigger than 3.14 are selected and output to the result file `/home/testoutput.csv` with below SQL statement:
 
 ```SQL
-SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutpu.csv;
+SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutput.csv;
 ```

@@ -7,8 +7,6 @@ This section explains the syntax of SQL to perform operations on databases, tabl
 
 TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL.
 
 TDengine SQL doesn't support abbreviation for keywords. For example "SELECT" cannot be abbreviated as "SEL".
 
 Syntax Specifications used in this chapter:
 
 - The content inside <\> needs to be input by the user, excluding <\> itself.

@@ -81,7 +81,7 @@ int32_t mndGetLoad(SMnode *pMnode, SMnodeLoad *pLoad);
 
  * @param pMsg The request msg.
  * @return int32_t 0 for success, -1 for failure.
  */
-int32_t mndProcessMsg(SRpcMsg *pMsg);
+int32_t mndProcessRpcMsg(SRpcMsg *pMsg);
 int32_t mndProcessSyncMsg(SRpcMsg *pMsg);
 
 /**

@@ -52,23 +52,31 @@ typedef struct SUserAuthInfo {
 
   AUTH_TYPE type;
 } SUserAuthInfo;
 
+typedef struct SDbInfo {
+  int32_t vgVer;
+  int32_t tbNum;
+  int64_t dbId;
+} SDbInfo;
+
 typedef struct SCatalogReq {
-  SArray *pTableMeta;     // element is SNAME
   SArray *pDbVgroup;      // element is db full name
+  SArray *pDbCfg;         // element is db full name
+  SArray *pDbInfo;        // element is db full name
+  SArray *pTableMeta;     // element is SNAME
   SArray *pTableHash;     // element is SNAME
   SArray *pUdf;           // element is udf name
-  SArray *pDbCfg;         // element is db full name
   SArray *pIndex;         // element is index name
   SArray *pUser;          // element is SUserAuthInfo
   bool    qNodeRequired;  // valid qnode
 } SCatalogReq;
 
 typedef struct SMetaData {
-  SArray *pTableMeta;   // SArray<STableMeta*>
   SArray *pDbVgroup;    // SArray<SArray<SVgroupInfo>*>
+  SArray *pDbCfg;       // SArray<SDbCfgInfo>
+  SArray *pDbInfo;      // SArray<SDbInfo>
+  SArray *pTableMeta;   // SArray<STableMeta*>
   SArray *pTableHash;   // SArray<SVgroupInfo>
   SArray *pUdfList;     // SArray<SFuncInfo>
-  SArray *pDbCfg;       // SArray<SDbCfgInfo>
   SArray *pIndex;       // SArray<SIndexInfo>
   SArray *pUser;        // SArray<bool>
   SArray *pQnodeList;   // SArray<SQueryNodeAddr>
@@ -269,6 +277,8 @@ int32_t catalogChkAuth(SCatalog* pCtg, void *pRpc, const SEpSet* pMgmtEps, const
 
 int32_t catalogUpdateUserAuthInfo(SCatalog* pCtg, SGetUserAuthRsp* pAuth);
 
+int32_t catalogUpdateVgEpSet(SCatalog* pCtg, const char* dbFName, int32_t vgId, SEpSet *epSet);
+
 int32_t ctgdLaunchAsyncCall(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, uint64_t reqId);

@@ -43,6 +43,12 @@ typedef enum {
 
   TASK_TYPE_TEMP,
 } ETaskType;
 
+typedef enum {
+  TARGET_TYPE_MNODE = 1,
+  TARGET_TYPE_VNODE,
+  TARGET_TYPE_OTHER,
+} ETargetType;
+
 typedef struct STableComInfo {
   uint8_t numOfTags;  // the number of tags in schema
   uint8_t precision;  // the number of precision
@@ -126,11 +132,18 @@ typedef struct SDataBuf {
 
   void* handle;
 } SDataBuf;
 
+typedef struct STargetInfo {
+  ETargetType type;
+  char        dbFName[TSDB_DB_FNAME_LEN];  // used to update db's vgroup epset
+  int32_t     vgId;
+} STargetInfo;
+
 typedef int32_t (*__async_send_cb_fn_t)(void* param, const SDataBuf* pMsg, int32_t code);
 typedef int32_t (*__async_exec_fn_t)(void* param);
 
 typedef struct SMsgSendInfo {
   __async_send_cb_fn_t fp;      // async callback function
+  STargetInfo          target;  // for update epset
   void*                param;
   uint64_t             requestId;
   uint64_t             requestObjRefId;
@@ -729,23 +729,55 @@ static void destroySendMsgInfo(SMsgSendInfo* pMsgBody) {
 
   taosMemoryFreeClear(pMsgBody);
 }
 
+void updateTargetEpSet(SMsgSendInfo* pSendInfo, STscObj* pTscObj, SRpcMsg* pMsg, SEpSet* pEpSet) {
+  if (NULL == pEpSet) {
+    return;
+  }
+
+  switch (pSendInfo->target.type) {
+    case TARGET_TYPE_MNODE:
+      if (NULL == pTscObj) {
+        tscError("mnode epset changed but not able to update it, reqObjRefId:%" PRIx64, pSendInfo->requestObjRefId);
+        return;
+      }
+
+      updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, pEpSet);
+      break;
+    case TARGET_TYPE_VNODE: {
+      if (NULL == pTscObj) {
+        tscError("vnode epset changed but not able to update it, reqObjRefId:%" PRIx64, pSendInfo->requestObjRefId);
+        return;
+      }
+
+      SCatalog* pCatalog = NULL;
+      int32_t   code = catalogGetHandle(pTscObj->pAppInfo->clusterId, &pCatalog);
+      if (code != TSDB_CODE_SUCCESS) {
+        tscError("fail to get catalog handle, clusterId:%" PRIx64 ", error %s", pTscObj->pAppInfo->clusterId, tstrerror(code));
+        return;
+      }
+
+      catalogUpdateVgEpSet(pCatalog, pSendInfo->target.dbFName, pSendInfo->target.vgId, pEpSet);
+      break;
+    }
+    default:
+      tscDebug("epset changed, not updated, msgType %s", TMSG_INFO(pMsg->msgType));
+      break;
+  }
+}
+
 void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
   SMsgSendInfo* pSendInfo = (SMsgSendInfo*)pMsg->info.ahandle;
   assert(pMsg->info.ahandle != NULL);
+  SRequestObj* pRequest = NULL;
+  STscObj*     pTscObj = NULL;
 
   if (pSendInfo->requestObjRefId != 0) {
     SRequestObj* pRequest = (SRequestObj*)taosAcquireRef(clientReqRefPool, pSendInfo->requestObjRefId);
     assert(pRequest->self == pSendInfo->requestObjRefId);
 
     pRequest->metric.rsp = taosGetTimestampUs();
 
-    //STscObj* pTscObj = pRequest->pTscObj;
-    //if (pEpSet) {
-    //  if (!isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, pEpSet)) {
-    //    updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, pEpSet);
-    //  }
-    //}
-
+    pTscObj = pRequest->pTscObj;
     /*
      * There is not response callback function for submit response.
      * The actual inserted number of points is the first number.
@@ -762,6 +794,8 @@ void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
 
     taosReleaseRef(clientReqRefPool, pSendInfo->requestObjRefId);
   }
 
+  updateTargetEpSet(pSendInfo, pTscObj, pMsg, pEpSet);
+
   SDataBuf buf = {.len = pMsg->contLen, .pData = NULL, .handle = pMsg->info.handle};
 
   if (pMsg->contLen > 0) {

@@ -528,14 +528,14 @@ int32_t convertStringToTimestamp(int16_t type, char *inputData, int64_t timePrec
 
     }
     taosMemoryFree(newColData);
   } else if (type == TSDB_DATA_TYPE_NCHAR) {
-    newColData = taosMemoryCalloc(1, charLen / TSDB_NCHAR_SIZE + 1);
+    newColData = taosMemoryCalloc(1, charLen + TSDB_NCHAR_SIZE);
    int len = taosUcs4ToMbs((TdUcs4 *)varDataVal(inputData), charLen, newColData);
    if (len < 0){
      taosMemoryFree(newColData);
      return TSDB_CODE_FAILED;
    }
    newColData[len] = 0;
-    bool ret = taosParseTime(newColData, timeVal, len + 1, (int32_t)timePrec, tsDaylight);
+    int32_t ret = taosParseTime(newColData, timeVal, len + 1, (int32_t)timePrec, tsDaylight);
    if (ret != TSDB_CODE_SUCCESS) {
      taosMemoryFree(newColData);
      return ret;
@@ -40,7 +40,7 @@ static void mmProcessQueue(SQueueInfo *pInfo, SRpcMsg *pMsg) {
 
       break;
     default:
       pMsg->info.node = pMgmt->pMnode;
-      code = mndProcessMsg(pMsg);
+      code = mndProcessRpcMsg(pMsg);
   }
 
   if (IsReq(pMsg) && pMsg->info.handle != NULL && code != TSDB_CODE_ACTION_IN_PROGRESS) {

@@ -75,13 +75,12 @@ typedef struct {
 
 } STelemMgmt;
 
 typedef struct {
-  SWal   *pWal;
-  sem_t   syncSem;
-  int64_t sync;
-  bool    standby;
-  bool    restored;
-  int32_t errCode;
-  int32_t transId;
+  SWal   *pWal;
+  sem_t   syncSem;
+  int64_t sync;
+  bool    standby;
+  int32_t errCode;
+  int32_t transId;
 } SSyncMgmt;
 
 typedef struct {
@@ -90,34 +89,45 @@ typedef struct {
 
 } SGrantInfo;
 
 typedef struct SMnode {
-  int32_t       selfDnodeId;
-  int64_t       clusterId;
-  TdThread      thread;
-  bool          deploy;
-  bool          stopped;
-  int8_t        replica;
-  int8_t        selfIndex;
-  SReplica      replicas[TSDB_MAX_REPLICA];
-  char         *path;
-  int64_t       checkTime;
-  SSdb         *pSdb;
-  SMgmtWrapper *pWrapper;
-  SArray       *pSteps;
-  SQHandle     *pQuery;
-  SShowMgmt     showMgmt;
-  SProfileMgmt  profileMgmt;
-  STelemMgmt    telemMgmt;
-  SSyncMgmt     syncMgmt;
-  SHashObj     *infosMeta;
-  SHashObj     *perfsMeta;
-  SGrantInfo    grant;
-  MndMsgFp      msgFp[TDMT_MAX];
-  SMsgCb        msgCb;
+  int32_t        selfDnodeId;
+  int64_t        clusterId;
+  TdThread       thread;
+  TdThreadRwlock lock;
+  int32_t        rpcRef;
+  int32_t        syncRef;
+  bool           stopped;
+  bool           restored;
+  bool           deploy;
+  int8_t         replica;
+  int8_t         selfIndex;
+  SReplica       replicas[TSDB_MAX_REPLICA];
+  char          *path;
+  int64_t        checkTime;
+  SSdb          *pSdb;
+  SArray        *pSteps;
+  SQHandle      *pQuery;
+  SHashObj      *infosMeta;
+  SHashObj      *perfsMeta;
+  SShowMgmt      showMgmt;
+  SProfileMgmt   profileMgmt;
+  STelemMgmt     telemMgmt;
+  SSyncMgmt      syncMgmt;
+  SGrantInfo     grant;
+  MndMsgFp       msgFp[TDMT_MAX];
+  SMsgCb         msgCb;
 } SMnode;
 
 void    mndSetMsgHandle(SMnode *pMnode, tmsg_t msgType, MndMsgFp fp);
 int64_t mndGenerateUid(char *name, int32_t len);
 
+int32_t mndAcquireRpcRef(SMnode *pMnode);
+void    mndReleaseRpcRef(SMnode *pMnode);
+void    mndSetRestore(SMnode *pMnode, bool restored);
+void    mndSetStop(SMnode *pMnode);
+bool    mndGetStop(SMnode *pMnode);
+int32_t mndAcquireSyncRef(SMnode *pMnode);
+void    mndReleaseSyncRef(SMnode *pMnode);
+
 #ifdef __cplusplus
 }
 #endif

@@ -85,7 +85,7 @@ static void *mndThreadFp(void *param) {
 
   while (1) {
     lastTime++;
     taosMsleep(100);
-    if (pMnode->stopped) break;
+    if (mndGetStop(pMnode)) break;
 
     if (lastTime % (tsTransPullupInterval * 10) == 0) {
       mndPullupTrans(pMnode);
@@ -118,7 +118,6 @@ static int32_t mndInitTimer(SMnode *pMnode) {
 
 }
 
 static void mndCleanupTimer(SMnode *pMnode) {
-  pMnode->stopped = true;
   if (taosCheckPthreadValid(pMnode->thread)) {
     taosThreadJoin(pMnode->thread, NULL);
     taosThreadClear(&pMnode->thread);
@@ -335,15 +334,19 @@ void mndClose(SMnode *pMnode) {
 
 int32_t mndStart(SMnode *pMnode) {
   mndSyncStart(pMnode);
   if (pMnode->deploy) {
-    if (sdbDeploy(pMnode->pSdb) != 0) return -1;
-    pMnode->syncMgmt.restored = true;
+    if (sdbDeploy(pMnode->pSdb) != 0) {
+      mError("failed to deploy sdb while start mnode");
+      return -1;
+    }
+    mndSetRestore(pMnode, true);
   }
   return mndInitTimer(pMnode);
 }
 
 void mndStop(SMnode *pMnode) {
+  mndSetStop(pMnode);
   mndSyncStop(pMnode);
-  return mndCleanupTimer(pMnode);
+  mndCleanupTimer(pMnode);
 }
 
 int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {
@@ -362,6 +365,11 @@ int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {
 
     return TAOS_SYNC_PROPOSE_OTHER_ERROR;
   }
 
+  if (mndAcquireSyncRef(pMnode) != 0) {
+    mError("failed to process sync msg:%p type:%s since %s", pMsg, TMSG_INFO(pMsg->msgType), terrstr());
+    return TAOS_SYNC_PROPOSE_OTHER_ERROR;
+  }
+
   char  logBuf[512];
   char *syncNodeStr = sync2SimpleStr(pMgmt->sync);
   snprintf(logBuf, sizeof(logBuf), "==vnodeProcessSyncReq== msgType:%d, syncNode: %s", pMsg->msgType, syncNodeStr);
@@ -405,59 +413,45 @@ int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {
 
     code = TAOS_SYNC_PROPOSE_OTHER_ERROR;
   }
 
+  mndReleaseSyncRef(pMnode);
   return code;
 }
 
-static int32_t mndCheckMnodeMaster(SRpcMsg *pMsg) {
-  if (!IsReq(pMsg)) return 0;
-  if (mndIsMaster(pMsg->info.node)) return 0;
+static int32_t mndCheckMnodeState(SRpcMsg *pMsg) {
+  if (mndAcquireRpcRef(pMsg->info.node) == 0) return 0;
 
-  if (pMsg->msgType == TDMT_MND_MQ_TIMER || pMsg->msgType == TDMT_MND_TELEM_TIMER ||
-      pMsg->msgType == TDMT_MND_TRANS_TIMER) {
-    return -1;
-  }
-  mError("msg:%p, failed to check master since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
-         TMSG_INFO(pMsg->msgType));
+  if (IsReq(pMsg) && pMsg->msgType != TDMT_MND_MQ_TIMER && pMsg->msgType != TDMT_MND_TELEM_TIMER &&
+      pMsg->msgType != TDMT_MND_TRANS_TIMER) {
+    mError("msg:%p, failed to check mnode state since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
+           TMSG_INFO(pMsg->msgType));
 
-  SEpSet epSet = {0};
-  mndGetMnodeEpSet(pMsg->info.node, &epSet);
+    SEpSet epSet = {0};
+    mndGetMnodeEpSet(pMsg->info.node, &epSet);
 
-#if 0
-  mTrace("msg:%p, is redirected, num:%d use:%d", pMsg, epSet.numOfEps, epSet.inUse);
-  for (int32_t i = 0; i < epSet.numOfEps; ++i) {
-    mTrace("mnode index:%d %s:%u", i, epSet.eps[i].fqdn, epSet.eps[i].port);
-    if (strcmp(epSet.eps[i].fqdn, tsLocalFqdn) == 0 && epSet.eps[i].port == tsServerPort) {
-      epSet.inUse = (i + 1) % epSet.numOfEps;
-      int32_t contLen = tSerializeSEpSet(NULL, 0, &epSet);
-      pMsg->info.rsp = rpcMallocCont(contLen);
-      if (pMsg->info.rsp != NULL) {
-        tSerializeSEpSet(pMsg->info.rsp, contLen, &epSet);
-        pMsg->info.rspLen = contLen;
-        terrno = TSDB_CODE_RPC_REDIRECT;
-      } else {
-        terrno = TSDB_CODE_OUT_OF_MEMORY;
-      }
-    }
-  }
-#endif
+    int32_t contLen = tSerializeSEpSet(NULL, 0, &epSet);
+    pMsg->info.rsp = rpcMallocCont(contLen);
+    if (pMsg->info.rsp != NULL) {
+      tSerializeSEpSet(pMsg->info.rsp, contLen, &epSet);
+      pMsg->info.rspLen = contLen;
+      terrno = TSDB_CODE_RPC_REDIRECT;
+    } else {
+      terrno = TSDB_CODE_OUT_OF_MEMORY;
+    }
 
   return -1;
 }
 
-static int32_t mndCheckRequestValid(SRpcMsg *pMsg) {
+static int32_t mndCheckMsgContent(SRpcMsg *pMsg) {
   if (!IsReq(pMsg)) return 0;
   if (pMsg->contLen != 0 && pMsg->pCont != NULL) return 0;
 
-  mError("msg:%p, failed to valid request, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
+  mError("msg:%p, failed to check msg content, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
   terrno = TSDB_CODE_INVALID_MSG_LEN;
   return -1;
 }
 
-int32_t mndProcessMsg(SRpcMsg *pMsg) {
-  if (mndCheckMnodeMaster(pMsg) != 0) return -1;
-  if (mndCheckRequestValid(pMsg) != 0) return -1;
-
+int32_t mndProcessRpcMsg(SRpcMsg *pMsg) {
   SMnode  *pMnode = pMsg->info.node;
   MndMsgFp fp = pMnode->msgFp[TMSG_INDEX(pMsg->msgType)];
   if (fp == NULL) {
@@ -466,8 +460,13 @@ int32_t mndProcessMsg(SRpcMsg *pMsg) {
 
     return -1;
   }
 
-  mTrace("msg:%p, will be processed in mnode, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
+  if (mndCheckMsgContent(pMsg) != 0) return -1;
+  if (mndCheckMnodeState(pMsg) != 0) return -1;
+
+  mTrace("msg:%p, start to process in mnode, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
   int32_t code = (*fp)(pMsg);
+  mndReleaseRpcRef(pMnode);
 
   if (code == TSDB_CODE_ACTION_IN_PROGRESS) {
     mTrace("msg:%p, won't response immediately since in progress", pMsg);
   } else if (code == 0) {
@@ -476,6 +475,7 @@ int32_t mndProcessMsg(SRpcMsg *pMsg) {
 
     mError("msg:%p, failed to process since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
            TMSG_INFO(pMsg->msgType));
   }
 
   return code;
 }

@@ -502,7 +502,7 @@ int64_t mndGenerateUid(char *name, int32_t len) {
 
 int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgroupInfo *pVgroupInfo,
                           SMonGrantInfo *pGrantInfo) {
-  if (!mndIsMaster(pMnode)) return -1;
+  if (mndAcquireRpcRef(pMnode) != 0) return -1;
 
   SSdb   *pSdb = pMnode->pSdb;
   int64_t ms = taosGetTimestampMs();
@@ -511,6 +511,7 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
 
   pClusterInfo->mnodes = taosArrayInit(sdbGetSize(pSdb, SDB_MNODE), sizeof(SMonMnodeDesc));
   pVgroupInfo->vgroups = taosArrayInit(sdbGetSize(pSdb, SDB_VGROUP), sizeof(SMonVgroupDesc));
   if (pClusterInfo->dnodes == NULL || pClusterInfo->mnodes == NULL || pVgroupInfo->vgroups == NULL) {
+    mndReleaseRpcRef(pMnode);
     return -1;
   }
 
@@ -605,6 +606,7 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
 
     pGrantInfo->timeseries_total = INT32_MAX;
   }
 
+  mndReleaseRpcRef(pMnode);
   return 0;
 }

@@ -612,3 +614,76 @@ int32_t mndGetLoad(SMnode *pMnode, SMnodeLoad *pLoad) {
 
   pLoad->syncState = syncGetMyRole(pMnode->syncMgmt.sync);
   return 0;
 }
+
+int32_t mndAcquireRpcRef(SMnode *pMnode) {
+  int32_t code = 0;
+  taosThreadRwlockRdlock(&pMnode->lock);
+  if (pMnode->stopped) {
+    terrno = TSDB_CODE_APP_NOT_READY;
+    code = -1;
+  } else if (!mndIsMaster(pMnode)) {
+    code = -1;
+  } else {
+    int32_t ref = atomic_add_fetch_32(&pMnode->rpcRef, 1);
+    mTrace("mnode rpc is acquired, ref:%d", ref);
+  }
+  taosThreadRwlockUnlock(&pMnode->lock);
+  return code;
+}
+
+void mndReleaseRpcRef(SMnode *pMnode) {
+  taosThreadRwlockRdlock(&pMnode->lock);
+  int32_t ref = atomic_sub_fetch_32(&pMnode->rpcRef, 1);
+  mTrace("mnode rpc is released, ref:%d", ref);
+  taosThreadRwlockUnlock(&pMnode->lock);
+}
+
+void mndSetRestore(SMnode *pMnode, bool restored) {
+  if (restored) {
+    taosThreadRwlockWrlock(&pMnode->lock);
+    pMnode->restored = true;
+    taosThreadRwlockUnlock(&pMnode->lock);
+    mTrace("mnode set restored:%d", restored);
+  } else {
+    taosThreadRwlockWrlock(&pMnode->lock);
+    pMnode->restored = false;
+    taosThreadRwlockUnlock(&pMnode->lock);
+    mTrace("mnode set restored:%d", restored);
+    while (1) {
+      if (pMnode->rpcRef <= 0) break;
+      taosMsleep(3);
+    }
+  }
+}
+
+bool mndGetRestored(SMnode *pMnode) { return pMnode->restored; }
+
+void mndSetStop(SMnode *pMnode) {
+  taosThreadRwlockWrlock(&pMnode->lock);
+  pMnode->stopped = true;
+  taosThreadRwlockUnlock(&pMnode->lock);
+  mTrace("mnode set stopped");
+}
+
+bool mndGetStop(SMnode *pMnode) { return pMnode->stopped; }
+
+int32_t mndAcquireSyncRef(SMnode *pMnode) {
+  int32_t code = 0;
+  taosThreadRwlockRdlock(&pMnode->lock);
+  if (pMnode->stopped) {
+    terrno = TSDB_CODE_APP_NOT_READY;
+    code = -1;
+  } else {
+    int32_t ref = atomic_add_fetch_32(&pMnode->syncRef, 1);
+    mTrace("mnode sync is acquired, ref:%d", ref);
+  }
+  taosThreadRwlockUnlock(&pMnode->lock);
+  return code;
+}
+
+void mndReleaseSyncRef(SMnode *pMnode) {
+  taosThreadRwlockRdlock(&pMnode->lock);
+  int32_t ref = atomic_sub_fetch_32(&pMnode->syncRef, 1);
+  mTrace("mnode sync is released, ref:%d", ref);
+  taosThreadRwlockUnlock(&pMnode->lock);
+}

@@ -63,7 +63,7 @@ void mndRestoreFinish(struct SSyncFSM *pFsm) {
 
   if (!pMnode->deploy) {
     mInfo("mnode sync restore finished");
     mndTransPullup(pMnode);
-    pMnode->syncMgmt.restored = true;
+    mndSetRestore(pMnode, true);
   }
 }
 
@@ -85,7 +85,7 @@ int32_t mndSnapshotRead(struct SSyncFSM *pFsm, const SSnapshot *pSnapshot, void
 
 int32_t mndSnapshotApply(struct SSyncFSM *pFsm, const SSnapshot *pSnapshot, char *pBuf, int32_t len) {
   SMnode *pMnode = pFsm->data;
-  pMnode->syncMgmt.restored = false;
+  mndSetRestore(pMnode, false);
   mInfo("start to apply snapshot to sdb, len:%d", len);
 
   int32_t code = sdbApplySnapshot(pMnode->pSdb, pBuf, len);
@@ -93,7 +93,7 @@ int32_t mndSnapshotApply(struct SSyncFSM *pFsm, const SSnapshot *pSnapshot, char
 
     mError("failed to apply snapshot to sdb, len:%d", len);
   } else {
     mInfo("successfully to apply snapshot to sdb, len:%d", len);
-    pMnode->syncMgmt.restored = true;
+    mndSetRestore(pMnode, true);
   }
 
   // taosMemoryFree(pBuf);
@@ -250,7 +250,7 @@ bool mndIsMaster(SMnode *pMnode) {
 
     return false;
   }
 
-  if (!pMgmt->restored) {
+  if (!pMnode->restored) {
    terrno = TSDB_CODE_APP_NOT_READY;
    return false;
  }

@@ -166,7 +166,6 @@ typedef struct SSdbRow {
 
 typedef struct SSdb {
   SMnode  *pMnode;
   char    *currDir;
-  char    *syncDir;
   char    *tmpDir;
   int64_t  lastCommitVer;
   int64_t  curVer;
@@ -182,6 +181,7 @@ typedef struct SSdb {
 
   SdbDeployFp deployFps[SDB_MAX];
   SdbEncodeFp encodeFps[SDB_MAX];
   SdbDecodeFp decodeFps[SDB_MAX];
+  TdThreadMutex filelock;
 } SSdb;
 
 typedef struct SSdbIter {

@@ -56,6 +56,7 @@ SSdb *sdbInit(SSdbOpt *pOption) {
 
   pSdb->curTerm = -1;
   pSdb->lastCommitVer = -1;
   pSdb->pMnode = pOption->pMnode;
+  taosThreadMutexInit(&pSdb->filelock, NULL);
   mDebug("sdb init successfully");
   return pSdb;
 }
@@ -69,10 +70,6 @@ void sdbCleanup(SSdb *pSdb) {
 
     taosMemoryFreeClear(pSdb->currDir);
   }
 
-  if (pSdb->syncDir != NULL) {
-    taosMemoryFreeClear(pSdb->syncDir);
-  }
-
   if (pSdb->tmpDir != NULL) {
     taosMemoryFreeClear(pSdb->tmpDir);
   }
@@ -104,6 +101,7 @@ void sdbCleanup(SSdb *pSdb) {
 
     mDebug("sdb table:%s is cleaned up", sdbTableName(i));
   }
 
+  taosThreadMutexDestroy(&pSdb->filelock);
   taosMemoryFree(pSdb);
   mDebug("sdb is cleaned up");
 }

@@ -22,13 +22,14 @@
 
 #define SDB_RESERVE_SIZE 512
 #define SDB_FILE_VER     1
 
-static int32_t sdbRunDeployFp(SSdb *pSdb) {
+static int32_t sdbDeployData(SSdb *pSdb) {
   mDebug("start to deploy sdb");
 
   for (int32_t i = SDB_MAX - 1; i >= 0; --i) {
     SdbDeployFp fp = pSdb->deployFps[i];
     if (fp == NULL) continue;
 
     mDebug("start to deploy sdb:%s", sdbTableName(i));
     if ((*fp)(pSdb->pMnode) != 0) {
       mError("failed to deploy sdb:%s since %s", sdbTableName(i), terrstr());
       return -1;
@@ -39,6 +40,39 @@ static int32_t sdbRunDeployFp(SSdb *pSdb) {
 
   return 0;
 }
 
+static void sdbResetData(SSdb *pSdb) {
+  mDebug("start to reset sdb");
+
+  for (ESdbType i = 0; i < SDB_MAX; ++i) {
+    SHashObj *hash = pSdb->hashObjs[i];
+    if (hash == NULL) continue;
+
+    SSdbRow **ppRow = taosHashIterate(hash, NULL);
+    while (ppRow != NULL) {
+      SSdbRow *pRow = *ppRow;
+      if (pRow == NULL) continue;
+
+      sdbFreeRow(pSdb, pRow, true);
+      ppRow = taosHashIterate(hash, ppRow);
+    }
+  }
+
+  for (ESdbType i = 0; i < SDB_MAX; ++i) {
+    SHashObj *hash = pSdb->hashObjs[i];
+    if (hash == NULL) continue;
+
+    taosHashClear(pSdb->hashObjs[i]);
+    pSdb->tableVer[i] = 0;
+    pSdb->maxId[i] = 0;
+    mDebug("sdb:%s is reset", sdbTableName(i));
+  }
+
+  pSdb->curVer = -1;
+  pSdb->curTerm = -1;
+  pSdb->lastCommitVer = -1;
+  mDebug("sdb reset successfully");
+}
+
 static int32_t sdbReadFileHead(SSdb *pSdb, TdFilePtr pFile) {
   int64_t sver = 0;
   int32_t ret = taosReadFile(pFile, &sver, sizeof(int64_t));
@@ -169,11 +203,15 @@ static int32_t sdbWriteFileHead(SSdb *pSdb, TdFilePtr pFile) {
 
   return 0;
 }
 
-int32_t sdbReadFile(SSdb *pSdb) {
+static int32_t sdbReadFileImp(SSdb *pSdb) {
   int64_t offset = 0;
   int32_t code = 0;
   int32_t readLen = 0;
   int64_t ret = 0;
+  char    file[PATH_MAX] = {0};
+
+  snprintf(file, sizeof(file), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
+  mDebug("start to read file:%s", file);
 
   SSdbRaw *pRaw = taosMemoryMalloc(WAL_MAX_SIZE + 100);
   if (pRaw == NULL) {
@@ -182,10 +220,6 @@ int32_t sdbReadFile(SSdb *pSdb) {
 
     return -1;
   }
 
-  char file[PATH_MAX] = {0};
-  snprintf(file, sizeof(file), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
-  mDebug("start to read file:%s", file);
-
   TdFilePtr pFile = taosOpenFile(file, TD_FILE_READ);
   if (pFile == NULL) {
     taosMemoryFree(pRaw);
@@ -196,8 +230,6 @@ int32_t sdbReadFile(SSdb *pSdb) {
 
   if (sdbReadFileHead(pSdb, pFile) != 0) {
     mError("failed to read file:%s head since %s", file, terrstr());
-    pSdb->curVer = -1;
-    pSdb->curTerm = -1;
     taosMemoryFree(pRaw);
     taosCloseFile(&pFile);
     return -1;
@@ -264,6 +296,20 @@ _OVER:
 
   return code;
 }
 
+int32_t sdbReadFile(SSdb *pSdb) {
+  taosThreadMutexLock(&pSdb->filelock);
+
+  sdbResetData(pSdb);
+  int32_t code = sdbReadFileImp(pSdb);
+  if (code != 0) {
+    mError("failed to read sdb since %s", terrstr());
+    sdbResetData(pSdb);
+  }
+
+  taosThreadMutexUnlock(&pSdb->filelock);
+  return code;
+}
+
 static int32_t sdbWriteFileImp(SSdb *pSdb) {
   int32_t code = 0;
 
@@ -378,15 +424,21 @@ int32_t sdbWriteFile(SSdb *pSdb) {
 
     return 0;
   }
 
-  return sdbWriteFileImp(pSdb);
+  taosThreadMutexLock(&pSdb->filelock);
+  int32_t code = sdbWriteFileImp(pSdb);
+  if (code != 0) {
+    mError("failed to write sdb since %s", terrstr());
+  }
+  taosThreadMutexUnlock(&pSdb->filelock);
+  return code;
 }
 
 int32_t sdbDeploy(SSdb *pSdb) {
-  if (sdbRunDeployFp(pSdb) != 0) {
+  if (sdbDeployData(pSdb) != 0) {
     return -1;
   }
 
-  if (sdbWriteFileImp(pSdb) != 0) {
+  if (sdbWriteFile(pSdb) != 0) {
     return -1;
   }
 
@@ -397,13 +449,16 @@ static SSdbIter *sdbOpenIter(SSdb *pSdb) {
 
   char datafile[PATH_MAX] = {0};
   char tmpfile[PATH_MAX] = {0};
   snprintf(datafile, sizeof(datafile), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
-  snprintf(tmpfile, sizeof(datafile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
+  snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
 
+  taosThreadMutexLock(&pSdb->filelock);
   if (taosCopyFile(datafile, tmpfile) != 0) {
+    taosThreadMutexUnlock(&pSdb->filelock);
     terrno = TAOS_SYSTEM_ERROR(errno);
     mError("failed to copy file %s to %s since %s", datafile, tmpfile, terrstr());
     return NULL;
   }
+  taosThreadMutexUnlock(&pSdb->filelock);
 
   SSdbIter *pIter = taosMemoryCalloc(1, sizeof(SSdbIter));
   if (pIter == NULL) {
@@ -422,11 +477,16 @@ static SSdbIter *sdbOpenIter(SSdb *pSdb) {
 
   return pIter;
 }
 
-static void sdbCloseIter(SSdbIter *pIter) {
+static void sdbCloseIter(SSdb *pSdb, SSdbIter *pIter) {
   if (pIter == NULL) return;
   if (pIter->file != NULL) {
     taosCloseFile(&pIter->file);
   }
 
+  char tmpfile[PATH_MAX] = {0};
+  snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
+  taosRemoveFile(tmpfile);
+
   taosMemoryFree(pIter);
   mInfo("sdbiter:%p, is closed", pIter);
 }
@@ -453,15 +513,14 @@ static SSdbIter *sdbGetIter(SSdb *pSdb, SSdbIter **ppIter) {
 
 }
 
 int32_t sdbReadSnapshot(SSdb *pSdb, SSdbIter **ppIter, char **ppBuf, int32_t *len) {
-  const int32_t maxlen = 100;
-
   SSdbIter *pIter = sdbGetIter(pSdb, ppIter);
   if (pIter == NULL) return -1;
 
-  char *pBuf = taosMemoryCalloc(1, maxlen);
+  int32_t maxlen = 100;
+  char   *pBuf = taosMemoryCalloc(1, maxlen);
   if (pBuf == NULL) {
     terrno = TSDB_CODE_OUT_OF_MEMORY;
-    sdbCloseIter(pIter);
+    sdbCloseIter(pSdb, pIter);
     return -1;
   }
 
@@ -472,7 +531,7 @@ int32_t sdbReadSnapshot(SSdb *pSdb, SSdbIter **ppIter, char **ppBuf, int32_t *le
 
     *ppBuf = NULL;
     *len = 0;
     *ppIter = NULL;
-    sdbCloseIter(pIter);
+    sdbCloseIter(pSdb, pIter);
     taosMemoryFree(pBuf);
     return -1;
   } else if (readlen == 0) {
@@ -480,7 +539,7 @@ int32_t sdbReadSnapshot(SSdb *pSdb, SSdbIter **ppIter, char **ppBuf, int32_t *le
 
     *ppBuf = NULL;
     *len = 0;
     *ppIter = NULL;
-    sdbCloseIter(pIter);
+    sdbCloseIter(pSdb, pIter);
     taosMemoryFree(pBuf);
     return 0;
   } else if ((readlen < maxlen && errno != 0) || readlen == maxlen) {
@@ -494,7 +553,7 @@ int32_t sdbReadSnapshot(SSdb *pSdb, SSdbIter **ppIter, char **ppBuf, int32_t *le
 
     *ppBuf = pBuf;
     *len = readlen;
     *ppIter = NULL;
-    sdbCloseIter(pIter);
+    sdbCloseIter(pSdb, pIter);
     return 0;
   } else {
     // impossible
@@ -502,7 +561,7 @@ int32_t sdbReadSnapshot(SSdb *pSdb, SSdbIter **ppIter, char **ppBuf, int32_t *le
 
     *ppBuf = NULL;
     *len = 0;
     *ppIter = NULL;
-    sdbCloseIter(pIter);
+    sdbCloseIter(pSdb, pIter);
     taosMemoryFree(pBuf);
     return -1;
   }
@@ -512,7 +571,7 @@ int32_t sdbApplySnapshot(SSdb *pSdb, char *pBuf, int32_t len) {
 
   char datafile[PATH_MAX] = {0};
   char tmpfile[PATH_MAX] = {0};
   snprintf(datafile, sizeof(datafile), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
-  snprintf(tmpfile, sizeof(datafile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
+  snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
 
   TdFilePtr pFile = taosOpenFile(tmpfile, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_TRUNC);
   if (pFile == NULL) {

@@ -155,44 +155,52 @@ int metaTbCursorNext(SMTbCursor *pTbCur) {
 
 }
 
 SSchemaWrapper *metaGetTableSchema(SMeta *pMeta, tb_uid_t uid, int32_t sver, bool isinline) {
-  void           *pKey = NULL;
-  void           *pVal = NULL;
-  int             kLen = 0;
-  int             vLen = 0;
-  int             ret;
-  SSkmDbKey       skmDbKey;
-  SSchemaWrapper *pSW = NULL;
-  SSchema        *pSchema = NULL;
-  void           *pBuf;
-  SDecoder        coder = {0};
+  void           *pData = NULL;
+  int             nData = 0;
+  int64_t         version;
+  SSchemaWrapper  schema = {0};
+  SSchemaWrapper *pSchema = NULL;
+  SDecoder        dc = {0};
 
-  // fetch
-  skmDbKey.uid = uid;
-  skmDbKey.sver = sver;
-  pKey = &skmDbKey;
-  kLen = sizeof(skmDbKey);
   metaRLock(pMeta);
-  ret = tdbTbGet(pMeta->pSkmDb, pKey, kLen, &pVal, &vLen);
-  metaULock(pMeta);
-  if (ret < 0) {
-    return NULL;
+  if (sver < 0) {
+    if (tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData) < 0) {
+      goto _err;
+    }
+
+    version = *(int64_t *)pData;
+
+    tdbTbGet(pMeta->pTbDb, &(STbDbKey){.uid = uid, .version = version}, sizeof(STbDbKey), &pData, &nData);
+
+    SMetaEntry me = {0};
+    tDecoderInit(&dc, pData, nData);
+    metaDecodeEntry(&dc, &me);
+    if (me.type == TSDB_SUPER_TABLE) {
+      pSchema = tCloneSSchemaWrapper(&me.stbEntry.schemaRow);
+    } else if (me.type == TSDB_NORMAL_TABLE) {
+    } else {
+      ASSERT(0);
+    }
+    tDecoderClear(&dc);
+  } else {
+    if (tdbTbGet(pMeta->pSkmDb, &(SSkmDbKey){.uid = uid, .sver = sver}, sizeof(SSkmDbKey), &pData, &nData) < 0) {
+      goto _err;
+    }
+
+    tDecoderInit(&dc, pData, nData);
+    tDecodeSSchemaWrapper(&dc, &schema);
+    pSchema = tCloneSSchemaWrapper(&schema);
+    tDecoderClear(&dc);
   }
 
-  // decode
-  pBuf = pVal;
-  pSW = taosMemoryMalloc(sizeof(SSchemaWrapper));
+  metaULock(pMeta);
+  tdbFree(pData);
+  return pSchema;
 
-  tDecoderInit(&coder, pVal, vLen);
-  tDecodeSSchemaWrapper(&coder, pSW);
-  pSchema = taosMemoryMalloc(sizeof(SSchema) * pSW->nCols);
-  memcpy(pSchema, pSW->pSchema, sizeof(SSchema) * pSW->nCols);
-  tDecoderClear(&coder);
-
-  pSW->pSchema = pSchema;
-
-  tdbFree(pVal);
-
-  return pSW;
+_err:
+  metaULock(pMeta);
+  tdbFree(pData);
+  return NULL;
 }
 
 struct SMCtbCursor {

|
|||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include "vnode.h"
|
||||
#include "tsdb.h"
|
||||
|
||||
#define EXTRA_BYTES 2
|
||||
|
@ -140,12 +141,6 @@ typedef struct STsdbReadHandle {
|
|||
STSchema* pSchema;
|
||||
} STsdbReadHandle;
|
||||
|
||||
typedef struct STableGroupSupporter {
|
||||
int32_t numOfCols;
|
||||
SColIndex* pCols;
|
||||
SSchema* pTagSchema;
|
||||
} STableGroupSupporter;
|
||||
|
||||
static STimeWindow updateLastrowForEachGroup(STableListInfo* pList);
|
||||
static int32_t checkForCachedLastRow(STsdbReadHandle* pTsdbReadHandle, STableListInfo* pList);
|
||||
static int32_t checkForCachedLast(STsdbReadHandle* pTsdbReadHandle);
|
||||
|
@@ -211,12 +206,6 @@ int64_t tsdbGetNumOfRowsInMemTable(tsdbReaderT* pHandle) {
 
   return rows;
 }
 
-  // STableData* pMem = NULL;
-  // STableData* pIMem = NULL;
-
-  // SMemTable* pMemT = pMemRef->snapshot.mem;
-  // SMemTable* pIMemT = pMemRef->snapshot.imem;
-
   size_t size = taosArrayGetSize(pTsdbReadHandle->pTableCheckInfo);
   for (int32_t i = 0; i < size; ++i) {
     STableCheckInfo* pCheckInfo = taosArrayGet(pTsdbReadHandle->pTableCheckInfo, i);
@@ -447,10 +436,10 @@ static STsdbReadHandle* tsdbQueryTablesImpl(SVnode* pVnode, SQueryTableDataCond*
 
   }
 
   pReadHandle->suppInfo.defaultLoadColumn = getDefaultLoadColumns(pReadHandle, true);
-  pReadHandle->suppInfo.slotIds =
-      taosMemoryMalloc(sizeof(int32_t) * taosArrayGetSize(pReadHandle->suppInfo.defaultLoadColumn));
-  pReadHandle->suppInfo.plist =
-      taosMemoryCalloc(taosArrayGetSize(pReadHandle->suppInfo.defaultLoadColumn), POINTER_BYTES);
+
+  size_t size = taosArrayGetSize(pReadHandle->suppInfo.defaultLoadColumn);
+  pReadHandle->suppInfo.slotIds = taosMemoryCalloc(size, sizeof(int32_t));
+  pReadHandle->suppInfo.plist = taosMemoryCalloc(size, POINTER_BYTES);
 }
 
 pReadHandle->pDataCols = tdNewDataCols(1000, pVnode->config.tsdbCfg.maxRows);
@@ -471,6 +460,39 @@ _end:
 
   return NULL;
 }
 
+static int32_t setCurrentSchema(SVnode* pVnode, STsdbReadHandle* pTsdbReadHandle) {
+  STableCheckInfo* pCheckInfo = taosArrayGet(pTsdbReadHandle->pTableCheckInfo, 0);
+
+  int32_t sversion = 1;
+
+  SMetaReader mr = {0};
+  metaReaderInit(&mr, pVnode->pMeta, 0);
+  int32_t code = metaGetTableEntryByUid(&mr, pCheckInfo->tableId);
+  if (code != TSDB_CODE_SUCCESS) {
+    terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
+    metaReaderClear(&mr);
+    return terrno;
+  }
+
+  if (mr.me.type == TSDB_CHILD_TABLE) {
+    tb_uid_t suid = mr.me.ctbEntry.suid;
+    code = metaGetTableEntryByUid(&mr, suid);
+    if (code != TSDB_CODE_SUCCESS) {
+      terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
+      metaReaderClear(&mr);
+      return terrno;
+    }
+    sversion = mr.me.stbEntry.schemaRow.version;
+  } else {
+    ASSERT(mr.me.type == TSDB_NORMAL_TABLE);
+    sversion = mr.me.ntbEntry.schemaRow.version;
+  }
+
+  metaReaderClear(&mr);
+  pTsdbReadHandle->pSchema = metaGetTbTSchema(pVnode->pMeta, pCheckInfo->tableId, sversion);
+  return TSDB_CODE_SUCCESS;
+}
+
 tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableListInfo* tableList, uint64_t qId,
                              uint64_t taskId) {
   STsdbReadHandle* pTsdbReadHandle = tsdbQueryTablesImpl(pVnode, pCond, qId, taskId);
@@ -490,9 +512,12 @@ tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableL
 
     return NULL;
   }
 
-  STableCheckInfo* pCheckInfo = taosArrayGet(pTsdbReadHandle->pTableCheckInfo, 0);
+  int32_t code = setCurrentSchema(pVnode, pTsdbReadHandle);
+  if (code != TSDB_CODE_SUCCESS) {
+    terrno = code;
+    return NULL;
+  }
 
-  pTsdbReadHandle->pSchema = metaGetTbTSchema(pVnode->pMeta, pCheckInfo->tableId, 1);
   int32_t  numOfCols = taosArrayGetSize(pTsdbReadHandle->suppInfo.defaultLoadColumn);
   int16_t* ids = pTsdbReadHandle->suppInfo.defaultLoadColumn->pData;

@@ -3471,7 +3496,6 @@ void tsdbRetrieveDataBlockInfo(tsdbReaderT* pTsdbReadHandle, SDataBlockInfo* pDa

  pDataBlockInfo->rows = cur->rows;
  pDataBlockInfo->window = cur->win;
  // ASSERT(pDataBlockInfo->numOfCols >= (int32_t)(QH_GET_NUM_OF_COLS(pHandle));
}

/*

@@ -3537,9 +3561,9 @@ int32_t tsdbRetrieveDataBlockStatisInfo(tsdbReaderT* pTsdbReadHandle, SColumnDat
    if (IS_BSMA_ON(&(pHandle->pSchema->columns[slotIds[i]]))) {
      if (pHandle->suppInfo.pstatis[i].numOfNull == -1) {  // set the column data are all NULL
        pHandle->suppInfo.pstatis[i].numOfNull = pBlockInfo->compBlock->numOfRows;
      } else {
        pHandle->suppInfo.plist[i] = &pHandle->suppInfo.pstatis[i];
      }

      pHandle->suppInfo.plist[i] = &pHandle->suppInfo.pstatis[i];
    } else {
      *allHave = false;
    }

@@ -3588,108 +3612,6 @@ SArray* tsdbRetrieveDataBlock(tsdbReaderT* pTsdbReadHandle, SArray* pIdList) {
      }
    }
  }
#if 0
void filterPrepare(void* expr, void* param) {
  tExprNode* pExpr = (tExprNode*)expr;
  if (pExpr->_node.info != NULL) {
    return;
  }

  pExpr->_node.info = taosMemoryCalloc(1, sizeof(tQueryInfo));

  STSchema*   pTSSchema = (STSchema*) param;
  tQueryInfo* pInfo = pExpr->_node.info;
  tVariant*   pCond = pExpr->_node.pRight->pVal;
  SSchema*    pSchema = pExpr->_node.pLeft->pSchema;

  pInfo->sch = *pSchema;
  pInfo->optr = pExpr->_node.optr;
  pInfo->compare = getComparFunc(pInfo->sch.type, pInfo->optr);
  pInfo->indexed = pTSSchema->columns->colId == pInfo->sch.colId;

  if (pInfo->optr == TSDB_RELATION_IN) {
    int dummy = -1;
    SHashObj *pObj = NULL;
    if (pInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
      pObj = taosHashInit(256, taosGetDefaultHashFunction(pInfo->sch.type), true, false);
      SArray *arr = (SArray *)(pCond->arr);
      for (size_t i = 0; i < taosArrayGetSize(arr); i++) {
        char* p = taosArrayGetP(arr, i);
        strntolower_s(varDataVal(p), varDataVal(p), varDataLen(p));
        taosHashPut(pObj, varDataVal(p), varDataLen(p), &dummy, sizeof(dummy));
      }
    } else {
      buildFilterSetFromBinary((void **)&pObj, pCond->pz, pCond->nLen);
    }
    pInfo->q = (char *)pObj;
  } else if (pCond != NULL) {
    uint32_t size = pCond->nLen * TSDB_NCHAR_SIZE;
    if (size < (uint32_t)pSchema->bytes) {
      size = pSchema->bytes;
    }
    // to make sure tonchar does not cause invalid write, since the '\0' needs at least sizeof(TdUcs4) space.
    pInfo->q = taosMemoryCalloc(1, size + TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE);
    tVariantDump(pCond, pInfo->q, pSchema->type, true);
  }
}

#endif

static int32_t tableGroupComparFn(const void* p1, const void* p2, const void* param) {
#if 0
  STableGroupSupporter* pTableGroupSupp = (STableGroupSupporter*) param;
  STable* pTable1 = ((STableKeyInfo*) p1)->uid;
  STable* pTable2 = ((STableKeyInfo*) p2)->uid;

  for (int32_t i = 0; i < pTableGroupSupp->numOfCols; ++i) {
    SColIndex* pColIndex = &pTableGroupSupp->pCols[i];
    int32_t colIndex = pColIndex->colIndex;

    assert(colIndex >= TSDB_TBNAME_COLUMN_INDEX);

    char *  f1 = NULL;
    char *  f2 = NULL;
    int32_t type = 0;
    int32_t bytes = 0;

    if (colIndex == TSDB_TBNAME_COLUMN_INDEX) {
      f1 = (char*) TABLE_NAME(pTable1);
      f2 = (char*) TABLE_NAME(pTable2);
      type = TSDB_DATA_TYPE_BINARY;
      bytes = tGetTbnameColumnSchema()->bytes;
    } else {
      if (pTableGroupSupp->pTagSchema && colIndex < pTableGroupSupp->pTagSchema->numOfCols) {
        STColumn* pCol = schemaColAt(pTableGroupSupp->pTagSchema, colIndex);
        bytes = pCol->bytes;
        type = pCol->type;
        f1 = tdGetKVRowValOfCol(pTable1->tagVal, pCol->colId);
        f2 = tdGetKVRowValOfCol(pTable2->tagVal, pCol->colId);
      }
    }

    // this tags value may be NULL
    if (f1 == NULL && f2 == NULL) {
      continue;
    }

    if (f1 == NULL) {
      return -1;
    }

    if (f2 == NULL) {
      return 1;
    }

    int32_t ret = doCompare(f1, f2, type, bytes);
    if (ret == 0) {
      continue;
    } else {
      return ret;
    }
  }
#endif
  return 0;
}

static int tsdbCheckInfoCompar(const void* key1, const void* key2) {
  if (((STableCheckInfo*)key1)->tableId < ((STableCheckInfo*)key2)->tableId) {

@@ -3702,170 +3624,6 @@ static int tsdbCheckInfoCompar(const void* key1, const void* key2) {
  }
}

void createTableGroupImpl(SArray* pGroups, SArray* pTableList, size_t numOfTables, TSKEY skey,
                          STableGroupSupporter* pSupp, __ext_compar_fn_t compareFn) {
  STable* pTable = taosArrayGetP(pTableList, 0);
  SArray* g = taosArrayInit(16, sizeof(STableKeyInfo));

  STableKeyInfo info = {.lastKey = skey};
  taosArrayPush(g, &info);

  for (int32_t i = 1; i < numOfTables; ++i) {
    STable** prev = taosArrayGet(pTableList, i - 1);
    STable** p = taosArrayGet(pTableList, i);

    int32_t ret = compareFn(prev, p, pSupp);
    assert(ret == 0 || ret == -1);

    if (ret == 0) {
      STableKeyInfo info1 = {.lastKey = skey};
      taosArrayPush(g, &info1);
    } else {
      taosArrayPush(pGroups, &g);  // current group is ended, start a new group
      g = taosArrayInit(16, sizeof(STableKeyInfo));

      STableKeyInfo info1 = {.lastKey = skey};
      taosArrayPush(g, &info1);
    }
  }

  taosArrayPush(pGroups, &g);
}

SArray* createTableGroup(SArray* pTableList, SSchemaWrapper* pTagSchema, SColIndex* pCols, int32_t numOfOrderCols,
                         TSKEY skey) {
  assert(pTableList != NULL);
  SArray* pTableGroup = taosArrayInit(1, POINTER_BYTES);

  size_t size = taosArrayGetSize(pTableList);
  if (size == 0) {
    tsdbDebug("no qualified tables");
    return pTableGroup;
  }

  if (numOfOrderCols == 0 || size == 1) {  // no group by tags clause or only one table
    SArray* sa = taosArrayDup(pTableList);
    if (sa == NULL) {
      taosArrayDestroy(pTableGroup);
      return NULL;
    }

    taosArrayPush(pTableGroup, &sa);
    tsdbDebug("all %" PRIzu " tables belong to one group", size);
  } else {
    STableGroupSupporter sup = {0};
    sup.numOfCols = numOfOrderCols;
    sup.pTagSchema = pTagSchema->pSchema;
    sup.pCols = pCols;

    taosqsort(pTableList->pData, size, sizeof(STableKeyInfo), &sup, tableGroupComparFn);
    createTableGroupImpl(pTableGroup, pTableList, size, skey, &sup, tableGroupComparFn);
  }

  return pTableGroup;
}

// static bool tableFilterFp(const void* pNode, void* param) {
//   tQueryInfo* pInfo = (tQueryInfo*) param;
//
//   STable* pTable = (STable*)(SL_GET_NODE_DATA((SSkipListNode*)pNode));
//
//   char* val = NULL;
//   if (pInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
//     val = (char*) TABLE_NAME(pTable);
//   } else {
//     val = tdGetKVRowValOfCol(pTable->tagVal, pInfo->sch.colId);
//   }
//
//   if (pInfo->optr == TSDB_RELATION_ISNULL || pInfo->optr == TSDB_RELATION_NOTNULL) {
//     if (pInfo->optr == TSDB_RELATION_ISNULL) {
//       return (val == NULL) || isNull(val, pInfo->sch.type);
//     } else if (pInfo->optr == TSDB_RELATION_NOTNULL) {
//       return (val != NULL) && (!isNull(val, pInfo->sch.type));
//     }
//   } else if (pInfo->optr == TSDB_RELATION_IN) {
//     int type = pInfo->sch.type;
//     if (type == TSDB_DATA_TYPE_BOOL || IS_SIGNED_NUMERIC_TYPE(type) || type == TSDB_DATA_TYPE_TIMESTAMP) {
//       int64_t v;
//       GET_TYPED_DATA(v, int64_t, pInfo->sch.type, val);
//       return NULL != taosHashGet((SHashObj *)pInfo->q, (char *)&v, sizeof(v));
//     } else if (IS_UNSIGNED_NUMERIC_TYPE(type)) {
//       uint64_t v;
//       GET_TYPED_DATA(v, uint64_t, pInfo->sch.type, val);
//       return NULL != taosHashGet((SHashObj *)pInfo->q, (char *)&v, sizeof(v));
//     }
//     else if (type == TSDB_DATA_TYPE_DOUBLE || type == TSDB_DATA_TYPE_FLOAT) {
//       double v;
//       GET_TYPED_DATA(v, double, pInfo->sch.type, val);
//       return NULL != taosHashGet((SHashObj *)pInfo->q, (char *)&v, sizeof(v));
//     } else if (type == TSDB_DATA_TYPE_BINARY || type == TSDB_DATA_TYPE_NCHAR){
//       return NULL != taosHashGet((SHashObj *)pInfo->q, varDataVal(val), varDataLen(val));
//     }
//
//   }
//
//   int32_t ret = 0;
//   if (val == NULL) { //the val is possible to be null, so check it out carefully
//     ret = -1; // val is missing in table tags value pairs
//   } else {
//     ret = pInfo->compare(val, pInfo->q);
//   }
//
//   switch (pInfo->optr) {
//     case TSDB_RELATION_EQUAL: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_NOT_EQUAL: {
//       return ret != 0;
//     }
//     case TSDB_RELATION_GREATER_EQUAL: {
//       return ret >= 0;
//     }
//     case TSDB_RELATION_GREATER: {
//       return ret > 0;
//     }
//     case TSDB_RELATION_LESS_EQUAL: {
//       return ret <= 0;
//     }
//     case TSDB_RELATION_LESS: {
//       return ret < 0;
//     }
//     case TSDB_RELATION_LIKE: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_MATCH: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_NMATCH: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_IN: {
//       return ret == 1;
//     }
//
//     default:
//       assert(false);
//   }
//
//   return true;
//}

// static void getTableListfromSkipList(tExprNode *pExpr, SSkipList *pSkipList, SArray *result, SExprTraverseSupp
// *param);

// static int32_t doQueryTableList(STable* pSTable, SArray* pRes, tExprNode* pExpr) {
//   // // query according to the expression tree
//   SExprTraverseSupp supp = {
//       .nodeFilterFn = (__result_filter_fn_t)tableFilterFp,
//       .setupInfoFn = filterPrepare,
//       .pExtInfo = pSTable->tagSchema,
//   };
//
//   getTableListfromSkipList(pExpr, pSTable->pIndex, pRes, &supp);
//   tExprTreeDestroy(pExpr, destroyHelper);
//   return TSDB_CODE_SUCCESS;
//}

static void* doFreeColumnInfoData(SArray* pColumnInfoData) {
  if (pColumnInfoData == NULL) {
    return NULL;

@@ -3934,263 +3692,3 @@ void tsdbCleanupReadHandle(tsdbReaderT queryHandle) {

  taosMemoryFreeClear(pTsdbReadHandle);
}

#if 0

static void applyFilterToSkipListNode(SSkipList *pSkipList, tExprNode *pExpr, SArray *pResult, SExprTraverseSupp *param) {
  SSkipListIterator* iter = tSkipListCreateIter(pSkipList);

  // Scan each node in the skiplist by using iterator
  while (tSkipListIterNext(iter)) {
    SSkipListNode *pNode = tSkipListIterGet(iter);
    if (exprTreeApplyFilter(pExpr, pNode, param)) {
      taosArrayPush(pResult, &(SL_GET_NODE_DATA(pNode)));
    }
  }

  tSkipListDestroyIter(iter);
}

typedef struct {
  char*   v;
  int32_t optr;
} SEndPoint;

typedef struct {
  SEndPoint* start;
  SEndPoint* end;
} SQueryCond;

// todo check for malloc failure
static int32_t setQueryCond(tQueryInfo *queryColInfo, SQueryCond* pCond) {
  int32_t optr = queryColInfo->optr;

  if (optr == TSDB_RELATION_GREATER || optr == TSDB_RELATION_GREATER_EQUAL ||
      optr == TSDB_RELATION_EQUAL || optr == TSDB_RELATION_NOT_EQUAL) {
    pCond->start = taosMemoryCalloc(1, sizeof(SEndPoint));
    pCond->start->optr = queryColInfo->optr;
    pCond->start->v = queryColInfo->q;
  } else if (optr == TSDB_RELATION_LESS || optr == TSDB_RELATION_LESS_EQUAL) {
    pCond->end = taosMemoryCalloc(1, sizeof(SEndPoint));
    pCond->end->optr = queryColInfo->optr;
    pCond->end->v = queryColInfo->q;
  } else if (optr == TSDB_RELATION_IN) {
    pCond->start = taosMemoryCalloc(1, sizeof(SEndPoint));
    pCond->start->optr = queryColInfo->optr;
    pCond->start->v = queryColInfo->q;
  } else if (optr == TSDB_RELATION_LIKE) {
    assert(0);
  } else if (optr == TSDB_RELATION_MATCH) {
    assert(0);
  } else if (optr == TSDB_RELATION_NMATCH) {
    assert(0);
  }

  return TSDB_CODE_SUCCESS;
}

static void queryIndexedColumn(SSkipList* pSkipList, tQueryInfo* pQueryInfo, SArray* result) {
  SSkipListIterator* iter = NULL;

  SQueryCond cond = {0};
  if (setQueryCond(pQueryInfo, &cond) != TSDB_CODE_SUCCESS) {
    //todo handle error
  }

  if (cond.start != NULL) {
    iter = tSkipListCreateIterFromVal(pSkipList, (char*) cond.start->v, pSkipList->type, TSDB_ORDER_ASC);
  } else {
    iter = tSkipListCreateIterFromVal(pSkipList, (char*)(cond.end ? cond.end->v: NULL), pSkipList->type, TSDB_ORDER_DESC);
  }

  if (cond.start != NULL) {
    int32_t optr = cond.start->optr;

    if (optr == TSDB_RELATION_EQUAL) {   // equals
      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);

        int32_t ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v);
        if (ret != 0) {
          break;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }
    } else if (optr == TSDB_RELATION_GREATER || optr == TSDB_RELATION_GREATER_EQUAL) { // greater equal
      bool    comp = true;
      int32_t ret = 0;

      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);

        if (comp) {
          ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v);
          assert(ret >= 0);
        }

        if (ret == 0 && optr == TSDB_RELATION_GREATER) {
          continue;
        } else {
          STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
          taosArrayPush(result, &info);
          comp = false;
        }
      }
    } else if (optr == TSDB_RELATION_NOT_EQUAL) {   // not equal
      bool comp = true;

      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);
        comp = comp && (pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v) == 0);
        if (comp) {
          continue;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }

      tSkipListDestroyIter(iter);

      comp = true;
      iter = tSkipListCreateIterFromVal(pSkipList, (char*) cond.start->v, pSkipList->type, TSDB_ORDER_DESC);
      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);
        comp = comp && (pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v) == 0);
        if (comp) {
          continue;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }

    } else if (optr == TSDB_RELATION_IN) {
      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);

        int32_t ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v);
        if (ret != 0) {
          break;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }

    } else {
      assert(0);
    }
  } else {
    int32_t optr = cond.end ? cond.end->optr : TSDB_RELATION_INVALID;
    if (optr == TSDB_RELATION_LESS || optr == TSDB_RELATION_LESS_EQUAL) {
      bool    comp = true;
      int32_t ret = 0;

      while (tSkipListIterNext(iter)) {
        SSkipListNode *pNode = tSkipListIterGet(iter);

        if (comp) {
          ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.end->v);
          assert(ret <= 0);
        }

        if (ret == 0 && optr == TSDB_RELATION_LESS) {
          continue;
        } else {
          STableKeyInfo info = {.pTable = (void *)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
          taosArrayPush(result, &info);
          comp = false;  // no need to compare anymore
        }
      }
    } else {
      assert(pQueryInfo->optr == TSDB_RELATION_ISNULL || pQueryInfo->optr == TSDB_RELATION_NOTNULL);

      while (tSkipListIterNext(iter)) {
        SSkipListNode *pNode = tSkipListIterGet(iter);

        bool isnull = isNull(SL_GET_NODE_KEY(pSkipList, pNode), pQueryInfo->sch.type);
        if ((pQueryInfo->optr == TSDB_RELATION_ISNULL && isnull) ||
            (pQueryInfo->optr == TSDB_RELATION_NOTNULL && (!isnull))) {
          STableKeyInfo info = {.pTable = (void *)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
          taosArrayPush(result, &info);
        }
      }
    }
  }

  taosMemoryFree(cond.start);
  taosMemoryFree(cond.end);
  tSkipListDestroyIter(iter);
}

static void queryIndexlessColumn(SSkipList* pSkipList, tQueryInfo* pQueryInfo, SArray* res, __result_filter_fn_t filterFp) {
  SSkipListIterator* iter = tSkipListCreateIter(pSkipList);

  while (tSkipListIterNext(iter)) {
    bool addToResult = false;

    SSkipListNode *pNode = tSkipListIterGet(iter);

    char *pData = SL_GET_NODE_DATA(pNode);
    tstr *name = (tstr*) tsdbGetTableName((void*) pData);

    // todo speed up by using hash
    if (pQueryInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
      if (pQueryInfo->optr == TSDB_RELATION_IN) {
        addToResult = pQueryInfo->compare(name, pQueryInfo->q);
      } else if (pQueryInfo->optr == TSDB_RELATION_LIKE ||
                 pQueryInfo->optr == TSDB_RELATION_MATCH ||
                 pQueryInfo->optr == TSDB_RELATION_NMATCH) {
        addToResult = !pQueryInfo->compare(name, pQueryInfo->q);
      }
    } else {
      addToResult = filterFp(pNode, pQueryInfo);
    }

    if (addToResult) {
      STableKeyInfo info = {.pTable = (void*)pData, .lastKey = TSKEY_INITIAL_VAL};
      taosArrayPush(res, &info);
    }
  }

  tSkipListDestroyIter(iter);
}

// Apply the filter expression to each node in the skiplist to acquire the qualified nodes in skip list
//void getTableListfromSkipList(tExprNode *pExpr, SSkipList *pSkipList, SArray *result, SExprTraverseSupp *param) {
//  if (pExpr == NULL) {
//    return;
//  }
//
//  tExprNode *pLeft = pExpr->_node.pLeft;
//  tExprNode *pRight = pExpr->_node.pRight;
//
//  // column project
//  if (pLeft->nodeType != TSQL_NODE_EXPR && pRight->nodeType != TSQL_NODE_EXPR) {
//    assert(pLeft->nodeType == TSQL_NODE_COL && (pRight->nodeType == TSQL_NODE_VALUE || pRight->nodeType == TSQL_NODE_DUMMY));
//
//    param->setupInfoFn(pExpr, param->pExtInfo);
//
//    tQueryInfo *pQueryInfo = pExpr->_node.info;
//    if (pQueryInfo->indexed && (pQueryInfo->optr != TSDB_RELATION_LIKE
//                                && pQueryInfo->optr != TSDB_RELATION_MATCH && pQueryInfo->optr != TSDB_RELATION_NMATCH
//                                && pQueryInfo->optr != TSDB_RELATION_IN)) {
//    queryIndexedColumn(pSkipList, pQueryInfo, result);
//  } else {
//    queryIndexlessColumn(pSkipList, pQueryInfo, result, param->nodeFilterFn);
//  }
//
//    return;
//  }
//
//  // The value of hasPK is always 0.
//  uint8_t weight = pLeft->_node.hasPK + pRight->_node.hasPK;
//  assert(weight == 0 && pSkipList != NULL && taosArrayGetSize(result) == 0);
//
//  //apply the hierarchical filter expression to every node in skiplist to find the qualified nodes
//  applyFilterToSkipListNode(pSkipList, pExpr, result, param);
//}
#endif

@@ -49,19 +49,21 @@ enum {
};

enum {
  CTG_ACT_UPDATE_VG = 0,
  CTG_ACT_UPDATE_TBL,
  CTG_ACT_REMOVE_DB,
  CTG_ACT_REMOVE_STB,
  CTG_ACT_REMOVE_TBL,
  CTG_ACT_UPDATE_USER,
  CTG_ACT_MAX
  CTG_OP_UPDATE_VGROUP = 0,
  CTG_OP_UPDATE_TB_META,
  CTG_OP_DROP_DB_CACHE,
  CTG_OP_DROP_STB_META,
  CTG_OP_DROP_TB_META,
  CTG_OP_UPDATE_USER,
  CTG_OP_UPDATE_VG_EPSET,
  CTG_OP_MAX
};

typedef enum {
  CTG_TASK_GET_QNODE = 0,
  CTG_TASK_GET_DB_VGROUP,
  CTG_TASK_GET_DB_CFG,
  CTG_TASK_GET_DB_INFO,
  CTG_TASK_GET_TB_META,
  CTG_TASK_GET_TB_HASH,
  CTG_TASK_GET_INDEX,

@@ -98,6 +100,10 @@ typedef struct SCtgDbCfgCtx {
  char dbFName[TSDB_DB_FNAME_LEN];
} SCtgDbCfgCtx;

typedef struct SCtgDbInfoCtx {
  char dbFName[TSDB_DB_FNAME_LEN];
} SCtgDbInfoCtx;

typedef struct SCtgTbHashCtx {
  char dbFName[TSDB_DB_FNAME_LEN];
  SName* pName;

@@ -182,6 +188,7 @@ typedef struct SCtgJob {
  int32_t dbCfgNum;
  int32_t indexNum;
  int32_t userNum;
  int32_t dbInfoNum;
} SCtgJob;

typedef struct SCtgMsgCtx {

@@ -285,16 +292,22 @@ typedef struct SCtgUpdateUserMsg {
  SGetUserAuthRsp userAuth;
} SCtgUpdateUserMsg;

typedef struct SCtgUpdateEpsetMsg {
  SCatalog* pCtg;
  char dbFName[TSDB_DB_FNAME_LEN];
  int32_t vgId;
  SEpSet epSet;
} SCtgUpdateEpsetMsg;

typedef struct SCtgMetaAction {
  int32_t  act;
typedef struct SCtgCacheOperation {
  int32_t  opId;
  void    *data;
  bool     syncReq;
  uint64_t seqId;
} SCtgMetaAction;
} SCtgCacheOperation;

typedef struct SCtgQNode {
  SCtgMetaAction     action;
  SCtgCacheOperation op;
  struct SCtgQNode  *next;
} SCtgQNode;

@@ -321,13 +334,13 @@ typedef struct SCatalogMgmt {
} SCatalogMgmt;

typedef uint32_t (*tableNameHashFp)(const char *, uint32_t);
typedef int32_t (*ctgActFunc)(SCtgMetaAction *);
typedef int32_t (*ctgOpFunc)(SCtgCacheOperation *);

typedef struct SCtgAction {
  int32_t  actId;
typedef struct SCtgOperation {
  int32_t  opId;
  char     name[32];
  ctgActFunc func;
} SCtgAction;
  ctgOpFunc func;
} SCtgOperation;

#define CTG_QUEUE_ADD() atomic_add_fetch_64(&gCtgMgmt.queue.qRemainNum, 1)
#define CTG_QUEUE_SUB() atomic_sub_fetch_64(&gCtgMgmt.queue.qRemainNum, 1)

@@ -435,12 +448,13 @@ int32_t ctgdShowCacheInfo(void);
int32_t ctgRemoveTbMetaFromCache(SCatalog* pCtg, SName* pTableName, bool syncReq);
int32_t ctgGetTbMetaFromCache(CTG_PARAMS, SCtgTbMetaCtx* ctx, STableMeta** pTableMeta);

int32_t ctgActUpdateVg(SCtgMetaAction *action);
int32_t ctgActUpdateTb(SCtgMetaAction *action);
int32_t ctgActRemoveDB(SCtgMetaAction *action);
int32_t ctgActRemoveStb(SCtgMetaAction *action);
int32_t ctgActRemoveTb(SCtgMetaAction *action);
int32_t ctgActUpdateUser(SCtgMetaAction *action);
int32_t ctgOpUpdateVgroup(SCtgCacheOperation *action);
int32_t ctgOpUpdateTbMeta(SCtgCacheOperation *action);
int32_t ctgOpDropDbCache(SCtgCacheOperation *action);
int32_t ctgOpDropStbMeta(SCtgCacheOperation *action);
int32_t ctgOpDropTbMeta(SCtgCacheOperation *action);
int32_t ctgOpUpdateUser(SCtgCacheOperation *action);
int32_t ctgOpUpdateEpset(SCtgCacheOperation *operation);
int32_t ctgAcquireVgInfoFromCache(SCatalog* pCtg, const char *dbFName, SCtgDBCache **pCache);
void ctgReleaseDBCache(SCatalog *pCtg, SCtgDBCache *dbCache);
void ctgReleaseVgInfo(SCtgDBCache *dbCache);

@@ -449,12 +463,13 @@ int32_t ctgTbMetaExistInCache(SCatalog* pCtg, char *dbFName, char* tbName, int32
int32_t ctgReadTbMetaFromCache(SCatalog* pCtg, SCtgTbMetaCtx* ctx, STableMeta** pTableMeta);
int32_t ctgReadTbVerFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *sver, int32_t *tver, int32_t *tbType, uint64_t *suid, char *stbName);
int32_t ctgChkAuthFromCache(SCatalog* pCtg, const char* user, const char* dbFName, AUTH_TYPE type, bool *inCache, bool *pass);
int32_t ctgPutRmDBToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId);
int32_t ctgPutRmStbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq);
int32_t ctgPutRmTbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq);
int32_t ctgPutUpdateVgToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq);
int32_t ctgPutUpdateTbToQueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq);
int32_t ctgPutUpdateUserToQueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq);
int32_t ctgDropDbCacheEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId);
int32_t ctgDropStbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq);
int32_t ctgDropTbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq);
int32_t ctgUpdateVgroupEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq);
int32_t ctgUpdateTbMetaEnqueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq);
int32_t ctgUpdateUserEnqueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq);
int32_t ctgUpdateVgEpsetEnqueue(SCatalog* pCtg, char *dbFName, int32_t vgId, SEpSet* pEpSet);
int32_t ctgMetaRentInit(SCtgRentMgmt *mgmt, uint32_t rentSec, int8_t type);
int32_t ctgMetaRentAdd(SCtgRentMgmt *mgmt, void *meta, int64_t id, int32_t size);
int32_t ctgMetaRentGet(SCtgRentMgmt *mgmt, void **res, uint32_t *num, int32_t size);

@@ -41,9 +41,9 @@ int32_t ctgRemoveTbMetaFromCache(SCatalog* pCtg, SName* pTableName, bool syncReq
  tNameGetFullDbName(pTableName, dbFName);

  if (TSDB_SUPER_TABLE == tblMeta->tableType) {
    CTG_ERR_JRET(ctgPutRmStbToQueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, tblMeta->suid, syncReq));
    CTG_ERR_JRET(ctgDropStbMetaEnqueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, tblMeta->suid, syncReq));
  } else {
    CTG_ERR_JRET(ctgPutRmTbToQueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, syncReq));
    CTG_ERR_JRET(ctgDropTbMetaEnqueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, syncReq));
  }

_return:

@@ -72,7 +72,7 @@ int32_t ctgGetDBVgInfo(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, con

  CTG_ERR_JRET(ctgCloneVgInfo(DbOut.dbVgroup, pInfo));

  CTG_ERR_RET(ctgPutUpdateVgToQueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, false));
  CTG_ERR_RET(ctgUpdateVgroupEnqueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, false));

  return TSDB_CODE_SUCCESS;

@@ -108,13 +108,13 @@ int32_t ctgRefreshDBVgInfo(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps,
  if (code) {
    if (CTG_DB_NOT_EXIST(code) && (NULL != dbCache)) {
      ctgDebug("db no longer exist, dbFName:%s, dbId:%" PRIx64, input.db, input.dbId);
      ctgPutRmDBToQueue(pCtg, input.db, input.dbId);
      ctgDropDbCacheEnqueue(pCtg, input.db, input.dbId);
    }

    CTG_ERR_RET(code);
  }

  CTG_ERR_RET(ctgPutUpdateVgToQueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, true));
  CTG_ERR_RET(ctgUpdateVgroupEnqueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, true));

  return TSDB_CODE_SUCCESS;
}

@@ -201,7 +201,7 @@ int32_t ctgRefreshTbMeta(CTG_PARAMS, SCtgTbMetaCtx* ctx, STableMetaOutput **pOut
    CTG_ERR_JRET(ctgCloneMetaOutput(output, pOutput));
  }

  CTG_ERR_JRET(ctgPutUpdateTbToQueue(pCtg, output, syncReq));
  CTG_ERR_JRET(ctgUpdateTbMetaEnqueue(pCtg, output, syncReq));

  return TSDB_CODE_SUCCESS;

@@ -298,9 +298,9 @@ _return:
    }

    if (TSDB_SUPER_TABLE == ctx->tbInfo.tbType) {
      ctgPutRmStbToQueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, ctx->tbInfo.suid, false);
      ctgDropStbMetaEnqueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, ctx->tbInfo.suid, false);
    } else {
      ctgPutRmTbToQueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, false);
      ctgDropTbMetaEnqueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, false);
    }
  }

@@ -348,7 +348,7 @@ int32_t ctgChkAuth(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, const c

_return:

  ctgPutUpdateUserToQueue(pCtg, &authRsp, false);
  ctgUpdateUserEnqueue(pCtg, &authRsp, false);

  return TSDB_CODE_SUCCESS;
}

@@ -670,7 +670,7 @@ int32_t catalogUpdateDBVgInfo(SCatalog* pCtg, const char* dbFName, uint64_t dbId
    CTG_ERR_JRET(TSDB_CODE_CTG_INVALID_INPUT);
  }

  code = ctgPutUpdateVgToQueue(pCtg, dbFName, dbId, dbInfo, false);
  code = ctgUpdateVgroupEnqueue(pCtg, dbFName, dbId, dbInfo, false);

_return:

@@ -691,7 +691,7 @@ int32_t catalogRemoveDB(SCatalog* pCtg, const char* dbFName, uint64_t dbId) {
    CTG_API_LEAVE(TSDB_CODE_SUCCESS);
  }

  CTG_ERR_JRET(ctgPutRmDBToQueue(pCtg, dbFName, dbId));
  CTG_ERR_JRET(ctgDropDbCacheEnqueue(pCtg, dbFName, dbId));

  CTG_API_LEAVE(TSDB_CODE_SUCCESS);

@@ -701,7 +701,19 @@ _return:
}

int32_t catalogUpdateVgEpSet(SCatalog* pCtg, const char* dbFName, int32_t vgId, SEpSet *epSet) {
  return 0;
  CTG_API_ENTER();

  int32_t code = 0;

  if (NULL == pCtg || NULL == dbFName || NULL == epSet) {
    CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
  }

  CTG_ERR_JRET(ctgUpdateVgEpsetEnqueue(pCtg, (char*)dbFName, vgId, epSet));

_return:

  CTG_API_LEAVE(code);
}

int32_t catalogRemoveTableMeta(SCatalog* pCtg, SName* pTableName) {

@@ -738,7 +750,7 @@ int32_t catalogRemoveStbMeta(SCatalog* pCtg, const char* dbFName, uint64_t dbId,
    CTG_API_LEAVE(TSDB_CODE_SUCCESS);
  }

  CTG_ERR_JRET(ctgPutRmStbToQueue(pCtg, dbFName, dbId, stbName, suid, true));
  CTG_ERR_JRET(ctgDropStbMetaEnqueue(pCtg, dbFName, dbId, stbName, suid, true));

  CTG_API_LEAVE(TSDB_CODE_SUCCESS);

@@ -791,7 +803,7 @@ int32_t catalogUpdateSTableMeta(SCatalog* pCtg, STableMetaRsp *rspMsg) {

  CTG_ERR_JRET(queryCreateTableMetaFromMsg(rspMsg, true, &output->tbMeta));

  CTG_ERR_JRET(ctgPutUpdateTbToQueue(pCtg, output, false));
  CTG_ERR_JRET(ctgUpdateTbMetaEnqueue(pCtg, output, false));

  CTG_API_LEAVE(code);

@@ -1152,7 +1164,7 @@ int32_t catalogUpdateUserAuthInfo(SCatalog* pCtg, SGetUserAuthRsp* pAuth) {
    CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
  }

  CTG_API_LEAVE(ctgPutUpdateUserToQueue(pCtg, pAuth, false));
  CTG_API_LEAVE(ctgUpdateUserEnqueue(pCtg, pAuth, false));
}

@@ -95,6 +95,30 @@ int32_t ctgInitGetDbCfgTask(SCtgJob *pJob, int32_t taskIdx, char *dbFName) {
  return TSDB_CODE_SUCCESS;
}

int32_t ctgInitGetDbInfoTask(SCtgJob *pJob, int32_t taskIdx, char *dbFName) {
  SCtgTask task = {0};

  task.type = CTG_TASK_GET_DB_INFO;
  task.taskId = taskIdx;
  task.pJob = pJob;

  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgDbInfoCtx));
  if (NULL == task.taskCtx) {
    CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
  }

  SCtgDbInfoCtx* ctx = task.taskCtx;

  memcpy(ctx->dbFName, dbFName, sizeof(ctx->dbFName));

  taosArrayPush(pJob->pTasks, &task);

  qDebug("QID:%" PRIx64 " task %d type %d initialized, dbFName:%s", pJob->queryId, taskIdx, task.type, dbFName);

  return TSDB_CODE_SUCCESS;
}


int32_t ctgInitGetTbHashTask(SCtgJob *pJob, int32_t taskIdx, SName *name) {
  SCtgTask task = {0};

@@ -219,8 +243,9 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
  int32_t dbCfgNum = (int32_t)taosArrayGetSize(pReq->pDbCfg);
  int32_t indexNum = (int32_t)taosArrayGetSize(pReq->pIndex);
  int32_t userNum = (int32_t)taosArrayGetSize(pReq->pUser);
  int32_t dbInfoNum = (int32_t)taosArrayGetSize(pReq->pDbInfo);

  int32_t taskNum = tbMetaNum + dbVgNum + udfNum + tbHashNum + qnodeNum + dbCfgNum + indexNum + userNum;
  int32_t taskNum = tbMetaNum + dbVgNum + udfNum + tbHashNum + qnodeNum + dbCfgNum + indexNum + userNum + dbInfoNum;
  if (taskNum <= 0) {
    ctgError("empty input for job, taskNum:%d", taskNum);
    CTG_ERR_RET(TSDB_CODE_CTG_INVALID_INPUT);

@@ -249,6 +274,7 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
  pJob->dbCfgNum = dbCfgNum;
  pJob->indexNum = indexNum;
  pJob->userNum = userNum;
  pJob->dbInfoNum = dbInfoNum;

  pJob->pTasks = taosArrayInit(taskNum, sizeof(SCtgTask));

@@ -268,6 +294,11 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
    CTG_ERR_JRET(ctgInitGetDbCfgTask(pJob, taskIdx++, dbFName));
  }

  for (int32_t i = 0; i < dbInfoNum; ++i) {
    char *dbFName = taosArrayGet(pReq->pDbInfo, i);
    CTG_ERR_JRET(ctgInitGetDbInfoTask(pJob, taskIdx++, dbFName));
  }

  for (int32_t i = 0; i < tbMetaNum; ++i) {
    SName *name = taosArrayGet(pReq->pTableMeta, i);
    CTG_ERR_JRET(ctgInitGetTbMetaTask(pJob, taskIdx++, name));

@@ -395,6 +426,20 @@ int32_t ctgDumpDbCfgRes(SCtgTask* pTask) {
  return TSDB_CODE_SUCCESS;
}

int32_t ctgDumpDbInfoRes(SCtgTask* pTask) {
  SCtgJob* pJob = pTask->pJob;
  if (NULL == pJob->jobRes.pDbInfo) {
    pJob->jobRes.pDbInfo = taosArrayInit(pJob->dbInfoNum, sizeof(SDbInfo));
    if (NULL == pJob->jobRes.pDbInfo) {
      CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
    }
  }

  taosArrayPush(pJob->jobRes.pDbInfo, pTask->res);

  return TSDB_CODE_SUCCESS;
}

int32_t ctgDumpUdfRes(SCtgTask* pTask) {
  SCtgJob* pJob = pTask->pJob;
  if (NULL == pJob->jobRes.pUdfList) {

@@ -620,7 +665,7 @@ int32_t ctgHandleGetDbVgRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pM

      CTG_ERR_JRET(ctgGenerateVgList(pCtg, pOut->dbVgroup->vgHash, (SArray**)&pTask->res));

      CTG_ERR_JRET(ctgPutUpdateVgToQueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      CTG_ERR_JRET(ctgUpdateVgroupEnqueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      pOut->dbVgroup = NULL;

      break;

@@ -659,7 +704,7 @@ int32_t ctgHandleGetTbHashRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *

      CTG_ERR_JRET(ctgGetVgInfoFromHashValue(pCtg, pOut->dbVgroup, ctx->pName, (SVgroupInfo*)pTask->res));

      CTG_ERR_JRET(ctgPutUpdateVgToQueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      CTG_ERR_JRET(ctgUpdateVgroupEnqueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      pOut->dbVgroup = NULL;

      break;

@@ -691,6 +736,11 @@ _return:
  CTG_RET(code);
}

int32_t ctgHandleGetDbInfoRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pMsg, int32_t rspCode) {
  CTG_RET(TSDB_CODE_APP_ERROR);
}


int32_t ctgHandleGetQnodeRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pMsg, int32_t rspCode) {
  int32_t code = 0;
  CTG_ERR_JRET(ctgProcessRspMsg(pTask->msgCtx.out, reqType, pMsg->pData, pMsg->len, rspCode, pTask->msgCtx.target));

@@ -769,7 +819,7 @@ _return:
    }
  }

  ctgPutUpdateUserToQueue(pCtg, pOut, false);
  ctgUpdateUserEnqueue(pCtg, pOut, false);
  taosMemoryFreeClear(pTask->msgCtx.out);

  ctgHandleTaskEnd(pTask, code);

@@ -933,6 +983,41 @@ int32_t ctgLaunchGetDbCfgTask(SCtgTask *pTask) {
  return TSDB_CODE_SUCCESS;
}

int32_t ctgLaunchGetDbInfoTask(SCtgTask *pTask) {
  int32_t code = 0;
  SCatalog* pCtg = pTask->pJob->pCtg;
  void *pTrans = pTask->pJob->pTrans;
  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
  SCtgDBCache *dbCache = NULL;
  SCtgDbInfoCtx* pCtx = (SCtgDbInfoCtx*)pTask->taskCtx;

  pTask->res = taosMemoryCalloc(1, sizeof(SDbInfo));
  if (NULL == pTask->res) {
    CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
  }

  SDbInfo* pInfo = (SDbInfo*)pTask->res;
  CTG_ERR_RET(ctgAcquireVgInfoFromCache(pCtg, pCtx->dbFName, &dbCache));
  if (NULL != dbCache) {
    pInfo->vgVer = dbCache->vgInfo->vgVersion;
    pInfo->dbId = dbCache->dbId;
    pInfo->tbNum = dbCache->vgInfo->numOfTable;
  } else {
    pInfo->vgVer = CTG_DEFAULT_INVALID_VERSION;
  }

  CTG_ERR_JRET(ctgHandleTaskEnd(pTask, 0));

_return:

  if (dbCache) {
    ctgReleaseVgInfo(dbCache);
    ctgReleaseDBCache(pCtg, dbCache);
  }

  CTG_RET(code);
}

int32_t ctgLaunchGetIndexTask(SCtgTask *pTask) {
  SCatalog* pCtg = pTask->pJob->pCtg;
  void *pTrans = pTask->pJob->pTrans;

@@ -992,6 +1077,7 @@ SCtgAsyncFps gCtgAsyncFps[] = {
  {ctgLaunchGetQnodeTask, ctgHandleGetQnodeRsp, ctgDumpQnodeRes},
  {ctgLaunchGetDbVgTask, ctgHandleGetDbVgRsp, ctgDumpDbVgRes},
  {ctgLaunchGetDbCfgTask, ctgHandleGetDbCfgRsp, ctgDumpDbCfgRes},
  {ctgLaunchGetDbInfoTask, ctgHandleGetDbInfoRsp, ctgDumpDbInfoRes},
  {ctgLaunchGetTbMetaTask, ctgHandleGetTbMetaRsp, ctgDumpTbMetaRes},
  {ctgLaunchGetTbHashTask, ctgHandleGetTbHashRsp, ctgDumpTbHashRes},
  {ctgLaunchGetIndexTask, ctgHandleGetIndexRsp, ctgDumpIndexRes},

@@ -19,37 +19,43 @@
#include "catalogInt.h"
#include "systable.h"

SCtgAction gCtgAction[CTG_ACT_MAX] = {
SCtgOperation gCtgCacheOperation[CTG_OP_MAX] = {
  {
    CTG_ACT_UPDATE_VG,
    CTG_OP_UPDATE_VGROUP,
    "update vgInfo",
    ctgActUpdateVg
    ctgOpUpdateVgroup
  },
  {
    CTG_ACT_UPDATE_TBL,
    CTG_OP_UPDATE_TB_META,
    "update tbMeta",
    ctgActUpdateTb
    ctgOpUpdateTbMeta
  },
  {
    CTG_ACT_REMOVE_DB,
    "remove DB",
    ctgActRemoveDB
    CTG_OP_DROP_DB_CACHE,
    "drop DB",
    ctgOpDropDbCache
  },
  {
    CTG_ACT_REMOVE_STB,
    "remove stbMeta",
    ctgActRemoveStb
    CTG_OP_DROP_STB_META,
    "drop stbMeta",
    ctgOpDropStbMeta
  },
  {
    CTG_ACT_REMOVE_TBL,
    "remove tbMeta",
    ctgActRemoveTb
    CTG_OP_DROP_TB_META,
    "drop tbMeta",
    ctgOpDropTbMeta
  },
  {
    CTG_ACT_UPDATE_USER,
    CTG_OP_UPDATE_USER,
    "update user",
    ctgActUpdateUser
    ctgOpUpdateUser
  },
  {
    CTG_OP_UPDATE_VG_EPSET,
    "update epset",
    ctgOpUpdateEpset
  }

};

@@ -405,7 +411,7 @@ int32_t ctgReadTbVerFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *
}


int32_t ctgGetTbTypeFromCache(SCatalog* pCtg, const char* dbFName, const char *tableName, int32_t *tbType) {
int32_t ctgReadTbTypeFromCache(SCatalog* pCtg, const char* dbFName, const char *tableName, int32_t *tbType) {
  if (NULL == pCtg->dbCache) {
    ctgWarn("empty db cache, dbFName:%s, tbName:%s", dbFName, tableName);
    return TSDB_CODE_SUCCESS;

@@ -491,7 +497,7 @@ _return:
}


void ctgWaitAction(SCtgMetaAction *action) {
void ctgWaitOpDone(SCtgCacheOperation *action) {
  while (true) {
    tsem_wait(&gCtgMgmt.queue.rspSem);

@@ -509,7 +515,7 @@ void ctgWaitAction(SCtgMetaAction *action) {
  }
}

void ctgPopAction(SCtgMetaAction **action) {
void ctgDequeue(SCtgCacheOperation **op) {
  SCtgQNode *orig = gCtgMgmt.queue.head;

  SCtgQNode *node = gCtgMgmt.queue.head->next;

@@ -519,20 +525,20 @@ void ctgPopAction(SCtgMetaAction **action) {

  taosMemoryFreeClear(orig);

  *action = &node->action;
  *op = &node->op;
}


int32_t ctgPushAction(SCatalog* pCtg, SCtgMetaAction *action) {
int32_t ctgEnqueue(SCatalog* pCtg, SCtgCacheOperation *operation) {
  SCtgQNode *node = taosMemoryCalloc(1, sizeof(SCtgQNode));
  if (NULL == node) {
    qError("calloc %d failed", (int32_t)sizeof(SCtgQNode));
    CTG_RET(TSDB_CODE_CTG_MEM_ERROR);
  }

  action->seqId = atomic_add_fetch_64(&gCtgMgmt.queue.seqId, 1);
  operation->seqId = atomic_add_fetch_64(&gCtgMgmt.queue.seqId, 1);

  node->action = *action;
  node->op = *operation;

  CTG_LOCK(CTG_WRITE, &gCtgMgmt.queue.qlock);
  gCtgMgmt.queue.tail->next = node;

@@ -544,19 +550,19 @@ int32_t ctgPushAction(SCatalog* pCtg, SCtgMetaAction *action) {

  tsem_post(&gCtgMgmt.queue.reqSem);

  ctgDebug("action [%s] added into queue", gCtgAction[action->act].name);
  ctgDebug("action [%s] added into queue", gCtgCacheOperation[operation->opId].name);

  if (action->syncReq) {
    ctgWaitAction(action);
  if (operation->syncReq) {
    ctgWaitOpDone(operation);
  }

  return TSDB_CODE_SUCCESS;
}


int32_t ctgPutRmDBToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId) {
int32_t ctgDropDbCacheEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId) {
  int32_t code = 0;
  SCtgMetaAction action= {.act = CTG_ACT_REMOVE_DB};
  SCtgCacheOperation action= {.opId = CTG_OP_DROP_DB_CACHE};
  SCtgRemoveDBMsg *msg = taosMemoryMalloc(sizeof(SCtgRemoveDBMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgRemoveDBMsg));

@@ -574,7 +580,7 @@ int32_t ctgPutRmDBToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId) {

  action.data = msg;

  CTG_ERR_JRET(ctgPushAction(pCtg, &action));
  CTG_ERR_JRET(ctgEnqueue(pCtg, &action));

  return TSDB_CODE_SUCCESS;

@@ -585,9 +591,9 @@ _return:
}


int32_t ctgPutRmStbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq) {
int32_t ctgDropStbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq) {
  int32_t code = 0;
  SCtgMetaAction action= {.act = CTG_ACT_REMOVE_STB, .syncReq = syncReq};
  SCtgCacheOperation action= {.opId = CTG_OP_DROP_STB_META, .syncReq = syncReq};
  SCtgRemoveStbMsg *msg = taosMemoryMalloc(sizeof(SCtgRemoveStbMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgRemoveStbMsg));

@@ -602,7 +608,7 @@ int32_t ctgPutRmStbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, co

  action.data = msg;

  CTG_ERR_JRET(ctgPushAction(pCtg, &action));
  CTG_ERR_JRET(ctgEnqueue(pCtg, &action));

  return TSDB_CODE_SUCCESS;

@@ -614,9 +620,9 @@ _return:


int32_t ctgPutRmTbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq) {
int32_t ctgDropTbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq) {
  int32_t code = 0;
  SCtgMetaAction action= {.act = CTG_ACT_REMOVE_TBL, .syncReq = syncReq};
  SCtgCacheOperation action= {.opId = CTG_OP_DROP_TB_META, .syncReq = syncReq};
  SCtgRemoveTblMsg *msg = taosMemoryMalloc(sizeof(SCtgRemoveTblMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgRemoveTblMsg));

@@ -630,7 +636,7 @@ int32_t ctgPutRmTbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, con

  action.data = msg;

  CTG_ERR_JRET(ctgPushAction(pCtg, &action));
  CTG_ERR_JRET(ctgEnqueue(pCtg, &action));

  return TSDB_CODE_SUCCESS;

@@ -640,9 +646,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgPutUpdateVgToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq) {
int32_t ctgUpdateVgroupEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq) {
  int32_t code = 0;
  SCtgMetaAction action= {.act = CTG_ACT_UPDATE_VG, .syncReq = syncReq};
  SCtgCacheOperation action= {.opId = CTG_OP_UPDATE_VGROUP, .syncReq = syncReq};
  SCtgUpdateVgMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateVgMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateVgMsg));

@@ -662,7 +668,7 @@ int32_t ctgPutUpdateVgToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId,

  action.data = msg;

  CTG_ERR_JRET(ctgPushAction(pCtg, &action));
  CTG_ERR_JRET(ctgEnqueue(pCtg, &action));

  return TSDB_CODE_SUCCESS;

@@ -673,9 +679,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgPutUpdateTbToQueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq) {
int32_t ctgUpdateTbMetaEnqueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq) {
  int32_t code = 0;
  SCtgMetaAction action= {.act = CTG_ACT_UPDATE_TBL, .syncReq = syncReq};
  SCtgCacheOperation action= {.opId = CTG_OP_UPDATE_TB_META, .syncReq = syncReq};
  SCtgUpdateTblMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateTblMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateTblMsg));

@@ -692,7 +698,7 @@ int32_t ctgPutUpdateTbToQueue(SCatalog* pCtg, STableMetaOutput *output, bool syn

  action.data = msg;

  CTG_ERR_JRET(ctgPushAction(pCtg, &action));
  CTG_ERR_JRET(ctgEnqueue(pCtg, &action));

  return TSDB_CODE_SUCCESS;

@@ -703,9 +709,38 @@ _return:
  CTG_RET(code);
}

int32_t ctgPutUpdateUserToQueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq) {
int32_t ctgUpdateVgEpsetEnqueue(SCatalog* pCtg, char *dbFName, int32_t vgId, SEpSet* pEpSet) {
  int32_t code = 0;
  SCtgMetaAction action= {.act = CTG_ACT_UPDATE_USER, .syncReq = syncReq};
  SCtgCacheOperation operation= {.opId = CTG_OP_UPDATE_VG_EPSET};
  SCtgUpdateEpsetMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateEpsetMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateEpsetMsg));
    CTG_ERR_RET(TSDB_CODE_CTG_MEM_ERROR);
  }

  msg->pCtg = pCtg;
  strcpy(msg->dbFName, dbFName);
  msg->vgId = vgId;
  msg->epSet = *pEpSet;

  operation.data = msg;

  CTG_ERR_JRET(ctgEnqueue(pCtg, &operation));

  return TSDB_CODE_SUCCESS;

_return:

  taosMemoryFreeClear(msg);

  CTG_RET(code);
}


int32_t ctgUpdateUserEnqueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq) {
  int32_t code = 0;
  SCtgCacheOperation action= {.opId = CTG_OP_UPDATE_USER, .syncReq = syncReq};
  SCtgUpdateUserMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateUserMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateUserMsg));

@@ -717,7 +752,7 @@ int32_t ctgPutUpdateUserToQueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syn

  action.data = msg;

  CTG_ERR_JRET(ctgPushAction(pCtg, &action));
  CTG_ERR_JRET(ctgEnqueue(pCtg, &action));

  return TSDB_CODE_SUCCESS;

@@ -1219,7 +1254,7 @@ int32_t ctgUpdateTbMetaToCache(SCatalog* pCtg, STableMetaOutput* pOut, bool sync
  int32_t code = 0;

  CTG_ERR_RET(ctgCloneMetaOutput(pOut, &pOutput));
  CTG_ERR_JRET(ctgPutUpdateTbToQueue(pCtg, pOutput, syncReq));
  CTG_ERR_JRET(ctgUpdateTbMetaEnqueue(pCtg, pOutput, syncReq));

  return TSDB_CODE_SUCCESS;

@@ -1230,9 +1265,9 @@ _return:
}


int32_t ctgActUpdateVg(SCtgMetaAction *action) {
int32_t ctgOpUpdateVgroup(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateVgMsg *msg = action->data;
  SCtgUpdateVgMsg *msg = operation->data;

  CTG_ERR_JRET(ctgWriteDBVgInfoToCache(msg->pCtg, msg->dbFName, msg->dbId, &msg->dbInfo));

@@ -1244,9 +1279,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgActRemoveDB(SCtgMetaAction *action) {
int32_t ctgOpDropDbCache(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgRemoveDBMsg *msg = action->data;
  SCtgRemoveDBMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;

@@ -1270,9 +1305,9 @@ _return:
}


int32_t ctgActUpdateTb(SCtgMetaAction *action) {
int32_t ctgOpUpdateTbMeta(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateTblMsg *msg = action->data;
  SCtgUpdateTblMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;
  STableMetaOutput* output = msg->output;
  SCtgDBCache *dbCache = NULL;

@@ -1316,9 +1351,9 @@ _return:
}


int32_t ctgActRemoveStb(SCtgMetaAction *action) {
int32_t ctgOpDropStbMeta(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgRemoveStbMsg *msg = action->data;
  SCtgRemoveStbMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;

@@ -1362,9 +1397,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgActRemoveTb(SCtgMetaAction *action) {
int32_t ctgOpDropTbMeta(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgRemoveTblMsg *msg = action->data;
  SCtgRemoveTblMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;

@@ -1397,9 +1432,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgActUpdateUser(SCtgMetaAction *action) {
int32_t ctgOpUpdateUser(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateUserMsg *msg = action->data;
  SCtgUpdateUserMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  if (NULL == pCtg->userCache) {

@@ -1460,14 +1495,60 @@ _return:
  CTG_RET(code);
}

void ctgUpdateThreadFuncUnexpectedStopped(void) {
int32_t ctgOpUpdateEpset(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateEpsetMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;
  CTG_ERR_RET(ctgAcquireDBCache(pCtg, msg->dbFName, &dbCache));
  if (NULL == dbCache) {
    ctgDebug("db %s not exist, ignore epset update", msg->dbFName);
    goto _return;
  }

  SDBVgInfo *vgInfo = NULL;
  CTG_ERR_RET(ctgWAcquireVgInfo(pCtg, dbCache));

  if (NULL == dbCache->vgInfo) {
    ctgWReleaseVgInfo(dbCache);
    ctgDebug("vgroup in db %s not cached, ignore epset update", msg->dbFName);
    goto _return;
  }

  SVgroupInfo* pInfo = taosHashGet(dbCache->vgInfo->vgHash, &msg->vgId, sizeof(msg->vgId));
  if (NULL == pInfo) {
    ctgWReleaseVgInfo(dbCache);
    ctgDebug("no vgroup %d in db %s, ignore epset update", msg->vgId, msg->dbFName);
    goto _return;
  }

  pInfo->epSet = msg->epSet;

  ctgDebug("epset in vgroup %d updated, dbFName:%s", pInfo->vgId, msg->dbFName);

  ctgWReleaseVgInfo(dbCache);

_return:

  if (dbCache) {
    ctgReleaseDBCache(msg->pCtg, dbCache);
  }

  taosMemoryFreeClear(msg);

  CTG_RET(code);
}


void ctgUpdateThreadUnexpectedStopped(void) {
  if (CTG_IS_LOCKED(&gCtgMgmt.lock) > 0) CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
}

void* ctgUpdateThreadFunc(void* param) {
  setThreadName("catalog");
#ifdef WINDOWS
  atexit(ctgUpdateThreadFuncUnexpectedStopped);
  atexit(ctgUpdateThreadUnexpectedStopped);
#endif
  qInfo("catalog update thread started");

@@ -1483,17 +1564,17 @@ void* ctgUpdateThreadFunc(void* param) {
      break;
    }

    SCtgMetaAction *action = NULL;
    ctgPopAction(&action);
    SCatalog *pCtg = ((SCtgUpdateMsgHeader *)action->data)->pCtg;
    SCtgCacheOperation *operation = NULL;
    ctgDequeue(&operation);
    SCatalog *pCtg = ((SCtgUpdateMsgHeader *)operation->data)->pCtg;

    ctgDebug("process [%s] action", gCtgAction[action->act].name);
    ctgDebug("process [%s] operation", gCtgCacheOperation[operation->opId].name);

    (*gCtgAction[action->act].func)(action);
    (*gCtgCacheOperation[operation->opId].func)(operation);

    gCtgMgmt.queue.seqDone = action->seqId;
    gCtgMgmt.queue.seqDone = operation->seqId;

    if (action->syncReq) {
    if (operation->syncReq) {
      tsem_post(&gCtgMgmt.queue.rspSem);
    }

@@ -42,6 +42,9 @@ void ctgFreeSMetaData(SMetaData* pData) {
  }
  taosArrayDestroy(pData->pDbCfg);
  pData->pDbCfg = NULL;

  taosArrayDestroy(pData->pDbInfo);
  pData->pDbInfo = NULL;

  taosArrayDestroy(pData->pIndex);
  pData->pIndex = NULL;

@@ -293,9 +296,12 @@ void ctgFreeTask(SCtgTask* pTask) {
    }
    case CTG_TASK_GET_DB_CFG: {
      taosMemoryFreeClear(pTask->taskCtx);
      if (pTask->res) {
        taosMemoryFreeClear(pTask->res);
      }
      taosMemoryFreeClear(pTask->res);
      break;
    }
    case CTG_TASK_GET_DB_INFO: {
      taosMemoryFreeClear(pTask->taskCtx);
      taosMemoryFreeClear(pTask->res);
      break;
    }
    case CTG_TASK_GET_TB_HASH: {

@@ -41,7 +41,7 @@
namespace {

extern "C" int32_t ctgdGetClusterCacheNum(struct SCatalog* pCatalog, int32_t type);
extern "C" int32_t ctgActUpdateTb(SCtgMetaAction *action);
extern "C" int32_t ctgOpUpdateTbMeta(SCtgCacheOperation *action);
extern "C" int32_t ctgdEnableDebug(char *option);
extern "C" int32_t ctgdGetStatNum(char *option, void *res);

@@ -888,9 +888,9 @@ void *ctgTestSetCtableMetaThread(void *param) {
  int32_t           n = 0;
  STableMetaOutput *output = NULL;

  SCtgMetaAction action = {0};
  SCtgCacheOperation operation = {0};

  action.act = CTG_ACT_UPDATE_TBL;
  operation.opId = CTG_OP_UPDATE_TB_META;

  while (!ctgTestStop) {
    output = (STableMetaOutput *)taosMemoryMalloc(sizeof(STableMetaOutput));

@@ -899,9 +899,9 @@ void *ctgTestSetCtableMetaThread(void *param) {
    SCtgUpdateTblMsg *msg = (SCtgUpdateTblMsg *)taosMemoryMalloc(sizeof(SCtgUpdateTblMsg));
    msg->pCtg = pCtg;
    msg->output = output;
    action.data = msg;
    operation.data = msg;

    code = ctgActUpdateTb(&action);
    code = ctgOpUpdateTbMeta(&operation);
    if (code) {
      assert(0);
    }

@ -156,6 +156,14 @@ static int32_t translatePercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
|
|||
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
//param0
|
||||
SNode* pParamNode0 = nodesListGetNode(pFunc->pParameterList, 0);
|
||||
if (nodeType(pParamNode0) != QUERY_NODE_COLUMN) {
|
||||
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
|
||||
"The first parameter of PERCENTILE function can only be column");
|
||||
}
|
||||
|
||||
//param1
|
||||
SValueNode* pValue = (SValueNode*)nodesListGetNode(pFunc->pParameterList, 1);
|
||||
|
||||
if (pValue->datum.i < 0 || pValue->datum.i > 100) {
|
||||
|
@ -170,6 +178,7 @@ static int32_t translatePercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
|
|||
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
//set result type
|
||||
pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_DOUBLE].bytes, .type = TSDB_DATA_TYPE_DOUBLE};
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
@ -188,30 +197,47 @@ static int32_t translateApercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
|
|||
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
|
||||
uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
|
||||
if (!IS_NUMERIC_TYPE(para1Type) || !IS_INTEGER_TYPE(para2Type)) {
|
||||
//param0
|
||||
SNode* pParamNode0 = nodesListGetNode(pFunc->pParameterList, 0);
|
||||
if (nodeType(pParamNode0) != QUERY_NODE_COLUMN) {
|
||||
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
|
||||
"The first parameter of APERCENTILE function can only be column");
|
||||
}
|
||||
|
||||
//param1
|
||||
SNode* pParamNode1 = nodesListGetNode(pFunc->pParameterList, 1);
|
||||
if (nodeType(pParamNode1) != QUERY_NODE_VALUE) {
|
||||
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
SNode* pParamNode = nodesListGetNode(pFunc->pParameterList, 1);
|
||||
if (nodeType(pParamNode) != QUERY_NODE_VALUE) {
|
||||
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
SValueNode* pValue = (SValueNode*)pParamNode;
|
||||
SValueNode* pValue = (SValueNode*)pParamNode1;
|
||||
if (pValue->datum.i < 0 || pValue->datum.i > 100) {
|
||||
return invaildFuncParaValueErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
pValue->notReserved = true;
|
||||
|
||||
uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
|
||||
uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
|
||||
if (!IS_NUMERIC_TYPE(para1Type) || !IS_INTEGER_TYPE(para2Type)) {
|
||||
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
//param2
|
||||
if (3 == numOfParams) {
|
||||
SNode* pPara3 = nodesListGetNode(pFunc->pParameterList, 2);
|
||||
if (QUERY_NODE_VALUE != nodeType(pPara3) || !validAperventileAlgo((SValueNode*)pPara3)) {
|
||||
uint8_t para3Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type;
|
||||
if (!IS_VAR_DATA_TYPE(para3Type)) {
|
||||
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
SNode* pParamNode2 = nodesListGetNode(pFunc->pParameterList, 2);
|
||||
if (QUERY_NODE_VALUE != nodeType(pParamNode2) || !validAperventileAlgo((SValueNode*)pParamNode2)) {
|
||||
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
|
||||
"Third parameter algorithm of apercentile must be 'default' or 't-digest'");
|
||||
}
|
||||
|
||||
pValue = (SValueNode*)pParamNode2;
|
||||
pValue->notReserved = true;
|
||||
}
|
||||
|
||||
pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_DOUBLE].bytes, .type = TSDB_DATA_TYPE_DOUBLE};
|
||||
|
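The rewritten translateApercentile above layers its checks in a fixed order: parameter count, first parameter must be a column, second must be a literal in [0, 100] with integer type over a numeric column, and an optional third literal must name the algorithm ('default' or 't-digest'). A rough Python analogue of that validation ladder (illustrative only; the `Param` shape is hypothetical):

```python
from collections import namedtuple

Param = namedtuple("Param", "kind dtype value")  # kind: "column" | "value"
VALID_ALGOS = {"default", "t-digest"}

def validate_apercentile(params):
    if len(params) not in (2, 3):
        raise ValueError("APERCENTILE expects 2 or 3 parameters")
    col, percent = params[0], params[1]
    if col.kind != "column":
        raise ValueError("first parameter must be a column")
    if percent.kind != "value":
        raise TypeError("second parameter must be a literal value")
    if not (0 <= percent.value <= 100):
        raise ValueError("percent must be in [0, 100]")
    if col.dtype not in ("int", "float", "double") or percent.dtype != "int":
        raise TypeError("column must be numeric and percent an integer")
    if len(params) == 3:
        algo = params[2]
        if algo.kind != "value" or algo.value not in VALID_ALGOS:
            raise ValueError("algorithm must be 'default' or 't-digest'")
```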
@@ -700,6 +726,11 @@ static int32_t translateDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {

  //param1
  if (numOfParams == 2) {
    uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
    if (!IS_INTEGER_TYPE(paraType)) {
      return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
    }

    SNode* pParamNode1 = nodesListGetNode(pFunc->pParameterList, 1);
    if (QUERY_NODE_VALUE != nodeType(pParamNode1)) {
      return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
@@ -1965,7 +1965,7 @@ bool apercentileFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResult
  if (pCtx->numOfParams == 2) {
    pInfo->algo = APERCT_ALGO_DEFAULT;
  } else if (pCtx->numOfParams == 3) {
    pInfo->algo = getApercentileAlgo(pCtx->param[2].param.pz);
    pInfo->algo = getApercentileAlgo(varDataVal(pCtx->param[2].param.pz));
    if (pInfo->algo == APERCT_ALGO_UNKNOWN) {
      return false;
    }
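The one-line change above matters because `param.pz` points at a var-data buffer whose string payload sits behind a length header; matching the algorithm name against the raw buffer would include the header bytes and always yield `APERCT_ALGO_UNKNOWN`. A small Python illustration of that layout (assuming a 2-byte little-endian length prefix, which is what `varDataVal` skips; verify the header width in the source):

```python
import struct

def var_data_val(buf: bytes) -> bytes:
    """Skip the length header and return only the string payload."""
    (n,) = struct.unpack_from("<H", buf, 0)  # assumed 2-byte length prefix
    return buf[2:2 + n]

raw = struct.pack("<H", 8) + b"t-digest"
assert raw != b"t-digest"                 # whole buffer never matches
assert var_data_val(raw) == b"t-digest"   # payload matches as intended
```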
@@ -925,10 +925,9 @@ int32_t toUnixtimestampFunction(SScalarParam *pInput, int32_t inputNum, SScalarP
    int32_t ret = convertStringToTimestamp(type, input, timePrec, &timeVal);
    if (ret != TSDB_CODE_SUCCESS) {
      colDataAppendNULL(pOutput->columnData, i);
      continue;
    } else {
      colDataAppend(pOutput->columnData, i, (char *)&timeVal, false);
    }

    colDataAppend(pOutput->columnData, i, (char *)&timeVal, false);
  }

  pOutput->numOfRows = pInput->numOfRows;
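The toUnixtimestampFunction hunk above flattens an if/else into an early `continue`: rows whose strings fail to parse get a NULL appended and the loop moves on, so only the fallthrough path appends a converted value. The same control flow in a Python sketch (hypothetical parse helper):

```python
def to_unix_timestamps(strings, parse):
    out = []
    for s in strings:
        try:
            ts = parse(s)        # convertStringToTimestamp
        except ValueError:
            out.append(None)     # colDataAppendNULL, then continue
            continue
        out.append(ts)           # colDataAppend on the fallthrough path
    return out
```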
@@ -141,6 +141,8 @@ sql connect

sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from d1.tb;
sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from d1.tb;

sql use d1
sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from tb
if $data00 != 24 then
  return -1

@@ -22,8 +22,8 @@ class TDTestCase:

    def init(self, conn, logSql):
        tdLog.debug(f"start to excute {__file__}")
        #tdSql.init(conn.cursor())
        tdSql.init(conn.cursor(), logSql) # output sql.txt file
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql) # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

@@ -186,7 +186,7 @@ class TDTestCase:
        time.sleep(1)

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1

@@ -228,7 +228,7 @@ class TDTestCase:
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'batchNum': 200, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

@@ -300,7 +300,7 @@ class TDTestCase:
        time.sleep(1)

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1

@@ -349,8 +349,8 @@ class TDTestCase:
                         'vgroups': 1, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 30000, \
                         'batchNum': 100, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 200, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

@@ -381,8 +381,8 @@ class TDTestCase:
                          'vgroups': 1, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 30000, \
                          'batchNum': 100, \
                          'rowsPerTbl': 10000, \
                          'batchNum': 200, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict2['cfg'] = cfgPath
        tdSql.execute("create stable if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(parameterDict2['dbName'], parameterDict2['stbName']))

@@ -432,7 +432,7 @@ class TDTestCase:
        time.sleep(1)

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1

@@ -22,8 +22,8 @@ class TDTestCase:

    def init(self, conn, logSql):
        tdLog.debug(f"start to excute {__file__}")
        #tdSql.init(conn.cursor())
        tdSql.init(conn.cursor(), logSql) # output sql.txt file
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql) # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

@@ -167,7 +167,7 @@ class TDTestCase:
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

@@ -197,7 +197,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -236,7 +236,7 @@ class TDTestCase:
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 20
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -264,7 +264,7 @@ class TDTestCase:
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

@@ -298,7 +298,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 20
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -313,8 +313,8 @@ class TDTestCase:
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
        tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
        if not (totalConsumeRows >= expectrowcnt):
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)

@@ -330,7 +330,7 @@ class TDTestCase:
                         'vgroups': 4, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

@@ -364,7 +364,7 @@ class TDTestCase:
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 10
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -399,7 +399,7 @@ class TDTestCase:
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

@@ -416,7 +416,7 @@ class TDTestCase:
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 10000, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

@@ -446,7 +446,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 10
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -470,336 +470,6 @@ class TDTestCase:

        tdLog.printNoPrefix("======== test case 3 end ...... ")

    def tmqCase4(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 4: Produce while two consumers to subscribe one db, include 2 stb")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db4', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db4', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 10000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db1'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)

        tdLog.printNoPrefix("======== test case 4 end ...... ")

    def tmqCase5(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 5: Produce while two consumers to subscribe one db, firstly create one stb, after start consume create other stb")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db5', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db5', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 10000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        tdLog.info("create topics from db")
        topicName1 = 'topic_db1'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows < expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)

        tdLog.printNoPrefix("======== test case 5 end ...... ")

    def tmqCase6(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 6: Produce while one consumers to subscribe tow topic, Each contains one db")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db60', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db61', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 10000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db60'
        topicName2 = 'topic_db61'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
        tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1 + ',' + topicName2
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        #consumerId = 1
        #self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)
        tdSql.query("drop topic %s"%topicName2)

        tdLog.printNoPrefix("======== test case 6 end ...... ")

    def tmqCase7(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 7: Produce while two consumers to subscribe tow topic, Each contains one db")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db70', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db71', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 10000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db60'
        topicName2 = 'topic_db61'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
        tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1 + ',' + topicName2
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)
        tdSql.query("drop topic %s"%topicName2)

        tdLog.printNoPrefix("======== test case 7 end ...... ")

    def run(self):
        tdSql.prepare()

@@ -815,12 +485,7 @@ class TDTestCase:
        self.tmqCase2(cfgPath, buildPath)
        self.tmqCase2a(cfgPath, buildPath)
        self.tmqCase3(cfgPath, buildPath)
        self.tmqCase4(cfgPath, buildPath)
        self.tmqCase5(cfgPath, buildPath)
        self.tmqCase6(cfgPath, buildPath)
        self.tmqCase7(cfgPath, buildPath)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")
@@ -0,0 +1,515 @@

import taos
import sys
import time
import socket
import os
import threading

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *

class TDTestCase:
    hostname = socket.gethostname()
    #rpcDebugFlagVal = '143'
    #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #print ("===================: ", updatecfgDict)

    def init(self, conn, logSql):
        tdLog.debug(f"start to excute {__file__}")
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql) # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def newcur(self,cfg,host,port):
        user = "root"
        password = "taosdata"
        con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port)
        cur=con.cursor()
        print(cur)
        return cur

    def initConsumerTable(self,cdbName='cdb'):
        tdLog.info("create consume database, and consume info table, and consume result table")
        tdSql.query("create database if not exists %s vgroups 1"%(cdbName))
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("drop table if exists %s.consumeresult "%(cdbName))

        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
        tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)

    def insertConsumerInfo(self,consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifmanualcommit,cdbName='cdb'):
        sql = "insert into %s.consumeinfo values "%cdbName
        sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
        tdLog.info("consume info sql: %s"%sql)
        tdSql.query(sql)

    def selectConsumeResult(self,expectRows,cdbName='cdb'):
        resultList=[]
        while 1:
            tdSql.query("select * from %s.consumeresult"%cdbName)
            #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3))
            if tdSql.getRows() == expectRows:
                break
            else:
                time.sleep(5)

        for i in range(expectRows):
            tdLog.info ("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3)))
            resultList.append(tdSql.getData(i , 3))

        return resultList

    def startTmqSimProcess(self,buildPath,cfgPath,pollDelay,dbName,showMsg=1,showRow=1,cdbName='cdb',valgrind=0):
        shellCmd = 'nohup '
        if valgrind == 1:
            logFile = cfgPath + '/../log/valgrind-tmq.log'
            shellCmd = 'nohup valgrind --log-file=' + logFile
            shellCmd += '--tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '

        shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath
        shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
        shellCmd += "> /dev/null 2>&1 &"
        tdLog.info(shellCmd)
        os.system(shellCmd)

    def create_tables(self,tsql, dbName,vgroups,stbName,ctbNum):
        tsql.execute("create database if not exists %s vgroups %d"%(dbName, vgroups))
        tsql.execute("use %s" %dbName)
        tsql.execute("create table if not exists %s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%stbName)
        pre_create = "create table"
        sql = pre_create
        #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
        for i in range(ctbNum):
            sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1)
            if (i > 0) and (i%100 == 0):
                tsql.execute(sql)
                sql = pre_create
        if sql != pre_create:
            tsql.execute(sql)

        event.set()
        tdLog.debug("complete to create database[%s], stable[%s] and %d child tables" %(dbName, stbName, ctbNum))
        return

    def insert_data(self,tsql,dbName,stbName,ctbNum,rowsPerTbl,batchNum,startTs):
        tdLog.debug("start to insert data ............")
        tsql.execute("use %s" %dbName)
        pre_insert = "insert into "
        sql = pre_insert

        t = time.time()
        startTs = int(round(t * 1000))
        #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
        for i in range(ctbNum):
            sql += " %s_%d values "%(stbName,i)
            for j in range(rowsPerTbl):
                sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
                if (j > 0) and ((j%batchNum == 0) or (j == rowsPerTbl - 1)):
                    tsql.execute(sql)
                    if j < rowsPerTbl - 1:
                        sql = "insert into %s_%d values " %(stbName,i)
                    else:
                        sql = "insert into "
        #end sql
        if sql != pre_insert:
            #print("insert sql:%s"%sql)
            tsql.execute(sql)
        tdLog.debug("insert data ............ [OK]")
        return

    def prepareEnv(self, **parameterDict):
        print ("input parameters:")
        print (parameterDict)
        # create new connector for my thread
        tsql=self.newcur(parameterDict['cfg'], 'localhost', 6030)
        self.create_tables(tsql,\
                           parameterDict["dbName"],\
                           parameterDict["vgroups"],\
                           parameterDict["stbName"],\
                           parameterDict["ctbNum"])

        self.insert_data(tsql,\
                         parameterDict["dbName"],\
                         parameterDict["stbName"],\
                         parameterDict["ctbNum"],\
                         parameterDict["rowsPerTbl"],\
                         parameterDict["batchNum"],\
                         parameterDict["startTs"])
        return

    def tmqCase4(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 4: Produce while two consumers to subscribe one db, include 2 stb")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db4', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db4', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db1'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)

        tdLog.printNoPrefix("======== test case 4 end ...... ")

    def tmqCase5(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 5: Produce while two consumers to subscribe one db, firstly create one stb, after start consume create other stb")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db5', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db5', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        tdLog.info("create topics from db")
        topicName1 = 'topic_db1'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows < expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)

        tdLog.printNoPrefix("======== test case 5 end ...... ")

    def tmqCase6(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 6: Produce while one consumers to subscribe tow topic, Each contains one db")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db60', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db61', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db60'
        topicName2 = 'topic_db61'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
        tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1 + ',' + topicName2
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        #consumerId = 1
        #self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)
        tdSql.query("drop topic %s"%topicName2)

        tdLog.printNoPrefix("======== test case 6 end ...... ")

    def tmqCase7(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 7: Produce while two consumers to subscribe tow topic, Each contains one db")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db70', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db71', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000} # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db60'
        topicName2 = 'topic_db61'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
        tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1 + ',' + topicName2
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)
        tdSql.query("drop topic %s"%topicName2)

        tdLog.printNoPrefix("======== test case 7 end ...... ")

    def run(self):
        tdSql.prepare()

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        self.tmqCase4(cfgPath, buildPath)
        self.tmqCase5(cfgPath, buildPath)
        self.tmqCase6(cfgPath, buildPath)
        self.tmqCase7(cfgPath, buildPath)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
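selectConsumeResult in the new file above polls the consumeresult table in an unbounded `while 1` loop, so a crashed consumer hangs the whole test run. A bounded variant of the same wait is a common hardening (a sketch, not part of this commit):

```python
import time

def wait_for_rows(query_rows, expect_rows, timeout_s=300, interval_s=5):
    """Poll until query_rows() reaches expect_rows, failing instead of spinning forever."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if query_rows() >= expect_rows:
            return True
        time.sleep(interval_s)
    return False
```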
@@ -22,8 +22,8 @@ class TDTestCase:

    def init(self, conn, logSql):
        tdLog.debug(f"start to excute {__file__}")
        #tdSql.init(conn.cursor())
        tdSql.init(conn.cursor(), logSql) # output sql.txt file
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql) # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

@@ -198,7 +198,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -276,7 +276,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -354,7 +354,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 15
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@@ -425,7 +425,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -29,8 +29,8 @@ class TDTestCase:
|
|||
|
||||
def init(self, conn, logSql):
|
||||
tdLog.debug(f"start to excute {__file__}")
|
||||
#tdSql.init(conn.cursor())
|
||||
tdSql.init(conn.cursor(), logSql) # output sql.txt file
|
||||
tdSql.init(conn.cursor())
|
||||
#tdSql.init(conn.cursor(), logSql) # output sql.txt file
|
||||
|
||||
def getBuildPath(self):
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
@ -183,18 +183,15 @@ class TDTestCase:
|
|||
|
||||
return
|
||||
|
||||
def tmqCase1(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 1: ")
|
||||
def tmqCase8(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 8: ")
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
auotCtbNum = 5
|
||||
auotCtbPrefix = 'autoCtb'
|
||||
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'actionType': 0, \
|
||||
'dbName': 'db1', \
|
||||
'dbName': 'db8', \
|
||||
'dropFlag': 1, \
|
||||
'vgroups': 4, \
|
||||
'replica': 1, \
|
||||
|
@ -204,67 +201,102 @@ class TDTestCase:
|
|||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
|
||||
self.create_database(tdSql, parameterDict["dbName"])
|
||||
self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
|
||||
self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
|
||||
self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])
|
||||
self.insert_data(tdSql,\
|
||||
parameterDict["dbName"],\
|
||||
parameterDict["stbName"],\
|
||||
parameterDict["ctbNum"],\
|
||||
parameterDict["rowsPerTbl"],\
|
||||
parameterDict["batchNum"])
|
||||
|
||||
tdLog.info("create topics from stb1")
|
||||
topicFromStb1 = 'topic_stb1'
|
||||
|
||||
tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
|
||||
topicList = topicFromStb1
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 0
|
||||
ifManualCommit = 1
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:earliest'
|
||||
auto.offset.reset:latest'
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
tdLog.info("start consume processor")
|
||||
tdLog.info("start consume 0 processor")
|
||||
pollDelay = 100
|
||||
showMsg = 1
|
||||
showRow = 1
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
# add some new child tables using auto ctreating mode
|
||||
time.sleep(1)
|
||||
for index in range(auotCtbNum):
|
||||
tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], parameterDict["stbName"], index))
|
||||
|
||||
self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])
|
||||
|
||||
tdLog.info("insert process end, and start to check consume result")
|
||||
tdLog.info("start to check consume 0 result")
|
||||
expectRows = 1
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != 0:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdLog.info("start consume 1 processor")
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start one new thread to insert data")
|
||||
parameterDict['actionType'] = actionType.INSERT_DATA
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
prepareEnvThread.join()
|
||||
|
||||
tdLog.info("start to check consume 0 and 1 result")
|
||||
expectRows = 2
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdLog.info("start consume 2 processor")
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start one new thread to insert data")
|
||||
parameterDict['actionType'] = actionType.INSERT_DATA
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
prepareEnvThread.join()
|
||||
|
||||
tdLog.info("start to check consume 0 and 1 and 2 result")
|
||||
expectRows = 3
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt*2:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdSql.query("drop topic %s"%topicFromStb1)
|
||||
|
||||
tdLog.printNoPrefix("======== test case 1 end ...... ")
|
||||
tdLog.printNoPrefix("======== test case 8 end ...... ")
|
||||
|
||||
def tmqCase2(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 2: ")
|
||||
def tmqCase9(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 9: ")
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
auotCtbNum = 10
|
||||
auotCtbPrefix = 'autoCtb'
|
||||
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'actionType': 0, \
|
||||
'dbName': 'db2', \
|
||||
'dbName': 'db9', \
|
||||
'dropFlag': 1, \
|
||||
'vgroups': 4, \
|
||||
'replica': 1, \
|
||||
|
@ -274,58 +306,96 @@ class TDTestCase:
|
|||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
|
||||
self.create_database(tdSql, parameterDict["dbName"])
|
||||
self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
|
||||
self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
|
||||
self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])
|
||||
self.insert_data(tdSql,\
|
||||
parameterDict["dbName"],\
|
||||
parameterDict["stbName"],\
|
||||
parameterDict["ctbNum"],\
|
||||
parameterDict["rowsPerTbl"],\
|
||||
parameterDict["batchNum"])
|
||||
|
||||
self.create_stable(tdSql, parameterDict["dbName"], 'stb2')
|
||||
|
||||
tdLog.info("create topics from stb0/stb1")
|
||||
tdLog.info("create topics from stb1")
|
||||
topicFromStb1 = 'topic_stb1'
|
||||
topicFromStb2 = 'topic_stb2'
|
||||
|
||||
|
||||
tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
|
||||
tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb2, parameterDict['dbName'], 'stb2'))
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
|
||||
topicList = '%s, %s'%(topicFromStb1,topicFromStb2)
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
|
||||
topicList = topicFromStb1
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 0
|
||||
ifManualCommit = 1
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:earliest'
|
||||
auto.offset.reset:latest'
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
tdLog.info("start consume processor")
|
||||
tdLog.info("start consume 0 processor")
|
||||
pollDelay = 100
|
||||
showMsg = 1
|
||||
showRow = 1
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
# add some new child tables using auto ctreating mode
|
||||
time.sleep(1)
|
||||
for index in range(auotCtbNum):
|
||||
tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], 'stb2', index))
|
||||
|
||||
self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])
        tdLog.info("insert process end, and start to check consume result")
        tdLog.info("start to check consume 0 result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 1 processor")
        self.initConsumerInfoTable()
        consumerId = 1
        ifManualCommit = 0
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 2 processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 and 2 result")
        expectRows = 3
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt*2:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 2 end ...... ")
        tdLog.printNoPrefix("======== test case 9 end ...... ")

    def run(self):
        tdSql.prepare()

@@ -338,8 +408,8 @@ class TDTestCase:
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        self.tmqCase1(cfgPath, buildPath)
        self.tmqCase2(cfgPath, buildPath)
        self.tmqCase8(cfgPath, buildPath)
        self.tmqCase9(cfgPath, buildPath)

    def stop(self):
        tdSql.close()

@@ -0,0 +1,607 @@
import taos
import sys
import time
import socket
import os
import threading
from enum import Enum

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *

class actionType(Enum):
    CREATE_DATABASE = 0
    CREATE_STABLE   = 1
    CREATE_CTABLE   = 2
    INSERT_DATA     = 3

class TDTestCase:
    hostname = socket.gethostname()
    #rpcDebugFlagVal = '143'
    #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #print ("===================: ", updatecfgDict)

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def newcur(self,cfg,host,port):
        user = "root"
        password = "taosdata"
        con = taos.connect(host=host, user=user, password=password, config=cfg, port=port)
        cur = con.cursor()
        print(cur)
        return cur

    def initConsumerTable(self,cdbName='cdb'):
        tdLog.info("create consume database, and consume info table, and consume result table")
        tdSql.query("create database if not exists %s vgroups 1"%(cdbName))
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("drop table if exists %s.consumeresult "%(cdbName))

        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
        tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)

    def initConsumerInfoTable(self,cdbName='cdb'):
        tdLog.info("drop consumeinfo table")
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)

    def insertConsumerInfo(self,consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifmanualcommit,cdbName='cdb'):
        sql = "insert into %s.consumeinfo values "%cdbName
        sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
        tdLog.info("consume info sql: %s"%sql)
        tdSql.query(sql)

    def selectConsumeResult(self,expectRows,cdbName='cdb'):
        resultList = []
        while 1:
            tdSql.query("select * from %s.consumeresult"%cdbName)
            #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3)))
            if tdSql.getRows() == expectRows:
                break
            else:
                time.sleep(5)

        for i in range(expectRows):
            tdLog.info("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3)))
            resultList.append(tdSql.getData(i , 3))

        return resultList
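    # Note: selectConsumeResult() blocks by polling cdb.consumeresult every 5s
    # until each tmq_sim consumer has written its result row, then returns the
    # per-consumer consumed-row counts.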
    def startTmqSimProcess(self,buildPath,cfgPath,pollDelay,dbName,showMsg=1,showRow=1,cdbName='cdb',valgrind=0):
        shellCmd = 'nohup '
        if valgrind == 1:
            logFile = cfgPath + '/../log/valgrind-tmq.log'
            shellCmd = 'nohup valgrind --log-file=' + logFile
            shellCmd += ' --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '

        shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath
        shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
        shellCmd += "> /dev/null 2>&1 &"
        tdLog.info(shellCmd)
        os.system(shellCmd)
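    # tmq_sim flags as invoked here: -c config dir, -y poll delay, -d database to
    # consume from, -g show-message flag, -r show-row flag, -w database holding
    # the consumeinfo/consumeresult tables.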
    def create_database(self,tsql, dbName,dropFlag=1,vgroups=4,replica=1):
        if dropFlag == 1:
            tsql.execute("drop database if exists %s"%(dbName))

        tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica))
        tdLog.debug("complete to create database %s"%(dbName))
        return

    def create_stable(self,tsql, dbName,stbName):
        tsql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(dbName, stbName))
        tdLog.debug("complete to create %s.%s" %(dbName, stbName))
        return

    def create_ctables(self,tsql, dbName,stbName,ctbNum):
        tsql.execute("use %s" %dbName)
        pre_create = "create table"
        sql = pre_create
        #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
        for i in range(ctbNum):
            sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1)
            if (i > 0) and (i%100 == 0):
                tsql.execute(sql)
                sql = pre_create
        if sql != pre_create:
            tsql.execute(sql)

        tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName))
        return

    def insert_data(self,tsql,dbName,stbName,ctbNum,rowsPerTbl,batchNum,startTs=0):
        tdLog.debug("start to insert data ............")
        tsql.execute("use %s" %dbName)
        pre_insert = "insert into "
        sql = pre_insert

        if startTs == 0:
            t = time.time()
            startTs = int(round(t * 1000))

        #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
        rowsOfSql = 0
        for i in range(ctbNum):
            sql += " %s_%d values "%(stbName,i)
            for j in range(rowsPerTbl):
                sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
                rowsOfSql += 1
                if (j > 0) and ((rowsOfSql == batchNum) or (j == rowsPerTbl - 1)):
                    tsql.execute(sql)
                    rowsOfSql = 0
                    if j < rowsPerTbl - 1:
                        sql = "insert into %s_%d values " %(stbName,i)
                    else:
                        sql = "insert into "
        #end sql
        if sql != pre_insert:
            #print("insert sql:%s"%sql)
            tsql.execute(sql)
        tdLog.debug("insert data ............ [OK]")
        return

    def prepareEnv(self, **parameterDict):
        # create new connector for my thread
        tsql = self.newcur(parameterDict['cfg'], 'localhost', 6030)

        if parameterDict["actionType"] == actionType.CREATE_DATABASE:
            self.create_database(tsql, parameterDict["dbName"])
        elif parameterDict["actionType"] == actionType.CREATE_STABLE:
            self.create_stable(tsql, parameterDict["dbName"], parameterDict["stbName"])
        elif parameterDict["actionType"] == actionType.CREATE_CTABLE:
            self.create_ctables(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        elif parameterDict["actionType"] == actionType.INSERT_DATA:
            self.insert_data(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"],\
                             parameterDict["rowsPerTbl"],parameterDict["batchNum"])
        else:
            tdLog.exit("unsupported action: %s"%parameterDict["actionType"])

        return

    def tmqCase10(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 10: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db10', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,\
                         parameterDict["dbName"],\
                         parameterDict["stbName"],\
                         parameterDict["ctbNum"],\
                         parameterDict["rowsPerTbl"],\
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:latest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume 0 processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume 0 result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 1 processor")
        self.initConsumerInfoTable()
        consumerId = 1
        ifManualCommit = 1
        self.insertConsumerInfo(consumerId, expectrowcnt-10000,topicList,keyList,ifcheckdata,ifManualCommit)
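        # expectrowcnt-10000 here is the row budget handed to the simulated
        # consumer; tmq_sim presumably stops once it has consumed that many rows,
        # leaving the remainder for the next consumer in group cgrp1.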
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt-10000:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt-10000))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 2 processor")
        self.initConsumerInfoTable()
        consumerId = 2
        ifManualCommit = 1
        self.insertConsumerInfo(consumerId, expectrowcnt+10000,topicList,keyList,ifcheckdata,ifManualCommit)
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 and 2 result")
        expectRows = 3
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt*2:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 10 end ...... ")

    def tmqCase11(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 11: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db11', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,\
                         parameterDict["dbName"],\
                         parameterDict["stbName"],\
                         parameterDict["ctbNum"],\
                         parameterDict["rowsPerTbl"],\
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
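        # auto.offset.reset:none provides no fallback when the group has no
        # committed offset, so both consumers in this case are expected to
        # consume 0 rows.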
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 11 end ...... ")

    def tmqCase12(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 12: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db12', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,\
                         parameterDict["dbName"],\
                         parameterDict["stbName"],\
                         parameterDict["ctbNum"],\
                         parameterDict["rowsPerTbl"],\
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
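        # Consumer 0 (earliest, no manual commit) consumed expectrowcnt/4; since
        # nothing was committed, consumer 1 with auto.offset.reset:none should get
        # nothing, and the combined total checked below stays at expectrowcnt/4.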
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 12 end ...... ")

    def tmqCase13(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 13: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db13', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,\
                         parameterDict["dbName"],\
                         parameterDict["stbName"],\
                         parameterDict["ctbNum"],\
                         parameterDict["rowsPerTbl"],\
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt/2,topicList,keyList,ifcheckdata,ifManualCommit)
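        # Consumer 1 rejoins group cgrp1 with auto.offset.reset:none and a budget
        # of expectrowcnt/2, so it should resume from consumer 0's committed
        # offset; the next check expects expectrowcnt*(1/2+1/4) rows in total.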
        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt*(1/2+1/4):
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*(1/2+1/4)))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 2
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 3
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 13 end ...... ")

    def run(self):
        tdSql.prepare()

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        self.tmqCase10(cfgPath, buildPath)
        self.tmqCase11(cfgPath, buildPath)
        self.tmqCase12(cfgPath, buildPath)
        self.tmqCase13(cfgPath, buildPath)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

@@ -0,0 +1,351 @@
import taos
import sys
import time
import socket
import os
import threading
from enum import Enum

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *

class actionType(Enum):
    CREATE_DATABASE = 0
    CREATE_STABLE   = 1
    CREATE_CTABLE   = 2
    INSERT_DATA     = 3

class TDTestCase:
    hostname = socket.gethostname()
    #rpcDebugFlagVal = '143'
    #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #print ("===================: ", updatecfgDict)

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def newcur(self,cfg,host,port):
        user = "root"
        password = "taosdata"
        con = taos.connect(host=host, user=user, password=password, config=cfg, port=port)
        cur = con.cursor()
        print(cur)
        return cur

    def initConsumerTable(self,cdbName='cdb'):
        tdLog.info("create consume database, and consume info table, and consume result table")
        tdSql.query("create database if not exists %s vgroups 1"%(cdbName))
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("drop table if exists %s.consumeresult "%(cdbName))

        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
        tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)

    def initConsumerInfoTable(self,cdbName='cdb'):
        tdLog.info("drop consumeinfo table")
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)

    def insertConsumerInfo(self,consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifmanualcommit,cdbName='cdb'):
        sql = "insert into %s.consumeinfo values "%cdbName
        sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
        tdLog.info("consume info sql: %s"%sql)
        tdSql.query(sql)

    def selectConsumeResult(self,expectRows,cdbName='cdb'):
        resultList = []
        while 1:
            tdSql.query("select * from %s.consumeresult"%cdbName)
            #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3)))
            if tdSql.getRows() == expectRows:
                break
            else:
                time.sleep(5)

        for i in range(expectRows):
            tdLog.info("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3)))
            resultList.append(tdSql.getData(i , 3))

        return resultList

    def startTmqSimProcess(self,buildPath,cfgPath,pollDelay,dbName,showMsg=1,showRow=1,cdbName='cdb',valgrind=0):
        shellCmd = 'nohup '
        if valgrind == 1:
            logFile = cfgPath + '/../log/valgrind-tmq.log'
            shellCmd = 'nohup valgrind --log-file=' + logFile
            shellCmd += ' --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '

        shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath
        shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
        shellCmd += "> /dev/null 2>&1 &"
        tdLog.info(shellCmd)
        os.system(shellCmd)

    def create_database(self,tsql, dbName,dropFlag=1,vgroups=4,replica=1):
        if dropFlag == 1:
            tsql.execute("drop database if exists %s"%(dbName))

        tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica))
        tdLog.debug("complete to create database %s"%(dbName))
        return

    def create_stable(self,tsql, dbName,stbName):
        tsql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(dbName, stbName))
        tdLog.debug("complete to create %s.%s" %(dbName, stbName))
        return

    def create_ctables(self,tsql, dbName,stbName,ctbNum):
        tsql.execute("use %s" %dbName)
        pre_create = "create table"
        sql = pre_create
        #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
        for i in range(ctbNum):
            sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1)
            if (i > 0) and (i%100 == 0):
                tsql.execute(sql)
                sql = pre_create
        if sql != pre_create:
            tsql.execute(sql)

        tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName))
        return

    def insert_data(self,tsql,dbName,stbName,ctbNum,rowsPerTbl,batchNum,startTs=0):
        tdLog.debug("start to insert data ............")
        tsql.execute("use %s" %dbName)
        pre_insert = "insert into "
        sql = pre_insert

        if startTs == 0:
            t = time.time()
            startTs = int(round(t * 1000))

        #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
        rowsOfSql = 0
        for i in range(ctbNum):
            sql += " %s_%d values "%(stbName,i)
            for j in range(rowsPerTbl):
                sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
                rowsOfSql += 1
                if (j > 0) and ((rowsOfSql == batchNum) or (j == rowsPerTbl - 1)):
                    tsql.execute(sql)
                    rowsOfSql = 0
                    if j < rowsPerTbl - 1:
                        sql = "insert into %s_%d values " %(stbName,i)
                    else:
                        sql = "insert into "
        #end sql
        if sql != pre_insert:
            #print("insert sql:%s"%sql)
            tsql.execute(sql)
        tdLog.debug("insert data ............ [OK]")
        return

    def prepareEnv(self, **parameterDict):
        # create new connector for my thread
        tsql = self.newcur(parameterDict['cfg'], 'localhost', 6030)

        if parameterDict["actionType"] == actionType.CREATE_DATABASE:
            self.create_database(tsql, parameterDict["dbName"])
        elif parameterDict["actionType"] == actionType.CREATE_STABLE:
            self.create_stable(tsql, parameterDict["dbName"], parameterDict["stbName"])
        elif parameterDict["actionType"] == actionType.CREATE_CTABLE:
            self.create_ctables(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        elif parameterDict["actionType"] == actionType.INSERT_DATA:
            self.insert_data(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"],\
                             parameterDict["rowsPerTbl"],parameterDict["batchNum"])
        else:
            tdLog.exit("unsupported action: %s"%parameterDict["actionType"])

        return

    def tmqCase1(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 1: ")

        self.initConsumerTable()

        auotCtbNum = 5
        auotCtbPrefix = 'autoCtb'

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db1', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # add some new child tables using auto creating mode
        time.sleep(1)
        for index in range(auotCtbNum):
            tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], parameterDict["stbName"], index))

        self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 1 end ...... ")

    def tmqCase2(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 2: ")

        self.initConsumerTable()

        auotCtbNum = 10
        auotCtbPrefix = 'autoCtb'

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db2', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])

        self.create_stable(tdSql, parameterDict["dbName"], 'stb2')

        tdLog.info("create topics from stb1/stb2")
        topicFromStb1 = 'topic_stb1'
        topicFromStb2 = 'topic_stb2'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb2, parameterDict['dbName'], 'stb2'))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
        topicList = '%s, %s'%(topicFromStb1,topicFromStb2)
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # add some new child tables using auto creating mode
        time.sleep(1)
        for index in range(auotCtbNum):
            tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], 'stb2', index))

        self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])
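        # Here the new child tables hang off stb2; since both topic_stb1 and
        # topic_stb2 are subscribed, their rows count toward the expected total.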
        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 2 end ...... ")

    def run(self):
        tdSql.prepare()

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        self.tmqCase1(cfgPath, buildPath)
        self.tmqCase2(cfgPath, buildPath)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@@ -29,8 +29,8 @@ class TDTestCase:

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        #tdSql.init(conn.cursor())
        tdSql.init(conn.cursor(), logSql)  # output sql.txt file
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
@@ -86,4 +86,4 @@ python3 ./test.py -f 7-tmq/subscribeStb1.py
python3 ./test.py -f 7-tmq/subscribeStb2.py
python3 ./test.py -f 7-tmq/subscribeStb3.py
python3 ./test.py -f 7-tmq/subscribeStb4.py
python3 ./test.py -f 7-tmq/subscribeStb2.py
python3 ./test.py -f 7-tmq/subscribeStb2.py