Merge branch '3.0' into test/jcy

commit b91849f118

@@ -7,8 +7,6 @@ description: "Syntax rules supported by TAOS SQL, the main query functions, the supported SQ
TAOS SQL is the main tool for users to write and query data in TDengine. To help users get started quickly, TAOS SQL provides, to some extent, a style and mode similar to standard SQL. Strictly speaking, TAOS SQL is not, and does not attempt to provide, standard SQL syntax. In addition, since TDengine does not provide a delete capability for its time-series structured data, TAOS SQL does not provide any data deletion functionality.

TAOS SQL does not support abbreviation of keywords. For example, DESCRIBE cannot be abbreviated as DESC.
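A minimal illustration of this rule (assuming a super table named `meters` already exists, as in the examples later in this document):

```sql
-- Works: the keyword is written out in full
DESCRIBE meters;
-- Fails: DESC is not accepted as an abbreviation
-- DESC meters;
```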
The SQL syntax in this chapter follows the conventions below:

- The content inside <\> needs to be entered by the user, but do not enter <\> itself
@@ -130,7 +130,7 @@ After TDengine server is running, execute `taosBenchmark` (previously named tao

taosBenchmark
```
-This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Time stamp is starting from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set 1 to 10 randomly, and location is set to "California.SanFrancisco" or "California.SanDieo".
+This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set randomly from 1 to 10, and location is set to "California.SanFrancisco" or "California.SanDiego".

This command will insert 100 million rows into the database quickly. Time to insert depends on the hardware configuration; it takes only a dozen seconds on a regular PC server.
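As a quick sanity check, the total row count can be queried from the TDengine CLI; a minimal sketch, assuming the default `test` database created by taosBenchmark:

```sql
-- 10000 tables x 10000 rows should yield 100 million rows
SELECT count(*) FROM test.meters;
```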
@@ -1,24 +1,31 @@
---
sidebar_label: UDF
title: User Defined Functions
-description: "Scalar functions and aggregate functions developed by users can be utilized by the query framework to expand the query capability"
+description: "Scalar functions and aggregate functions developed by users can be utilized by the query framework to expand query capability"
---
-In some use cases, the query capability required by application programs can't be achieved directly by builtin functions. With UDF, the functions developed by users can be utilized by query framework to meet some special requirements. UDF normally takes one column of data as input, but can also support the result of sub query as input.
+In some use cases, built-in functions are not adequate for the query capability required by application programs. With UDF, the functions developed by users can be utilized by the query framework to meet business and application requirements. UDF normally takes one column of data as input, but can also support the result of a sub-query as input.

-From version 2.2.0.0, UDF programmed in C/C++ language can be supported by TDengine.
+From version 2.2.0.0, UDF written in C/C++ are supported by TDengine.

-Two kinds of functions can be implemented by UDF: scalar function and aggregate function.

-## Define UDF
+## Types of UDF

+Two kinds of functions can be implemented by UDF: scalar functions and aggregate functions.

+Scalar functions return multiple rows (one output row for each input row) and aggregate functions return either 0 or 1 row.

+In the case of a scalar function you only have to implement the "normal" function template.

+In the case of an aggregate function, in addition to the "normal" function, you also need to implement the "merge" and "finalize" function templates even if the implementation is empty. This will become clear in the sections below.

### Scalar Function

-Below function template can be used to define your own scalar function.
+As mentioned earlier, a scalar UDF only has to implement the "normal" function template. The function template below can be used to define your own scalar function.

`void udfNormalFunc(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`

-`udfNormalFunc` is the place holder of function name, a function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed to control the data exchange between UDF and TDengine.
+`udfNormalFunc` is the placeholder for a function name. A function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed to control the data exchange between UDF and TDengine.

- Definitions of the parameters:

@@ -30,20 +37,24 @@ Below function template can be used to define your own scalar function.
- numOfRows: the number of rows in the input data
- ts: the column of timestamps corresponding to the input data
- dataOutput: the buffer for output data, total size is `oBytes * numberOfRows`
-- interBuf:the buffer for intermediate result, its size is specified by `BUFSIZE` parameter when creating a UDF. It's normally used when the intermediate result is not same as the final result, it's allocated and freed by TDengine.
+- interBuf: the buffer for an intermediate result. Its size is specified by the `BUFSIZE` parameter when creating a UDF. It's normally used when the intermediate result is not the same as the final result. This buffer is allocated and freed by TDengine.
- tsOutput: the column of timestamps corresponding to the output data; it can be used to output timestamps together with the output data if it's not NULL
- numOfOutput: the number of rows in the output data
- buf: for the state exchange between UDF and TDengine

-[add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is one example of the simplest UDF implementations, i.e. one instance of the above `udfNormalFunc` template. It adds one to each value of a column passed in which can be filtered using `where` clause and outputs the result.
+[add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is one example of a very simple UDF implementation, i.e. one instance of the above `udfNormalFunc` template. It adds one to each value of a passed-in column, which can be filtered using the `where` clause, and outputs the result.
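Once compiled and registered (see the later sections on compiling and creating UDF), such a scalar UDF is invoked like a built-in function; a hypothetical sketch, assuming `add_one` has been created and a table with an integer column `voltage` exists:

```sql
-- Apply the scalar UDF to one column; rows can be filtered as usual
SELECT add_one(voltage) FROM meters WHERE voltage > 215;
```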

### Aggregate Function

-Below function template can be used to define your own aggregate function.
+For aggregate UDF, as mentioned earlier you must implement a "normal" function template (described above) and also implement the "merge" and "finalize" templates.

-`void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`
+#### Merge Function Template

-`udfMergeFunc` is the place holder of function name, the function implemented with the above template is used to aggregate the intermediate result, only can be used in the aggregate query for STable.
+The function template below can be used to define your own merge function for an aggregate UDF.

+`void udfMergeFunc(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`

+`udfMergeFunc` is the placeholder for a function name. The function implemented with the above template is used to aggregate intermediate results and can only be used in the aggregate query for STable.

Definitions of the parameters:

@@ -53,17 +64,11 @@ Definitions of the parameters:
- numOfOutput: the number of rows in the output data
- buf: for the state exchange between UDF and TDengine

-[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an user defined aggregate function to get the maximum from the absolute value of a column.
+#### Finalize Function Template

-The internal processing is that the data affected by the select statement will be divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate of each sub table, then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate to generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc` to generate the final result, which contain either 0 or 1 row.
+The function template below can be used to finalize the result of your own UDF, normally used when interBuf is used.

-Other typical scenarios, like covariance, can also be achieved by aggregate UDF.

-### Finalize

-Below function template can be used to finalize the result of your own UDF, normally used when interBuf is used.

-`void abs_max_finalize(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`
+`void udfFinalizeFunc(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`

`udfFinalizeFunc` is the placeholder of the function name; definitions of the parameters are as below:

@@ -72,47 +77,64 @@ Below function template can be used to finalize the result of your own UDF, norm
- numOfOutput: the number of output rows; can only be 0 or 1 for an aggregate function
- buf: for state exchange between UDF and TDengine

-## UDF Conventions
+### Example abs_max.c

-The naming of 3 kinds of UDF, i.e. udfNormalFunc, udfMergeFunc, and udfFinalizeFunc is required to have same prefix, i.e. the actual name of udfNormalFunc, which means udfNormalFunc doesn't need a suffix following the function name. While udfMergeFunc should be udfNormalFunc followed by `_merge`, udfFinalizeFunc should be udfNormalFunc followed by `_finalize`. The naming convention is part of UDF framework, TDengine follows this convention to invoke corresponding actual functions.
+[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an example of a user defined aggregate function to get the maximum from the absolute values of a column.

-According to the kind of UDF to implement, the functions that need to be implemented are different.
+The internal processing happens as follows. The results of the select statement are divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate results for each sub table. Then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate and generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc`, i.e. `abs_max_finalize` in this example, to generate the final result, which contains either 0 or 1 row.

-- Scalar function:udfNormalFunc is required
-- Aggregate function:udfNormalFunc, udfMergeFunc (if query on STable) and udfFinalizeFunc are required
+Other typical aggregate functions, such as covariance, can also be implemented using aggregate UDF.
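To make the flow above concrete, an aggregate UDF like `abs_max` is invoked on a STable just like a built-in aggregate; a hypothetical sketch, assuming `abs_max` has been created and `meters` is a STable with a numeric column `voltage`:

```sql
-- abs_max runs per row block, _merge combines sub table results, _finalize emits 0 or 1 row
SELECT abs_max(voltage) FROM meters;
```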

-To be more accurate, assuming we want to implement a UDF named "foo". If the function is a scalar function, what we really need to implement is `foo`; if the function is aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. For aggregate UDF, even though one of the three functions is not necessary, there must be an empty implementation.
+## UDF Naming Conventions

+The naming convention for the 3 kinds of function templates required by UDF is as follows:
+- udfNormalFunc, udfMergeFunc, and udfFinalizeFunc are required to have the same prefix, i.e. the actual name of udfNormalFunc. The udfNormalFunc doesn't need a suffix following the function name.
+- udfMergeFunc should be udfNormalFunc followed by `_merge`.
+- udfFinalizeFunc should be udfNormalFunc followed by `_finalize`.

+The naming convention is part of TDengine's UDF framework. TDengine follows this convention to invoke the corresponding actual functions.

+Depending on whether you are creating a scalar UDF or aggregate UDF, the functions that you need to implement are different.

+- Scalar function: udfNormalFunc is required.
+- Aggregate function: udfNormalFunc, udfMergeFunc (if query on STable) and udfFinalizeFunc are required.

+For clarity, assuming we want to implement a UDF named "foo":
+- If the function is a scalar function, we only need to implement the "normal" function template and it should be named simply `foo`.
+- If the function is an aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. Note that for aggregate UDF, even though one of the three functions is not necessary, there must be an empty implementation.

## Compile UDF

-The source code of UDF in C can't be utilized by TDengine directly. UDF can only be loaded into TDengine after compiling to dynamically linked library.
+The source code of UDF in C can't be utilized by TDengine directly. UDF can only be loaded into TDengine after being compiled into a dynamically linked library (DLL).

-For example, the example UDF `add_one.c` mentioned in previous sections need to be compiled into DLL using below command on Linux Shell.
+For example, the example UDF `add_one.c` mentioned earlier can be compiled into a DLL using the command below in a Linux shell.

```bash
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
```

-The generated DLL file `dd_one.so` can be used later when creating UDF. It's recommended to use GCC not older than 7.5.
+The generated DLL file `add_one.so` can be used later when creating a UDF. It's recommended to use GCC not older than 7.5.

## Create and Use UDF

+When a UDF is created in a TDengine instance, it is available across all the databases in that instance.

### Create UDF

-SQL command can be executed on the same hos where the generated UDF DLL resides to load the UDF DLL into TDengine, this operation can't be done through REST interface or web console. Once created, all the clients of the current TDengine can use these UDF functions in their SQL commands. UDF are stored in the management node of TDengine. The UDFs loaded in TDengine would be still available after TDengine is restarted.
+A SQL command can be executed on the host where the generated UDF DLL resides to load the UDF DLL into TDengine. This operation cannot be done through the REST interface or web console. Once created, any client of the current TDengine can use these UDF functions in their SQL commands. UDF are stored in the management node of TDengine. The UDFs loaded in TDengine are still available after TDengine is restarted.

-When creating UDF, it needs to be clarified as either scalar function or aggregate function. If the specified type is wrong, the SQL statements using the function would fail with error. Besides, the input type and output type don't need to be same in UDF, but the input data type and output data type need to be consistent with the UDF definition.
+When creating a UDF, its type, i.e. scalar function or aggregate function, must be specified. If the specified type is wrong, the SQL statements using the function would fail with errors. The input type and output type don't need to be the same in UDF, but the input data type and output data type must be consistent with the UDF definition.

- Create Scalar Function

```sql
-CREATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) [ BUFSIZE B ];
+CREATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE <supported TDengine type> [BUFSIZE B];
```

-- ids(X):the function name to be sued in SQL statement, must be consistent with the function name defined by `udfNormalFunc`
-- ids(Y):the absolute path of the DLL file including the implementation of the UDF, the path needs to be quoted by single or double quotes
-- typename(Z):the output data type, the value is the literal string of the type
-- B:the size of intermediate buffer, in bytes; it's an optional parameter and the range is [0,512]
+- userDefinedFunctionName: the function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
+- path: the absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
+- outputtype: the output data type, the value is the literal string of the supported TDengine data type.
+- B: the size of the intermediate buffer, in bytes; it is an optional parameter and the range is [0,512].

For example, below SQL statement can be used to create a UDF from `add_one.so`.

@@ -123,17 +145,17 @@ CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT;

- Create Aggregate Function

```sql
-CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) [ BUFSIZE B ];
+CREATE AGGREGATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE <supported TDengine data type> [ BUFSIZE B ];
```

-- ids(X):the function name to be sued in SQL statement, must be consistent with the function name defined by `udfNormalFunc`
-- ids(Y):the absolute path of the DLL file including the implementation of the UDF, the path needs to be quoted by single or double quotes
-- typename(Z):the output data type, the value is the literal string of the type
+- userDefinedFunctionName: the function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
+- path: the absolute path of the DLL file including the name of the shared object file (.so). The path needs to be quoted with single or double quotes.
+- OUTPUTTYPE: the output data type, the value is the literal string of the type.
- B: the size of the intermediate buffer, in bytes; it's an optional parameter and the range is [0,512]

For details about how to use the intermediate result, please refer to the example program [demo.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c).

-For example, below SQL statement can be used to create a UDF rom `demo.so`.
+For example, below SQL statement can be used to create a UDF from `demo.so`.

```sql
CREATE AGGREGATE FUNCTION demo AS "/home/taos/udf_example/demo.so" OUTPUTTYPE DOUBLE bufsize 14;
```
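After creation, the aggregate UDF is used like any built-in aggregate function; a hypothetical sketch, assuming the `meters` STable from the earlier examples:

```sql
-- Invoke the user defined aggregate function on one column
SELECT demo(voltage) FROM meters;
```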

@@ -176,11 +198,11 @@ In current version there are some restrictions for UDF
1. Only Linux is supported when creating and invoking UDF for both client side and server side
2. UDF can't be mixed with builtin functions
3. Only one UDF can be used in a SQL statement
-4. Single column is supported as input for UDF
+4. Only a single column is supported as input for UDF
5. Once created successfully, UDF is persisted in the MNode of TDengine
6. UDF can't be created through REST interface
7. The function name used when creating UDF in SQL must be consistent with the function name defined in the DLL, i.e. the name defined by `udfNormalFunc`
-8. The name name of UDF name should not conflict with any of builtin functions
+8. The name of a UDF should not conflict with any of TDengine's built-in functions

## Examples

@@ -3,16 +3,16 @@ sidebar_label: Operation
title: Manage DNODEs
---

-The previous section [Deployment](/cluster/deploy) introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.
+The previous section, [Deployment](/cluster/deploy), showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnode(s) in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnode(s) can be added to scale out the cluster, an existing dnode can be removed, and you can even perform load balancing manually, if necessary.

:::note
-All the commands to be introduced in this chapter need to be run through TDengine CLI, sometimes it's necessary to use root privilege.
+All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.

:::

## Show DNODEs

-The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode.
+The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.

```sql
SHOW DNODES;
```

@@ -30,7 +30,7 @@ Query OK, 1 row(s) in set (0.008298s)

## Show VGROUPs

-To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located in different dnodes, scaling out can be achieved by adding more vnodes from more dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode according to system resources of the dnodes.
+To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnodes is scheduled automatically by mnode based on the system resources of the dnodes.

Launch TDengine CLI `taos` and execute below command:

@@ -87,7 +87,7 @@ taos> show dnodes;

Query OK, 2 row(s) in set (0.001017s)
```

-It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get the example output below, from which it can be seen that two dnodes are both in "ready" status.
+It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.

```
taos> show dnodes;
```

@@ -132,12 +132,12 @@ taos> show dnodes;

Query OK, 1 row(s) in set (0.001137s)
```

-In the above example, when `show dnodes` is executed the first time, two dnodes are shown. Then `drop dnode 2` is executed, after that from the output of executing `show dnodes` again it can be seen that only the dnode with ID 1 is still in the cluster.
+In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.

:::note

-- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to deployed again after cleaning up the data directory. Normally, before dropping a dnode, the data belonging to the dnode needs to be migrated to other place.
-- Please be noted that `drop dnode` is different from stopping `taosd` process. `drop dnode` just removes the dnode out of TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
+- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
+- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode out of the TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.

:::
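For reference, the drop operation used in the example above is issued from the TDengine CLI; a minimal sketch, assuming a dnode with ID 2 exists:

```sql
-- Remove dnode 2 from the cluster; its taosd process can be stopped afterwards
DROP DNODE 2;
```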

@@ -7,7 +7,7 @@ title: High Availability and Load Balancing

High availability of vnode and mnode can be achieved through replicas in TDengine.

-The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, the data service will be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
+A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with the error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.

```sql
CREATE DATABASE demo replica 3;
```

@@ -15,19 +15,19 @@ CREATE DATABASE demo replica 3;

The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.

-There may be data for multiple DBs in a dnode. Once a dnode is down, multiple DBs may be affected. However, it's hard to say the cluster is guaranteed to work properly as long as over half of dnodes are online because vnodes are introduced and there may be complex mapping between vnodes and dnodes.
+There may be data for multiple DBs in a dnode. When a dnode is down, multiple DBs may be affected. While in theory, the cluster will provide data access for reading or inserting data if over half the vnodes in vgroups are online, because of the possibly complex mapping between vnodes and dnodes, it is difficult to guarantee that the cluster will work properly if over half of the dnodes are online.

## High Availability of Mnode

-Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in a synchronous way.
+Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using the system parameter `numOfMNodes`. The valid range for `numOfMNodes` is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.

-There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. Command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
+There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in the TDengine CLI `taos` to show the mnodes in the cluster.

```sql
SHOW MNODES;
```

-The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode, because there must be at least one mnode otherwise the cluster doesn't work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
+The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.

For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.

@@ -36,15 +36,16 @@ If high availability is important for your system, both vnode and mnode must be

:::

-## Load Balance
+## Load Balancing

-Load balance will be triggered in 3 cases without manual intervention.
+Load balancing will be triggered in 3 cases without manual intervention.

-- When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically.
+- When a new dnode joins the cluster, automatic load balancing may be triggered. Some data from other dnodes may be transferred to the new dnode automatically.
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.

:::tip
-Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled.
+Automatic load balancing is controlled by the parameter `balance`, where 0 means disabled and 1 means enabled. This is set in the file [taos.cfg](https://docs.tdengine.com/reference/config/#balance).

:::

@@ -52,22 +53,22 @@ Automatic load balancing is controlled by parameter `balance`, 0 means disabled

When a dnode is offline, it can be detected by the TDengine cluster. There are two cases:

-- The dnode becomes online again before the threshold configured in `offlineThreshold` is reached, it is still in the cluster and data replication is started automatically. The dnode can work properly after the data syncup is finished.
+- The dnode comes online before the threshold configured in `offlineThreshold` is reached. The dnode is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.

-- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join in the cluster automatically, it can only be joined manually by the system operator.
+- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually join the dnode to the cluster.

:::note
-If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsynced status, the master node can only be voted after all the vnodes or mnodes in the group become online and can exchange status, then the vgroup (or mnode group) is able to provide service.
+If all the vnodes in a vgroup (or mnodes in the mnode group) are in offline or unsynced status, the master node can only be voted in after all the vnodes or mnodes in the group become online and can exchange status. Following this, the vgroup (or mnode group) is able to provide service.

:::

## Arbitrator

-If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work a master node can't be voted. A similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
+The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2, 4 etc. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a master node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2, 4 etc.

-To resolve this problem, a new arbitrator component named `tarbitrator`, abbreviated for TDengine Arbitrator, was introduced. Arbitrator simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnode or mnode, including Arbitrator, are available the vnode group or mnode group can provide data insertion or query services normally.
+To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but is only responsible for network communication; it doesn't handle any actual data access. As long as more than half of the vnodes or mnodes, including the arbitrator, are available, the vnode group or mnode group can provide data insertion or query services normally.

-Normally, it's suggested to configure a replica number of each DB or system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
+Normally, it's prudent to configure the replica number for each DB or the system parameter `numOfMNodes` to be an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus the arbitrator component can be used to achieve both lower cost of storage space and high availability.

The Arbitrator component is installed with the server package. For details about how to install, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides service.

@@ -1,17 +1,17 @@
---
title: Data Types
-description: "The data types supported by TDengine include timestamp, float, JSON, etc"
+description: "TDengine supports a variety of data types including timestamp, float, JSON and many others."
---

-When using TDengine to store and query data, the most important part of the data is timestamp. Timestamp must be specified when creating and inserting data rows or querying data, timestamp must follow the rules below:
+When using TDengine to store and query data, the most important part of the data is the timestamp. A timestamp must be specified when creating and inserting data rows. Timestamps must follow the rules below:

-- the format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
-- internal function `now` can be used to get the current timestamp of the client side
-- the current timestamp of the client side is applied when `now` is used to insert data
+- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
+- Internal function `now` can be used to get the current timestamp on the client side
+- The current timestamp of the client side is applied when `now` is used to insert data
- Epoch Time: a timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from 1970-01-01 00:00:00.000 (UTC/GMT)
-- timestamp can be applied with add/subtract operation, for example `now-2h` means 2 hours back from the time at which query is executed,the unit can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operation.
+- Add/subtract operations can be carried out on timestamps. For example `now-2h` means 2 hours prior to the time at which the query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.
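A couple of sketches of the arithmetic described above (the table and column names are illustrative only):

```sql
-- Rows between two weeks ago and one week ago
SELECT * FROM t1 WHERE ts > now-2w AND ts <= now-1w;
-- Down sampling by calendar month using the n unit
SELECT avg(current) FROM meters WHERE ts > now-1y INTERVAL(1n);
```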

-Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`, like below, the default time precision is millisecond.
+Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.

```sql
CREATE DATABASE db_name PRECISION 'ns';
```

@@ -30,8 +30,8 @@ In TDengine, the data types below can be used when specifying a column or tag.

| 7 | SMALLINT | 2 | Short integer, the value range is [-32767, 32767], while -32768 is treated as NULL |
| 8 | TINYINT | 1 | Single-byte integer, the value range is [-127, 127], while -128 is treated as NULL |
| 9 | BOOL | 1 | Bool, the value range is {true, false} |
-| 10 | NCHAR | User Defined| Multiple-Byte string that can include like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\’`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
-| 11 | JSON | | json type can only be used on tag, a tag of json type is excluded with any other tags of any other type |
+| 10 | NCHAR | User Defined | Multi-byte string that can include multi-byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
+| 11 | JSON | | JSON type can only be used on tags. A tag of JSON type is mutually exclusive with tags of any other type |

:::tip
TDengine is case insensitive and treats any characters in the SQL command as lower case by default; case sensitive strings must be quoted with single quotes.

@@ -39,7 +39,7 @@ TDengine is case insensitive and treats any characters in the sql command as low

:::

:::note
-Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multiple-byte characters must be stored in NCHAR type.
+Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.

:::

@@ -4,7 +4,7 @@ title: Database
description: "create and drop database, show or change database parameters"
---

-## Create Datable
+## Create Database

```
CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
```

@@ -12,11 +12,11 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];

:::info

-1. KEEP specifies the number of days for which the data in the database to be created will be kept, the default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
+1. KEEP specifies the number of days for which the data in the database will be retained. The default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
2. UPDATE specifies whether the data can be updated and how the data can be updated.
-   1. UPDATE set to 0 means update operation is not allowed, the data with an existing timestamp will be dropped silently.
-   2. UPDATE set to 1 means the whole row will be updated, the columns for which no value is specified will be set to NULL
-   3. UPDATE set to 2 means updating a part of columns for a row is allowed, the columns for which no value is specified will be kept as no change
+   1. UPDATE set to 0 means update operation is not allowed. The update for data with an existing timestamp will be discarded silently and the original record in the database will be preserved as is.
+   2. UPDATE set to 1 means the whole row will be updated. The columns for which no value is specified will be set to NULL.
+   3. UPDATE set to 2 means updating a subset of columns for a row is allowed. The columns for which no value is specified will be kept unchanged.
3. The maximum length of the database name is 33 bytes.
4. The maximum length of a SQL statement is 65,480 bytes.
5. Below are the parameters that can be used when creating a database
@ -35,7 +35,7 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
|
|||
- maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
|
||||
- comp: [Description](/reference/config/#comp)
|
||||
- precision: [Description](/reference/config/#precision)
|
||||
6. Please note that all of the parameters mentioned in this section can be configured in configuration file `taosd.cfg` at server side and used by default, the default parameters can be overriden if they are specified in `create database` statement.
|
||||
6. Please note that all of the parameters mentioned in this section are configured in configuration file `taos.cfg` on the TDengine server. If not specified in the `create database` statement, the values from taos.cfg are used by default. To override default parameters, they must be specified in the `create database` statement.
|
||||
|
||||
:::
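Putting a few of the parameters above together, a hypothetical example (the database name is illustrative only):

```sql
-- Keep data for 1 year and allow partial-column updates
CREATE DATABASE IF NOT EXISTS power KEEP 365 UPDATE 2;
```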

@@ -52,7 +52,7 @@ USE db_name;
```

:::note
-This way is not applicable when using a REST connection
+This way is not applicable when using a REST connection. In a REST connection the database name must be specified before a table or STable name. For example, to query the STable "meters" in database "test" the query would be "SELECT count(*) FROM test.meters".

:::

@@ -63,13 +63,13 @@ DROP DATABASE [IF EXISTS] db_name;
```

:::note
-All data in the database will be deleted too. This command must be used with caution.
+All data in the database will be deleted too. This command must be used with extreme caution. Please follow your organization's data integrity, data backup, data security or any other applicable SOPs before using this command.

:::

## Change Database Configuration

-Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some others can't, for details of the configuration parameters of database please refer to [Configuration Parameters](/reference/config/).
+Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some cannot. For details of the configuration parameters of a database please refer to [Configuration Parameters](/reference/config/).

```
ALTER DATABASE db_name COMP 2;
```

@@ -81,7 +81,7 @@ COMP parameter specifies whether the data is compressed and how the data is comp

ALTER DATABASE db_name REPLICA 2;
```

-REPLICA parameter specifies the number of replications of the database.
+REPLICA parameter specifies the number of replicas of the database.

```
ALTER DATABASE db_name KEEP 365;
```

@@ -124,4 +124,4 @@ SHOW DATABASES;

SHOW CREATE DATABASE db_name;
```

-This command is useful when migrating the data from one TDengine cluster to another one. This command can be used to get the CREATE statement, which can be used in another TDengine to create the exact same database.
+This command is useful when migrating the data from one TDengine cluster to another. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database.

@@ -12,10 +12,10 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam

:::info

-1. The first column of a table must be of TIMESTAMP type, and it will be set as the primary key automatically
+1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
2. The maximum length of the table name is 192 bytes.
3. The maximum length of each row is 16k bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
-4. The name of the subtable can only consist of English characters, digits and underscore, and can't start with a digit. Table names are case insensitive.
+4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
6. Escape character "\`" can be used to avoid conflicts between table names and reserved keywords; the above rules will be bypassed when using an escape character on table names, but the upper limit for the name length is still valid. Table names specified using the escape character are case sensitive. Only ASCII visible characters can be used with the escape character.
   For example, \`aBc\` and \`abc\` are different table names but `abc` and `aBc` are the same table name because they are both converted to `abc` internally.

@@ -44,7 +44,7 @@ The tags for which no value is specified will be set to NULL.

CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
```

-This can be used to create a lot of tables in a single SQL statement to accelerate the speed of the creating tables.
+This can be used to create a lot of tables in a single SQL statement while making table creation much faster.
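A hypothetical sketch of the batch form, assuming the `meters` STable with tags (location, groupId) from the earlier examples:

```sql
-- Create two subtables of meters in one statement
CREATE TABLE IF NOT EXISTS d2001 USING meters TAGS ("California.SanFrancisco", 2)
                           d2002 USING meters TAGS ("California.SanDiego", 3);
```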

:::info

@@ -111,7 +111,7 @@ If a table is created using a super table as template, the table definition can

ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
```

-The type of a column is variable length, like BINARY or NCHAR, this can be used to change (or increase) the length of the column.
+If the type of a column is variable length, like BINARY or NCHAR, this command can be used to change the length of the column.

:::note
If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.

:::

@@ -9,7 +9,7 @@ Keyword `STable`, abbreviated for super table, is supported since version 2.0.15

:::

-## Crate STable
+## Create STable

```
CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
```

@@ -19,7 +19,7 @@ The SQL statement of creating a STable is similar to that of creating a table, b

:::info

-1. The tag types specified in TAGS should NOT be timestamp. Since 2.1.3.0 timestamp type can be used in TAGS column, but its value must be fixed and arithmetic operation can't be applied on it.
+1. A tag can be of type timestamp, since version 2.1.3.0, but its value must be fixed and arithmetic operations cannot be performed on it. Prior to version 2.1.3.0, tag types specified in TAGS could not be of type timestamp.
2. The tag names specified in TAGS should NOT be the same as other columns.
3. The tag names specified in TAGS should NOT be the same as any reserved keywords. (Please refer to [keywords](/taos-sql/keywords/))
4. The maximum number of tags specified in TAGS is 128, there must be at least one tag, and the total length of all tag columns should NOT exceed 16KB.
@ -76,7 +76,7 @@ ALTER STable stb_name DROP COLUMN field_name;
|
|||
ALTER STable stb_name MODIFY COLUMN field_name data_type(length);
|
||||
```
|
||||
|
||||
This command can be used to change (or increase, more specifically) the length of a column of variable length types, like BINARY or NCHAR.
|
||||
This command can be used to change (or more specifically, increase) the length of a column of variable length types, like BINARY or NCHAR.
|
||||
|
||||
## Change Tags of A STable
|
||||
|
||||
|
@ -94,7 +94,7 @@ This command is used to add a new tag for a STable and specify the tag type.
|
|||
ALTER STable stb_name DROP TAG tag_name;
|
||||
```
|
||||
|
||||
The tag will be removed automatically from all the subtables created using the super table as template once a tag is removed from a super table.
|
||||
The tag will be removed automatically from all the subtables, created using the super table as template, once a tag is removed from a super table.
|
||||
|
||||
### Change A Tag
|
||||
|
||||
|

@@ -102,7 +102,7 @@ The tag will be removed automatically from all the subtables created using the s

ALTER STable stb_name CHANGE TAG old_tag_name new_tag_name;
```

-The tag name will be changed automatically for all the subtables created using the super table as template once a tag name is changed for a super table.
+Once a tag name is changed for a super table, it will be changed automatically for all the subtables created using the super table as template.

### Change Tag Length

@@ -110,7 +110,7 @@ The tag name will be changed automatically for all the subtables created using t

ALTER STable stb_name MODIFY TAG tag_name data_type(length);
```

-This command can be used to change (or increase, more specifically) the length of a tag of variable length types, like BINARY or NCHAR.
+This command can be used to change (or more specifically, increase) the length of a tag of variable length types, like BINARY or NCHAR.

:::note
Changing tag values can be applied only to subtables. All other tag operations, like adding or removing a tag, can be applied only to a STable. If a new tag is added for a STable, the tag will be added with NULL value for all its subtables.

:::
|
@ -21,7 +21,7 @@ SELECT select_expr [, select_expr ...]
|
|||
|
||||
## Wildcard
|
||||
|
||||
Wilcard \* can be used to specify all columns. The result includes only data columns for normal tables.
|
||||
Wildcard \* can be used to specify all columns. The result includes only data columns for normal tables.
|
||||
|
||||
```
|
||||
taos> SELECT * FROM d1001;
|
||||
|

@@ -51,14 +51,14 @@ taos> SELECT * FROM meters;

Query OK, 9 row(s) in set (0.002022s)
```

-Wildcard can be used with table name as prefix, both below SQL statements have same effects and return all columns.
+Wildcard can be used with a table name as prefix. Both SQL statements below have the same effect and return all columns.

```SQL
SELECT * FROM d1001;
SELECT d1001.* FROM d1001;
```

-In JOIN query, however, with or without table name prefix will return different results. \* without table prefix will return all the columns of both tables, but \* with table name as prefix will return only the columns of that table.
+In a JOIN query, however, the results are different with or without a table name prefix. \* without a table prefix will return all the columns of both tables, but \* with a table name as prefix will return only the columns of that table.

```
taos> SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
```

@@ -76,7 +76,7 @@ taos> SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;

Query OK, 1 row(s) in set (0.020443s)
```

-Wilcard \* can be used with some functions, but the result may be different depending on the function being used. For example, `count(*)` returns only one column, i.e. the number of rows; `first`, `last` and `last_row` return all columns of the selected row.
+Wildcard \* can be used with some functions, but the result may be different depending on the function being used. For example, `count(*)` returns only one column, i.e. the number of rows; `first`, `last` and `last_row` return all columns of the selected row.

```
taos> SELECT COUNT(*) FROM d1001;
```
@ -96,7 +96,7 @@ Query OK, 1 row(s) in set (0.000849s)
|
|||
|
||||
## Tags
|
||||
|
||||
Starting from version 2.0.14, tag columns can be selected together with data columns when querying sub tables. Please note that, however, wildcard \* doesn't represent any tag column, that means tag columns must be specified explicitly like the example below.
|
||||
Starting from version 2.0.14, tag columns can be selected together with data columns when querying sub tables. Please note, however, that wildcard \* cannot be used to represent any tag column. This means that tag columns must be specified explicitly, as in the example below.
|
||||
|
||||
```
|
||||
taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
|
||||
|
@ -109,7 +109,7 @@ Query OK, 2 row(s) in set (0.003112s)
|
|||
|
||||
## Get distinct values
|
||||
|
||||
`DISTINCT` keyword can be used to get all the unique values of tag columns from a super table, it can also be used to get all the unique values of data columns from a table or subtable.
|
||||
`DISTINCT` keyword can be used to get all the unique values of tag columns from a super table. It can also be used to get all the unique values of data columns from a table or subtable.
|
||||
|
||||
```sql
|
||||
SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
|
||||
|
@ -118,15 +118,15 @@ SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
|
|||
|
||||
:::info
|
||||
|
||||
1. Configuration parameter `maxNumOfDistinctRes` in `taos.cfg` is used to control the number of rows to output. The minimum configurable value is 100,000, the maximum configurable value is 100,000,000, the default value is 1000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
|
||||
2. It can't be guaranteed that the results selected by using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique because of the precision nature of floating numbers.
|
||||
1. Configuration parameter `maxNumOfDistinctRes` in `taos.cfg` is used to control the number of rows to output. The minimum configurable value is 100,000, the maximum configurable value is 100,000,000, the default value is 1,000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
|
||||
2. It can't be guaranteed that the results selected by using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique because of the precision errors in floating point numbers.
|
||||
3. `DISTINCT` can't be used in the sub-query of a nested query statement, and can't be used together with aggregate functions, `GROUP BY` or `JOIN` in the same SQL statement.
|
||||
|
||||
:::
|
||||
|
||||
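For example, minimal sketches using the meters data set assumed in this chapter:

```sql
-- Unique tag values across a super table:
SELECT DISTINCT location FROM meters;
-- Unique data values within one subtable:
SELECT DISTINCT voltage FROM d1001;
```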
## Column Names of Result Set
|
||||
|
||||
When using `SELECT`, the column names in the result set will be same as that in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example
|
||||
When using `SELECT`, the column names in the result set will be the same as those in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example
|
||||
|
||||
```
|
||||
taos> SELECT ts, ts AS primary_key_ts FROM d1001;
|
||||
|
@ -161,7 +161,7 @@ SELECT * FROM d1001;
|
|||
|
||||
## Special Query
|
||||
|
||||
Some special query functionalities can be performed without `FORM` sub-clause. For example, below statement can be used to get the current database in use.
|
||||
Some special query functions can be invoked without `FROM` sub-clause. For example, the statement below can be used to get the current database in use.
|
||||
|
||||
```
|
||||
taos> SELECT DATABASE();
|
||||
|
@ -181,7 +181,7 @@ taos> SELECT DATABASE();
|
|||
Query OK, 1 row(s) in set (0.000184s)
|
||||
```
|
||||
|
||||
Below statement can be used to get the version of client or server.
|
||||
The statement below can be used to get the version of client or server.
|
||||
|
||||
```
|
||||
taos> SELECT CLIENT_VERSION();
|
||||
|
@ -197,7 +197,7 @@ taos> SELECT SERVER_VERSION();
|
|||
Query OK, 1 row(s) in set (0.000077s)
|
||||
```
|
||||
|
||||
Below statement is used to check the server status. One integer, like `1`, is returned if the server status is OK, otherwise an error code is returned. This is compatible with the status check for TDengine from connection pool or 3rd party tools, and can avoid the problem of losing the connection from a connection pool when using the wrong heartbeat checking SQL statement.
|
||||
The statement below is used to check the server status. An integer, like `1`, is returned if the server status is OK, otherwise an error code is returned. This is compatible with the status check for TDengine from connection pool or 3rd party tools, and can avoid the problem of losing the connection from a connection pool when using the wrong heartbeat checking SQL statement.
|
||||
|
||||
```
|
||||
taos> SELECT SERVER_STATUS();
|
||||
|
@ -284,7 +284,7 @@ taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
|
|||
Query OK, 1 row(s) in set (0.001091s)
|
||||
```
|
||||
|
||||
- Wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operation can be performed on columns of number types, columns can be renamed in the result set.
|
||||
- Wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operation can be performed on columns of numerical types, columns can be renamed in the result set.
|
||||
- Arithmetic operation on columns can't be used in where clause. For example, `where a*2>6;` is not allowed but `where a>6/2;` can be used instead for the same purpose.
|
||||
- Arithmetic operation on columns can't be used as the objective of a select statement. For example, `select min(2*a) from t;` is not allowed but `select 2*min(a) from t;` can be used instead.
|
||||
- Logical operations can be used in the `WHERE` clause to filter numeric values; wildcards can be used to filter string values.
|
||||
|
@ -318,13 +318,13 @@ Logical operations in below table can be used in the `where` clause to filter th
|
|||
- Operator `like` is used together with wildcards to match strings
|
||||
- '%' matches 0 or any number of characters, '\_' matches any single ASCII character.
|
||||
- `\_` is used to match the \_ in the string.
|
||||
- The maximum length of wildcard string is 100 bytes from version 2.1.6.1 (before that the maximum length is 20 bytes). `maxWildCardsLength` in `taos.cfg` can be used to control this threshold. Too long wildcard string may slowdown the execution performance of `LIKE` operator.
|
||||
- The maximum length of wildcard string is 100 bytes from version 2.1.6.1 (before that the maximum length is 20 bytes). `maxWildCardsLength` in `taos.cfg` can be used to control this threshold. A very long wildcard string may slow down the execution of the `LIKE` operator.
|
||||
- `AND` keyword can be used to filter multiple columns simultaneously. AND/OR operation can be performed on single or multiple columns from version 2.3.0.0. However, before 2.3.0.0 `OR` can't be used on multiple columns.
|
||||
- For timestamp column, only one condition can be used; for other columns or tags, `OR` keyword can be used to combine multiple logical operators. For example, `((value > 20 AND value < 30) OR (value < 12))`.
|
||||
- From version 2.3.0.0, multiple conditions can be used on the timestamp column, but the result set can only contain a single time range.
|
||||
- From version 2.0.17.0, operator `BETWEEN AND` can be used in where clause, for example `WHERE col2 BETWEEN 1.5 AND 3.25` means the filter condition is equal to "1.5 ≤ col2 ≤ 3.25".
|
||||
- From version 2.1.4.0, operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating precision, only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
|
||||
- From version 2.3.0.0, regular expression is supported in the where clause with keyword `match` or `nmatch`, the regular expression is case insensitive.
|
||||
- From version 2.1.4.0, operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating point precision errors. Only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
|
||||
- From version 2.3.0.0, regular expression is supported in the where clause with keyword `match` or `nmatch`. The regular expression is case insensitive. Several of these operators are illustrated in the sketch after this list.
|
||||
|
||||
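The sketch below illustrates several of the operators described above, using the meters data set assumed in this chapter; version requirements are noted inline:

```sql
SELECT * FROM meters WHERE location LIKE 'California.%';                                    -- wildcard match
SELECT * FROM meters WHERE voltage BETWEEN 215 AND 225;                                     -- range filter (2.0.17.0+)
SELECT * FROM meters WHERE location IN ('California.SanFrancisco', 'California.SanDiego');  -- set membership (2.1.4.0+)
SELECT * FROM meters WHERE location MATCH 'California.*';                                   -- regular expression (2.3.0.0+)
```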
## Regular Expression
|
||||
|
||||
|
@ -364,7 +364,7 @@ FROM temp_STable t1, temp_STable t2
|
|||
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
||||
```
|
||||
|
||||
Similary, join operation can be performed on the result set of multiple sub queries.
|
||||
Similarly, join operations can be performed on the result set of multiple sub queries.
|
||||
|
||||
:::note
|
||||
Restrictions on join operation:
|
||||
|
@ -380,7 +380,7 @@ Restrictions on join operation:
|
|||
|
||||
## Nested Query
|
||||
|
||||
Nested query is also called sub query, that means in a single SQL statement the result of inner query can be used as the data source of the outer query.
|
||||
Nested query is also called sub query. This means that in a single SQL statement the result of the inner query can be used as the data source of the outer query.
|
||||
|
||||
From 2.2.0.0, an unassociated sub query can be used in the `FROM` clause. Unassociated means the sub query doesn't use the parameters of the parent query. More specifically, in the `tb_name_list` of a `SELECT` statement, an independent SELECT statement can be used. So a complete nested query looks like:
|
||||
|
||||
|
@ -390,14 +390,14 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
|
|||
|
||||
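For example, a minimal sketch of an unassociated sub query against the meters data set, where the inner query produces the "virtual table" that the outer query reads from:

```sql
SELECT AVG(current) FROM (SELECT current FROM meters WHERE voltage > 215);
```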
:::info
|
||||
|
||||
- Only one layer of nesting is allowed, that means no sub query is allowed in a sub query
|
||||
- The result set returned by the inner query will be used as a "virtual table" by the outer query, the "virtual table" can be renamed using `AS` keyword for easy reference in the outer query.
|
||||
- Only one layer of nesting is allowed, that means no sub query is allowed within a sub query
|
||||
- The result set returned by the inner query will be used as a "virtual table" by the outer query. The "virtual table" can be renamed using `AS` keyword for easy reference in the outer query.
|
||||
- Sub query is not allowed in continuous query.
|
||||
- JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query.
|
||||
- UNION operation is not allowed in either inner query or outer query.
|
||||
- The functionalities that can be used in the inner query is same as non-nested query.
|
||||
- `ORDER BY` inside the inner query doesn't make any sense but will slow down the query performance significantly, so please avoid such usage.
|
||||
- Compared to the non-nested query, the functionalities that can be used in the outer query have such restrictions as:
|
||||
- The functions that can be used in the inner query are the same as those that can be used in a non-nested query.
|
||||
- `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query.
|
||||
- Compared to the non-nested query, the functionality that can be used in the outer query has the following restrictions:
|
||||
- Functions
|
||||
- If the result set returned by the inner query doesn't contain timestamp column, then functions relying on timestamp can't be used in the outer query, like `TOP`, `BOTTOM`, `FIRST`, `LAST`, `DIFF`.
|
||||
- Functions that need to scan the data twice can't be used in the outer query, like `STDDEV`, `PERCENTILE`.
|
||||
|
@ -442,8 +442,8 @@ The sum of col1 and col2 for rows later than 2018-06-01 08:00:00.000 and whose c
|
|||
SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5;
|
||||
```
|
||||
|
||||
The rows in the past 10 minutes and whose col2 is bigger than 3.14 are selected and output to the result file `/home/testoutpu.csv` with below SQL statement:
|
||||
The rows in the past 10 minutes and whose col2 is bigger than 3.14 are selected and output to the result file `/home/testoutput.csv` with below SQL statement:
|
||||
|
||||
```SQL
|
||||
SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutpu.csv;
|
||||
SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutput.csv;
|
||||
```
|
||||
|
|
|
@ -3,11 +3,9 @@ title: TDengine SQL
|
|||
description: "The syntax supported by TDengine SQL "
|
||||
---
|
||||
|
||||
This section explains the syntax to operating databases, tables, STables, inserting data, selecting data, functions and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
|
||||
This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL.
|
||||
|
||||
TDengine SQL is the major interface for users to write data into or query from TDengine. For users to easily use, syntax similar to standard SQL is provided. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide the functionality of deleting time series data, thus corresponding statements are not provided in TDengine SQL.
|
||||
|
||||
TDengine SQL doesn't support abbreviation for keywords, for example `DESCRIBE` can't be abbreviated as `DESC`.
|
||||
TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL.
|
||||
|
||||
Syntax Specifications used in this chapter:
|
||||
|
||||
|
@ -16,7 +14,7 @@ Syntax Specifications used in this chapter:
|
|||
- | means one of a few options, excluding | itself.
|
||||
- … means the item prior to it can be repeated multiple times.
|
||||
|
||||
To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
|
||||
To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of data from electric meters. Each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
|
||||
|
||||
```sql
|
||||
taos> DESCRIBE meters;
|
||||
|
@ -30,4 +28,4 @@ taos> DESCRIBE meters;
|
|||
groupid | INT | 4 | TAG |
|
||||
```
|
||||
|
||||
The data set includes the data collected by 4 meters, the corresponding table name is d1001, d1002, d1003, d1004 respectively based on the data model of TDengine.
|
||||
The data set includes the data collected by 4 meters; the corresponding table names are d1001, d1002, d1003 and d1004 based on the data model of TDengine.
|
||||
|
|
|
@ -10,7 +10,7 @@ One difference from the native connector is that the REST interface is stateless
|
|||
|
||||
## Installation
|
||||
|
||||
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language supports the HTTP protocol is enough.
|
||||
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol.
|
||||
|
||||
## Verification
|
||||
|
||||
|
|
|
@ -53,7 +53,7 @@ Earlier TDengine client software includes the Python connector. If the Python co
|
|||
|
||||
:::
|
||||
|
||||
#### to install `taospy`
|
||||
#### To install `taospy`
|
||||
|
||||
<Tabs>
|
||||
<TabItem label="Install from PyPI" value="pypi">
|
||||
|
@ -320,7 +320,7 @@ All database operations will be thrown directly if an exception occurs. The appl
|
|||
|
||||
### About nanoseconds
|
||||
|
||||
Due to the current imperfection of Python's nanosecond support (see link below), the current implementation returns integers at nanosecond precision instead of the `datetime` type produced by `ms and `us`, which application developers will need to handle on their own. And it is recommended to use pandas' to_datetime(). The Python Connector may modify the interface in the future if Python officially supports nanoseconds in full.
|
||||
Due to the current imperfection of Python's nanosecond support (see links below), the current implementation returns integers at nanosecond precision instead of the `datetime` type produced by `ms` and `us`, which application developers will need to handle on their own. It is recommended to use pandas' `to_datetime()`. The Python Connector may modify the interface in the future if Python officially supports nanoseconds in full.
|
||||
|
||||
1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds
|
||||
2. https://www.python.org/dev/peps/pep-0564/
|
||||
|
@ -328,7 +328,7 @@ Due to the current imperfection of Python's nanosecond support (see link below),
|
|||
|
||||
## Frequently Asked Questions
|
||||
|
||||
Welcome to [ask questions or report questions] (https://github.com/taosdata/taos-connector-python/issues).
|
||||
Welcome to [ask questions or report questions](https://github.com/taosdata/taos-connector-python/issues).
|
||||
|
||||
## Important Update
|
||||
|
||||
|
|
|
@ -30,9 +30,9 @@ taosAdapter provides the following features.
|
|||
|
||||
### Install taosAdapter
|
||||
|
||||
taosAdapter has been part of TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download taosAdapter from [TAOSData official website](https://taosdata.com/en/all-downloads/) to download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and later version). If you need to deploy taosAdapter separately on another server other than the TDengine server, you should install the full TDengine on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter]( https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
|
||||
taosAdapter has been part of TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download the TDengine server installation package from the [TDengine official website](https://tdengine.com/all-downloads/) (taosAdapter is included in v2.4.0.0 and later versions). If you need to deploy taosAdapter separately, on a server other than the TDengine server, you should install the full TDengine on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
|
||||
|
||||
### start/stop taosAdapter
|
||||
### Start/Stop taosAdapter
|
||||
|
||||
On Linux systems, the taosAdapter service is managed by `systemd` by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
|
||||
|
||||
|
@ -153,8 +153,7 @@ See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/bl
|
|||
|
||||
## Feature List
|
||||
|
||||
- Compatible with RESTful interfaces
|
||||
[https://www.taosdata.com/cn/documentation/connector#restful](https://www.taosdata.com/cn/documentation/connector#restful)
|
||||
- Compatible with RESTful interfaces [REST API](/reference/rest-api/)
|
||||
- Compatible with InfluxDB v1 write interface
|
||||
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
|
||||
- Compatible with OpenTSDB JSON and telnet format writes
|
||||
|
@ -187,7 +186,7 @@ You can use any client that supports the http protocol to write data to or query
|
|||
|
||||
### InfluxDB
|
||||
|
||||
You can use any client that supports the http protocol to access the Restful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in InfluxDB compatible format to TDengine. The EndPoint is as follows:
|
||||
You can use any client that supports the http protocol to access the RESTful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in InfluxDB compatible format to TDengine. The EndPoint is as follows:
|
||||
|
||||
```text
|
||||
/influxdb/v1/write
|
||||
|
@ -204,7 +203,7 @@ Note: InfluxDB token authorization is not supported at present. Only Basic autho
|
|||
|
||||
### OpenTSDB
|
||||
|
||||
You can use any client that supports the http protocol to access the Restful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in OpenTSDB compatible format to TDengine.
|
||||
You can use any client that supports the http protocol to access the RESTful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in OpenTSDB compatible format to TDengine.
|
||||
|
||||
```text
|
||||
/opentsdb/v1/put/json/:db
|
||||
|
|
|
@ -12,14 +12,13 @@ taosdump can back up a database, a super table, or a normal table as a logical d
|
|||
If the specified location already has data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means that the same path can only be used for one backup.
|
||||
Please be careful if you see a prompt for this.
|
||||
|
||||
taosdump is a logical backup tool and should not be used to back up any raw data, environment settings,
|
||||
Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
|
||||
|
||||
## Installation
|
||||
|
||||
There are two ways to install taosdump:
|
||||
|
||||
- Install the taosTools official installer. Please find taosTools from [All download links](https://www.taosdata.com/all-downloads) page and download and install it.
|
||||
- Install the taosTools official installer. Please find taosTools from [All download links](https://www.tdengine.com/all-downloads) page and download and install it.
|
||||
|
||||
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
|
||||
|
||||
|
@ -28,14 +27,14 @@ There are two ways to install taosdump:
|
|||
### taosdump backup data
|
||||
|
||||
1. Back up all databases: specify the `-A` or `--all-databases` parameter.
|
||||
2. backup multiple specified databases: use `-D db1,db2,... ` parameters; 3.
|
||||
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter;
|
||||
3. Back up some super or normal tables in the specified database: use the `-dbname stbname1 stbname2 tbname1 tbname2 ...` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces.
|
||||
4. Back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine itself, and taosdump will not back up the log database by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter.
|
||||
5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without using escape characters, i.e. "loose" mode. This can reduce backup time and backup data footprint if table names, column names, and tag names do not use escape characters. If you are unsure about using `-n` and `-L`, please use the default parameters for "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escaped characters.
|
||||
|
||||
:::tip
|
||||
- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump will parse the schema only.
|
||||
- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ..." can be tried by challenging the `-B` parameter to a smaller value.
|
||||
- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ...", then try changing the `-B` parameter to a smaller value.
|
||||
|
||||
:::
|
||||
|
||||
|
@ -44,7 +43,7 @@ There are two ways to install taosdump:
|
|||
Restore the data file in the specified path: use the `-i` parameter plus the path to the data file. You should not use the same directory to back up different data sets, and you should not back up the same data set multiple times in the same path. Otherwise, the backup data will be overwritten or backed up multiple times.
|
||||
|
||||
:::tip
|
||||
taosdump internally uses TDengine stmt binding API for writing recovery data and currently uses 16384 as one write batch for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust to a smaller value by using the `-B` parameter.
|
||||
taosdump internally uses TDengine stmt binding API for writing recovery data with a default batch size of 16384 for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust the batch size to a smaller value by using the `-B` parameter.
|
||||
|
||||
:::
|
||||
|
||||
|
|
|
@ -61,7 +61,7 @@ sudo yum install \
|
|||
|
||||
## Automated deployment of TDinsight
|
||||
|
||||
We provide an installation script [`TDinsight.sh`](https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh) script to allow users to configure the installation automatically and quickly.
|
||||
We provide an installation script [`TDinsight.sh`](https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh) to allow users to configure the installation automatically and quickly.
|
||||
|
||||
You can download the script via `wget` or other tools:
|
||||
|
||||
|
@ -300,7 +300,7 @@ This section contains the current information and status of the cluster, the ale
|
|||
|
||||

|
||||
|
||||
1. **MNodes Status**: a simple table view of `show mnodes`. 2.
|
||||
1. **MNodes Status**: a simple table view of `show mnodes`.
|
||||
2. **MNodes Number**: similar to `DNodes Number`, the number of MNodes changes.
|
||||
|
||||
### Request
|
||||
|
@ -317,9 +317,9 @@ This section contains the current information and status of the cluster, the ale
|
|||
|
||||
Database usage, repeated for each value of the variable `$database` i.e. multiple rows per database.
|
||||
|
||||
1. **STables**: number of super tables. 2.
|
||||
2. **Total Tables**: number of all tables. 3.
|
||||
3. **Sub Tables**: the number of all super table sub-tables. 4.
|
||||
1. **STables**: number of super tables.
|
||||
2. **Total Tables**: number of all tables.
|
||||
3. **Sub Tables**: the number of all super table subtables.
|
||||
4. **Tables**: graph of all normal table numbers over time.
|
||||
5. **Tables Number Foreach VGroups**: The number of tables contained in each VGroups.
|
||||
|
||||
|
@ -330,18 +330,18 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl
|
|||
Data node resource usage display, with multiple rows repeated for the variable `$fqdn`, i.e., each data node. Includes:
|
||||
|
||||
1. **Uptime**: the time elapsed since the dnode was created.
|
||||
2. **Has MNodes?**: whether the current dnode is a mnode. 3.
|
||||
3. **CPU Cores**: the number of CPU cores. 4.
|
||||
4. **VNodes Number**: the number of VNodes in the current dnode. 5.
|
||||
5. **VNodes Masters**: the number of vnodes in the master role. 6.
|
||||
2. **Has MNodes?**: whether the current dnode is a mnode.
|
||||
3. **CPU Cores**: the number of CPU cores.
|
||||
4. **VNodes Number**: the number of VNodes in the current dnode.
|
||||
5. **VNodes Masters**: the number of vnodes in the master role.
|
||||
6. **Current CPU Usage of taosd**: CPU usage rate of taosd processes.
|
||||
7. **Current Memory Usage of taosd**: memory usage of taosd processes.
|
||||
8. **Disk Used**: The total disk usage percentage of the taosd data directory.
|
||||
9. **CPU Usage**: Process and system CPU usage. 10.
|
||||
9. **CPU Usage**: Process and system CPU usage.
|
||||
10. **RAM Usage**: Time series view of RAM usage metrics.
|
||||
11. **Disk Used**: Disks used at each level of multi-level storage (default is level0).
|
||||
12. **Disk Increasing Rate per Minute**: Percentage increase or decrease in disk usage per minute.
|
||||
13. **Disk IO**: Disk IO rate. 14.
|
||||
13. **Disk IO**: Disk IO rate.
|
||||
14. **Net IO**: Network IO, the aggregate network IO rate in addition to the local network.
|
||||
|
||||
### Login History
|
||||
|
@ -376,7 +376,7 @@ TDinsight installed via the `TDinsight.sh` script can be cleaned up using the co
|
|||
To completely uninstall TDinsight during a manual installation, you need to clean up the following.
|
||||
|
||||
1. the TDinsight Dashboard in Grafana.
|
||||
2. the Data Source in Grafana. 3.
|
||||
2. the Data Source in Grafana.
|
||||
3. remove the `tdengine-datasource` plugin from the plugin installation directory.
|
||||
|
||||
## Integrated Docker Example
|
||||
|
|
|
@ -4,11 +4,11 @@ sidebar_label: TDengine CLI
|
|||
description: Instructions and tips for using the TDengine CLI
|
||||
---
|
||||
|
||||
The TDengine command-line application (hereafter referred to as `TDengine CLI`) is the most simplest way for users to manipulate and interact with TDengine instances.
|
||||
The TDengine command-line application (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.
|
||||
|
||||
## Installation
|
||||
|
||||
If executed on the TDengine server-side, there is no need for additional installation steps to install TDengine CLI as it is already included and installed automatically. To run TDengine CLI on the environment which no TDengine server running, the TDengine client installation package needs to be installed first. For details, please refer to [connector](/reference/connector/).
|
||||
If executed on the TDengine server-side, there is no need for additional installation steps to install TDengine CLI as it is already included and installed automatically. To run TDengine CLI in an environment where no TDengine server is running, the TDengine client installation package needs to be installed first. For details, please refer to [connector](/reference/connector/).
|
||||
|
||||
## Execution
|
||||
|
||||
|
|
|
@ -315,13 +315,13 @@ password: taosdata
|
|||
taoslog-td2:
|
||||
```
|
||||
|
||||
:::note
|
||||
:::note
|
||||
- The `VERSION` environment variable is used to set the tdengine image tag
|
||||
- `TAOS_FIRST_EP` must be set on the newly created instance so that it can join the TDengine cluster; if there is a high availability requirement, `TAOS_SECOND_EP` needs to be used at the same time
|
||||
- `TAOS_REPLICA` is used to set the default number of database replicas. Its value range is [1,3]
|
||||
We recommend setting with `TAOS_ARBITRATOR` to use arbitrator in a two-nodes environment.
|
||||
:::
|
||||
We recommend using `TAOS_ARBITRATOR` to set an arbitrator in a two-node environment.
|
||||
|
||||
:::
|
||||
|
||||
2. Start the cluster
|
||||
|
||||
|
|
|
@ -65,7 +65,7 @@ taos --dump-config
|
|||
| ------------- | ------------------------------------------------------------------------ |
|
||||
| Applicable | Server Only |
|
||||
| Meaning | The FQDN of the host where `taosd` will be started. It can be an IP address |
|
||||
| Default Value | The first hostname configured for the hos |
|
||||
| Default Value | The first hostname configured for the host |
|
||||
| Note | It should be within 96 bytes |
|
||||
|
||||
### serverPort
|
||||
|
@ -78,7 +78,7 @@ taos --dump-config
|
|||
| Note | REST service is provided by `taosd` before 2.4.0.0 but by `taosAdapter` after 2.4.0.0, the default port of REST service is 6041 |
|
||||
|
||||
:::note
|
||||
TDengine uses continuous 13 ports, both TCP and TCP, from the port specified by `serverPort`. These ports need to be kept as open if firewall is enabled. Below table describes the ports used by TDengine in details.
|
||||
TDengine uses 13 continuous ports, both TCP and UDP, starting from the port specified by `serverPort`. These ports need to be kept open if a firewall is enabled. The table below describes the ports used by TDengine in detail.
|
||||
|
||||
:::
|
||||
|
||||
|
@ -182,8 +182,8 @@ TDengine uses continuous 13 ports, both TCP and TCP, from the port specified by
|
|||
| ------------- | -------------------------------------------- |
|
||||
| Applicable | Server Only |
|
||||
| Meaning | The maximum number of distinct rows returned |
|
||||
| Value Range | [100,000 - 100, 000, 000] |
|
||||
| Default Value | 100, 000 |
|
||||
| Value Range | [100,000 - 100,000,000] |
|
||||
| Default Value | 100,000 |
|
||||
| Note | After version 2.3.0.0 |
|
||||
|
||||
## Locale Parameters
|
||||
|
@ -240,7 +240,7 @@ To avoid the problems of using time strings, Unix timestamp can be used directly
|
|||
| Default Value | Locale configured in host |
|
||||
|
||||
:::info
|
||||
A specific type "nchar" is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, Korean. The characters to be stored in nchar type are firstly encoded in UCS4-LE before sending to server side. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.
|
||||
A specific type "nchar" is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, and Korean. The characters to be stored in nchar type are firstly encoded in UCS4-LE before sending to server side. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.
|
||||
|
||||
The characters input on the client side are encoded using the default system encoding, which is UTF-8 on Linux, GB18030 or GBK on some Chinese-language systems, POSIX in Docker, and CP936 on Windows in Chinese locales. The encoding of the operating system in use must be set correctly so that the characters in nchar type can be converted to UCS4-LE.
|
||||
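As a sketch of the point above (the table name and data are illustrative only), an NCHAR column transparently stores multi-byte characters once the client locale is set correctly:

```sql
CREATE TABLE sensor_notes (ts TIMESTAMP, note NCHAR(64));
INSERT INTO sensor_notes VALUES (NOW, '温度异常');
```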
|
||||
|
@ -779,7 +779,7 @@ To prevent system resource from being exhausted by multiple concurrent streams,
|
|||
|
||||
:::note
|
||||
An HTTP server was provided by `taosd` prior to version 2.4.0.0; it is provided by `taosAdapter` from version 2.4.0.0 onwards.
|
||||
The parameters described in this section are only application in versions prior to 2.4.0.0. If you are using any version from 2.4.0.0, please refer to [taosAdapter]](/reference/taosadapter/).
|
||||
The parameters described in this section are only applicable in versions prior to 2.4.0.0. If you are using any version from 2.4.0.0, please refer to [taosAdapter](/reference/taosadapter/).
|
||||
|
||||
:::
|
||||
|
||||
|
|
|
@ -32,7 +32,7 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by d
|
|||
- _taosd-dump-cfg.gdb_: script to facilitate debugging of taosd's gdb execution.
|
||||
|
||||
:::note
|
||||
taosdump after version 2.4.0.0 require taosTools as a standalone installation. A few version taosBenchmark is include in taosTools too.
|
||||
taosdump after version 2.4.0.0 requires taosTools as a standalone installation. A new version of taosBenchmark is included in taosTools too.
|
||||
:::
|
||||
|
||||
:::tip
|
||||
|
|
|
@ -3,17 +3,17 @@ title: Schemaless Writing
|
|||
description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data as it is written to the interface."
|
||||
---
|
||||
|
||||
In IoT applications, many data items are often collected for intelligent control, business analysis, device monitoring, etc. Due to the version upgrade of the application logic, or the hardware adjustment of the device itself, the data collection items may change more frequently. To facilitate the data logging work in such cases, TDengine starting from version 2.2.0.0, it provides a series of interfaces to the schemaless writing method, which eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data as the data is written to the interface. And when necessary, Schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
|
||||
In IoT applications, many data items are often collected for intelligent control, business analysis, device monitoring, etc. Due to the version upgrades of the application logic, or the hardware adjustment of the devices themselves, the data collection items may change frequently. To facilitate the data logging work in such cases, TDengine starting from version 2.2.0.0 provides a series of interfaces to the schemaless writing method, which eliminate the need to create super tables and subtables in advance by automatically creating the storage structure corresponding to the data as the data is written to the interface. And when necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
|
||||
|
||||
The schemaless writing method creates super tables and their corresponding sub-tables completely indistinguishable from the super tables and sub-tables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and lack readability.
|
||||
The schemaless writing method creates super tables and their corresponding subtables that are completely indistinguishable from the super tables and subtables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and lack readability.
|
||||
|
||||
## Schemaless Writing Line Protocol
|
||||
|
||||
TDengine's schemaless writing line protocol supports to be compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API the standard of the parsing protocol to be used for the input content.
|
||||
TDengine's schemaless writing line protocol supports InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API the standard of the parsing protocol to be used for the input content.
|
||||
|
||||
For the standard writing protocols of InfluxDB and OpenTSDB, please refer to the documentation of each protocol. The following is a description of TDengine's extended protocol, which is based on InfluxDB's line protocol. It allows users to control the (super table) schema at a more granular level.
|
||||
|
||||
With the following formatting conventions, Schemaless writing uses a single string to express a data row (multiple rows can be passed into the writing API at once to enable bulk writing).
|
||||
With the following formatting conventions, schemaless writing uses a single string to express a data row (multiple rows can be passed into the writing API at once to enable bulk writing).
|
||||
|
||||
```json
|
||||
measurement,tag_set field_set timestamp
|
||||
|
@ -23,7 +23,7 @@ where :
|
|||
|
||||
- measurement will be used as the data table name. It will be separated from tag_set by a comma.
|
||||
- tag_set will be used as tag data in the format `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags' data can be separated by a comma. It is separated from field_set by space.
|
||||
- field_set will be used as normal column data in the format of `<field_key>=<field_value>,<field_key>=<field_value>`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by space.
|
||||
- field_set will be used as normal column data in the format of `<field_key>=<field_value>,<field_key>=<field_value>`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by a space.
|
||||
- The timestamp is the primary key corresponding to the data in this row. A hypothetical example line is shown after this list.
|
||||
|
||||
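For instance, a hypothetical line for the meters example used elsewhere in this documentation, with tag_set `location,groupid`, field_set `current,voltage,phase`, and a millisecond timestamp, could look like this:

```json
meters,location=California.LosAngeles,groupid=2 current=10.3,voltage=219,phase=0.31 1648432611249
```

Here `meters` would become the super table name; the subtable name is generated from the tag values according to the mapping rules described later in this section.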
All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
|
||||
|
@ -32,7 +32,7 @@ In the schemaless writing data line protocol, each data item in the field_set ne
|
|||
|
||||
- If there are English double quotes on both sides, it indicates the BINARY(32) type. For example, `"abc"`.
|
||||
- If there are double quotes on both sides and an L prefix, it means NCHAR(32) type. For example, `L"error message"`.
|
||||
- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\) in front. (All refer to the ASCII character)
|
||||
- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\\) in front. (All refer to the ASCII character)
|
||||
- Numeric values are mapped to data types based on their suffix.
|
||||
|
||||
| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
|
||||
|
@ -58,21 +58,21 @@ Note that if the wrong case is used when describing the data type suffix, or if
|
|||
|
||||
Schemaless writes process row data according to the following principles.
|
||||
|
||||
1. You can use the following rules to generate the sub-table names: first, combine the measurement name and the key and value of the label into the next string:
|
||||
1. You can use the following rules to generate the subtable names: first, combine the measurement name with the keys and values of the tags into the following string:
|
||||
|
||||
```json
|
||||
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
|
||||
```
|
||||
|
||||
Note that tag_key1, tag_key2 are not in the original order entered by the user but are sorted in ascending order by tag name. Therefore, tag_key1 is not necessarily the first tag entered in the line protocol.
|
||||
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has. 2.
|
||||
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has.
|
||||
|
||||
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
|
||||
If the sub-table obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the sub-table name determined in steps 1 or 2. 4.
|
||||
3. If the subtable obtained by parsing the line protocol does not exist, schemaless writing creates the subtable according to the subtable name determined in steps 1 or 2.
|
||||
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
|
||||
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
|
||||
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
|
||||
7. If the specified data sub-table already exists, and the specified tag column takes a value different from the saved value this time, the value in the latest data row overwrites the old tag column take value.
|
||||
7. If the specified data subtable already exists, and the specified tag column takes a value different from the saved value this time, the value in the latest data row overwrites the old tag column value.
|
||||
8. Errors encountered throughout the processing will interrupt the writing process and return an error code.
|
||||
|
||||
:::tip
|
||||
|
|
|
@ -23,7 +23,7 @@ You can download The Grafana plugin for TDengine from <https://github.com/taosda
|
|||
|
||||
We recommend using the [`grafana-cli` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/) for plugin installation.
|
||||
|
||||
``bash
|
||||
```bash
|
||||
sudo -u grafana grafana-cli \
|
||||
--pluginUrl https://github.com/taosdata/grafanaplugin/releases/download/v3.1.4/tdengine-datasource-3.1.4.zip \
|
||||
plugins install tdengine-datasource
|
||||
|
@ -88,7 +88,7 @@ Go back to the main interface to create the Dashboard, click Add Query to enter
|
|||
|
||||
As shown above, select the `TDengine` data source in the `Query` and enter the corresponding SQL in the query box below for query.
|
||||
|
||||
- INPUT SQL: enter the statement to be queried (the result set of the SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where, from, to and interval are built-in variables of the TDengine plugin, indicating the range and time interval of queries fetched from the Grafana plugin panel. In addition to the built-in variables, ` custom template variables are also supported.
|
||||
- INPUT SQL: enter the statement to be queried (the result set of the SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where, from, to and interval are built-in variables of the TDengine plugin, indicating the range and time interval of queries fetched from the Grafana plugin panel. In addition to the built-in variables, custom template variables are also supported.
|
||||
- ALIAS BY: This allows you to set the current query alias.
|
||||
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
|
||||
|
||||
|
|
|
@ -3,7 +3,7 @@ sidebar_label: EMQX Broker
|
|||
title: EMQX Broker writing
|
||||
---
|
||||
|
||||
MQTT is a popular IoT data transfer protocol, [EMQX](https://github.com/emqx/emqx) is an open-source MQTT Broker software, without any code, only need to use "rules" in EMQX Dashboard to do simple configuration. You can write MQTT data directly to TDengine. EMQX supports saving data to TDengine by sending it to web services and provides a native TDengine driver for direct saving in the Enterprise Edition. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it. tdengine).
|
||||
MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is an open-source MQTT Broker software; you can write MQTT data directly to TDengine without any code by creating a simple configuration with "rules" in the EMQX Dashboard. EMQX supports saving data to TDengine by sending it to a web service and also provides a native TDengine driver for direct saving in the Enterprise Edition. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
|
|
|
@ -228,7 +228,7 @@ taos> select * from meters;
|
|||
Query OK, 4 row(s) in set (0.004208s)
|
||||
```
|
||||
|
||||
If you see the above data, the synchronization is successful. If not, check the logs of Kafka Connect. For detailed description of configuration parameters, see [Configuration Reference](#Configuration Reference).
|
||||
If you see the above data, the synchronization is successful. If not, check the logs of Kafka Connect. For detailed description of configuration parameters, see [Configuration Reference](#configuration-reference).
|
||||
|
||||
## The use of TDengine Source Connector
|
||||
|
||||
|
|
|
@ -118,7 +118,7 @@ Output is like below:
|
|||
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
|
||||
```
|
||||
|
||||
For details of REST API please refer to [REST API]](/reference/rest-api/).
|
||||
For details of REST API please refer to [REST API](/reference/rest-api/).
|
||||
|
||||
### Run TDengine server and taosAdapter inside container
|
||||
|
||||
|
@ -265,7 +265,7 @@ Below is an example output:
|
|||
$ taos> select groupid, location from test.d0;
|
||||
groupid | location |
|
||||
=================================
|
||||
0 | California.SanDieo |
|
||||
0 | California.SanDiego |
|
||||
Query OK, 1 row(s) in set (0.003490s)
|
||||
```
|
||||
|
||||
|
|
|
@ -182,14 +182,14 @@ int main() {
|
|||
// query callback ...
|
||||
// ts current voltage phase location groupid
|
||||
// numOfRow = 8
|
||||
// 1538548685000 10.300000 219 0.310000 beijing.chaoyang 2
|
||||
// 1538548695000 12.600000 218 0.330000 beijing.chaoyang 2
|
||||
// 1538548696800 12.300000 221 0.310000 beijing.chaoyang 2
|
||||
// 1538548696650 10.300000 218 0.250000 beijing.chaoyang 3
|
||||
// 1538548685500 11.800000 221 0.280000 beijing.haidian 2
|
||||
// 1538548696600 13.400000 223 0.290000 beijing.haidian 2
|
||||
// 1538548685000 10.800000 223 0.290000 beijing.haidian 3
|
||||
// 1538548686500 11.500000 221 0.350000 beijing.haidian 3
|
||||
// 1538548685500 11.800000 221 0.280000 california.losangeles 2
|
||||
// 1538548696600 13.400000 223 0.290000 california.losangeles 2
|
||||
// 1538548685000 10.800000 223 0.290000 california.losangeles 3
|
||||
// 1538548686500 11.500000 221 0.350000 california.losangeles 3
|
||||
// 1538548685000 10.300000 219 0.310000 california.sanfrancisco 2
|
||||
// 1538548695000 12.600000 218 0.330000 california.sanfrancisco 2
|
||||
// 1538548696800 12.300000 221 0.310000 california.sanfrancisco 2
|
||||
// 1538548696650 10.300000 218 0.250000 california.sanfrancisco 3
|
||||
// numOfRow = 0
|
||||
// no more data, close the connection.
|
||||
// ANCHOR_END: demo
|
|
@ -224,15 +224,15 @@ namespace TDengineExample
|
|||
}
|
||||
|
||||
//output:
|
||||
//Connect to TDengine success
|
||||
//8 rows async retrieved
|
||||
// Connect to TDengine success
|
||||
// 8 rows async retrieved
|
||||
|
||||
//1538548685000 | 10.3 | 219 | 0.31 | beijing.chaoyang | 2 |
|
||||
//1538548695000 | 12.6 | 218 | 0.33 | beijing.chaoyang | 2 |
|
||||
//1538548696800 | 12.3 | 221 | 0.31 | beijing.chaoyang | 2 |
|
||||
//1538548696650 | 10.3 | 218 | 0.25 | beijing.chaoyang | 3 |
|
||||
//1538548685500 | 11.8 | 221 | 0.28 | beijing.haidian | 2 |
|
||||
//1538548696600 | 13.4 | 223 | 0.29 | beijing.haidian | 2 |
|
||||
//1538548685000 | 10.8 | 223 | 0.29 | beijing.haidian | 3 |
|
||||
//1538548686500 | 11.5 | 221 | 0.35 | beijing.haidian | 3 |
|
||||
//async retrieve complete.
|
||||
// 1538548685500 | 11.8 | 221 | 0.28 | california.losangeles | 2 |
|
||||
// 1538548696600 | 13.4 | 223 | 0.29 | california.losangeles | 2 |
|
||||
// 1538548685000 | 10.8 | 223 | 0.29 | california.losangeles | 3 |
|
||||
// 1538548686500 | 11.5 | 221 | 0.35 | california.losangeles | 3 |
|
||||
// 1538548685000 | 10.3 | 219 | 0.31 | california.sanfrancisco | 2 |
|
||||
// 1538548695000 | 12.6 | 218 | 0.33 | california.sanfrancisco | 2 |
|
||||
// 1538548696800 | 12.3 | 221 | 0.31 | california.sanfrancisco | 2 |
|
||||
// 1538548696650 | 10.3 | 218 | 0.25 | california.sanfrancisco | 3 |
|
||||
// async retrieve complete.
|
|
@ -13,7 +13,7 @@ print(df.head(3))
|
|||
# output:
|
||||
# RangeIndex(start=0, stop=8, step=1)
|
||||
# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
|
||||
# ts current voltage phase location groupid
|
||||
# 0 2018-10-03 14:38:05.000 10.3 219 0.31 beijing.chaoyang 2
|
||||
# 1 2018-10-03 14:38:15.000 12.6 218 0.33 beijing.chaoyang 2
|
||||
# 2 2018-10-03 14:38:16.800 12.3 221 0.31 beijing.chaoyang 2
|
||||
# ts current ... location groupid
|
||||
# 0 2018-10-03 14:38:05.500 11.8 ... california.losangeles 2
|
||||
# 1 2018-10-03 14:38:16.600 13.4 ... california.losangeles 2
|
||||
# 2 2018-10-03 14:38:05.000 10.8 ... california.losangeles 3
|
||||
|
|
|
@ -11,9 +11,9 @@ print(type(df.ts[0]))
|
|||
print(df.head(3))
|
||||
|
||||
# output:
|
||||
# <class 'datetime.datetime'>
|
||||
# RangeIndex(start=0, stop=8, step=1)
|
||||
# ts current ... location groupid
|
||||
# 0 2018-10-03 14:38:05+08:00 10.3 ... beijing.chaoyang 2
|
||||
# 1 2018-10-03 14:38:15+08:00 12.6 ... beijing.chaoyang 2
|
||||
# 2 2018-10-03 14:38:16.800000+08:00 12.3 ... beijing.chaoyang 2
|
||||
# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
|
||||
# ts current ... location groupid
|
||||
# 0 2018-10-03 06:38:05.500000+00:00 11.8 ... california.losangeles 2
|
||||
# 1 2018-10-03 06:38:16.600000+00:00 13.4 ... california.losangeles 2
|
||||
# 2 2018-10-03 06:38:05+00:00 10.8 ... california.losangeles 3
|
||||
|
|
|
@ -38,8 +38,7 @@ for row in data:

# inserted row count: 8
# queried row count: 3
# ['ts', 'current', 'voltage', 'phase', 'location', 'groupid']
# [datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.3, 219, 0.31, 'beijing.chaoyang', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 15, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 12.6, 218, 0.33, 'beijing.chaoyang', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 16, 800000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 12.3, 221, 0.31, 'beijing.chaoyang', 2]

# [datetime.datetime(2018, 10, 3, 14, 38, 5, 500000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 11.8, 221, 0.28, 'california.losangeles', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 16, 600000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 13.4, 223, 0.29, 'california.losangeles', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.8, 223, 0.29, 'california.losangeles', 3]
# ANCHOR_END: basic

@ -5,9 +5,9 @@ from taos import SmlProtocol, SmlPrecision

lines = [{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
         {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219,
          "tags": {"location": "California.LosAngeles", "groupid": 1}},
          "tags": {"location": "California.LosAngeles", "groupid": 1}},
         {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6,
          "tags": {"location": "California.SanFrancisco", "groupid": 2}},
          "tags": {"location": "California.SanFrancisco", "groupid": 2}},
         {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]
@ -12,10 +12,10 @@ def query_api_demo(conn: taos.TaosConnection):

# field count: 7
# meta of files[1]: {name: ts, type: 9, bytes: 8}
# meta of fields[1]: {name: ts, type: 9, bytes: 8}
# ======================Iterate on result=========================
# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 5), 10.300000190734863, 219, 0.3100000023841858, 'California.SanFrancisco', 2)
# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 15), 12.600000381469727, 218, 0.33000001311302185, 'California.SanFrancisco', 2)
# ('d1003', datetime.datetime(2018, 10, 3, 14, 38, 5, 500000), 11.800000190734863, 221, 0.2800000011920929, 'california.losangeles', 2)
# ('d1003', datetime.datetime(2018, 10, 3, 14, 38, 16, 600000), 13.399999618530273, 223, 0.28999999165534973, 'california.losangeles', 2)
# ANCHOR_END: iter

# ANCHOR: fetch_all

@ -29,8 +29,8 @@ def fetch_all_demo(conn: taos.TaosConnection):

# row count: 2
# ===============all data===================
# [{'ts': datetime.datetime(2018, 10, 3, 14, 38, 5), 'current': 10.300000190734863},
# {'ts': datetime.datetime(2018, 10, 3, 14, 38, 15), 'current': 12.600000381469727}]
# [{'ts': datetime.datetime(2018, 10, 3, 14, 38, 5, 500000), 'current': 11.800000190734863},
# {'ts': datetime.datetime(2018, 10, 3, 14, 38, 16, 600000), 'current': 13.399999618530273}]
# ANCHOR_END: fetch_all

if __name__ == '__main__':
@ -105,12 +105,14 @@ typedef struct SColumnInfoData {
} SColumnInfoData;

typedef struct SQueryTableDataCond {
  STimeWindow  twindow;
  //STimeWindow  twindow;
  int32_t      order;  // desc|asc order to iterate the data block
  int32_t      numOfCols;
  SColumnInfo *colList;
  bool         loadExternalRows;  // load external rows or not
  int32_t      type;  // data block load type:
  int32_t      numOfTWindows;
  STimeWindow *twindows;
} SQueryTableDataCond;

void* blockDataDestroy(SSDataBlock* pBlock);
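Note on this hunk: the single `twindow` is replaced by an array of windows plus a count, so one query condition can carry several disjoint time ranges. A minimal sketch of how a consumer might walk the new fields (not from the commit; `processWindow` is a hypothetical callback):

```c
// Sketch only: iterate every time window carried by a SQueryTableDataCond.
// Assumes the struct layout shown in the hunk above.
static void forEachQueryWindow(SQueryTableDataCond *pCond,
                               void (*processWindow)(const STimeWindow *)) {
  for (int32_t i = 0; i < pCond->numOfTWindows; ++i) {
    processWindow(&pCond->twindows[i]);  // each window is scanned independently
  }
}
```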
@ -230,7 +230,7 @@ SSDataBlock* createOneDataBlock(const SSDataBlock* pDataBlock, bool copyData);
void blockDebugShowData(const SArray* dataBlocks);

int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks, STSchema* pTSchema, int32_t vgId,
                                    tb_uid_t uid, tb_uid_t suid);
                                    tb_uid_t suid);

SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pSchema, bool createTb, int64_t suid,
                            const char* stbFullName, int32_t vgId);

@ -299,4 +299,3 @@ static FORCE_INLINE void blockCompressEncode(const SSDataBlock* pBlock, char* da
#endif

#endif /*_TD_COMMON_EP_H_*/
@ -81,7 +81,7 @@ int32_t mndGetLoad(SMnode *pMnode, SMnodeLoad *pLoad);
 * @param pMsg The request msg.
 * @return int32_t 0 for success, -1 for failure.
 */
int32_t mndProcessMsg(SRpcMsg *pMsg);
int32_t mndProcessRpcMsg(SRpcMsg *pMsg);
int32_t mndProcessSyncMsg(SRpcMsg *pMsg);

/**
@ -52,23 +52,31 @@ typedef struct SUserAuthInfo {
  AUTH_TYPE type;
} SUserAuthInfo;

typedef struct SDbInfo {
  int32_t vgVer;
  int32_t tbNum;
  int64_t dbId;
} SDbInfo;

typedef struct SCatalogReq {
  SArray *pTableMeta;    // element is SNAME
  SArray *pDbVgroup;     // element is db full name
  SArray *pDbCfg;        // element is db full name
  SArray *pDbInfo;       // element is db full name
  SArray *pTableMeta;    // element is SNAME
  SArray *pTableHash;    // element is SNAME
  SArray *pUdf;          // element is udf name
  SArray *pDbCfg;        // element is db full name
  SArray *pIndex;        // element is index name
  SArray *pUser;         // element is SUserAuthInfo
  bool    qNodeRequired; // valid qnode
} SCatalogReq;

typedef struct SMetaData {
  SArray *pTableMeta;    // SArray<STableMeta*>
  SArray *pDbVgroup;     // SArray<SArray<SVgroupInfo>*>
  SArray *pDbCfg;        // SArray<SDbCfgInfo>
  SArray *pDbInfo;       // SArray<SDbInfo>
  SArray *pTableMeta;    // SArray<STableMeta*>
  SArray *pTableHash;    // SArray<SVgroupInfo>
  SArray *pUdfList;      // SArray<SFuncInfo>
  SArray *pDbCfg;        // SArray<SDbCfgInfo>
  SArray *pIndex;        // SArray<SIndexInfo>
  SArray *pUser;         // SArray<bool>
  SArray *pQnodeList;    // SArray<SQueryNodeAddr>

@ -269,6 +277,8 @@ int32_t catalogChkAuth(SCatalog* pCtg, void *pRpc, const SEpSet* pMgmtEps, const
int32_t catalogUpdateUserAuthInfo(SCatalog* pCtg, SGetUserAuthRsp* pAuth);

int32_t catalogUpdateVgEpSet(SCatalog* pCtg, const char* dbFName, int32_t vgId, SEpSet *epSet);

int32_t ctgdLaunchAsyncCall(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, uint64_t reqId);
@ -43,6 +43,12 @@ typedef enum {
  TASK_TYPE_TEMP,
} ETaskType;

typedef enum {
  TARGET_TYPE_MNODE = 1,
  TARGET_TYPE_VNODE,
  TARGET_TYPE_OTHER,
} ETargetType;

typedef struct STableComInfo {
  uint8_t numOfTags;  // the number of tags in schema
  uint8_t precision;  // the number of precision

@ -126,11 +132,18 @@ typedef struct SDataBuf {
  void* handle;
} SDataBuf;

typedef struct STargetInfo {
  ETargetType type;
  char        dbFName[TSDB_DB_FNAME_LEN];  // used to update db's vgroup epset
  int32_t     vgId;
} STargetInfo;

typedef int32_t (*__async_send_cb_fn_t)(void* param, const SDataBuf* pMsg, int32_t code);
typedef int32_t (*__async_exec_fn_t)(void* param);

typedef struct SMsgSendInfo {
  __async_send_cb_fn_t fp;      // async callback function
  STargetInfo          target;  // for update epset
  void*                param;
  uint64_t             requestId;
  uint64_t             requestObjRefId;
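Note on this hunk: `STargetInfo` records where a request was sent, so the reply path can refresh the matching epset cache entry. A minimal sketch of tagging a vnode-bound request (not from the commit; the helper and its inputs are hypothetical, `tstrncpy` as used elsewhere in this diff):

```c
// Sketch only: tag a request with its target before sending, so
// updateTargetEpSet() can later route an epset change to the right cache.
static void tagVnodeTarget(SMsgSendInfo *pSendInfo, const char *dbFName, int32_t vgId) {
  pSendInfo->target.type = TARGET_TYPE_VNODE;
  pSendInfo->target.vgId = vgId;
  tstrncpy(pSendInfo->target.dbFName, dbFName, TSDB_DB_FNAME_LEN);
}
```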
@ -729,23 +729,55 @@ static void destroySendMsgInfo(SMsgSendInfo* pMsgBody) {
  taosMemoryFreeClear(pMsgBody);
}

void updateTargetEpSet(SMsgSendInfo* pSendInfo, STscObj* pTscObj, SRpcMsg* pMsg, SEpSet* pEpSet) {
  if (NULL == pEpSet) {
    return;
  }

  switch (pSendInfo->target.type) {
    case TARGET_TYPE_MNODE:
      if (NULL == pTscObj) {
        tscError("mnode epset changed but not able to update it, reqObjRefId:%" PRIx64, pSendInfo->requestObjRefId);
        return;
      }

      updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, pEpSet);
      break;
    case TARGET_TYPE_VNODE: {
      if (NULL == pTscObj) {
        tscError("vnode epset changed but not able to update it, reqObjRefId:%" PRIx64, pSendInfo->requestObjRefId);
        return;
      }

      SCatalog* pCatalog = NULL;
      int32_t code = catalogGetHandle(pTscObj->pAppInfo->clusterId, &pCatalog);
      if (code != TSDB_CODE_SUCCESS) {
        tscError("fail to get catalog handle, clusterId:%" PRIx64 ", error %s", pTscObj->pAppInfo->clusterId, tstrerror(code));
        return;
      }

      catalogUpdateVgEpSet(pCatalog, pSendInfo->target.dbFName, pSendInfo->target.vgId, pEpSet);
      break;
    }
    default:
      tscDebug("epset changed, not updated, msgType %s", TMSG_INFO(pMsg->msgType));
      break;
  }
}

void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
  SMsgSendInfo* pSendInfo = (SMsgSendInfo*)pMsg->info.ahandle;
  assert(pMsg->info.ahandle != NULL);
  SRequestObj* pRequest = NULL;
  STscObj* pTscObj = NULL;

  if (pSendInfo->requestObjRefId != 0) {
    SRequestObj* pRequest = (SRequestObj*)taosAcquireRef(clientReqRefPool, pSendInfo->requestObjRefId);
    assert(pRequest->self == pSendInfo->requestObjRefId);

    pRequest->metric.rsp = taosGetTimestampUs();

    //STscObj* pTscObj = pRequest->pTscObj;
    //if (pEpSet) {
    //  if (!isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, pEpSet)) {
    //    updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, pEpSet);
    //  }
    //}

    pTscObj = pRequest->pTscObj;
    /*
     * There is not response callback function for submit response.
     * The actual inserted number of points is the first number.

@ -762,6 +794,8 @@ void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
    taosReleaseRef(clientReqRefPool, pSendInfo->requestObjRefId);
  }

  updateTargetEpSet(pSendInfo, pTscObj, pMsg, pEpSet);

  SDataBuf buf = {.len = pMsg->contLen, .pData = NULL, .handle = pMsg->info.handle};

  if (pMsg->contLen > 0) {
@ -1508,14 +1508,11 @@ void blockDebugShowData(const SArray* dataBlocks) {
 * @param pReq
 * @param pDataBlocks
 * @param vgId
 * @param uid set as parameter temporarily // TODO: remove this parameter, and the executor should set uid in
 * SDataBlock->info.uid
 * @param suid  // TODO: check with Liao whether suid response is reasonable
 *
 * TODO: colId should be set
 */
int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks, STSchema* pTSchema, int32_t vgId,
                                    tb_uid_t uid, tb_uid_t suid) {
int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks, STSchema* pTSchema, int32_t vgId, tb_uid_t suid) {
  int32_t sz = taosArrayGetSize(pDataBlocks);
  int32_t bufSize = sizeof(SSubmitReq);
  for (int32_t i = 0; i < sz; ++i) {

@ -1551,7 +1548,7 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
    SSubmitBlk* pSubmitBlk = POINTER_SHIFT(pDataBuf, msgLen);
    pSubmitBlk->suid = suid;
    pSubmitBlk->uid = uid;
    pSubmitBlk->uid = pDataBlock->info.groupId;
    pSubmitBlk->numOfRows = rows;

    ++numOfBlks;

@ -1562,6 +1559,7 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
      tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen));  // set row buf
      printf("|");
      bool isStartKey = false;
      int32_t offset = 0;
      for (int32_t k = 0; k < colNum; ++k) {  // iterate by column
        SColumnInfoData* pColInfoData = taosArrayGet(pDataBlock->pDataBlock, k);
        void* var = POINTER_SHIFT(pColInfoData->pData, j * pColInfoData->info.bytes);

@ -1570,18 +1568,18 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
            if (!isStartKey) {
              isStartKey = true;
              tdAppendColValToRow(&rb, PRIMARYKEY_TIMESTAMP_COL_ID, TSDB_DATA_TYPE_TIMESTAMP, TD_VTYPE_NORM, var, true,
                                  0, 0);
                                  offset, k);

            } else {
              tdAppendColValToRow(&rb, 2, TSDB_DATA_TYPE_TIMESTAMP, TD_VTYPE_NORM, var, true, 8, k);
              break;
              tdAppendColValToRow(&rb, 2, TSDB_DATA_TYPE_TIMESTAMP, TD_VTYPE_NORM, var, true, offset, k);
            }
            break;
          case TSDB_DATA_TYPE_NCHAR: {
            tdAppendColValToRow(&rb, 2, TSDB_DATA_TYPE_NCHAR, TD_VTYPE_NORM, var, true, 8, k);
            tdAppendColValToRow(&rb, 2, TSDB_DATA_TYPE_NCHAR, TD_VTYPE_NORM, var, true, offset, k);
            break;
          }
          case TSDB_DATA_TYPE_VARCHAR: {  // TSDB_DATA_TYPE_BINARY
            tdAppendColValToRow(&rb, 2, TSDB_DATA_TYPE_VARCHAR, TD_VTYPE_NORM, var, true, 8, k);
            tdAppendColValToRow(&rb, 2, TSDB_DATA_TYPE_VARCHAR, TD_VTYPE_NORM, var, true, offset, k);
            break;
          }
          case TSDB_DATA_TYPE_VARBINARY:

@ -1593,13 +1591,14 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
            break;
          default:
            if (pColInfoData->info.type < TSDB_DATA_TYPE_MAX && pColInfoData->info.type > TSDB_DATA_TYPE_NULL) {
              tdAppendColValToRow(&rb, 2, pColInfoData->info.type, TD_VTYPE_NORM, var, true, 8, k);
              tdAppendColValToRow(&rb, 2, pColInfoData->info.type, TD_VTYPE_NORM, var, true, offset, k);
            } else {
              printf("the column type %" PRIi16 " is undefined\n", pColInfoData->info.type);
              TASSERT(0);
            }
            break;
        }
        offset += TYPE_BYTES[pColInfoData->info.type];
      }
      dataLen += TD_ROW_LEN(rb.pBuf);
    }
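Note on these hunks: the hard-coded 8-byte column step is replaced by a running `offset` advanced by each column's fixed width, so rows with mixed column types stay correctly laid out. A condensed sketch of the bookkeeping (illustrative only; surrounding locals are as in the hunks above):

```c
// Sketch of the offset accumulation introduced above: each column's value
// is appended at the current offset, then the offset advances by that
// column type's fixed width from the TYPE_BYTES table.
int32_t offset = 0;
for (int32_t k = 0; k < colNum; ++k) {
  SColumnInfoData *pColInfoData = taosArrayGet(pDataBlock->pDataBlock, k);
  // ... tdAppendColValToRow(&rb, ..., offset, k) as shown in the hunk ...
  offset += TYPE_BYTES[pColInfoData->info.type];  // advance by this column's width
}
```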
@ -1191,9 +1191,9 @@ bool tdGetTpRowDataOfCol(STSRowIter *pIter, col_type_t colType, int32_t offset,
}

static FORCE_INLINE int32_t compareKvRowColId(const void *key1, const void *key2) {
  if (*(int16_t *)key1 > ((SColIdx *)key2)->colId) {
  if (*(col_id_t *)key1 > ((SKvRowIdx *)key2)->colId) {
    return 1;
  } else if (*(int16_t *)key1 < ((SColIdx *)key2)->colId) {
  } else if (*(col_id_t *)key1 < ((SKvRowIdx *)key2)->colId) {
    return -1;
  } else {
    return 0;
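Note on this hunk: the comparator now casts the key to `col_id_t` and the element to `SKvRowIdx`, matching what the kv-row index actually stores. Its shape fits a standard binary search; a sketch (not from the commit; `pColIdx`/`nCols` are hypothetical locals):

```c
// Sketch only: locate a column id in a sorted SKvRowIdx array with the
// corrected comparator, using the C standard library's bsearch.
#include <stdlib.h>

col_id_t   target = 5;  // illustrative column id
SKvRowIdx *pIdx = (SKvRowIdx *)bsearch(&target, pColIdx, nCols,
                                       sizeof(SKvRowIdx), compareKvRowColId);
if (pIdx != NULL) {
  // column found in the kv-row index
}
```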
@ -40,7 +40,7 @@ static void mmProcessQueue(SQueueInfo *pInfo, SRpcMsg *pMsg) {
      break;
    default:
      pMsg->info.node = pMgmt->pMnode;
      code = mndProcessMsg(pMsg);
      code = mndProcessRpcMsg(pMsg);
  }

  if (IsReq(pMsg) && pMsg->info.handle != NULL && code != TSDB_CODE_ACTION_IN_PROGRESS) {
@ -75,13 +75,12 @@ typedef struct {
} STelemMgmt;

typedef struct {
  SWal   *pWal;
  sem_t   syncSem;
  int64_t sync;
  bool    standby;
  bool    restored;
  int32_t errCode;
  int32_t transId;
  SWal   *pWal;
  sem_t   syncSem;
  int64_t sync;
  bool    standby;
  int32_t errCode;
  int32_t transId;
} SSyncMgmt;

typedef struct {

@ -90,34 +89,45 @@ typedef struct {
} SGrantInfo;

typedef struct SMnode {
  int32_t       selfDnodeId;
  int64_t       clusterId;
  TdThread      thread;
  bool          deploy;
  bool          stopped;
  int8_t        replica;
  int8_t        selfIndex;
  SReplica      replicas[TSDB_MAX_REPLICA];
  char         *path;
  int64_t       checkTime;
  SSdb         *pSdb;
  SMgmtWrapper *pWrapper;
  SArray       *pSteps;
  SQHandle     *pQuery;
  SShowMgmt     showMgmt;
  SProfileMgmt  profileMgmt;
  STelemMgmt    telemMgmt;
  SSyncMgmt     syncMgmt;
  SHashObj     *infosMeta;
  SHashObj     *perfsMeta;
  SGrantInfo    grant;
  MndMsgFp      msgFp[TDMT_MAX];
  SMsgCb        msgCb;
  int32_t        selfDnodeId;
  int64_t        clusterId;
  TdThread       thread;
  TdThreadRwlock lock;
  int32_t        rpcRef;
  int32_t        syncRef;
  bool           stopped;
  bool           restored;
  bool           deploy;
  int8_t         replica;
  int8_t         selfIndex;
  SReplica       replicas[TSDB_MAX_REPLICA];
  char          *path;
  int64_t        checkTime;
  SSdb          *pSdb;
  SArray        *pSteps;
  SQHandle      *pQuery;
  SHashObj      *infosMeta;
  SHashObj      *perfsMeta;
  SShowMgmt      showMgmt;
  SProfileMgmt   profileMgmt;
  STelemMgmt     telemMgmt;
  SSyncMgmt      syncMgmt;
  SGrantInfo     grant;
  MndMsgFp       msgFp[TDMT_MAX];
  SMsgCb         msgCb;
} SMnode;

void mndSetMsgHandle(SMnode *pMnode, tmsg_t msgType, MndMsgFp fp);
int64_t mndGenerateUid(char *name, int32_t len);

int32_t mndAcquireRpcRef(SMnode *pMnode);
void    mndReleaseRpcRef(SMnode *pMnode);
void    mndSetRestore(SMnode *pMnode, bool restored);
void    mndSetStop(SMnode *pMnode);
bool    mndGetStop(SMnode *pMnode);
int32_t mndAcquireSyncRef(SMnode *pMnode);
void    mndReleaseSyncRef(SMnode *pMnode);

#ifdef __cplusplus
}
#endif
@ -85,7 +85,7 @@ static void *mndThreadFp(void *param) {
  while (1) {
    lastTime++;
    taosMsleep(100);
    if (pMnode->stopped) break;
    if (mndGetStop(pMnode)) break;

    if (lastTime % (tsTransPullupInterval * 10) == 0) {
      mndPullupTrans(pMnode);

@ -118,7 +118,6 @@ static int32_t mndInitTimer(SMnode *pMnode) {
}

static void mndCleanupTimer(SMnode *pMnode) {
  pMnode->stopped = true;
  if (taosCheckPthreadValid(pMnode->thread)) {
    taosThreadJoin(pMnode->thread, NULL);
    taosThreadClear(&pMnode->thread);
@ -335,15 +334,19 @@ void mndClose(SMnode *pMnode) {
int32_t mndStart(SMnode *pMnode) {
  mndSyncStart(pMnode);
  if (pMnode->deploy) {
    if (sdbDeploy(pMnode->pSdb) != 0) return -1;
    pMnode->syncMgmt.restored = true;
    if (sdbDeploy(pMnode->pSdb) != 0) {
      mError("failed to deploy sdb while start mnode");
      return -1;
    }
    mndSetRestore(pMnode, true);
  }
  return mndInitTimer(pMnode);
}

void mndStop(SMnode *pMnode) {
  mndSetStop(pMnode);
  mndSyncStop(pMnode);
  return mndCleanupTimer(pMnode);
  mndCleanupTimer(pMnode);
}

int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {

@ -362,6 +365,11 @@ int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {
    return TAOS_SYNC_PROPOSE_OTHER_ERROR;
  }

  if (mndAcquireSyncRef(pMnode) != 0) {
    mError("failed to process sync msg:%p type:%s since %s", pMsg, TMSG_INFO(pMsg->msgType), terrstr());
    return TAOS_SYNC_PROPOSE_OTHER_ERROR;
  }

  char logBuf[512];
  char *syncNodeStr = sync2SimpleStr(pMgmt->sync);
  snprintf(logBuf, sizeof(logBuf), "==vnodeProcessSyncReq== msgType:%d, syncNode: %s", pMsg->msgType, syncNodeStr);

@ -405,59 +413,45 @@ int32_t mndProcessSyncMsg(SRpcMsg *pMsg) {
    code = TAOS_SYNC_PROPOSE_OTHER_ERROR;
  }

  mndReleaseSyncRef(pMnode);
  return code;
}

static int32_t mndCheckMnodeMaster(SRpcMsg *pMsg) {
  if (!IsReq(pMsg)) return 0;
  if (mndIsMaster(pMsg->info.node)) return 0;
static int32_t mndCheckMnodeState(SRpcMsg *pMsg) {
  if (mndAcquireRpcRef(pMsg->info.node) == 0) return 0;

  if (pMsg->msgType == TDMT_MND_MQ_TIMER || pMsg->msgType == TDMT_MND_TELEM_TIMER ||
      pMsg->msgType == TDMT_MND_TRANS_TIMER) {
    return -1;
  }
  mError("msg:%p, failed to check master since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
         TMSG_INFO(pMsg->msgType));
  if (IsReq(pMsg) && pMsg->msgType != TDMT_MND_MQ_TIMER && pMsg->msgType != TDMT_MND_TELEM_TIMER &&
      pMsg->msgType != TDMT_MND_TRANS_TIMER) {
    mError("msg:%p, failed to check mnode state since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
           TMSG_INFO(pMsg->msgType));

  SEpSet epSet = {0};
  mndGetMnodeEpSet(pMsg->info.node, &epSet);
    SEpSet epSet = {0};
    mndGetMnodeEpSet(pMsg->info.node, &epSet);

#if 0
  mTrace("msg:%p, is redirected, num:%d use:%d", pMsg, epSet.numOfEps, epSet.inUse);
  for (int32_t i = 0; i < epSet.numOfEps; ++i) {
    mTrace("mnode index:%d %s:%u", i, epSet.eps[i].fqdn, epSet.eps[i].port);
    if (strcmp(epSet.eps[i].fqdn, tsLocalFqdn) == 0 && epSet.eps[i].port == tsServerPort) {
      epSet.inUse = (i + 1) % epSet.numOfEps;
  int32_t contLen = tSerializeSEpSet(NULL, 0, &epSet);
  pMsg->info.rsp = rpcMallocCont(contLen);
  if (pMsg->info.rsp != NULL) {
    tSerializeSEpSet(pMsg->info.rsp, contLen, &epSet);
    pMsg->info.rspLen = contLen;
    terrno = TSDB_CODE_RPC_REDIRECT;
  } else {
    terrno = TSDB_CODE_OUT_OF_MEMORY;
  }
    }
#endif

    int32_t contLen = tSerializeSEpSet(NULL, 0, &epSet);
    pMsg->info.rsp = rpcMallocCont(contLen);
    if (pMsg->info.rsp != NULL) {
      tSerializeSEpSet(pMsg->info.rsp, contLen, &epSet);
      pMsg->info.rspLen = contLen;
      terrno = TSDB_CODE_RPC_REDIRECT;
    } else {
      terrno = TSDB_CODE_OUT_OF_MEMORY;
    }

  return -1;
}

static int32_t mndCheckRequestValid(SRpcMsg *pMsg) {
static int32_t mndCheckMsgContent(SRpcMsg *pMsg) {
  if (!IsReq(pMsg)) return 0;
  if (pMsg->contLen != 0 && pMsg->pCont != NULL) return 0;

  mError("msg:%p, failed to valid request, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
  mError("msg:%p, failed to check msg content, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
  terrno = TSDB_CODE_INVALID_MSG_LEN;
  return -1;
}

int32_t mndProcessMsg(SRpcMsg *pMsg) {
  if (mndCheckMnodeMaster(pMsg) != 0) return -1;
  if (mndCheckRequestValid(pMsg) != 0) return -1;

int32_t mndProcessRpcMsg(SRpcMsg *pMsg) {
  SMnode *pMnode = pMsg->info.node;
  MndMsgFp fp = pMnode->msgFp[TMSG_INDEX(pMsg->msgType)];
  if (fp == NULL) {

@ -466,8 +460,13 @@ int32_t mndProcessMsg(SRpcMsg *pMsg) {
    return -1;
  }

  mTrace("msg:%p, will be processed in mnode, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
  if (mndCheckMsgContent(pMsg) != 0) return -1;
  if (mndCheckMnodeState(pMsg) != 0) return -1;

  mTrace("msg:%p, start to process in mnode, app:%p type:%s", pMsg, pMsg->info.ahandle, TMSG_INFO(pMsg->msgType));
  int32_t code = (*fp)(pMsg);
  mndReleaseRpcRef(pMnode);

  if (code == TSDB_CODE_ACTION_IN_PROGRESS) {
    mTrace("msg:%p, won't response immediately since in progress", pMsg);
  } else if (code == 0) {

@ -476,6 +475,7 @@ int32_t mndProcessMsg(SRpcMsg *pMsg) {
    mError("msg:%p, failed to process since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
           TMSG_INFO(pMsg->msgType));
  }

  return code;
}
@ -502,7 +502,7 @@ int64_t mndGenerateUid(char *name, int32_t len) {

int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgroupInfo *pVgroupInfo,
                          SMonGrantInfo *pGrantInfo) {
  if (!mndIsMaster(pMnode)) return -1;
  if (mndAcquireRpcRef(pMnode) != 0) return -1;

  SSdb   *pSdb = pMnode->pSdb;
  int64_t ms = taosGetTimestampMs();

@ -511,6 +511,7 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
  pClusterInfo->mnodes = taosArrayInit(sdbGetSize(pSdb, SDB_MNODE), sizeof(SMonMnodeDesc));
  pVgroupInfo->vgroups = taosArrayInit(sdbGetSize(pSdb, SDB_VGROUP), sizeof(SMonVgroupDesc));
  if (pClusterInfo->dnodes == NULL || pClusterInfo->mnodes == NULL || pVgroupInfo->vgroups == NULL) {
    mndReleaseRpcRef(pMnode);
    return -1;
  }

@ -605,6 +606,7 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
    pGrantInfo->timeseries_total = INT32_MAX;
  }

  mndReleaseRpcRef(pMnode);
  return 0;
}
@ -612,3 +614,76 @@ int32_t mndGetLoad(SMnode *pMnode, SMnodeLoad *pLoad) {
  pLoad->syncState = syncGetMyRole(pMnode->syncMgmt.sync);
  return 0;
}

int32_t mndAcquireRpcRef(SMnode *pMnode) {
  int32_t code = 0;
  taosThreadRwlockRdlock(&pMnode->lock);
  if (pMnode->stopped) {
    terrno = TSDB_CODE_APP_NOT_READY;
    code = -1;
  } else if (!mndIsMaster(pMnode)) {
    code = -1;
  } else {
    int32_t ref = atomic_add_fetch_32(&pMnode->rpcRef, 1);
    mTrace("mnode rpc is acquired, ref:%d", ref);
  }
  taosThreadRwlockUnlock(&pMnode->lock);
  return code;
}

void mndReleaseRpcRef(SMnode *pMnode) {
  taosThreadRwlockRdlock(&pMnode->lock);
  int32_t ref = atomic_sub_fetch_32(&pMnode->rpcRef, 1);
  mTrace("mnode rpc is released, ref:%d", ref);
  taosThreadRwlockUnlock(&pMnode->lock);
}

void mndSetRestore(SMnode *pMnode, bool restored) {
  if (restored) {
    taosThreadRwlockWrlock(&pMnode->lock);
    pMnode->restored = true;
    taosThreadRwlockUnlock(&pMnode->lock);
    mTrace("mnode set restored:%d", restored);
  } else {
    taosThreadRwlockWrlock(&pMnode->lock);
    pMnode->restored = false;
    taosThreadRwlockUnlock(&pMnode->lock);
    mTrace("mnode set restored:%d", restored);
    while (1) {
      if (pMnode->rpcRef <= 0) break;
      taosMsleep(3);
    }
  }
}

bool mndGetRestored(SMnode *pMnode) { return pMnode->restored; }

void mndSetStop(SMnode *pMnode) {
  taosThreadRwlockWrlock(&pMnode->lock);
  pMnode->stopped = true;
  taosThreadRwlockUnlock(&pMnode->lock);
  mTrace("mnode set stopped");
}

bool mndGetStop(SMnode *pMnode) { return pMnode->stopped; }

int32_t mndAcquireSyncRef(SMnode *pMnode) {
  int32_t code = 0;
  taosThreadRwlockRdlock(&pMnode->lock);
  if (pMnode->stopped) {
    terrno = TSDB_CODE_APP_NOT_READY;
    code = -1;
  } else {
    int32_t ref = atomic_add_fetch_32(&pMnode->syncRef, 1);
    mTrace("mnode sync is acquired, ref:%d", ref);
  }
  taosThreadRwlockUnlock(&pMnode->lock);
  return code;
}

void mndReleaseSyncRef(SMnode *pMnode) {
  taosThreadRwlockRdlock(&pMnode->lock);
  int32_t ref = atomic_sub_fetch_32(&pMnode->syncRef, 1);
  mTrace("mnode sync is released, ref:%d", ref);
  taosThreadRwlockUnlock(&pMnode->lock);
}
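Note on this hunk: the ref counters turn "is this node master and not stopping" into an acquire/release protocol, so `mndSetRestore(pMnode, false)` can wait for in-flight requests to drain. A sketch of the discipline a message handler follows (not from the commit; `handleRequest` is a hypothetical handler):

```c
// Sketch only: the guard pattern mndAcquireRpcRef/mndReleaseRpcRef enables.
static int32_t processGuarded(SMnode *pMnode, SRpcMsg *pMsg) {
  if (mndAcquireRpcRef(pMnode) != 0) {
    return -1;  // stopped or not master: reject before touching state
  }
  int32_t code = handleRequest(pMnode, pMsg);  // hypothetical handler
  mndReleaseRpcRef(pMnode);  // lets mndSetRestore(false) drain rpcRef to 0
  return code;
}
```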
@ -32,7 +32,7 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
  SSyncMgmt *pMgmt = &pMnode->syncMgmt;
  SSdbRaw   *pRaw = pMsg->pCont;

  int32_t transId = sdbGetIdFromRaw(pRaw);
  int32_t transId = sdbGetIdFromRaw(pMnode->pSdb, pRaw);
  pMgmt->errCode = cbMeta.code;
  mTrace("trans:%d, is proposed, savedTransId:%d code:0x%x, ver:%" PRId64 " term:%" PRId64 " role:%s raw:%p", transId,
         pMgmt->transId, cbMeta.code, cbMeta.index, cbMeta.term, syncStr(cbMeta.state), pRaw);

@ -63,28 +63,41 @@ void mndRestoreFinish(struct SSyncFSM *pFsm) {
  if (!pMnode->deploy) {
    mInfo("mnode sync restore finished");
    mndTransPullup(pMnode);
    pMnode->syncMgmt.restored = true;
    mndSetRestore(pMnode, true);
  }
}

int32_t mndSnapshotRead(struct SSyncFSM* pFsm, const SSnapshot* pSnapshot, void** ppIter, char** ppBuf, int32_t* len) {
  /*
int32_t mndSnapshotRead(struct SSyncFSM *pFsm, const SSnapshot *pSnapshot, void **ppIter, char **ppBuf, int32_t *len) {
  SMnode *pMnode = pFsm->data;
  SSdbIter *pIter;
  if (iter == NULL) {
    pIter = sdbIterInit(pMnode->sdb)
  mInfo("start to read snapshot from sdb");

  int32_t code = sdbReadSnapshot(pMnode->pSdb, (SSdbIter **)ppIter, ppBuf, len);
  if (code != 0) {
    mError("failed to read snapshot from sdb since %s", terrstr());
  } else {
    pIter = iter;
    if (*ppIter == NULL) {
      mInfo("successfully to read snapshot from sdb");
    }
  }
  */

  return 0;
  return code;
}

int32_t mndSnapshotApply(struct SSyncFSM* pFsm, const SSnapshot* pSnapshot, char* pBuf, int32_t len) {
int32_t mndSnapshotApply(struct SSyncFSM *pFsm, const SSnapshot *pSnapshot, char *pBuf, int32_t len) {
  SMnode *pMnode = pFsm->data;
  sdbWrite(pMnode->pSdb, (SSdbRaw*)pBuf);
  return 0;
  mndSetRestore(pMnode, false);
  mInfo("start to apply snapshot to sdb, len:%d", len);

  int32_t code = sdbApplySnapshot(pMnode->pSdb, pBuf, len);
  if (code != 0) {
    mError("failed to apply snapshot to sdb, len:%d", len);
  } else {
    mInfo("successfully to apply snapshot to sdb, len:%d", len);
    mndSetRestore(pMnode, true);
  }

  // taosMemoryFree(pBuf);
  return code;
}

void mndReConfig(struct SSyncFSM *pFsm, SSyncCfg newCfg, SReConfigCbMeta cbMeta) {
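Note on this hunk: `mndSnapshotRead` now streams the sdb snapshot in chunks; it hands back a buffer per call and signals completion by clearing `*ppIter` (possibly together with a final partial chunk). A sketch of how a sender on the sync side might drive it (not from the commit; `sendChunk` is a hypothetical transport call):

```c
// Sketch only: drive the chunked snapshot read until the iterator is done.
void *pIter = NULL;
do {
  char   *pBuf = NULL;
  int32_t len = 0;
  if (mndSnapshotRead(pFsm, pSnapshot, &pIter, &pBuf, &len) != 0) break;  // read error
  if (pBuf != NULL && len > 0) {
    sendChunk(pBuf, len);   // hypothetical transport call
    taosMemoryFree(pBuf);
  }
} while (pIter != NULL);    // a NULL iterator signals end of snapshot
```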
@ -150,8 +163,7 @@ int32_t mndInitSync(SMnode *pMnode) {
  SSyncCfg *pCfg = &syncInfo.syncCfg;
  pCfg->replicaNum = pMnode->replica;
  pCfg->myIndex = pMnode->selfIndex;
  mInfo("start to open mnode sync, replica:%d myindex:%d standby:%d", pCfg->replicaNum, pCfg->myIndex,
        pMgmt->standby);
  mInfo("start to open mnode sync, replica:%d myindex:%d standby:%d", pCfg->replicaNum, pCfg->myIndex, pMgmt->standby);
  for (int32_t i = 0; i < pMnode->replica; ++i) {
    SNodeInfo *pNode = &pCfg->nodeInfo[i];
    tstrncpy(pNode->nodeFqdn, pMnode->replicas[i].fqdn, sizeof(pNode->nodeFqdn));

@ -219,17 +231,12 @@ void mndSyncStart(SMnode *pMnode) {
  SSyncMgmt *pMgmt = &pMnode->syncMgmt;
  syncSetMsgCb(pMgmt->sync, &pMnode->msgCb);

  syncStart(pMgmt->sync);

#if 0
  if (pMgmt->standby) {
    syncStartStandBy(pMgmt->sync);
  } else {
    syncStart(pMgmt->sync);
  }
#endif

  mDebug("sync:%" PRId64 " is started", pMgmt->sync);
  mDebug("sync:%" PRId64 " is started, standby:%d", pMgmt->sync, pMgmt->standby);
}

void mndSyncStop(SMnode *pMnode) {}

@ -243,7 +250,7 @@ bool mndIsMaster(SMnode *pMnode) {
    return false;
  }

  if (!pMgmt->restored) {
  if (!pMnode->restored) {
    terrno = TSDB_CODE_APP_NOT_READY;
    return false;
  }
@ -166,7 +166,6 @@ typedef struct SSdbRow {
typedef struct SSdb {
  SMnode  *pMnode;
  char    *currDir;
  char    *syncDir;
  char    *tmpDir;
  int64_t  lastCommitVer;
  int64_t  curVer;

@ -182,11 +181,12 @@ typedef struct SSdb {
  SdbDeployFp   deployFps[SDB_MAX];
  SdbEncodeFp   encodeFps[SDB_MAX];
  SdbDecodeFp   decodeFps[SDB_MAX];
  TdThreadMutex filelock;
} SSdb;

typedef struct SSdbIter {
  TdFilePtr file;
  int64_t   readlen;
  int64_t   total;
} SSdbIter;

typedef struct {

@ -380,13 +380,12 @@ SSdbRow *sdbAllocRow(int32_t objSize);
void    *sdbGetRowObj(SSdbRow *pRow);
void     sdbFreeRow(SSdb *pSdb, SSdbRow *pRow, bool callFunc);

SSdbIter *sdbIterInit(SSdb *pSdb);
SSdbIter *sdbIterRead(SSdb *pSdb, SSdbIter *iter, char **ppBuf, int32_t *len);
int32_t sdbReadSnapshot(SSdb *pSdb, SSdbIter **ppIter, char **ppBuf, int32_t *len);
int32_t sdbApplySnapshot(SSdb *pSdb, char *pBuf, int32_t len);

const char *sdbTableName(ESdbType type);
void        sdbPrintOper(SSdb *pSdb, SSdbRow *pRow, const char *oper);

int32_t sdbGetIdFromRaw(SSdbRaw *pRaw);
int32_t sdbGetIdFromRaw(SSdb *pSdb, SSdbRaw *pRaw);

#ifdef __cplusplus
}
@ -1,61 +0,0 @@
/*
 * Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
 *
 * This program is free software: you can use, redistribute, and/or modify
 * it under the terms of the GNU Affero General Public License, version 3
 * or later ("AGPL"), as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef _TD_SDB_INT_H_
#define _TD_SDB_INT_H_

#include "os.h"

#include "sdb.h"

#ifdef __cplusplus
extern "C" {
#endif

// clang-format off
#define mFatal(...) { if (mDebugFlag & DEBUG_FATAL) { taosPrintLog("MND FATAL ", DEBUG_FATAL, 255, __VA_ARGS__); }}
#define mError(...) { if (mDebugFlag & DEBUG_ERROR) { taosPrintLog("MND ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }}
#define mWarn(...)  { if (mDebugFlag & DEBUG_WARN)  { taosPrintLog("MND WARN ", DEBUG_WARN, 255, __VA_ARGS__); }}
#define mInfo(...)  { if (mDebugFlag & DEBUG_INFO)  { taosPrintLog("MND ", DEBUG_INFO, 255, __VA_ARGS__); }}
#define mDebug(...) { if (mDebugFlag & DEBUG_DEBUG) { taosPrintLog("MND ", DEBUG_DEBUG, mDebugFlag, __VA_ARGS__); }}
#define mTrace(...) { if (mDebugFlag & DEBUG_TRACE) { taosPrintLog("MND ", DEBUG_TRACE, mDebugFlag, __VA_ARGS__); }}
// clang-format on

typedef struct SSdbRaw {
  int8_t  type;
  int8_t  status;
  int8_t  sver;
  int8_t  reserved;
  int32_t dataLen;
  char    pData[];
} SSdbRaw;

typedef struct SSdbRow {
  ESdbType   type;
  ESdbStatus status;
  int32_t    refCount;
  char       pObj[];
} SSdbRow;

const char *sdbTableName(ESdbType type);
void        sdbPrintOper(SSdb *pSdb, SSdbRow *pRow, const char *oper);

void sdbFreeRow(SSdb *pSdb, SSdbRow *pRow, bool callFunc);

#ifdef __cplusplus
}
#endif

#endif /*_TD_SDB_INT_H_*/
@ -56,6 +56,7 @@ SSdb *sdbInit(SSdbOpt *pOption) {
  pSdb->curTerm = -1;
  pSdb->lastCommitVer = -1;
  pSdb->pMnode = pOption->pMnode;
  taosThreadMutexInit(&pSdb->filelock, NULL);
  mDebug("sdb init successfully");
  return pSdb;
}

@ -69,10 +70,6 @@ void sdbCleanup(SSdb *pSdb) {
    taosMemoryFreeClear(pSdb->currDir);
  }

  if (pSdb->syncDir != NULL) {
    taosMemoryFreeClear(pSdb->syncDir);
  }

  if (pSdb->tmpDir != NULL) {
    taosMemoryFreeClear(pSdb->tmpDir);
  }

@ -104,6 +101,7 @@ void sdbCleanup(SSdb *pSdb) {
    mDebug("sdb table:%s is cleaned up", sdbTableName(i));
  }

  taosThreadMutexDestroy(&pSdb->filelock);
  taosMemoryFree(pSdb);
  mDebug("sdb is cleaned up");
}
@ -22,13 +22,14 @@
#define SDB_RESERVE_SIZE 512
#define SDB_FILE_VER     1

static int32_t sdbRunDeployFp(SSdb *pSdb) {
static int32_t sdbDeployData(SSdb *pSdb) {
  mDebug("start to deploy sdb");

  for (int32_t i = SDB_MAX - 1; i >= 0; --i) {
    SdbDeployFp fp = pSdb->deployFps[i];
    if (fp == NULL) continue;

    mDebug("start to deploy sdb:%s", sdbTableName(i));
    if ((*fp)(pSdb->pMnode) != 0) {
      mError("failed to deploy sdb:%s since %s", sdbTableName(i), terrstr());
      return -1;

@ -39,6 +40,39 @@ static int32_t sdbRunDeployFp(SSdb *pSdb) {
  return 0;
}

static void sdbResetData(SSdb *pSdb) {
  mDebug("start to reset sdb");

  for (ESdbType i = 0; i < SDB_MAX; ++i) {
    SHashObj *hash = pSdb->hashObjs[i];
    if (hash == NULL) continue;

    SSdbRow **ppRow = taosHashIterate(hash, NULL);
    while (ppRow != NULL) {
      SSdbRow *pRow = *ppRow;
      if (pRow == NULL) continue;

      sdbFreeRow(pSdb, pRow, true);
      ppRow = taosHashIterate(hash, ppRow);
    }
  }

  for (ESdbType i = 0; i < SDB_MAX; ++i) {
    SHashObj *hash = pSdb->hashObjs[i];
    if (hash == NULL) continue;

    taosHashClear(pSdb->hashObjs[i]);
    pSdb->tableVer[i] = 0;
    pSdb->maxId[i] = 0;
    mDebug("sdb:%s is reset", sdbTableName(i));
  }

  pSdb->curVer = -1;
  pSdb->curTerm = -1;
  pSdb->lastCommitVer = -1;
  mDebug("sdb reset successfully");
}

static int32_t sdbReadFileHead(SSdb *pSdb, TdFilePtr pFile) {
  int64_t sver = 0;
  int32_t ret = taosReadFile(pFile, &sver, sizeof(int64_t));
@ -169,11 +203,15 @@ static int32_t sdbWriteFileHead(SSdb *pSdb, TdFilePtr pFile) {
  return 0;
}

int32_t sdbReadFile(SSdb *pSdb) {
static int32_t sdbReadFileImp(SSdb *pSdb) {
  int64_t offset = 0;
  int32_t code = 0;
  int32_t readLen = 0;
  int64_t ret = 0;
  char    file[PATH_MAX] = {0};

  snprintf(file, sizeof(file), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
  mDebug("start to read file:%s", file);

  SSdbRaw *pRaw = taosMemoryMalloc(WAL_MAX_SIZE + 100);
  if (pRaw == NULL) {

@ -182,10 +220,6 @@ int32_t sdbReadFile(SSdb *pSdb) {
    return -1;
  }

  char file[PATH_MAX] = {0};
  snprintf(file, sizeof(file), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
  mDebug("start to read file:%s", file);

  TdFilePtr pFile = taosOpenFile(file, TD_FILE_READ);
  if (pFile == NULL) {
    taosMemoryFree(pRaw);

@ -196,8 +230,6 @@ int32_t sdbReadFile(SSdb *pSdb) {

  if (sdbReadFileHead(pSdb, pFile) != 0) {
    mError("failed to read file:%s head since %s", file, terrstr());
    pSdb->curVer = -1;
    pSdb->curTerm = -1;
    taosMemoryFree(pRaw);
    taosCloseFile(&pFile);
    return -1;

@ -264,6 +296,20 @@ _OVER:
  return code;
}

int32_t sdbReadFile(SSdb *pSdb) {
  taosThreadMutexLock(&pSdb->filelock);

  sdbResetData(pSdb);
  int32_t code = sdbReadFileImp(pSdb);
  if (code != 0) {
    mError("failed to read sdb since %s", terrstr());
    sdbResetData(pSdb);
  }

  taosThreadMutexUnlock(&pSdb->filelock);
  return code;
}

static int32_t sdbWriteFileImp(SSdb *pSdb) {
  int32_t code = 0;
@ -378,32 +424,41 @@ int32_t sdbWriteFile(SSdb *pSdb) {
    return 0;
  }

  return sdbWriteFileImp(pSdb);
  taosThreadMutexLock(&pSdb->filelock);
  int32_t code = sdbWriteFileImp(pSdb);
  if (code != 0) {
    mError("failed to write sdb since %s", terrstr());
  }
  taosThreadMutexUnlock(&pSdb->filelock);
  return code;
}

int32_t sdbDeploy(SSdb *pSdb) {
  if (sdbRunDeployFp(pSdb) != 0) {
  if (sdbDeployData(pSdb) != 0) {
    return -1;
  }

  if (sdbWriteFileImp(pSdb) != 0) {
  if (sdbWriteFile(pSdb) != 0) {
    return -1;
  }

  return 0;
}

SSdbIter *sdbIterInit(SSdb *pSdb) {
static SSdbIter *sdbOpenIter(SSdb *pSdb) {
  char datafile[PATH_MAX] = {0};
  char tmpfile[PATH_MAX] = {0};
  snprintf(datafile, sizeof(datafile), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
  snprintf(tmpfile, sizeof(datafile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
  snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);

  taosThreadMutexLock(&pSdb->filelock);
  if (taosCopyFile(datafile, tmpfile) != 0) {
    taosThreadMutexUnlock(&pSdb->filelock);
    terrno = TAOS_SYSTEM_ERROR(errno);
    mError("failed to copy file %s to %s since %s", datafile, tmpfile, terrstr());
    return NULL;
  }
  taosThreadMutexUnlock(&pSdb->filelock);

  SSdbIter *pIter = taosMemoryCalloc(1, sizeof(SSdbIter));
  if (pIter == NULL) {

@ -414,44 +469,144 @@ SSdbIter *sdbIterInit(SSdb *pSdb) {
  pIter->file = taosOpenFile(tmpfile, TD_FILE_READ);
  if (pIter->file == NULL) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    mError("failed to read snapshot file:%s since %s", tmpfile, terrstr());
    mError("failed to read file:%s since %s", tmpfile, terrstr());
    taosMemoryFree(pIter);
    return NULL;
  }

  mDebug("start to read snapshot file:%s, iter:%p", tmpfile, pIter);
  return pIter;
}

SSdbIter *sdbIterRead(SSdb *pSdb, SSdbIter *pIter, char **ppBuf, int32_t *buflen) {
  const int32_t maxlen = 100;
static void sdbCloseIter(SSdb *pSdb, SSdbIter *pIter) {
  if (pIter == NULL) return;
  if (pIter->file != NULL) {
    taosCloseFile(&pIter->file);
  }

  char *pBuf = taosMemoryCalloc(1, maxlen);
  char tmpfile[PATH_MAX] = {0};
  snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
  taosRemoveFile(tmpfile);

  taosMemoryFree(pIter);
  mInfo("sdbiter:%p, is closed", pIter);
}

static SSdbIter *sdbGetIter(SSdb *pSdb, SSdbIter **ppIter) {
  SSdbIter *pIter = NULL;
  if (ppIter != NULL) pIter = *ppIter;

  if (pIter == NULL) {
    pIter = sdbOpenIter(pSdb);
    if (pIter != NULL) {
      mInfo("sdbiter:%p, is created to read snapshot", pIter);
      *ppIter = pIter;
    } else {
      mError("failed to create sdbiter to read snapshot since %s", terrstr());
      *ppIter = NULL;
      return NULL;
    }
  } else {
    mInfo("sdbiter:%p, continue to read snapshot, total:%" PRId64, pIter, pIter->total);
  }

  return pIter;
}

int32_t sdbReadSnapshot(SSdb *pSdb, SSdbIter **ppIter, char **ppBuf, int32_t *len) {
  SSdbIter *pIter = sdbGetIter(pSdb, ppIter);
  if (pIter == NULL) return -1;

  int32_t maxlen = 100;
  char *pBuf = taosMemoryCalloc(1, maxlen);
  if (pBuf == NULL) {
    terrno = TSDB_CODE_OUT_OF_MEMORY;
    return NULL;
    sdbCloseIter(pSdb, pIter);
    return -1;
  }

  int32_t readlen = taosReadFile(pIter->file, pBuf, maxlen);
  if (readlen == 0) {
    mTrace("read snapshot to the end, readlen:%" PRId64, pIter->readlen);
    taosMemoryFree(pBuf);
    taosCloseFile(&pIter->file);
    taosMemoryFree(pIter);
    pIter = NULL;
  } else if (readlen < 0) {
  if (readlen < 0 || (readlen == 0 && errno != 0)) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    mError("failed to read snapshot since %s, readlen:%" PRId64, terrstr(), pIter->readlen);
    mError("sdbiter:%p, failed to read snapshot since %s, total:%" PRId64, pIter, terrstr(), pIter->total);
    *ppBuf = NULL;
    *len = 0;
    *ppIter = NULL;
    sdbCloseIter(pSdb, pIter);
    taosMemoryFree(pBuf);
    taosCloseFile(&pIter->file);
    taosMemoryFree(pIter);
    pIter = NULL;
  } else {
    pIter->readlen += readlen;
    mTrace("read snapshot, readlen:%" PRId64, pIter->readlen);
    return -1;
  } else if (readlen == 0) {
    mInfo("sdbiter:%p, read snapshot to the end, total:%" PRId64, pIter, pIter->total);
    *ppBuf = NULL;
    *len = 0;
    *ppIter = NULL;
    sdbCloseIter(pSdb, pIter);
    taosMemoryFree(pBuf);
    return 0;
  } else if ((readlen < maxlen && errno != 0) || readlen == maxlen) {
    pIter->total += readlen;
    mInfo("sdbiter:%p, read:%d bytes from snapshot, total:%" PRId64, pIter, readlen, pIter->total);
    *ppBuf = pBuf;
    *buflen = readlen;
    *len = readlen;
    return 0;
  } else if (readlen < maxlen && errno == 0) {
    mInfo("sdbiter:%p, read snapshot to the end, total:%" PRId64, pIter, pIter->total);
    *ppBuf = pBuf;
    *len = readlen;
    *ppIter = NULL;
    sdbCloseIter(pSdb, pIter);
    return 0;
  } else {
    // impossible
    mError("sdbiter:%p, read:%d bytes from snapshot, total:%" PRId64, pIter, readlen, pIter->total);
    *ppBuf = NULL;
    *len = 0;
    *ppIter = NULL;
    sdbCloseIter(pSdb, pIter);
    taosMemoryFree(pBuf);
    return -1;
  }
}

int32_t sdbApplySnapshot(SSdb *pSdb, char *pBuf, int32_t len) {
  char datafile[PATH_MAX] = {0};
  char tmpfile[PATH_MAX] = {0};
  snprintf(datafile, sizeof(datafile), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
  snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);

  TdFilePtr pFile = taosOpenFile(tmpfile, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_TRUNC);
  if (pFile == NULL) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    mError("failed to write %s since %s", tmpfile, terrstr());
    return -1;
  }

  return pIter;
  int32_t writelen = taosWriteFile(pFile, pBuf, len);
  if (writelen != len) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    mError("failed to write %s since %s", tmpfile, terrstr());
    taosCloseFile(&pFile);
    return -1;
  }

  if (taosFsyncFile(pFile) != 0) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    mError("failed to fsync %s since %s", tmpfile, terrstr());
    taosCloseFile(&pFile);
    return -1;
  }

  (void)taosCloseFile(&pFile);

  if (taosRenameFile(tmpfile, datafile) != 0) {
    terrno = TAOS_SYSTEM_ERROR(errno);
    mError("failed to rename file %s to %s since %s", tmpfile, datafile, terrstr());
    return -1;
  }

  if (sdbReadFile(pSdb) != 0) {
    mError("failed to read from %s since %s", datafile, terrstr());
    return -1;
  }

  return 0;
}
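Note on this hunk: `sdbApplySnapshot` follows the classic write-to-temp, fsync, rename sequence, so a crash mid-apply never leaves a half-written `sdb.data`. A generic sketch of the pattern using plain POSIX calls for illustration, rather than the `taos*` wrappers the commit uses:

```c
// Sketch only: crash-safe file replacement via write/fsync/rename.
#include <stdio.h>
#include <unistd.h>

static int replaceFileAtomically(const char *tmp, const char *dst,
                                 const void *buf, size_t len) {
  FILE *fp = fopen(tmp, "wb");
  if (fp == NULL) return -1;
  if (fwrite(buf, 1, len, fp) != len) { fclose(fp); return -1; }
  fflush(fp);
  fsync(fileno(fp));        // make the bytes durable before switching
  fclose(fp);
  return rename(tmp, dst);  // atomic: readers see old or new, never half
}
```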
@ -16,9 +16,14 @@
#define _DEFAULT_SOURCE
#include "sdb.h"

int32_t sdbGetIdFromRaw(SSdbRaw *pRaw) {
  int32_t id = *((int32_t *)(pRaw->pData));
  return id;
int32_t sdbGetIdFromRaw(SSdb *pSdb, SSdbRaw *pRaw) {
  EKeyType keytype = pSdb->keyTypes[pRaw->type];
  if (keytype == SDB_KEY_INT32) {
    int32_t id = *((int32_t *)(pRaw->pData));
    return id;
  } else {
    return -2;
  }
}

SSdbRaw *sdbAllocRaw(ESdbType type, int8_t sver, int32_t dataLen) {
@ -111,7 +111,7 @@ bool tsdbNextDataBlock(tsdbReaderT pTsdbReadHandle);
void    tsdbRetrieveDataBlockInfo(tsdbReaderT *pTsdbReadHandle, SDataBlockInfo *pBlockInfo);
int32_t tsdbRetrieveDataBlockStatisInfo(tsdbReaderT *pTsdbReadHandle, SColumnDataAgg ***pBlockStatis, bool *allHave);
SArray *tsdbRetrieveDataBlock(tsdbReaderT *pTsdbReadHandle, SArray *pColumnIdList);
void    tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond *pCond);
void    tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond *pCond, int32_t tWinIdx);
void    tsdbCleanupReadHandle(tsdbReaderT queryHandle);

// tq
@ -155,44 +155,52 @@ int metaTbCursorNext(SMTbCursor *pTbCur) {
}

SSchemaWrapper *metaGetTableSchema(SMeta *pMeta, tb_uid_t uid, int32_t sver, bool isinline) {
  void           *pKey = NULL;
  void           *pVal = NULL;
  int             kLen = 0;
  int             vLen = 0;
  int             ret;
  SSkmDbKey       skmDbKey;
  SSchemaWrapper *pSW = NULL;
  SSchema        *pSchema = NULL;
  void           *pBuf;
  SDecoder        coder = {0};
  void           *pData = NULL;
  int             nData = 0;
  int64_t         version;
  SSchemaWrapper  schema = {0};
  SSchemaWrapper *pSchema = NULL;
  SDecoder        dc = {0};

  // fetch
  skmDbKey.uid = uid;
  skmDbKey.sver = sver;
  pKey = &skmDbKey;
  kLen = sizeof(skmDbKey);
  metaRLock(pMeta);
  ret = tdbTbGet(pMeta->pSkmDb, pKey, kLen, &pVal, &vLen);
  metaULock(pMeta);
  if (ret < 0) {
    return NULL;
  if (sver < 0) {
    if (tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData) < 0) {
      goto _err;
    }

    version = *(int64_t *)pData;

    tdbTbGet(pMeta->pTbDb, &(STbDbKey){.uid = uid, .version = version}, sizeof(STbDbKey), &pData, &nData);

    SMetaEntry me = {0};
    tDecoderInit(&dc, pData, nData);
    metaDecodeEntry(&dc, &me);
    if (me.type == TSDB_SUPER_TABLE) {
      pSchema = tCloneSSchemaWrapper(&me.stbEntry.schemaRow);
    } else if (me.type == TSDB_NORMAL_TABLE) {
    } else {
      ASSERT(0);
    }
    tDecoderClear(&dc);
  } else {
    if (tdbTbGet(pMeta->pSkmDb, &(SSkmDbKey){.uid = uid, .sver = sver}, sizeof(SSkmDbKey), &pData, &nData) < 0) {
      goto _err;
    }

    tDecoderInit(&dc, pData, nData);
    tDecodeSSchemaWrapper(&dc, &schema);
    pSchema = tCloneSSchemaWrapper(&schema);
    tDecoderClear(&dc);
  }

  // decode
  pBuf = pVal;
  pSW = taosMemoryMalloc(sizeof(SSchemaWrapper));
  metaULock(pMeta);
  tdbFree(pData);
  return pSchema;

  tDecoderInit(&coder, pVal, vLen);
  tDecodeSSchemaWrapper(&coder, pSW);
  pSchema = taosMemoryMalloc(sizeof(SSchema) * pSW->nCols);
  memcpy(pSchema, pSW->pSchema, sizeof(SSchema) * pSW->nCols);
  tDecoderClear(&coder);

  pSW->pSchema = pSchema;

  tdbFree(pVal);

  return pSW;
_err:
  metaULock(pMeta);
  tdbFree(pData);
  return NULL;
}

struct SMCtbCursor {
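Note on this hunk: the rewrite gives `metaGetTableSchema` a "latest version" mode — a negative `sver` resolves the current schema through the uid index, while a concrete version still hits the schema db. A usage sketch (not from the commit; `pMeta`/`uid` are assumed to be in scope):

```c
// Sketch only: fetch the latest schema vs. a pinned schema version.
SSchemaWrapper *pLatest = metaGetTableSchema(pMeta, uid, -1, false);  // latest
SSchemaWrapper *pV3     = metaGetTableSchema(pMeta, uid, 3, false);   // version 3
// both return NULL on lookup failure, per the _err path above
```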
@ -18,7 +18,7 @@
static FORCE_INLINE int32_t tdUidStorePut(STbUidStore *pStore, tb_uid_t suid, tb_uid_t *uid);
static FORCE_INLINE int32_t tdUpdateTbUidListImpl(SSma *pSma, tb_uid_t *suid, SArray *tbUids);
static FORCE_INLINE int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputType, qTaskInfo_t *taskInfo,
                                              STSchema *pTSchema, tb_uid_t suid, tb_uid_t uid, int8_t level);
                                              STSchema *pTSchema, tb_uid_t suid, int8_t level);

struct SRSmaInfo {
  void *taskInfo[TSDB_RETENTION_L2];  // qTaskInfo_t

@ -364,7 +364,7 @@ static int32_t tdFetchSubmitReqSuids(SSubmitReq *pMsg, STbUidStore *pStore) {
}

static FORCE_INLINE int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputType, qTaskInfo_t *taskInfo,
                                              STSchema *pTSchema, tb_uid_t suid, tb_uid_t uid, int8_t level) {
                                              STSchema *pTSchema, tb_uid_t suid, int8_t level) {
  SArray *pResult = NULL;

  if (!taskInfo) {

@ -399,7 +399,7 @@ static FORCE_INLINE int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int3
    blockDebugShowData(pResult);
    STsdb      *sinkTsdb = (level == TSDB_RETENTION_L1 ? pSma->pRSmaTsdb1 : pSma->pRSmaTsdb2);
    SSubmitReq *pReq = NULL;
    if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, SMA_VID(pSma), uid, suid) != 0) {
    if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, SMA_VID(pSma), suid) != 0) {
      taosArrayDestroy(pResult);
      return TSDB_CODE_FAILED;
    }

@ -418,15 +418,13 @@ static FORCE_INLINE int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int3
  return TSDB_CODE_SUCCESS;
}

static int32_t tdExecuteRSma(SSma *pSma, const void *pMsg, int32_t inputType, tb_uid_t suid, tb_uid_t uid) {
static int32_t tdExecuteRSma(SSma *pSma, const void *pMsg, int32_t inputType, tb_uid_t suid) {
  SSmaEnv *pEnv = SMA_RSMA_ENV(pSma);
  if (!pEnv) {
    // only applicable when rsma env exists
    return TSDB_CODE_SUCCESS;
  }

  ASSERT(uid != 0);  // TODO: remove later

  SSmaStat  *pStat = SMA_ENV_STAT(pEnv);
  SRSmaInfo *pRSmaInfo = NULL;

@ -448,8 +446,8 @@ static int32_t tdExecuteRSma(SSma *pSma, const void *pMsg, int32_t inputType, tb
      terrno = TSDB_CODE_TDB_IVD_TB_SCHEMA_VERSION;
      return TSDB_CODE_FAILED;
    }
    tdExecuteRSmaImpl(pSma, pMsg, inputType, pRSmaInfo->taskInfo[0], pTSchema, suid, uid, TSDB_RETENTION_L1);
    tdExecuteRSmaImpl(pSma, pMsg, inputType, pRSmaInfo->taskInfo[1], pTSchema, suid, uid, TSDB_RETENTION_L2);
    tdExecuteRSmaImpl(pSma, pMsg, inputType, pRSmaInfo->taskInfo[0], pTSchema, suid, TSDB_RETENTION_L1);
    tdExecuteRSmaImpl(pSma, pMsg, inputType, pRSmaInfo->taskInfo[1], pTSchema, suid, TSDB_RETENTION_L2);
    taosMemoryFree(pTSchema);
  }

@ -468,12 +466,12 @@ int32_t tdProcessRSmaSubmit(SSma *pSma, void *pMsg, int32_t inputType) {
    tdFetchSubmitReqSuids(pMsg, &uidStore);

    if (uidStore.suid != 0) {
      tdExecuteRSma(pSma, pMsg, inputType, uidStore.suid, uidStore.uid);
      tdExecuteRSma(pSma, pMsg, inputType, uidStore.suid);

      void *pIter = taosHashIterate(uidStore.uidHash, NULL);
      while (pIter) {
        tb_uid_t *pTbSuid = (tb_uid_t *)taosHashGetKey(pIter, NULL);
        tdExecuteRSma(pSma, pMsg, inputType, *pTbSuid, 0);
        tdExecuteRSma(pSma, pMsg, inputType, *pTbSuid);
        pIter = taosHashIterate(uidStore.uidHash, pIter);
      }
@ -13,6 +13,7 @@
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#include "vnode.h"
#include "tsdb.h"

#define EXTRA_BYTES 2

@ -140,12 +141,6 @@ typedef struct STsdbReadHandle {
  STSchema* pSchema;
} STsdbReadHandle;

typedef struct STableGroupSupporter {
  int32_t    numOfCols;
  SColIndex* pCols;
  SSchema*   pTagSchema;
} STableGroupSupporter;

static STimeWindow updateLastrowForEachGroup(STableListInfo* pList);
static int32_t     checkForCachedLastRow(STsdbReadHandle* pTsdbReadHandle, STableListInfo* pList);
static int32_t     checkForCachedLast(STsdbReadHandle* pTsdbReadHandle);

@ -211,12 +206,6 @@ int64_t tsdbGetNumOfRowsInMemTable(tsdbReaderT* pHandle) {
    return rows;
  }

  // STableData* pMem = NULL;
  // STableData* pIMem = NULL;

  // SMemTable* pMemT = pMemRef->snapshot.mem;
  // SMemTable* pIMemT = pMemRef->snapshot.imem;

  size_t size = taosArrayGetSize(pTsdbReadHandle->pTableCheckInfo);
  for (int32_t i = 0; i < size; ++i) {
    STableCheckInfo* pCheckInfo = taosArrayGet(pTsdbReadHandle->pTableCheckInfo, i);
@ -317,28 +306,28 @@ static int64_t getEarliestValidTimestamp(STsdb* pTsdb) {
  return now - (tsTickPerMin[pCfg->precision] * pCfg->keep2) + 1;  // needs to add one tick
}

static void setQueryTimewindow(STsdbReadHandle* pTsdbReadHandle, SQueryTableDataCond* pCond) {
  pTsdbReadHandle->window = pCond->twindow;
static void setQueryTimewindow(STsdbReadHandle* pTsdbReadHandle, SQueryTableDataCond* pCond, int32_t tWinIdx) {
  pTsdbReadHandle->window = pCond->twindows[tWinIdx];

  bool    updateTs = false;
  int64_t startTs = getEarliestValidTimestamp(pTsdbReadHandle->pTsdb);
  if (ASCENDING_TRAVERSE(pTsdbReadHandle->order)) {
    if (startTs > pTsdbReadHandle->window.skey) {
      pTsdbReadHandle->window.skey = startTs;
      pCond->twindow.skey = startTs;
      pCond->twindows[tWinIdx].skey = startTs;
      updateTs = true;
    }
  } else {
    if (startTs > pTsdbReadHandle->window.ekey) {
      pTsdbReadHandle->window.ekey = startTs;
      pCond->twindow.ekey = startTs;
      pCond->twindows[tWinIdx].ekey = startTs;
      updateTs = true;
    }
  }

  if (updateTs) {
    tsdbDebug("%p update the query time window, old:%" PRId64 " - %" PRId64 ", new:%" PRId64 " - %" PRId64 ", %s",
              pTsdbReadHandle, pCond->twindow.skey, pCond->twindow.ekey, pTsdbReadHandle->window.skey,
              pTsdbReadHandle, pCond->twindows[tWinIdx].skey, pCond->twindows[tWinIdx].ekey, pTsdbReadHandle->window.skey,
              pTsdbReadHandle->window.ekey, pTsdbReadHandle->idStr);
  }
}
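Note on this hunk: together with the `twindows` array in `SQueryTableDataCond` and the extra `tWinIdx` parameter on `tsdbResetReadHandle`, a caller can now replay one reader across each window of the condition. A sketch of that loop (not from the commit; `processOneWindow` and the surrounding locals are hypothetical):

```c
// Sketch only: re-aim the same reader at each time window in turn.
for (int32_t i = 0; i < pCond->numOfTWindows; ++i) {
  tsdbResetReadHandle(pReadHandle, pCond, i);  // reposition to window i
  processOneWindow(pReadHandle);               // hypothetical per-window scan
}
```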
@@ -382,7 +371,7 @@ static STsdbReadHandle* tsdbQueryTablesImpl(SVnode* pVnode, SQueryTableDataCond*
    goto _end;
  }

  STsdb* pTsdb = getTsdbByRetentions(pVnode, pReadHandle, pCond->twindow.skey, pVnode->config.tsdbCfg.retentions);
  STsdb* pTsdb = getTsdbByRetentions(pVnode, pReadHandle, pCond->twindows[0].skey, pVnode->config.tsdbCfg.retentions);

  pReadHandle->order = pCond->order;
  pReadHandle->pTsdb = pTsdb;

@@ -408,7 +397,7 @@ static STsdbReadHandle* tsdbQueryTablesImpl(SVnode* pVnode, SQueryTableDataCond*
  }

  assert(pCond != NULL);
  setQueryTimewindow(pReadHandle, pCond);
  setQueryTimewindow(pReadHandle, pCond, 0);

  if (pCond->numOfCols > 0) {
    int32_t rowLen = 0;

@@ -447,10 +436,10 @@ static STsdbReadHandle* tsdbQueryTablesImpl(SVnode* pVnode, SQueryTableDataCond*
    }

    pReadHandle->suppInfo.defaultLoadColumn = getDefaultLoadColumns(pReadHandle, true);
    pReadHandle->suppInfo.slotIds =
        taosMemoryMalloc(sizeof(int32_t) * taosArrayGetSize(pReadHandle->suppInfo.defaultLoadColumn));
    pReadHandle->suppInfo.plist =
        taosMemoryCalloc(taosArrayGetSize(pReadHandle->suppInfo.defaultLoadColumn), POINTER_BYTES);

    size_t size = taosArrayGetSize(pReadHandle->suppInfo.defaultLoadColumn);
    pReadHandle->suppInfo.slotIds = taosMemoryCalloc(size, sizeof(int32_t));
    pReadHandle->suppInfo.plist = taosMemoryCalloc(size, POINTER_BYTES);
  }

  pReadHandle->pDataCols = tdNewDataCols(1000, pVnode->config.tsdbCfg.maxRows);
@@ -471,6 +460,39 @@ _end:
  return NULL;
}

static int32_t setCurrentSchema(SVnode* pVnode, STsdbReadHandle* pTsdbReadHandle) {
  STableCheckInfo* pCheckInfo = taosArrayGet(pTsdbReadHandle->pTableCheckInfo, 0);

  int32_t sversion = 1;

  SMetaReader mr = {0};
  metaReaderInit(&mr, pVnode->pMeta, 0);
  int32_t code = metaGetTableEntryByUid(&mr, pCheckInfo->tableId);
  if (code != TSDB_CODE_SUCCESS) {
    terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
    metaReaderClear(&mr);
    return terrno;
  }

  if (mr.me.type == TSDB_CHILD_TABLE) {
    tb_uid_t suid = mr.me.ctbEntry.suid;
    code = metaGetTableEntryByUid(&mr, suid);
    if (code != TSDB_CODE_SUCCESS) {
      terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
      metaReaderClear(&mr);
      return terrno;
    }
    sversion = mr.me.stbEntry.schemaRow.version;
  } else {
    ASSERT(mr.me.type == TSDB_NORMAL_TABLE);
    sversion = mr.me.ntbEntry.schemaRow.version;
  }

  metaReaderClear(&mr);
  pTsdbReadHandle->pSchema = metaGetTbTSchema(pVnode->pMeta, pCheckInfo->tableId, sversion);
  return TSDB_CODE_SUCCESS;
}

tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableListInfo* tableList, uint64_t qId,
                             uint64_t taskId) {
  STsdbReadHandle* pTsdbReadHandle = tsdbQueryTablesImpl(pVnode, pCond, qId, taskId);

@@ -490,9 +512,12 @@ tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableL
    return NULL;
  }

  STableCheckInfo* pCheckInfo = taosArrayGet(pTsdbReadHandle->pTableCheckInfo, 0);
  int32_t code = setCurrentSchema(pVnode, pTsdbReadHandle);
  if (code != TSDB_CODE_SUCCESS) {
    terrno = code;
    return NULL;
  }

  pTsdbReadHandle->pSchema = metaGetTbTSchema(pVnode->pMeta, pCheckInfo->tableId, 1);
  int32_t  numOfCols = taosArrayGetSize(pTsdbReadHandle->suppInfo.defaultLoadColumn);
  int16_t* ids = pTsdbReadHandle->suppInfo.defaultLoadColumn->pData;
@@ -520,7 +545,7 @@ tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableL
  return (tsdbReaderT)pTsdbReadHandle;
}

void tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond* pCond) {
void tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond* pCond, int32_t tWinIdx) {
  STsdbReadHandle* pTsdbReadHandle = queryHandle;

  if (emptyQueryTimewindow(pTsdbReadHandle)) {

@@ -533,7 +558,7 @@ void tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond* pCond) {
  }

  pTsdbReadHandle->order = pCond->order;
  pTsdbReadHandle->window = pCond->twindow;
  setQueryTimewindow(pTsdbReadHandle, pCond, tWinIdx);
  pTsdbReadHandle->type = TSDB_QUERY_TYPE_ALL;
  pTsdbReadHandle->cur.fid = -1;
  pTsdbReadHandle->cur.win = TSWINDOW_INITIALIZER;

@@ -558,11 +583,11 @@ void tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond* pCond) {
  resetCheckInfo(pTsdbReadHandle);
}

void tsdbResetQueryHandleForNewTable(tsdbReaderT queryHandle, SQueryTableDataCond* pCond, STableListInfo* tableList) {
void tsdbResetQueryHandleForNewTable(tsdbReaderT queryHandle, SQueryTableDataCond* pCond, STableListInfo* tableList, int32_t tWinIdx) {
  STsdbReadHandle* pTsdbReadHandle = queryHandle;

  pTsdbReadHandle->order = pCond->order;
  pTsdbReadHandle->window = pCond->twindow;
  pTsdbReadHandle->window = pCond->twindows[tWinIdx];
  pTsdbReadHandle->type = TSDB_QUERY_TYPE_ALL;
  pTsdbReadHandle->cur.fid = -1;
  pTsdbReadHandle->cur.win = TSWINDOW_INITIALIZER;

@@ -602,7 +627,7 @@ void tsdbResetQueryHandleForNewTable(tsdbReaderT queryHandle, SQueryTableDataCon

tsdbReaderT tsdbQueryLastRow(SVnode* pVnode, SQueryTableDataCond* pCond, STableListInfo* pList, uint64_t qId,
                             uint64_t taskId) {
  pCond->twindow = updateLastrowForEachGroup(pList);
  pCond->twindows[0] = updateLastrowForEachGroup(pList);

  // no qualified table
  if (taosArrayGetSize(pList->pTableList) == 0) {

@@ -620,7 +645,7 @@ tsdbReaderT tsdbQueryLastRow(SVnode* pVnode, SQueryTableDataCond* pCond, STableL
    return NULL;
  }

  assert(pCond->order == TSDB_ORDER_ASC && pCond->twindow.skey <= pCond->twindow.ekey);
  assert(pCond->order == TSDB_ORDER_ASC && pCond->twindows[0].skey <= pCond->twindows[0].ekey);
  if (pTsdbReadHandle->cachelastrow) {
    pTsdbReadHandle->type = TSDB_QUERY_TYPE_LAST;
  }
@@ -3471,7 +3496,6 @@ void tsdbRetrieveDataBlockInfo(tsdbReaderT* pTsdbReadHandle, SDataBlockInfo* pDa

  pDataBlockInfo->rows = cur->rows;
  pDataBlockInfo->window = cur->win;
  //  ASSERT(pDataBlockInfo->numOfCols >= (int32_t)(QH_GET_NUM_OF_COLS(pHandle));
}

/*

@@ -3537,9 +3561,9 @@ int32_t tsdbRetrieveDataBlockStatisInfo(tsdbReaderT* pTsdbReadHandle, SColumnDat
    if (IS_BSMA_ON(&(pHandle->pSchema->columns[slotIds[i]]))) {
      if (pHandle->suppInfo.pstatis[i].numOfNull == -1) {  // set the column data are all NULL
        pHandle->suppInfo.pstatis[i].numOfNull = pBlockInfo->compBlock->numOfRows;
      } else {
        pHandle->suppInfo.plist[i] = &pHandle->suppInfo.pstatis[i];
      }

      pHandle->suppInfo.plist[i] = &pHandle->suppInfo.pstatis[i];
    } else {
      *allHave = false;
    }
@@ -3588,108 +3612,6 @@ SArray* tsdbRetrieveDataBlock(tsdbReaderT* pTsdbReadHandle, SArray* pIdList) {
      }
    }
  }
#if 0
void filterPrepare(void* expr, void* param) {
  tExprNode* pExpr = (tExprNode*)expr;
  if (pExpr->_node.info != NULL) {
    return;
  }

  pExpr->_node.info = taosMemoryCalloc(1, sizeof(tQueryInfo));

  STSchema*   pTSSchema = (STSchema*) param;
  tQueryInfo* pInfo = pExpr->_node.info;
  tVariant*   pCond = pExpr->_node.pRight->pVal;
  SSchema*    pSchema = pExpr->_node.pLeft->pSchema;

  pInfo->sch = *pSchema;
  pInfo->optr = pExpr->_node.optr;
  pInfo->compare = getComparFunc(pInfo->sch.type, pInfo->optr);
  pInfo->indexed = pTSSchema->columns->colId == pInfo->sch.colId;

  if (pInfo->optr == TSDB_RELATION_IN) {
    int       dummy = -1;
    SHashObj* pObj = NULL;
    if (pInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
      pObj = taosHashInit(256, taosGetDefaultHashFunction(pInfo->sch.type), true, false);
      SArray* arr = (SArray*)(pCond->arr);
      for (size_t i = 0; i < taosArrayGetSize(arr); i++) {
        char* p = taosArrayGetP(arr, i);
        strntolower_s(varDataVal(p), varDataVal(p), varDataLen(p));
        taosHashPut(pObj, varDataVal(p), varDataLen(p), &dummy, sizeof(dummy));
      }
    } else {
      buildFilterSetFromBinary((void**)&pObj, pCond->pz, pCond->nLen);
    }
    pInfo->q = (char*)pObj;
  } else if (pCond != NULL) {
    uint32_t size = pCond->nLen * TSDB_NCHAR_SIZE;
    if (size < (uint32_t)pSchema->bytes) {
      size = pSchema->bytes;
    }
    // to make sure tonchar does not cause invalid write, since the '\0' needs at least sizeof(TdUcs4) space.
    pInfo->q = taosMemoryCalloc(1, size + TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE);
    tVariantDump(pCond, pInfo->q, pSchema->type, true);
  }
}

#endif

static int32_t tableGroupComparFn(const void* p1, const void* p2, const void* param) {
#if 0
  STableGroupSupporter* pTableGroupSupp = (STableGroupSupporter*) param;
  STable* pTable1 = ((STableKeyInfo*) p1)->uid;
  STable* pTable2 = ((STableKeyInfo*) p2)->uid;

  for (int32_t i = 0; i < pTableGroupSupp->numOfCols; ++i) {
    SColIndex* pColIndex = &pTableGroupSupp->pCols[i];
    int32_t colIndex = pColIndex->colIndex;

    assert(colIndex >= TSDB_TBNAME_COLUMN_INDEX);

    char *  f1 = NULL;
    char *  f2 = NULL;
    int32_t type = 0;
    int32_t bytes = 0;

    if (colIndex == TSDB_TBNAME_COLUMN_INDEX) {
      f1 = (char*) TABLE_NAME(pTable1);
      f2 = (char*) TABLE_NAME(pTable2);
      type = TSDB_DATA_TYPE_BINARY;
      bytes = tGetTbnameColumnSchema()->bytes;
    } else {
      if (pTableGroupSupp->pTagSchema && colIndex < pTableGroupSupp->pTagSchema->numOfCols) {
        STColumn* pCol = schemaColAt(pTableGroupSupp->pTagSchema, colIndex);
        bytes = pCol->bytes;
        type = pCol->type;
        f1 = tdGetKVRowValOfCol(pTable1->tagVal, pCol->colId);
        f2 = tdGetKVRowValOfCol(pTable2->tagVal, pCol->colId);
      }
    }

    // this tags value may be NULL
    if (f1 == NULL && f2 == NULL) {
      continue;
    }

    if (f1 == NULL) {
      return -1;
    }

    if (f2 == NULL) {
      return 1;
    }

    int32_t ret = doCompare(f1, f2, type, bytes);
    if (ret == 0) {
      continue;
    } else {
      return ret;
    }
  }
#endif
  return 0;
}

static int tsdbCheckInfoCompar(const void* key1, const void* key2) {
  if (((STableCheckInfo*)key1)->tableId < ((STableCheckInfo*)key2)->tableId) {
@@ -3702,170 +3624,6 @@ static int tsdbCheckInfoCompar(const void* key1, const void* key2) {
  }
}

void createTableGroupImpl(SArray* pGroups, SArray* pTableList, size_t numOfTables, TSKEY skey,
                          STableGroupSupporter* pSupp, __ext_compar_fn_t compareFn) {
  STable* pTable = taosArrayGetP(pTableList, 0);
  SArray* g = taosArrayInit(16, sizeof(STableKeyInfo));

  STableKeyInfo info = {.lastKey = skey};
  taosArrayPush(g, &info);

  for (int32_t i = 1; i < numOfTables; ++i) {
    STable** prev = taosArrayGet(pTableList, i - 1);
    STable** p = taosArrayGet(pTableList, i);

    int32_t ret = compareFn(prev, p, pSupp);
    assert(ret == 0 || ret == -1);

    if (ret == 0) {
      STableKeyInfo info1 = {.lastKey = skey};
      taosArrayPush(g, &info1);
    } else {
      taosArrayPush(pGroups, &g);  // current group is ended, start a new group
      g = taosArrayInit(16, sizeof(STableKeyInfo));

      STableKeyInfo info1 = {.lastKey = skey};
      taosArrayPush(g, &info1);
    }
  }

  taosArrayPush(pGroups, &g);
}

SArray* createTableGroup(SArray* pTableList, SSchemaWrapper* pTagSchema, SColIndex* pCols, int32_t numOfOrderCols,
                         TSKEY skey) {
  assert(pTableList != NULL);
  SArray* pTableGroup = taosArrayInit(1, POINTER_BYTES);

  size_t size = taosArrayGetSize(pTableList);
  if (size == 0) {
    tsdbDebug("no qualified tables");
    return pTableGroup;
  }

  if (numOfOrderCols == 0 || size == 1) {  // no group by tags clause or only one table
    SArray* sa = taosArrayDup(pTableList);
    if (sa == NULL) {
      taosArrayDestroy(pTableGroup);
      return NULL;
    }

    taosArrayPush(pTableGroup, &sa);
    tsdbDebug("all %" PRIzu " tables belong to one group", size);
  } else {
    STableGroupSupporter sup = {0};
    sup.numOfCols = numOfOrderCols;
    sup.pTagSchema = pTagSchema->pSchema;
    sup.pCols = pCols;

    taosqsort(pTableList->pData, size, sizeof(STableKeyInfo), &sup, tableGroupComparFn);
    createTableGroupImpl(pTableGroup, pTableList, size, skey, &sup, tableGroupComparFn);
  }

  return pTableGroup;
}

// static bool tableFilterFp(const void* pNode, void* param) {
//   tQueryInfo* pInfo = (tQueryInfo*) param;
//
//   STable* pTable = (STable*)(SL_GET_NODE_DATA((SSkipListNode*)pNode));
//
//   char* val = NULL;
//   if (pInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
//     val = (char*) TABLE_NAME(pTable);
//   } else {
//     val = tdGetKVRowValOfCol(pTable->tagVal, pInfo->sch.colId);
//   }
//
//   if (pInfo->optr == TSDB_RELATION_ISNULL || pInfo->optr == TSDB_RELATION_NOTNULL) {
//     if (pInfo->optr == TSDB_RELATION_ISNULL) {
//       return (val == NULL) || isNull(val, pInfo->sch.type);
//     } else if (pInfo->optr == TSDB_RELATION_NOTNULL) {
//       return (val != NULL) && (!isNull(val, pInfo->sch.type));
//     }
//   } else if (pInfo->optr == TSDB_RELATION_IN) {
//     int type = pInfo->sch.type;
//     if (type == TSDB_DATA_TYPE_BOOL || IS_SIGNED_NUMERIC_TYPE(type) || type == TSDB_DATA_TYPE_TIMESTAMP) {
//       int64_t v;
//       GET_TYPED_DATA(v, int64_t, pInfo->sch.type, val);
//       return NULL != taosHashGet((SHashObj *)pInfo->q, (char *)&v, sizeof(v));
//     } else if (IS_UNSIGNED_NUMERIC_TYPE(type)) {
//       uint64_t v;
//       GET_TYPED_DATA(v, uint64_t, pInfo->sch.type, val);
//       return NULL != taosHashGet((SHashObj *)pInfo->q, (char *)&v, sizeof(v));
//     }
//     else if (type == TSDB_DATA_TYPE_DOUBLE || type == TSDB_DATA_TYPE_FLOAT) {
//       double v;
//       GET_TYPED_DATA(v, double, pInfo->sch.type, val);
//       return NULL != taosHashGet((SHashObj *)pInfo->q, (char *)&v, sizeof(v));
//     } else if (type == TSDB_DATA_TYPE_BINARY || type == TSDB_DATA_TYPE_NCHAR){
//       return NULL != taosHashGet((SHashObj *)pInfo->q, varDataVal(val), varDataLen(val));
//     }
//
//   }
//
//   int32_t ret = 0;
//   if (val == NULL) { //the val is possible to be null, so check it out carefully
//     ret = -1;  // val is missing in table tags value pairs
//   } else {
//     ret = pInfo->compare(val, pInfo->q);
//   }
//
//   switch (pInfo->optr) {
//     case TSDB_RELATION_EQUAL: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_NOT_EQUAL: {
//       return ret != 0;
//     }
//     case TSDB_RELATION_GREATER_EQUAL: {
//       return ret >= 0;
//     }
//     case TSDB_RELATION_GREATER: {
//       return ret > 0;
//     }
//     case TSDB_RELATION_LESS_EQUAL: {
//       return ret <= 0;
//     }
//     case TSDB_RELATION_LESS: {
//       return ret < 0;
//     }
//     case TSDB_RELATION_LIKE: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_MATCH: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_NMATCH: {
//       return ret == 0;
//     }
//     case TSDB_RELATION_IN: {
//       return ret == 1;
//     }
//
//     default:
//       assert(false);
//   }
//
//   return true;
//}

// static void getTableListfromSkipList(tExprNode *pExpr, SSkipList *pSkipList, SArray *result, SExprTraverseSupp
// *param);

// static int32_t doQueryTableList(STable* pSTable, SArray* pRes, tExprNode* pExpr) {
//   //  // query according to the expression tree
//   SExprTraverseSupp supp = {
//       .nodeFilterFn = (__result_filter_fn_t)tableFilterFp,
//       .setupInfoFn = filterPrepare,
//       .pExtInfo = pSTable->tagSchema,
//   };
//
//   getTableListfromSkipList(pExpr, pSTable->pIndex, pRes, &supp);
//   tExprTreeDestroy(pExpr, destroyHelper);
//   return TSDB_CODE_SUCCESS;
//}

static void* doFreeColumnInfoData(SArray* pColumnInfoData) {
  if (pColumnInfoData == NULL) {
    return NULL;
@@ -3934,263 +3692,3 @@ void tsdbCleanupReadHandle(tsdbReaderT queryHandle) {

  taosMemoryFreeClear(pTsdbReadHandle);
}

#if 0

static void applyFilterToSkipListNode(SSkipList *pSkipList, tExprNode *pExpr, SArray *pResult, SExprTraverseSupp *param) {
  SSkipListIterator* iter = tSkipListCreateIter(pSkipList);

  // Scan each node in the skiplist by using iterator
  while (tSkipListIterNext(iter)) {
    SSkipListNode *pNode = tSkipListIterGet(iter);
    if (exprTreeApplyFilter(pExpr, pNode, param)) {
      taosArrayPush(pResult, &(SL_GET_NODE_DATA(pNode)));
    }
  }

  tSkipListDestroyIter(iter);
}

typedef struct {
  char*   v;
  int32_t optr;
} SEndPoint;

typedef struct {
  SEndPoint* start;
  SEndPoint* end;
} SQueryCond;

// todo check for malloc failure
static int32_t setQueryCond(tQueryInfo *queryColInfo, SQueryCond* pCond) {
  int32_t optr = queryColInfo->optr;

  if (optr == TSDB_RELATION_GREATER || optr == TSDB_RELATION_GREATER_EQUAL ||
      optr == TSDB_RELATION_EQUAL || optr == TSDB_RELATION_NOT_EQUAL) {
    pCond->start = taosMemoryCalloc(1, sizeof(SEndPoint));
    pCond->start->optr = queryColInfo->optr;
    pCond->start->v = queryColInfo->q;
  } else if (optr == TSDB_RELATION_LESS || optr == TSDB_RELATION_LESS_EQUAL) {
    pCond->end = taosMemoryCalloc(1, sizeof(SEndPoint));
    pCond->end->optr = queryColInfo->optr;
    pCond->end->v = queryColInfo->q;
  } else if (optr == TSDB_RELATION_IN) {
    pCond->start = taosMemoryCalloc(1, sizeof(SEndPoint));
    pCond->start->optr = queryColInfo->optr;
    pCond->start->v = queryColInfo->q;
  } else if (optr == TSDB_RELATION_LIKE) {
    assert(0);
  } else if (optr == TSDB_RELATION_MATCH) {
    assert(0);
  } else if (optr == TSDB_RELATION_NMATCH) {
    assert(0);
  }

  return TSDB_CODE_SUCCESS;
}

static void queryIndexedColumn(SSkipList* pSkipList, tQueryInfo* pQueryInfo, SArray* result) {
  SSkipListIterator* iter = NULL;

  SQueryCond cond = {0};
  if (setQueryCond(pQueryInfo, &cond) != TSDB_CODE_SUCCESS) {
    //todo handle error
  }

  if (cond.start != NULL) {
    iter = tSkipListCreateIterFromVal(pSkipList, (char*) cond.start->v, pSkipList->type, TSDB_ORDER_ASC);
  } else {
    iter = tSkipListCreateIterFromVal(pSkipList, (char*)(cond.end ? cond.end->v: NULL), pSkipList->type, TSDB_ORDER_DESC);
  }

  if (cond.start != NULL) {
    int32_t optr = cond.start->optr;

    if (optr == TSDB_RELATION_EQUAL) {   // equals
      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);

        int32_t ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v);
        if (ret != 0) {
          break;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }
    } else if (optr == TSDB_RELATION_GREATER || optr == TSDB_RELATION_GREATER_EQUAL) { // greater equal
      bool    comp = true;
      int32_t ret = 0;

      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);

        if (comp) {
          ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v);
          assert(ret >= 0);
        }

        if (ret == 0 && optr == TSDB_RELATION_GREATER) {
          continue;
        } else {
          STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
          taosArrayPush(result, &info);
          comp = false;
        }
      }
    } else if (optr == TSDB_RELATION_NOT_EQUAL) {   // not equal
      bool comp = true;

      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);
        comp = comp && (pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v) == 0);
        if (comp) {
          continue;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }

      tSkipListDestroyIter(iter);

      comp = true;
      iter = tSkipListCreateIterFromVal(pSkipList, (char*) cond.start->v, pSkipList->type, TSDB_ORDER_DESC);
      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);
        comp = comp && (pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v) == 0);
        if (comp) {
          continue;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }

    } else if (optr == TSDB_RELATION_IN) {
      while(tSkipListIterNext(iter)) {
        SSkipListNode* pNode = tSkipListIterGet(iter);

        int32_t ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.start->v);
        if (ret != 0) {
          break;
        }

        STableKeyInfo info = {.pTable = (void*)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
        taosArrayPush(result, &info);
      }

    } else {
      assert(0);
    }
  } else {
    int32_t optr = cond.end ? cond.end->optr : TSDB_RELATION_INVALID;
    if (optr == TSDB_RELATION_LESS || optr == TSDB_RELATION_LESS_EQUAL) {
      bool    comp = true;
      int32_t ret = 0;

      while (tSkipListIterNext(iter)) {
        SSkipListNode *pNode = tSkipListIterGet(iter);

        if (comp) {
          ret = pQueryInfo->compare(SL_GET_NODE_KEY(pSkipList, pNode), cond.end->v);
          assert(ret <= 0);
        }

        if (ret == 0 && optr == TSDB_RELATION_LESS) {
          continue;
        } else {
          STableKeyInfo info = {.pTable = (void *)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
          taosArrayPush(result, &info);
          comp = false;  // no need to compare anymore
        }
      }
    } else {
      assert(pQueryInfo->optr == TSDB_RELATION_ISNULL || pQueryInfo->optr == TSDB_RELATION_NOTNULL);

      while (tSkipListIterNext(iter)) {
        SSkipListNode *pNode = tSkipListIterGet(iter);

        bool isnull = isNull(SL_GET_NODE_KEY(pSkipList, pNode), pQueryInfo->sch.type);
        if ((pQueryInfo->optr == TSDB_RELATION_ISNULL && isnull) ||
            (pQueryInfo->optr == TSDB_RELATION_NOTNULL && (!isnull))) {
          STableKeyInfo info = {.pTable = (void *)SL_GET_NODE_DATA(pNode), .lastKey = TSKEY_INITIAL_VAL};
          taosArrayPush(result, &info);
        }
      }
    }
  }

  taosMemoryFree(cond.start);
  taosMemoryFree(cond.end);
  tSkipListDestroyIter(iter);
}

static void queryIndexlessColumn(SSkipList* pSkipList, tQueryInfo* pQueryInfo, SArray* res, __result_filter_fn_t filterFp) {
  SSkipListIterator* iter = tSkipListCreateIter(pSkipList);

  while (tSkipListIterNext(iter)) {
    bool addToResult = false;

    SSkipListNode *pNode = tSkipListIterGet(iter);

    char *pData = SL_GET_NODE_DATA(pNode);
    tstr *name = (tstr*) tsdbGetTableName((void*) pData);

    // todo speed up by using hash
    if (pQueryInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
      if (pQueryInfo->optr == TSDB_RELATION_IN) {
        addToResult = pQueryInfo->compare(name, pQueryInfo->q);
      } else if (pQueryInfo->optr == TSDB_RELATION_LIKE ||
                 pQueryInfo->optr == TSDB_RELATION_MATCH ||
                 pQueryInfo->optr == TSDB_RELATION_NMATCH) {
        addToResult = !pQueryInfo->compare(name, pQueryInfo->q);
      }
    } else {
      addToResult = filterFp(pNode, pQueryInfo);
    }

    if (addToResult) {
      STableKeyInfo info = {.pTable = (void*)pData, .lastKey = TSKEY_INITIAL_VAL};
      taosArrayPush(res, &info);
    }
  }

  tSkipListDestroyIter(iter);
}

// Apply the filter expression to each node in the skiplist to acquire the qualified nodes in skip list
//void getTableListfromSkipList(tExprNode *pExpr, SSkipList *pSkipList, SArray *result, SExprTraverseSupp *param) {
//  if (pExpr == NULL) {
//    return;
//  }
//
//  tExprNode *pLeft = pExpr->_node.pLeft;
//  tExprNode *pRight = pExpr->_node.pRight;
//
//  // column project
//  if (pLeft->nodeType != TSQL_NODE_EXPR && pRight->nodeType != TSQL_NODE_EXPR) {
//    assert(pLeft->nodeType == TSQL_NODE_COL && (pRight->nodeType == TSQL_NODE_VALUE || pRight->nodeType == TSQL_NODE_DUMMY));
//
//    param->setupInfoFn(pExpr, param->pExtInfo);
//
//    tQueryInfo *pQueryInfo = pExpr->_node.info;
//    if (pQueryInfo->indexed && (pQueryInfo->optr != TSDB_RELATION_LIKE
//        && pQueryInfo->optr != TSDB_RELATION_MATCH && pQueryInfo->optr != TSDB_RELATION_NMATCH
//        && pQueryInfo->optr != TSDB_RELATION_IN)) {
//      queryIndexedColumn(pSkipList, pQueryInfo, result);
//    } else {
//      queryIndexlessColumn(pSkipList, pQueryInfo, result, param->nodeFilterFn);
//    }
//
//    return;
//  }
//
//  // The value of hasPK is always 0.
//  uint8_t weight = pLeft->_node.hasPK + pRight->_node.hasPK;
//  assert(weight == 0 && pSkipList != NULL && taosArrayGetSize(result) == 0);
//
//  //apply the hierarchical filter expression to every node in skiplist to find the qualified nodes
//  applyFilterToSkipListNode(pSkipList, pExpr, result, param);
//}
#endif
@@ -2040,7 +2040,7 @@ static FORCE_INLINE int32_t tsdbExecuteRSmaImpl(STsdb *pTsdb, const void *pMsg,
  blockDebugShowData(pResult);
  STsdb      *sinkTsdb = (level == TSDB_RETENTION_L1 ? pTsdb->pVnode->pRSma1 : pTsdb->pVnode->pRSma2);
  SSubmitReq *pReq = NULL;
  if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, TD_VID(pTsdb->pVnode), uid, suid) != 0) {
  if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, TD_VID(pTsdb->pVnode), suid) != 0) {
    taosArrayDestroy(pResult);
    return TSDB_CODE_FAILED;
  }

@@ -2083,7 +2083,7 @@ static int32_t tsdbExecuteRSma(STsdb *pTsdb, const void *pMsg, int32_t inputType
  }

  if (inputType == STREAM_DATA_TYPE_SUBMIT_BLOCK) {
    // TODO: use the proper schema instead of 0, and cache STSchema in cache
    // TODO: use the proper schema instead of 1, and cache STSchema in cache
    STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, suid, 1);
    if (!pTSchema) {
      terrno = TSDB_CODE_TDB_IVD_TB_SCHEMA_VERSION;
@@ -49,19 +49,21 @@ enum {
};

enum {
  CTG_ACT_UPDATE_VG = 0,
  CTG_ACT_UPDATE_TBL,
  CTG_ACT_REMOVE_DB,
  CTG_ACT_REMOVE_STB,
  CTG_ACT_REMOVE_TBL,
  CTG_ACT_UPDATE_USER,
  CTG_ACT_MAX
  CTG_OP_UPDATE_VGROUP = 0,
  CTG_OP_UPDATE_TB_META,
  CTG_OP_DROP_DB_CACHE,
  CTG_OP_DROP_STB_META,
  CTG_OP_DROP_TB_META,
  CTG_OP_UPDATE_USER,
  CTG_OP_UPDATE_VG_EPSET,
  CTG_OP_MAX
};

typedef enum {
  CTG_TASK_GET_QNODE = 0,
  CTG_TASK_GET_DB_VGROUP,
  CTG_TASK_GET_DB_CFG,
  CTG_TASK_GET_DB_INFO,
  CTG_TASK_GET_TB_META,
  CTG_TASK_GET_TB_HASH,
  CTG_TASK_GET_INDEX,

@@ -98,6 +100,10 @@ typedef struct SCtgDbCfgCtx {
  char dbFName[TSDB_DB_FNAME_LEN];
} SCtgDbCfgCtx;

typedef struct SCtgDbInfoCtx {
  char dbFName[TSDB_DB_FNAME_LEN];
} SCtgDbInfoCtx;

typedef struct SCtgTbHashCtx {
  char dbFName[TSDB_DB_FNAME_LEN];
  SName* pName;

@@ -182,6 +188,7 @@ typedef struct SCtgJob {
  int32_t dbCfgNum;
  int32_t indexNum;
  int32_t userNum;
  int32_t dbInfoNum;
} SCtgJob;

typedef struct SCtgMsgCtx {

@@ -285,16 +292,22 @@ typedef struct SCtgUpdateUserMsg {
  SGetUserAuthRsp userAuth;
} SCtgUpdateUserMsg;

typedef struct SCtgUpdateEpsetMsg {
  SCatalog* pCtg;
  char      dbFName[TSDB_DB_FNAME_LEN];
  int32_t   vgId;
  SEpSet    epSet;
} SCtgUpdateEpsetMsg;

typedef struct SCtgMetaAction {
  int32_t  act;
typedef struct SCtgCacheOperation {
  int32_t  opId;
  void    *data;
  bool     syncReq;
  uint64_t seqId;
} SCtgMetaAction;
} SCtgCacheOperation;

typedef struct SCtgQNode {
  SCtgMetaAction     action;
  SCtgCacheOperation op;
  struct SCtgQNode  *next;
} SCtgQNode;

@@ -321,13 +334,13 @@ typedef struct SCatalogMgmt {
} SCatalogMgmt;

typedef uint32_t (*tableNameHashFp)(const char *, uint32_t);
typedef int32_t (*ctgActFunc)(SCtgMetaAction *);
typedef int32_t (*ctgOpFunc)(SCtgCacheOperation *);

typedef struct SCtgAction {
  int32_t    actId;
typedef struct SCtgOperation {
  int32_t    opId;
  char       name[32];
  ctgActFunc func;
} SCtgAction;
  ctgOpFunc  func;
} SCtgOperation;

#define CTG_QUEUE_ADD() atomic_add_fetch_64(&gCtgMgmt.queue.qRemainNum, 1)
#define CTG_QUEUE_SUB() atomic_sub_fetch_64(&gCtgMgmt.queue.qRemainNum, 1)

@@ -435,12 +448,13 @@ int32_t ctgdShowCacheInfo(void);
int32_t ctgRemoveTbMetaFromCache(SCatalog* pCtg, SName* pTableName, bool syncReq);
int32_t ctgGetTbMetaFromCache(CTG_PARAMS, SCtgTbMetaCtx* ctx, STableMeta** pTableMeta);

int32_t ctgActUpdateVg(SCtgMetaAction *action);
int32_t ctgActUpdateTb(SCtgMetaAction *action);
int32_t ctgActRemoveDB(SCtgMetaAction *action);
int32_t ctgActRemoveStb(SCtgMetaAction *action);
int32_t ctgActRemoveTb(SCtgMetaAction *action);
int32_t ctgActUpdateUser(SCtgMetaAction *action);
int32_t ctgOpUpdateVgroup(SCtgCacheOperation *action);
int32_t ctgOpUpdateTbMeta(SCtgCacheOperation *action);
int32_t ctgOpDropDbCache(SCtgCacheOperation *action);
int32_t ctgOpDropStbMeta(SCtgCacheOperation *action);
int32_t ctgOpDropTbMeta(SCtgCacheOperation *action);
int32_t ctgOpUpdateUser(SCtgCacheOperation *action);
int32_t ctgOpUpdateEpset(SCtgCacheOperation *operation);
int32_t ctgAcquireVgInfoFromCache(SCatalog* pCtg, const char *dbFName, SCtgDBCache **pCache);
void ctgReleaseDBCache(SCatalog *pCtg, SCtgDBCache *dbCache);
void ctgReleaseVgInfo(SCtgDBCache *dbCache);

@@ -449,12 +463,13 @@ int32_t ctgTbMetaExistInCache(SCatalog* pCtg, char *dbFName, char* tbName, int32
int32_t ctgReadTbMetaFromCache(SCatalog* pCtg, SCtgTbMetaCtx* ctx, STableMeta** pTableMeta);
int32_t ctgReadTbVerFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *sver, int32_t *tver, int32_t *tbType, uint64_t *suid, char *stbName);
int32_t ctgChkAuthFromCache(SCatalog* pCtg, const char* user, const char* dbFName, AUTH_TYPE type, bool *inCache, bool *pass);
int32_t ctgPutRmDBToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId);
int32_t ctgPutRmStbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq);
int32_t ctgPutRmTbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq);
int32_t ctgPutUpdateVgToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq);
int32_t ctgPutUpdateTbToQueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq);
int32_t ctgPutUpdateUserToQueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq);
int32_t ctgDropDbCacheEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId);
int32_t ctgDropStbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq);
int32_t ctgDropTbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq);
int32_t ctgUpdateVgroupEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq);
int32_t ctgUpdateTbMetaEnqueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq);
int32_t ctgUpdateUserEnqueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq);
int32_t ctgUpdateVgEpsetEnqueue(SCatalog* pCtg, char *dbFName, int32_t vgId, SEpSet* pEpSet);
int32_t ctgMetaRentInit(SCtgRentMgmt *mgmt, uint32_t rentSec, int8_t type);
int32_t ctgMetaRentAdd(SCtgRentMgmt *mgmt, void *meta, int64_t id, int32_t size);
int32_t ctgMetaRentGet(SCtgRentMgmt *mgmt, void **res, uint32_t *num, int32_t size);
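Note on the header changes above: the `SCtgOperation` table pairs each `CTG_OP_*` id with a display name and a handler so the queue worker can dispatch by index. A self-contained sketch of that pattern under simplified, assumed types (not the actual catalog structs):

```c
#include <stdint.h>
#include <stdio.h>

typedef struct Op { int32_t opId; void *data; } Op;  // stand-in for SCtgCacheOperation
typedef int32_t (*OpFunc)(Op *);

static int32_t opUpdateVgroup(Op *op) { (void)op; printf("update vgInfo\n"); return 0; }
static int32_t opDropDbCache(Op *op)  { (void)op; printf("drop DB\n");       return 0; }

// Dispatch table: opId doubles as the array index, so entry order must match
// the enum order, exactly as gCtgCacheOperation must track CTG_OP_*.
static struct { int32_t opId; const char *name; OpFunc func; } gOps[] = {
    {0, "update vgInfo", opUpdateVgroup},
    {1, "drop DB",       opDropDbCache},
};

static int32_t dispatch(Op *op) { return gOps[op->opId].func(op); }
```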
@@ -41,9 +41,9 @@ int32_t ctgRemoveTbMetaFromCache(SCatalog* pCtg, SName* pTableName, bool syncReq
  tNameGetFullDbName(pTableName, dbFName);

  if (TSDB_SUPER_TABLE == tblMeta->tableType) {
    CTG_ERR_JRET(ctgPutRmStbToQueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, tblMeta->suid, syncReq));
    CTG_ERR_JRET(ctgDropStbMetaEnqueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, tblMeta->suid, syncReq));
  } else {
    CTG_ERR_JRET(ctgPutRmTbToQueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, syncReq));
    CTG_ERR_JRET(ctgDropTbMetaEnqueue(pCtg, dbFName, tbCtx.tbInfo.dbId, pTableName->tname, syncReq));
  }

_return:

@@ -72,7 +72,7 @@ int32_t ctgGetDBVgInfo(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, con

  CTG_ERR_JRET(ctgCloneVgInfo(DbOut.dbVgroup, pInfo));

  CTG_ERR_RET(ctgPutUpdateVgToQueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, false));
  CTG_ERR_RET(ctgUpdateVgroupEnqueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, false));

  return TSDB_CODE_SUCCESS;

@@ -108,13 +108,13 @@ int32_t ctgRefreshDBVgInfo(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps,
  if (code) {
    if (CTG_DB_NOT_EXIST(code) && (NULL != dbCache)) {
      ctgDebug("db no longer exist, dbFName:%s, dbId:%" PRIx64, input.db, input.dbId);
      ctgPutRmDBToQueue(pCtg, input.db, input.dbId);
      ctgDropDbCacheEnqueue(pCtg, input.db, input.dbId);
    }

    CTG_ERR_RET(code);
  }

  CTG_ERR_RET(ctgPutUpdateVgToQueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, true));
  CTG_ERR_RET(ctgUpdateVgroupEnqueue(pCtg, dbFName, DbOut.dbId, DbOut.dbVgroup, true));

  return TSDB_CODE_SUCCESS;
}

@@ -201,7 +201,7 @@ int32_t ctgRefreshTbMeta(CTG_PARAMS, SCtgTbMetaCtx* ctx, STableMetaOutput **pOut
    CTG_ERR_JRET(ctgCloneMetaOutput(output, pOutput));
  }

  CTG_ERR_JRET(ctgPutUpdateTbToQueue(pCtg, output, syncReq));
  CTG_ERR_JRET(ctgUpdateTbMetaEnqueue(pCtg, output, syncReq));

  return TSDB_CODE_SUCCESS;

@@ -298,9 +298,9 @@ _return:
  }

  if (TSDB_SUPER_TABLE == ctx->tbInfo.tbType) {
    ctgPutRmStbToQueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, ctx->tbInfo.suid, false);
    ctgDropStbMetaEnqueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, ctx->tbInfo.suid, false);
  } else {
    ctgPutRmTbToQueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, false);
    ctgDropTbMetaEnqueue(pCtg, dbFName, ctx->tbInfo.dbId, ctx->pName->tname, false);
  }
}

@@ -348,7 +348,7 @@ int32_t ctgChkAuth(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, const c

_return:

  ctgPutUpdateUserToQueue(pCtg, &authRsp, false);
  ctgUpdateUserEnqueue(pCtg, &authRsp, false);

  return TSDB_CODE_SUCCESS;
}

@@ -670,7 +670,7 @@ int32_t catalogUpdateDBVgInfo(SCatalog* pCtg, const char* dbFName, uint64_t dbId
    CTG_ERR_JRET(TSDB_CODE_CTG_INVALID_INPUT);
  }

  code = ctgPutUpdateVgToQueue(pCtg, dbFName, dbId, dbInfo, false);
  code = ctgUpdateVgroupEnqueue(pCtg, dbFName, dbId, dbInfo, false);

_return:

@@ -691,7 +691,7 @@ int32_t catalogRemoveDB(SCatalog* pCtg, const char* dbFName, uint64_t dbId) {
    CTG_API_LEAVE(TSDB_CODE_SUCCESS);
  }

  CTG_ERR_JRET(ctgPutRmDBToQueue(pCtg, dbFName, dbId));
  CTG_ERR_JRET(ctgDropDbCacheEnqueue(pCtg, dbFName, dbId));

  CTG_API_LEAVE(TSDB_CODE_SUCCESS);

@@ -701,7 +701,19 @@ _return:
}

int32_t catalogUpdateVgEpSet(SCatalog* pCtg, const char* dbFName, int32_t vgId, SEpSet *epSet) {
  return 0;
  CTG_API_ENTER();

  int32_t code = 0;

  if (NULL == pCtg || NULL == dbFName || NULL == epSet) {
    CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
  }

  CTG_ERR_JRET(ctgUpdateVgEpsetEnqueue(pCtg, (char*)dbFName, vgId, epSet));

_return:

  CTG_API_LEAVE(code);
}

int32_t catalogRemoveTableMeta(SCatalog* pCtg, SName* pTableName) {

@@ -738,7 +750,7 @@ int32_t catalogRemoveStbMeta(SCatalog* pCtg, const char* dbFName, uint64_t dbId,
    CTG_API_LEAVE(TSDB_CODE_SUCCESS);
  }

  CTG_ERR_JRET(ctgPutRmStbToQueue(pCtg, dbFName, dbId, stbName, suid, true));
  CTG_ERR_JRET(ctgDropStbMetaEnqueue(pCtg, dbFName, dbId, stbName, suid, true));

  CTG_API_LEAVE(TSDB_CODE_SUCCESS);

@@ -791,7 +803,7 @@ int32_t catalogUpdateSTableMeta(SCatalog* pCtg, STableMetaRsp *rspMsg) {

  CTG_ERR_JRET(queryCreateTableMetaFromMsg(rspMsg, true, &output->tbMeta));

  CTG_ERR_JRET(ctgPutUpdateTbToQueue(pCtg, output, false));
  CTG_ERR_JRET(ctgUpdateTbMetaEnqueue(pCtg, output, false));

  CTG_API_LEAVE(code);

@@ -1152,7 +1164,7 @@ int32_t catalogUpdateUserAuthInfo(SCatalog* pCtg, SGetUserAuthRsp* pAuth) {
    CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
  }

  CTG_API_LEAVE(ctgPutUpdateUserToQueue(pCtg, pAuth, false));
  CTG_API_LEAVE(ctgUpdateUserEnqueue(pCtg, pAuth, false));
}

@@ -1194,7 +1206,7 @@ void catalogDestroy(void) {
  taosHashCleanup(gCtgMgmt.pCluster);
  gCtgMgmt.pCluster = NULL;

  CTG_UNLOCK(CTG_WRITE, &gCtgMgmt.lock);
  if (CTG_IS_LOCKED(&gCtgMgmt.lock) == TD_RWLATCH_WRITE_FLAG_COPY) CTG_UNLOCK(CTG_WRITE, &gCtgMgmt.lock);

  qInfo("catalog destroyed");
}
@@ -95,6 +95,30 @@ int32_t ctgInitGetDbCfgTask(SCtgJob *pJob, int32_t taskIdx, char *dbFName) {
  return TSDB_CODE_SUCCESS;
}

int32_t ctgInitGetDbInfoTask(SCtgJob *pJob, int32_t taskIdx, char *dbFName) {
  SCtgTask task = {0};

  task.type = CTG_TASK_GET_DB_INFO;
  task.taskId = taskIdx;
  task.pJob = pJob;

  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgDbInfoCtx));
  if (NULL == task.taskCtx) {
    CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
  }

  SCtgDbInfoCtx* ctx = task.taskCtx;

  memcpy(ctx->dbFName, dbFName, sizeof(ctx->dbFName));

  taosArrayPush(pJob->pTasks, &task);

  qDebug("QID:%" PRIx64 " task %d type %d initialized, dbFName:%s", pJob->queryId, taskIdx, task.type, dbFName);

  return TSDB_CODE_SUCCESS;
}

int32_t ctgInitGetTbHashTask(SCtgJob *pJob, int32_t taskIdx, SName *name) {
  SCtgTask task = {0};

@@ -219,8 +243,9 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
  int32_t dbCfgNum = (int32_t)taosArrayGetSize(pReq->pDbCfg);
  int32_t indexNum = (int32_t)taosArrayGetSize(pReq->pIndex);
  int32_t userNum = (int32_t)taosArrayGetSize(pReq->pUser);
  int32_t dbInfoNum = (int32_t)taosArrayGetSize(pReq->pDbInfo);

  int32_t taskNum = tbMetaNum + dbVgNum + udfNum + tbHashNum + qnodeNum + dbCfgNum + indexNum + userNum;
  int32_t taskNum = tbMetaNum + dbVgNum + udfNum + tbHashNum + qnodeNum + dbCfgNum + indexNum + userNum + dbInfoNum;
  if (taskNum <= 0) {
    ctgError("empty input for job, taskNum:%d", taskNum);
    CTG_ERR_RET(TSDB_CODE_CTG_INVALID_INPUT);

@@ -249,6 +274,7 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
  pJob->dbCfgNum = dbCfgNum;
  pJob->indexNum = indexNum;
  pJob->userNum = userNum;
  pJob->dbInfoNum = dbInfoNum;

  pJob->pTasks = taosArrayInit(taskNum, sizeof(SCtgTask));

@@ -268,6 +294,11 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
    CTG_ERR_JRET(ctgInitGetDbCfgTask(pJob, taskIdx++, dbFName));
  }

  for (int32_t i = 0; i < dbInfoNum; ++i) {
    char *dbFName = taosArrayGet(pReq->pDbInfo, i);
    CTG_ERR_JRET(ctgInitGetDbInfoTask(pJob, taskIdx++, dbFName));
  }

  for (int32_t i = 0; i < tbMetaNum; ++i) {
    SName *name = taosArrayGet(pReq->pTableMeta, i);
    CTG_ERR_JRET(ctgInitGetTbMetaTask(pJob, taskIdx++, name));

@@ -395,6 +426,20 @@ int32_t ctgDumpDbCfgRes(SCtgTask* pTask) {
  return TSDB_CODE_SUCCESS;
}

int32_t ctgDumpDbInfoRes(SCtgTask* pTask) {
  SCtgJob* pJob = pTask->pJob;
  if (NULL == pJob->jobRes.pDbInfo) {
    pJob->jobRes.pDbInfo = taosArrayInit(pJob->dbInfoNum, sizeof(SDbInfo));
    if (NULL == pJob->jobRes.pDbInfo) {
      CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
    }
  }

  taosArrayPush(pJob->jobRes.pDbInfo, pTask->res);

  return TSDB_CODE_SUCCESS;
}

int32_t ctgDumpUdfRes(SCtgTask* pTask) {
  SCtgJob* pJob = pTask->pJob;
  if (NULL == pJob->jobRes.pUdfList) {

@@ -620,7 +665,7 @@ int32_t ctgHandleGetDbVgRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pM

      CTG_ERR_JRET(ctgGenerateVgList(pCtg, pOut->dbVgroup->vgHash, (SArray**)&pTask->res));

      CTG_ERR_JRET(ctgPutUpdateVgToQueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      CTG_ERR_JRET(ctgUpdateVgroupEnqueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      pOut->dbVgroup = NULL;

      break;

@@ -659,7 +704,7 @@ int32_t ctgHandleGetTbHashRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *

      CTG_ERR_JRET(ctgGetVgInfoFromHashValue(pCtg, pOut->dbVgroup, ctx->pName, (SVgroupInfo*)pTask->res));

      CTG_ERR_JRET(ctgPutUpdateVgToQueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      CTG_ERR_JRET(ctgUpdateVgroupEnqueue(pCtg, ctx->dbFName, pOut->dbId, pOut->dbVgroup, false));
      pOut->dbVgroup = NULL;

      break;

@@ -691,6 +736,11 @@ _return:
  CTG_RET(code);
}

int32_t ctgHandleGetDbInfoRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pMsg, int32_t rspCode) {
  CTG_RET(TSDB_CODE_APP_ERROR);
}

int32_t ctgHandleGetQnodeRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pMsg, int32_t rspCode) {
  int32_t code = 0;
  CTG_ERR_JRET(ctgProcessRspMsg(pTask->msgCtx.out, reqType, pMsg->pData, pMsg->len, rspCode, pTask->msgCtx.target));

@@ -769,7 +819,7 @@ _return:
    }
  }

  ctgPutUpdateUserToQueue(pCtg, pOut, false);
  ctgUpdateUserEnqueue(pCtg, pOut, false);
  taosMemoryFreeClear(pTask->msgCtx.out);

  ctgHandleTaskEnd(pTask, code);

@@ -933,6 +983,41 @@ int32_t ctgLaunchGetDbCfgTask(SCtgTask *pTask) {
  return TSDB_CODE_SUCCESS;
}

int32_t ctgLaunchGetDbInfoTask(SCtgTask *pTask) {
  int32_t code = 0;
  SCatalog* pCtg = pTask->pJob->pCtg;
  void *pTrans = pTask->pJob->pTrans;
  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
  SCtgDBCache *dbCache = NULL;
  SCtgDbInfoCtx* pCtx = (SCtgDbInfoCtx*)pTask->taskCtx;

  pTask->res = taosMemoryCalloc(1, sizeof(SDbInfo));
  if (NULL == pTask->res) {
    CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
  }

  SDbInfo* pInfo = (SDbInfo*)pTask->res;
  CTG_ERR_RET(ctgAcquireVgInfoFromCache(pCtg, pCtx->dbFName, &dbCache));
  if (NULL != dbCache) {
    pInfo->vgVer = dbCache->vgInfo->vgVersion;
    pInfo->dbId = dbCache->dbId;
    pInfo->tbNum = dbCache->vgInfo->numOfTable;
  } else {
    pInfo->vgVer = CTG_DEFAULT_INVALID_VERSION;
  }

  CTG_ERR_JRET(ctgHandleTaskEnd(pTask, 0));

_return:

  if (dbCache) {
    ctgReleaseVgInfo(dbCache);
    ctgReleaseDBCache(pCtg, dbCache);
  }

  CTG_RET(code);
}

int32_t ctgLaunchGetIndexTask(SCtgTask *pTask) {
  SCatalog* pCtg = pTask->pJob->pCtg;
  void *pTrans = pTask->pJob->pTrans;

@@ -992,6 +1077,7 @@ SCtgAsyncFps gCtgAsyncFps[] = {
    {ctgLaunchGetQnodeTask, ctgHandleGetQnodeRsp, ctgDumpQnodeRes},
    {ctgLaunchGetDbVgTask, ctgHandleGetDbVgRsp, ctgDumpDbVgRes},
    {ctgLaunchGetDbCfgTask, ctgHandleGetDbCfgRsp, ctgDumpDbCfgRes},
    {ctgLaunchGetDbInfoTask, ctgHandleGetDbInfoRsp, ctgDumpDbInfoRes},
    {ctgLaunchGetTbMetaTask, ctgHandleGetTbMetaRsp, ctgDumpTbMetaRes},
    {ctgLaunchGetTbHashTask, ctgHandleGetTbHashRsp, ctgDumpTbHashRes},
    {ctgLaunchGetIndexTask, ctgHandleGetIndexRsp, ctgDumpIndexRes},
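Note on the hunks above: `gCtgAsyncFps` keeps one `{launch, handleRsp, dump}` triple per task type, indexed by the `CTG_TASK_*` enum value, which is why the new row is inserted at the `CTG_TASK_GET_DB_INFO` position. A reduced sketch of that contract with assumed names (not the real catalog types):

```c
#include <stdint.h>

typedef struct Task { int32_t type; } Task;  // stand-in for SCtgTask
typedef int32_t (*TaskFp)(Task *);

static int32_t launchDbInfo(Task *t) { (void)t; return 0; }  // may finish from cache
static int32_t rspDbInfo(Task *t)    { (void)t; return 0; }  // no RPC response expected
static int32_t dumpDbInfo(Task *t)   { (void)t; return 0; }  // copy task res to job res

// The row index must equal the task-type enum value; adding a task type
// therefore means adding both an enum entry and a row at the matching slot.
static struct { TaskFp launch, handleRsp, dump; } gFps[] = {
    {launchDbInfo, rspDbInfo, dumpDbInfo},  // task type 0 in this sketch
};

static int32_t launchTask(Task *t) { return gFps[t->type].launch(t); }
```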
@@ -19,37 +19,43 @@
#include "catalogInt.h"
#include "systable.h"

SCtgAction gCtgAction[CTG_ACT_MAX] = {
SCtgOperation gCtgCacheOperation[CTG_OP_MAX] = {
  {
    CTG_ACT_UPDATE_VG,
    CTG_OP_UPDATE_VGROUP,
    "update vgInfo",
    ctgActUpdateVg
    ctgOpUpdateVgroup
  },
  {
    CTG_ACT_UPDATE_TBL,
    CTG_OP_UPDATE_TB_META,
    "update tbMeta",
    ctgActUpdateTb
    ctgOpUpdateTbMeta
  },
  {
    CTG_ACT_REMOVE_DB,
    "remove DB",
    ctgActRemoveDB
    CTG_OP_DROP_DB_CACHE,
    "drop DB",
    ctgOpDropDbCache
  },
  {
    CTG_ACT_REMOVE_STB,
    "remove stbMeta",
    ctgActRemoveStb
    CTG_OP_DROP_STB_META,
    "drop stbMeta",
    ctgOpDropStbMeta
  },
  {
    CTG_ACT_REMOVE_TBL,
    "remove tbMeta",
    ctgActRemoveTb
    CTG_OP_DROP_TB_META,
    "drop tbMeta",
    ctgOpDropTbMeta
  },
  {
    CTG_ACT_UPDATE_USER,
    CTG_OP_UPDATE_USER,
    "update user",
    ctgActUpdateUser
    ctgOpUpdateUser
  },
  {
    CTG_OP_UPDATE_VG_EPSET,
    "update epset",
    ctgOpUpdateEpset
  }
};
@@ -405,7 +411,7 @@ int32_t ctgReadTbVerFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *
}

int32_t ctgGetTbTypeFromCache(SCatalog* pCtg, const char* dbFName, const char *tableName, int32_t *tbType) {
int32_t ctgReadTbTypeFromCache(SCatalog* pCtg, const char* dbFName, const char *tableName, int32_t *tbType) {
  if (NULL == pCtg->dbCache) {
    ctgWarn("empty db cache, dbFName:%s, tbName:%s", dbFName, tableName);
    return TSDB_CODE_SUCCESS;

@@ -491,7 +497,7 @@ _return:
}

void ctgWaitAction(SCtgMetaAction *action) {
void ctgWaitOpDone(SCtgCacheOperation *action) {
  while (true) {
    tsem_wait(&gCtgMgmt.queue.rspSem);

@@ -509,7 +515,7 @@ void ctgWaitAction(SCtgMetaAction *action) {
  }
}

void ctgPopAction(SCtgMetaAction **action) {
void ctgDequeue(SCtgCacheOperation **op) {
  SCtgQNode *orig = gCtgMgmt.queue.head;

  SCtgQNode *node = gCtgMgmt.queue.head->next;

@@ -519,20 +525,20 @@ void ctgPopAction(SCtgMetaAction **action) {

  taosMemoryFreeClear(orig);

  *action = &node->action;
  *op = &node->op;
}


int32_t ctgPushAction(SCatalog* pCtg, SCtgMetaAction *action) {
int32_t ctgEnqueue(SCatalog* pCtg, SCtgCacheOperation *operation) {
  SCtgQNode *node = taosMemoryCalloc(1, sizeof(SCtgQNode));
  if (NULL == node) {
    qError("calloc %d failed", (int32_t)sizeof(SCtgQNode));
    CTG_RET(TSDB_CODE_CTG_MEM_ERROR);
  }

  action->seqId = atomic_add_fetch_64(&gCtgMgmt.queue.seqId, 1);
  operation->seqId = atomic_add_fetch_64(&gCtgMgmt.queue.seqId, 1);

  node->action = *action;
  node->op = *operation;

  CTG_LOCK(CTG_WRITE, &gCtgMgmt.queue.qlock);
  gCtgMgmt.queue.tail->next = node;

@@ -544,19 +550,19 @@ int32_t ctgPushAction(SCatalog* pCtg, SCtgMetaAction *action) {

  tsem_post(&gCtgMgmt.queue.reqSem);

  ctgDebug("action [%s] added into queue", gCtgAction[action->act].name);
  ctgDebug("action [%s] added into queue", gCtgCacheOperation[operation->opId].name);

  if (action->syncReq) {
    ctgWaitAction(action);
  if (operation->syncReq) {
    ctgWaitOpDone(operation);
  }

  return TSDB_CODE_SUCCESS;
}


int32_t ctgPutRmDBToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId) {
int32_t ctgDropDbCacheEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId) {
  int32_t code = 0;
  SCtgMetaAction action= {.act = CTG_ACT_REMOVE_DB};
  SCtgCacheOperation action= {.opId = CTG_OP_DROP_DB_CACHE};
  SCtgRemoveDBMsg *msg = taosMemoryMalloc(sizeof(SCtgRemoveDBMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgRemoveDBMsg));
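Note on the renamed queue primitives above: `ctgEnqueue` appends the operation to a singly linked queue under a write lock, posts `reqSem` to wake the worker, and, for `syncReq` operations, blocks in `ctgWaitOpDone` until the worker posts `rspSem`. A compact pthread sketch of that producer side, with simplified assumed types (not the catalog's own locks or structs):

```c
#include <pthread.h>
#include <semaphore.h>

typedef struct Node { int opId; int syncReq; struct Node *next; } Node;

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static sem_t reqSem, rspSem;  // assume sem_init() was called at startup
static Node *tail;            // assume a dummy head/tail pair already exists

// Producer: publish one operation and optionally wait for completion,
// mirroring the ctgEnqueue + ctgWaitOpDone pair.
static void enqueue(Node *node) {
  pthread_mutex_lock(&qlock);
  tail->next = node;
  tail = node;
  pthread_mutex_unlock(&qlock);

  sem_post(&reqSem);                     // wake the queue worker
  if (node->syncReq) sem_wait(&rspSem);  // block until the worker is done
}
```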
@ -574,7 +580,7 @@ int32_t ctgPutRmDBToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId) {
|
|||
|
||||
action.data = msg;
|
||||
|
||||
CTG_ERR_JRET(ctgPushAction(pCtg, &action));
|
||||
CTG_ERR_JRET(ctgEnqueue(pCtg, &action));
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
||||
|
@ -585,9 +591,9 @@ _return:
|
|||
}
|
||||
|
||||
|
||||
int32_t ctgPutRmStbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq) {
|
||||
int32_t ctgDropStbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq) {
|
||||
int32_t code = 0;
|
||||
SCtgMetaAction action= {.act = CTG_ACT_REMOVE_STB, .syncReq = syncReq};
|
||||
SCtgCacheOperation action= {.opId = CTG_OP_DROP_STB_META, .syncReq = syncReq};
|
||||
SCtgRemoveStbMsg *msg = taosMemoryMalloc(sizeof(SCtgRemoveStbMsg));
|
||||
if (NULL == msg) {
|
||||
ctgError("malloc %d failed", (int32_t)sizeof(SCtgRemoveStbMsg));
|
||||
|
@ -602,7 +608,7 @@ int32_t ctgPutRmStbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, co
|
|||
|
||||
action.data = msg;
|
||||
|
||||
CTG_ERR_JRET(ctgPushAction(pCtg, &action));
|
||||
CTG_ERR_JRET(ctgEnqueue(pCtg, &action));
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
||||
|
@ -614,9 +620,9 @@ _return:
|
|||
|
||||
|
||||
|
||||
int32_t ctgPutRmTbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq) {
|
||||
int32_t ctgDropTbMetaEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *tbName, bool syncReq) {
|
||||
int32_t code = 0;
|
||||
SCtgMetaAction action= {.act = CTG_ACT_REMOVE_TBL, .syncReq = syncReq};
|
||||
SCtgCacheOperation action= {.opId = CTG_OP_DROP_TB_META, .syncReq = syncReq};
|
||||
SCtgRemoveTblMsg *msg = taosMemoryMalloc(sizeof(SCtgRemoveTblMsg));
|
||||
if (NULL == msg) {
|
||||
ctgError("malloc %d failed", (int32_t)sizeof(SCtgRemoveTblMsg));
|
||||
|
@ -630,7 +636,7 @@ int32_t ctgPutRmTbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, con
|
|||
|
||||
action.data = msg;
|
||||
|
||||
CTG_ERR_JRET(ctgPushAction(pCtg, &action));
|
||||
CTG_ERR_JRET(ctgEnqueue(pCtg, &action));
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
||||
|
@ -640,9 +646,9 @@ _return:
|
|||
CTG_RET(code);
|
||||
}
|
||||
|
||||
int32_t ctgPutUpdateVgToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq) {
|
||||
int32_t ctgUpdateVgroupEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, SDBVgInfo* dbInfo, bool syncReq) {
|
||||
int32_t code = 0;
|
||||
SCtgMetaAction action= {.act = CTG_ACT_UPDATE_VG, .syncReq = syncReq};
|
||||
SCtgCacheOperation action= {.opId = CTG_OP_UPDATE_VGROUP, .syncReq = syncReq};
|
||||
SCtgUpdateVgMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateVgMsg));
|
||||
if (NULL == msg) {
|
||||
ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateVgMsg));
|
||||
|
@ -662,7 +668,7 @@ int32_t ctgPutUpdateVgToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId,
|
|||
|
||||
action.data = msg;
|
||||
|
||||
CTG_ERR_JRET(ctgPushAction(pCtg, &action));
|
||||
CTG_ERR_JRET(ctgEnqueue(pCtg, &action));
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
||||
|
@ -673,9 +679,9 @@ _return:
|
|||
CTG_RET(code);
|
||||
}
|
||||
|
||||
int32_t ctgPutUpdateTbToQueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq) {
|
||||
int32_t ctgUpdateTbMetaEnqueue(SCatalog* pCtg, STableMetaOutput *output, bool syncReq) {
|
||||
int32_t code = 0;
|
||||
SCtgMetaAction action= {.act = CTG_ACT_UPDATE_TBL, .syncReq = syncReq};
|
||||
SCtgCacheOperation action= {.opId = CTG_OP_UPDATE_TB_META, .syncReq = syncReq};
|
||||
SCtgUpdateTblMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateTblMsg));
|
||||
if (NULL == msg) {
|
||||
ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateTblMsg));
|
||||
|
@ -692,7 +698,7 @@ int32_t ctgPutUpdateTbToQueue(SCatalog* pCtg, STableMetaOutput *output, bool syn
|
|||
|
||||
action.data = msg;
|
||||
|
||||
CTG_ERR_JRET(ctgPushAction(pCtg, &action));
|
||||
CTG_ERR_JRET(ctgEnqueue(pCtg, &action));
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
||||
|
@ -703,9 +709,38 @@ _return:
|
|||
CTG_RET(code);
|
||||
}
|
||||
|
||||
int32_t ctgPutUpdateUserToQueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq) {
|
||||
int32_t ctgUpdateVgEpsetEnqueue(SCatalog* pCtg, char *dbFName, int32_t vgId, SEpSet* pEpSet) {
|
||||
int32_t code = 0;
|
||||
SCtgMetaAction action= {.act = CTG_ACT_UPDATE_USER, .syncReq = syncReq};
|
||||
SCtgCacheOperation operation= {.opId = CTG_OP_UPDATE_VG_EPSET};
|
||||
SCtgUpdateEpsetMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateEpsetMsg));
|
||||
if (NULL == msg) {
|
||||
ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateEpsetMsg));
|
||||
CTG_ERR_RET(TSDB_CODE_CTG_MEM_ERROR);
|
||||
}
|
||||
|
||||
msg->pCtg = pCtg;
|
||||
strcpy(msg->dbFName, dbFName);
|
||||
msg->vgId = vgId;
|
||||
msg->epSet = *pEpSet;
|
||||
|
||||
operation.data = msg;
|
||||
|
||||
CTG_ERR_JRET(ctgEnqueue(pCtg, &operation));
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
||||
_return:
|
||||
|
||||
taosMemoryFreeClear(msg);
|
||||
|
||||
CTG_RET(code);
|
||||
}
|
||||
|
||||
|
||||
|
||||
int32_t ctgUpdateUserEnqueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syncReq) {
  int32_t code = 0;
  SCtgCacheOperation action= {.opId = CTG_OP_UPDATE_USER, .syncReq = syncReq};
  SCtgUpdateUserMsg *msg = taosMemoryMalloc(sizeof(SCtgUpdateUserMsg));
  if (NULL == msg) {
    ctgError("malloc %d failed", (int32_t)sizeof(SCtgUpdateUserMsg));

@@ -717,7 +752,7 @@ int32_t ctgPutUpdateUserToQueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syn
  action.data = msg;

  CTG_ERR_JRET(ctgPushAction(pCtg, &action));
  CTG_ERR_JRET(ctgEnqueue(pCtg, &action));

  return TSDB_CODE_SUCCESS;

@@ -1219,7 +1254,7 @@ int32_t ctgUpdateTbMetaToCache(SCatalog* pCtg, STableMetaOutput* pOut, bool sync
  int32_t code = 0;

  CTG_ERR_RET(ctgCloneMetaOutput(pOut, &pOutput));
  CTG_ERR_JRET(ctgPutUpdateTbToQueue(pCtg, pOutput, syncReq));
  CTG_ERR_JRET(ctgUpdateTbMetaEnqueue(pCtg, pOutput, syncReq));

  return TSDB_CODE_SUCCESS;

@@ -1230,9 +1265,9 @@ _return:
}

int32_t ctgActUpdateVg(SCtgMetaAction *action) {
int32_t ctgOpUpdateVgroup(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateVgMsg *msg = action->data;
  SCtgUpdateVgMsg *msg = operation->data;

  CTG_ERR_JRET(ctgWriteDBVgInfoToCache(msg->pCtg, msg->dbFName, msg->dbId, &msg->dbInfo));

@@ -1244,9 +1279,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgActRemoveDB(SCtgMetaAction *action) {
int32_t ctgOpDropDbCache(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgRemoveDBMsg *msg = action->data;
  SCtgRemoveDBMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;

@@ -1270,9 +1305,9 @@ _return:
}

int32_t ctgActUpdateTb(SCtgMetaAction *action) {
int32_t ctgOpUpdateTbMeta(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateTblMsg *msg = action->data;
  SCtgUpdateTblMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;
  STableMetaOutput* output = msg->output;
  SCtgDBCache *dbCache = NULL;

@@ -1316,9 +1351,9 @@ _return:
}

int32_t ctgActRemoveStb(SCtgMetaAction *action) {
int32_t ctgOpDropStbMeta(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgRemoveStbMsg *msg = action->data;
  SCtgRemoveStbMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;

@@ -1362,9 +1397,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgActRemoveTb(SCtgMetaAction *action) {
int32_t ctgOpDropTbMeta(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgRemoveTblMsg *msg = action->data;
  SCtgRemoveTblMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;

@@ -1397,9 +1432,9 @@ _return:
  CTG_RET(code);
}

int32_t ctgActUpdateUser(SCtgMetaAction *action) {
int32_t ctgOpUpdateUser(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateUserMsg *msg = action->data;
  SCtgUpdateUserMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  if (NULL == pCtg->userCache) {

@@ -1460,14 +1495,60 @@ _return:
  CTG_RET(code);
}

void ctgUpdateThreadFuncUnexpectedStopped(void) {
int32_t ctgOpUpdateEpset(SCtgCacheOperation *operation) {
  int32_t code = 0;
  SCtgUpdateEpsetMsg *msg = operation->data;
  SCatalog* pCtg = msg->pCtg;

  SCtgDBCache *dbCache = NULL;
  CTG_ERR_RET(ctgAcquireDBCache(pCtg, msg->dbFName, &dbCache));
  if (NULL == dbCache) {
    ctgDebug("db %s not exist, ignore epset update", msg->dbFName);
    goto _return;
  }

  SDBVgInfo *vgInfo = NULL;
  CTG_ERR_RET(ctgWAcquireVgInfo(pCtg, dbCache));

  if (NULL == dbCache->vgInfo) {
    ctgWReleaseVgInfo(dbCache);
    ctgDebug("vgroup in db %s not cached, ignore epset update", msg->dbFName);
    goto _return;
  }

  SVgroupInfo* pInfo = taosHashGet(dbCache->vgInfo->vgHash, &msg->vgId, sizeof(msg->vgId));
  if (NULL == pInfo) {
    ctgWReleaseVgInfo(dbCache);
    ctgDebug("no vgroup %d in db %s, ignore epset update", msg->vgId, msg->dbFName);
    goto _return;
  }

  pInfo->epSet = msg->epSet;

  ctgDebug("epset in vgroup %d updated, dbFName:%s", pInfo->vgId, msg->dbFName);

  ctgWReleaseVgInfo(dbCache);

_return:

  if (dbCache) {
    ctgReleaseDBCache(msg->pCtg, dbCache);
  }

  taosMemoryFreeClear(msg);

  CTG_RET(code);
}

void ctgUpdateThreadUnexpectedStopped(void) {
  if (CTG_IS_LOCKED(&gCtgMgmt.lock) > 0) CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
}

void* ctgUpdateThreadFunc(void* param) {
  setThreadName("catalog");
#ifdef WINDOWS
  atexit(ctgUpdateThreadFuncUnexpectedStopped);
  atexit(ctgUpdateThreadUnexpectedStopped);
#endif
  qInfo("catalog update thread started");

@@ -1483,17 +1564,17 @@ void* ctgUpdateThreadFunc(void* param) {
      break;
    }

    SCtgMetaAction *action = NULL;
    ctgPopAction(&action);
    SCatalog *pCtg = ((SCtgUpdateMsgHeader *)action->data)->pCtg;
    SCtgCacheOperation *operation = NULL;
    ctgDequeue(&operation);
    SCatalog *pCtg = ((SCtgUpdateMsgHeader *)operation->data)->pCtg;

    ctgDebug("process [%s] action", gCtgAction[action->act].name);
    ctgDebug("process [%s] operation", gCtgCacheOperation[operation->opId].name);

    (*gCtgAction[action->act].func)(action);
    (*gCtgCacheOperation[operation->opId].func)(operation);

    gCtgMgmt.queue.seqDone = action->seqId;
    gCtgMgmt.queue.seqDone = operation->seqId;

    if (action->syncReq) {
    if (operation->syncReq) {
      tsem_post(&gCtgMgmt.queue.rspSem);
    }
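On the consumer side, the thread loop above dispatches through a table indexed by `opId`: `(*gCtgCacheOperation[operation->opId].func)(operation)`. A sketch of that dispatch-table shape; the entry names and handlers here are hypothetical, only the lookup structure mirrors the source:

```c
#include <stdint.h>
#include <stdio.h>

typedef struct CacheOp {
  int32_t opId;
  void   *data;
} CacheOp;

typedef struct {
  int32_t     opId;
  const char *name;
  int32_t   (*func)(CacheOp *op);
} OpEntry;

static int32_t opUpdateVgroup(CacheOp *op) { (void)op; return 0; }
static int32_t opDropDbCache(CacheOp *op)  { (void)op; return 0; }

/* The table index doubles as the opId, so dispatch is a single array
 * lookup -- the same shape as gCtgCacheOperation[opId] in the diff. */
static const OpEntry gOps[] = {
    {0, "update vgroup", opUpdateVgroup},
    {1, "drop db cache", opDropDbCache},
};

static void processOne(CacheOp *op) {
  printf("process [%s] operation\n", gOps[op->opId].name);
  (*gOps[op->opId].func)(op);
}
```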
@@ -43,6 +43,9 @@ void ctgFreeSMetaData(SMetaData* pData) {
  taosArrayDestroy(pData->pDbCfg);
  pData->pDbCfg = NULL;

  taosArrayDestroy(pData->pDbInfo);
  pData->pDbInfo = NULL;

  taosArrayDestroy(pData->pIndex);
  pData->pIndex = NULL;

@@ -293,9 +296,12 @@ void ctgFreeTask(SCtgTask* pTask) {
    }
    case CTG_TASK_GET_DB_CFG: {
      taosMemoryFreeClear(pTask->taskCtx);
      if (pTask->res) {
        taosMemoryFreeClear(pTask->res);
      }
      taosMemoryFreeClear(pTask->res);
      break;
    }
    case CTG_TASK_GET_DB_INFO: {
      taosMemoryFreeClear(pTask->taskCtx);
      taosMemoryFreeClear(pTask->res);
      break;
    }
    case CTG_TASK_GET_TB_HASH: {
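The `ctgFreeTask` hunk drops the `if (pTask->res)` guard around the free. That is safe provided the free-and-clear helper tolerates NULL, which is the assumed semantics of `taosMemoryFreeClear` here; `free(NULL)` is already a no-op in standard C. A helper in that style:

```c
#include <stdlib.h>

/* Free-and-clear helper in the style of taosMemoryFreeClear (assumed
 * semantics; the real macro lives elsewhere in the tree). Because
 * free(NULL) is a no-op, callers never need a NULL check first. */
#define FREE_CLEAR(p) \
  do {                \
    free(p);          \
    (p) = NULL;       \
  } while (0)
```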
@@ -41,7 +41,7 @@
namespace {

extern "C" int32_t ctgdGetClusterCacheNum(struct SCatalog* pCatalog, int32_t type);
extern "C" int32_t ctgActUpdateTb(SCtgMetaAction *action);
extern "C" int32_t ctgOpUpdateTbMeta(SCtgCacheOperation *action);
extern "C" int32_t ctgdEnableDebug(char *option);
extern "C" int32_t ctgdGetStatNum(char *option, void *res);

@@ -888,9 +888,9 @@ void *ctgTestSetCtableMetaThread(void *param) {
  int32_t n = 0;
  STableMetaOutput *output = NULL;

  SCtgMetaAction action = {0};
  SCtgCacheOperation operation = {0};

  action.act = CTG_ACT_UPDATE_TBL;
  operation.opId = CTG_OP_UPDATE_TB_META;

  while (!ctgTestStop) {
    output = (STableMetaOutput *)taosMemoryMalloc(sizeof(STableMetaOutput));

@@ -899,9 +899,9 @@ void *ctgTestSetCtableMetaThread(void *param) {
    SCtgUpdateTblMsg *msg = (SCtgUpdateTblMsg *)taosMemoryMalloc(sizeof(SCtgUpdateTblMsg));
    msg->pCtg = pCtg;
    msg->output = output;
    action.data = msg;
    operation.data = msg;

    code = ctgActUpdateTb(&action);
    code = ctgOpUpdateTbMeta(&operation);
    if (code) {
      assert(0);
    }
@@ -334,6 +334,8 @@ typedef struct STableScanInfo {
  int32_t dataBlockLoadFlag;
  double sampleRatio;  // data block sample ratio, 1 by default
  SInterval interval;  // if the upstream is an interval operator, the interval info is also kept here to get the time window to check if current data block needs to be loaded.

  int32_t curTWinIdx;
} STableScanInfo;

typedef struct STagScanInfo {

@@ -803,6 +805,8 @@ SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap
int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows,
                                int32_t start, int64_t gap, SHashObj* pStDeleted);
bool functionNeedToExecute(SqlFunctionCtx* pCtx);

int32_t compareTimeWindow(const void* p1, const void* p2, const void* param);
#ifdef __cplusplus
}
#endif
@@ -4706,6 +4706,18 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
  return pOptr;
}

int32_t compareTimeWindow(const void* p1, const void* p2, const void* param) {
  const SQueryTableDataCond *pCond = param;
  const STimeWindow *pWin1 = p1;
  const STimeWindow *pWin2 = p2;
  if (pCond->order == TSDB_ORDER_ASC) {
    return pWin1->skey - pWin2->skey;
  } else if (pCond->order == TSDB_ORDER_DESC) {
    return pWin2->skey - pWin1->skey;
  }
  return 0;
}

int32_t initQueryTableDataCond(SQueryTableDataCond* pCond, const STableScanPhysiNode* pTableScanNode) {
  pCond->loadExternalRows = false;

@@ -4717,16 +4729,34 @@ int32_t initQueryTableDataCond(SQueryTableDataCond* pCond, const STableScanPhysi
    return terrno;
  }

  pCond->twindow = pTableScanNode->scanRange;
  //pCond->twindow = pTableScanNode->scanRange;
  //TODO: get it from stable scan node
  pCond->numOfTWindows = 1;
  pCond->twindows = taosMemoryCalloc(pCond->numOfTWindows, sizeof(STimeWindow));
  pCond->twindows[0] = pTableScanNode->scanRange;

#if 1
  // todo work around a problem, remove it later
  if ((pCond->order == TSDB_ORDER_ASC && pCond->twindow.skey > pCond->twindow.ekey) ||
      (pCond->order == TSDB_ORDER_DESC && pCond->twindow.skey < pCond->twindow.ekey)) {
    TSWAP(pCond->twindow.skey, pCond->twindow.ekey);
  for (int32_t i = 0; i < pCond->numOfTWindows; ++i) {
    if ((pCond->order == TSDB_ORDER_ASC && pCond->twindows[i].skey > pCond->twindows[i].ekey) ||
        (pCond->order == TSDB_ORDER_DESC && pCond->twindows[i].skey < pCond->twindows[i].ekey)) {
      TSWAP(pCond->twindows[i].skey, pCond->twindows[i].ekey);
    }
  }
#endif

  for (int32_t i = 0; i < pCond->numOfTWindows; ++i) {
    if ((pCond->order == TSDB_ORDER_ASC && pCond->twindows[i].skey > pCond->twindows[i].ekey) ||
        (pCond->order == TSDB_ORDER_DESC && pCond->twindows[i].skey < pCond->twindows[i].ekey)) {
      TSWAP(pCond->twindows[i].skey, pCond->twindows[i].ekey);
    }
  }
  taosqsort(pCond->twindows,
            pCond->numOfTWindows,
            sizeof(STimeWindow),
            pCond,
            compareTimeWindow);

  pCond->type = BLOCK_LOAD_OFFSET_SEQ_ORDER;
  // pCond->type = pTableScanNode->scanFlag;
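`compareTimeWindow` threads the query condition through the opaque `param` argument so one comparator serves both scan orders. A self-contained analogue is sketched below; since libc `qsort` has no context parameter, the sketch supplies its own small param-aware sort in the role of `taosqsort` (whose argument order is taken from the call above). It also clamps the comparison result to -1/0/1 instead of returning the raw key difference, which sidesteps truncating a 64-bit difference to `int32_t`:

```c
#include <stdint.h>

typedef struct { int64_t skey, ekey; } TimeWindow;

enum { ORDER_ASC = 1, ORDER_DESC = 2 };

/* Same shape as compareTimeWindow: the order is threaded in as the
 * opaque 'param', so ascending and descending scans share one comparator. */
static int32_t cmpWindow(const void *p1, const void *p2, const void *param) {
  const int32_t    *order = param;
  const TimeWindow *w1 = p1;
  const TimeWindow *w2 = p2;
  int64_t d = (*order == ORDER_ASC) ? w1->skey - w2->skey
                                    : w2->skey - w1->skey;
  return (d > 0) - (d < 0); /* clamp to -1/0/1; no int32 truncation */
}

/* Minimal param-aware insertion sort standing in for taosqsort. */
static void sortWindows(TimeWindow *w, int32_t n, const int32_t *order) {
  for (int32_t i = 1; i < n; ++i) {
    TimeWindow key = w[i];
    int32_t    j   = i - 1;
    while (j >= 0 && cmpWindow(&w[j], &key, order) > 0) {
      w[j + 1] = w[j];
      --j;
    }
    w[j + 1] = key;
  }
}
```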
@@ -274,9 +274,17 @@ static void prepareForDescendingScan(STableScanInfo* pTableScanInfo, SqlFunction
  switchCtxOrder(pCtx, numOfOutput);
  // setupQueryRangeForReverseScan(pTableScanInfo);

  STimeWindow* pTWindow = &pTableScanInfo->cond.twindow;
  TSWAP(pTWindow->skey, pTWindow->ekey);
  pTableScanInfo->cond.order = TSDB_ORDER_DESC;
  for (int32_t i = 0; i < pTableScanInfo->cond.numOfTWindows; ++i) {
    STimeWindow* pTWindow = &pTableScanInfo->cond.twindows[i];
    TSWAP(pTWindow->skey, pTWindow->ekey);
  }
  SQueryTableDataCond *pCond = &pTableScanInfo->cond;
  taosqsort(pCond->twindows,
            pCond->numOfTWindows,
            sizeof(STimeWindow),
            pCond,
            compareTimeWindow);
}

void addTagPseudoColumnData(STableScanInfo* pTableScanInfo, SSDataBlock* pBlock) {

@@ -380,7 +388,6 @@ static SSDataBlock* doTableScanImpl(SOperatorInfo* pOperator) {
    pOperator->cost.totalCost = pTableScanInfo->readRecorder.elapsedTime;
    return pBlock;
  }

  return NULL;
}

@@ -395,9 +402,15 @@ static SSDataBlock* doTableScan(SOperatorInfo* pOperator) {

  // do the ascending order traverse in the first place.
  while (pTableScanInfo->scanTimes < pTableScanInfo->scanInfo.numOfAsc) {
    SSDataBlock* p = doTableScanImpl(pOperator);
    if (p != NULL) {
      return p;
    while (pTableScanInfo->curTWinIdx < pTableScanInfo->cond.numOfTWindows) {
      SSDataBlock* p = doTableScanImpl(pOperator);
      if (p != NULL) {
        return p;
      }
      pTableScanInfo->curTWinIdx += 1;
      if (pTableScanInfo->curTWinIdx < pTableScanInfo->cond.numOfTWindows) {
        tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, pTableScanInfo->curTWinIdx);
      }
    }

    pTableScanInfo->scanTimes += 1;

@@ -405,14 +418,14 @@ static SSDataBlock* doTableScan(SOperatorInfo* pOperator) {
    if (pTableScanInfo->scanTimes < pTableScanInfo->scanInfo.numOfAsc) {
      setTaskStatus(pTaskInfo, TASK_NOT_COMPLETED);
      pTableScanInfo->scanFlag = REPEAT_SCAN;

      STimeWindow* pWin = &pTableScanInfo->cond.twindow;
      qDebug("%s start to repeat ascending order scan data blocks due to query func required, qrange:%" PRId64
             "-%" PRId64,
             GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);

      qDebug("%s start to repeat ascending order scan data blocks due to query func required", GET_TASKID(pTaskInfo));
      for (int32_t i = 0; i < pTableScanInfo->cond.numOfTWindows; ++i) {
        STimeWindow* pWin = &pTableScanInfo->cond.twindows[i];
        qDebug("%s\t qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
      }
      // do prepare for the next round table scan operation
      tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond);
      tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, 0);
      pTableScanInfo->curTWinIdx = 0;
    }
  }

@@ -420,31 +433,40 @@ static SSDataBlock* doTableScan(SOperatorInfo* pOperator) {
  if (pTableScanInfo->scanTimes < total) {
    if (pTableScanInfo->cond.order == TSDB_ORDER_ASC) {
      prepareForDescendingScan(pTableScanInfo, pTableScanInfo->pCtx, pTableScanInfo->numOfOutput);
      tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond);
      tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, 0);
      pTableScanInfo->curTWinIdx = 0;
    }

    STimeWindow* pWin = &pTableScanInfo->cond.twindow;
    qDebug("%s start to descending order scan data blocks due to query func required, qrange:%" PRId64 "-%" PRId64,
           GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);

    qDebug("%s start to descending order scan data blocks due to query func required", GET_TASKID(pTaskInfo));
    for (int32_t i = 0; i < pTableScanInfo->cond.numOfTWindows; ++i) {
      STimeWindow* pWin = &pTableScanInfo->cond.twindows[i];
      qDebug("%s\t qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
    }
    while (pTableScanInfo->scanTimes < total) {
      SSDataBlock* p = doTableScanImpl(pOperator);
      if (p != NULL) {
        return p;
      while (pTableScanInfo->curTWinIdx < pTableScanInfo->cond.numOfTWindows) {
        SSDataBlock* p = doTableScanImpl(pOperator);
        if (p != NULL) {
          return p;
        }
        pTableScanInfo->curTWinIdx += 1;
        if (pTableScanInfo->curTWinIdx < pTableScanInfo->cond.numOfTWindows) {
          tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, pTableScanInfo->curTWinIdx);
        }
      }

      pTableScanInfo->scanTimes += 1;

      if (pTableScanInfo->scanTimes < pTableScanInfo->scanInfo.numOfAsc) {
      if (pTableScanInfo->scanTimes < total) {
        setTaskStatus(pTaskInfo, TASK_NOT_COMPLETED);
        pTableScanInfo->scanFlag = REPEAT_SCAN;

        qDebug("%s start to repeat descending order scan data blocks due to query func required, qrange:%" PRId64
               "-%" PRId64,
               GET_TASKID(pTaskInfo), pTaskInfo->window.skey, pTaskInfo->window.ekey);

        // do prepare for the next round table scan operation
        tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond);
        qDebug("%s start to repeat descending order scan data blocks due to query func required", GET_TASKID(pTaskInfo));
        for (int32_t i = 0; i < pTableScanInfo->cond.numOfTWindows; ++i) {
          STimeWindow* pWin = &pTableScanInfo->cond.twindows[i];
          qDebug("%s\t qrange:%" PRId64 "-%" PRId64, GET_TASKID(pTaskInfo), pWin->skey, pWin->ekey);
        }
        tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, 0);
        pTableScanInfo->curTWinIdx = 0;
      }
    }
  }

@@ -524,6 +546,7 @@ SOperatorInfo* createTableScanOperatorInfo(STableScanPhysiNode* pTableScanNode,
  pInfo->dataReader = pDataReader;
  pInfo->scanFlag = MAIN_SCAN;
  pInfo->pColMatchInfo = pColList;
  pInfo->curTWinIdx = 0;

  pOperator->name = "TableScanOperator"; // for debug purpose
  pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN;

@@ -678,8 +701,9 @@ static bool prepareDataScan(SStreamBlockScanInfo* pInfo) {
                          binarySearchForKey, NULL, TSDB_ORDER_ASC);
    }
    STableScanInfo* pTableScanInfo = pInfo->pOperatorDumy->info;
    pTableScanInfo->cond.twindow = win;
    tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond);
    pTableScanInfo->cond.twindows[0] = win;
    pTableScanInfo->curTWinIdx = 0;
    tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond, 0);
    pTableScanInfo->scanTimes = 0;
    return true;
  } else {
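The reworked `doTableScan` is a two-level loop: the outer level counts scan rounds, the inner level walks the sorted time windows and resets the reader each time it advances to the next window. A simplified sketch of that control flow, assuming hypothetical stand-ins `readBlock`/`resetReader` for `doTableScanImpl`/`tsdbResetReadHandle`:

```c
#include <stdint.h>

typedef struct ScanState {
  int32_t curWin;      /* curTWinIdx in the source */
  int32_t numOfWins;   /* cond.numOfTWindows */
  int32_t scanTimes;
  int32_t numOfRounds; /* e.g. scanInfo.numOfAsc for the ascending phase */
} ScanState;

/* Hypothetical stand-ins for doTableScanImpl() / tsdbResetReadHandle(). */
extern const void *readBlock(ScanState *s);
extern void        resetReader(ScanState *s, int32_t winIdx);

static const void *nextBlock(ScanState *s) {
  while (s->scanTimes < s->numOfRounds) {
    /* inner loop: drain every sorted window of the current round */
    while (s->curWin < s->numOfWins) {
      const void *blk = readBlock(s);
      if (blk != NULL) return blk; /* hand one block up per call */
      s->curWin += 1;
      if (s->curWin < s->numOfWins) resetReader(s, s->curWin);
    }
    /* outer loop: one full pass over all windows finished */
    s->scanTimes += 1;
    if (s->scanTimes < s->numOfRounds) {
      s->curWin = 0;
      resetReader(s, 0);
    }
  }
  return NULL; /* all rounds over all windows exhausted */
}
```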
@@ -156,6 +156,14 @@ static int32_t translatePercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
    return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
  }

  //param0
  SNode* pParamNode0 = nodesListGetNode(pFunc->pParameterList, 0);
  if (nodeType(pParamNode0) != QUERY_NODE_COLUMN) {
    return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
                           "The first parameter of PERCENTILE function can only be column");
  }

  //param1
  SValueNode* pValue = (SValueNode*)nodesListGetNode(pFunc->pParameterList, 1);

  if (pValue->datum.i < 0 || pValue->datum.i > 100) {

@@ -170,6 +178,7 @@ static int32_t translatePercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
    return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
  }

  //set result type
  pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_DOUBLE].bytes, .type = TSDB_DATA_TYPE_DOUBLE};
  return TSDB_CODE_SUCCESS;
}

@@ -188,30 +197,47 @@ static int32_t translateApercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
    return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
  }

  uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
  uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
  if (!IS_NUMERIC_TYPE(para1Type) || !IS_INTEGER_TYPE(para2Type)) {
  //param0
  SNode* pParamNode0 = nodesListGetNode(pFunc->pParameterList, 0);
  if (nodeType(pParamNode0) != QUERY_NODE_COLUMN) {
    return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
                           "The first parameter of APERCENTILE function can only be column");
  }

  //param1
  SNode* pParamNode1 = nodesListGetNode(pFunc->pParameterList, 1);
  if (nodeType(pParamNode1) != QUERY_NODE_VALUE) {
    return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
  }

  SNode* pParamNode = nodesListGetNode(pFunc->pParameterList, 1);
  if (nodeType(pParamNode) != QUERY_NODE_VALUE) {
    return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
  }

  SValueNode* pValue = (SValueNode*)pParamNode;
  SValueNode* pValue = (SValueNode*)pParamNode1;
  if (pValue->datum.i < 0 || pValue->datum.i > 100) {
    return invaildFuncParaValueErrMsg(pErrBuf, len, pFunc->functionName);
  }

  pValue->notReserved = true;

  uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
  uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
  if (!IS_NUMERIC_TYPE(para1Type) || !IS_INTEGER_TYPE(para2Type)) {
    return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
  }

  //param2
  if (3 == numOfParams) {
    SNode* pPara3 = nodesListGetNode(pFunc->pParameterList, 2);
    if (QUERY_NODE_VALUE != nodeType(pPara3) || !validAperventileAlgo((SValueNode*)pPara3)) {
    uint8_t para3Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type;
    if (!IS_VAR_DATA_TYPE(para3Type)) {
      return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
    }

    SNode* pParamNode2 = nodesListGetNode(pFunc->pParameterList, 2);
    if (QUERY_NODE_VALUE != nodeType(pParamNode2) || !validAperventileAlgo((SValueNode*)pParamNode2)) {
      return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
                             "Third parameter algorithm of apercentile must be 'default' or 't-digest'");
    }

    pValue = (SValueNode*)pParamNode2;
    pValue->notReserved = true;
  }

  pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_DOUBLE].bytes, .type = TSDB_DATA_TYPE_DOUBLE};

@@ -700,6 +726,11 @@ static int32_t translateDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {

  //param1
  if (numOfParams == 2) {
    uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
    if (!IS_INTEGER_TYPE(paraType)) {
      return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
    }

    SNode* pParamNode1 = nodesListGetNode(pFunc->pParameterList, 1);
    if (QUERY_NODE_VALUE != nodeType(pParamNode1)) {
      return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);

@@ -714,7 +745,13 @@ static int32_t translateDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
    pValue->notReserved = true;
  }

  pFunc->node.resType = (SDataType){.bytes = tDataTypes[colType].bytes, .type = colType};
  uint8_t resType;
  if (IS_SIGNED_NUMERIC_TYPE(colType)) {
    resType = TSDB_DATA_TYPE_BIGINT;
  } else {
    resType = TSDB_DATA_TYPE_DOUBLE;
  }
  pFunc->node.resType = (SDataType){.bytes = tDataTypes[resType].bytes, .type = resType};
  return TSDB_CODE_SUCCESS;
}
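The motivation for widening DIFF's result type to BIGINT (or DOUBLE) instead of echoing the column type: the difference of two values at the extremes of a signed 32-bit column does not fit back into 32 bits. A small standalone demonstration:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  int32_t prev = INT32_MIN;
  int32_t cur  = INT32_MAX;
  /* In 32-bit arithmetic, cur - prev would overflow (undefined behavior);
   * widening to int64_t, as the BIGINT result type allows, is safe. */
  int64_t delta = (int64_t)cur - (int64_t)prev;
  printf("delta = %" PRId64 "\n", delta); /* 4294967295, outside int32 range */
  return 0;
}
```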
@@ -834,12 +834,14 @@ int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
  if (IS_INTEGER_TYPE(type)) {
    pAvgRes->result = pAvgRes->sum.isum / ((double)pAvgRes->count);
  } else {
    if (isinf(pAvgRes->sum.dsum) || isnan(pAvgRes->sum.dsum)) {
      GET_RES_INFO(pCtx)->isNullRes = 1;
    }
    pAvgRes->result = pAvgRes->sum.dsum / ((double)pAvgRes->count);
  }

  //check for overflow
  if (isinf(pAvgRes->result) || isnan(pAvgRes->result)) {
    GET_RES_INFO(pCtx)->isNullRes = 1;
  }

  return functionFinalize(pCtx, pBlock);
}
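The `avgFinalize` change moves the non-finite check onto the final quotient, turning an overflowed or NaN accumulator into a NULL result rather than a garbage number. The guard pattern in isolation (simplified signature, not the real function):

```c
#include <math.h>
#include <stdint.h>

/* Finalize an average: if the double accumulator has silently become
 * +/-inf (overflow) or NaN, report NULL instead of a bogus value. */
static void finalizeAvg(double sum, int64_t count, double *result, int *isNull) {
  double r = sum / (double)count;
  if (isinf(r) || isnan(r)) {
    *isNull = 1; /* GET_RES_INFO(pCtx)->isNullRes = 1 in the source */
    return;
  }
  *isNull = 0;
  *result = r;
}
```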
@@ -1963,7 +1965,7 @@ bool apercentileFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResult
  if (pCtx->numOfParams == 2) {
    pInfo->algo = APERCT_ALGO_DEFAULT;
  } else if (pCtx->numOfParams == 3) {
    pInfo->algo = getApercentileAlgo(pCtx->param[2].param.pz);
    pInfo->algo = getApercentileAlgo(varDataVal(pCtx->param[2].param.pz));
    if (pInfo->algo == APERCT_ALGO_UNKNOWN) {
      return false;
    }
@@ -2299,15 +2301,15 @@ static void doSetPrevVal(SDiffInfo* pDiffInfo, int32_t type, const char* pv) {
}

static void doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SColumnInfoData* pOutput, int32_t pos, int32_t order) {
  int32_t factor = (order == TSDB_ORDER_ASC)? 1:-1;
  switch (type) {
    case TSDB_DATA_TYPE_INT: {
      int32_t v = *(int32_t*)pv;
      int32_t delta = factor*(v - pDiffInfo->prev.i64); // direct previous may be null
      int64_t delta = factor*(v - pDiffInfo->prev.i64); // direct previous may be null
      if (delta < 0 && pDiffInfo->ignoreNegative) {
        colDataSetNull_f(pOutput->nullbitmap, pos);
      } else {
        colDataAppendInt32(pOutput, pos, &delta);
        colDataAppendInt64(pOutput, pos, &delta);
      }
      pDiffInfo->prev.i64 = v;
      break;

@@ -2315,22 +2317,22 @@ static void doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SCo
    case TSDB_DATA_TYPE_BOOL:
    case TSDB_DATA_TYPE_TINYINT: {
      int8_t v = *(int8_t*)pv;
      int8_t delta = factor*(v - pDiffInfo->prev.i64); // direct previous may be null
      int64_t delta = factor*(v - pDiffInfo->prev.i64); // direct previous may be null
      if (delta < 0 && pDiffInfo->ignoreNegative) {
        colDataSetNull_f(pOutput->nullbitmap, pos);
      } else {
        colDataAppendInt8(pOutput, pos, &delta);
        colDataAppendInt64(pOutput, pos, &delta);
      }
      pDiffInfo->prev.i64 = v;
      break;
    }
    case TSDB_DATA_TYPE_SMALLINT: {
      int16_t v = *(int16_t*)pv;
      int16_t delta = factor*(v - pDiffInfo->prev.i64); // direct previous may be null
      int64_t delta = factor*(v - pDiffInfo->prev.i64); // direct previous may be null
      if (delta < 0 && pDiffInfo->ignoreNegative) {
        colDataSetNull_f(pOutput->nullbitmap, pos);
      } else {
        colDataAppendInt16(pOutput, pos, &delta);
        colDataAppendInt64(pOutput, pos, &delta);
      }
      pDiffInfo->prev.i64 = v;
      break;

@@ -2348,11 +2350,11 @@ static void doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SCo
    }
    case TSDB_DATA_TYPE_FLOAT: {
      float v = *(float*)pv;
      float delta = factor*(v - pDiffInfo->prev.d64); // direct previous may be null
      double delta = factor*(v - pDiffInfo->prev.d64); // direct previous may be null
      if ((delta < 0 && pDiffInfo->ignoreNegative) || isinf(delta) || isnan(delta)) { //check for overflow
        colDataSetNull_f(pOutput->nullbitmap, pos);
      } else {
        colDataAppendFloat(pOutput, pos, &delta);
        colDataAppendDouble(pOutput, pos, &delta);
      }
      pDiffInfo->prev.d64 = v;
      break;
@@ -1780,7 +1780,7 @@ int32_t smlBindData(void* handle, SArray* tags, SArray* colsSchema, SArray* cols

  // 1. set the parsed value from sql string
  for (int c = 0, j = 0; c < spd->numOfBound; ++c) {
    SSchema* pColSchema = &pSchema[spd->boundColumns[c] - 1];
    SSchema* pColSchema = &pSchema[spd->boundColumns[c]];

    param.schema = pColSchema;
    getSTSRowAppendInfo(pBuilder->rowType, spd, c, &param.toffset, &param.colIdx);
@@ -27,6 +27,8 @@ extern "C" {
#include "syncInt.h"
#include "taosdef.h"

#define CONFIG_FILE_LEN 1024

typedef struct SRaftCfg {
  SSyncCfg cfg;
  TdFilePtr pFile;
@@ -50,10 +50,18 @@ int32_t raftCfgPersist(SRaftCfg *pRaftCfg) {

  char *s = raftCfg2Str(pRaftCfg);
  taosLSeekFile(pRaftCfg->pFile, 0, SEEK_SET);
  int64_t ret = taosWriteFile(pRaftCfg->pFile, s, strlen(s) + 1);
  assert(ret == strlen(s) + 1);
  taosMemoryFree(s);

  char buf[CONFIG_FILE_LEN];
  memset(buf, 0, sizeof(buf));
  ASSERT(strlen(s) + 1 <= CONFIG_FILE_LEN);
  snprintf(buf, sizeof(buf), "%s", s);
  int64_t ret = taosWriteFile(pRaftCfg->pFile, buf, sizeof(buf));
  assert(ret == sizeof(buf));

  //int64_t ret = taosWriteFile(pRaftCfg->pFile, s, strlen(s) + 1);
  //assert(ret == strlen(s) + 1);

  taosMemoryFree(s);
  taosFsyncFile(pRaftCfg->pFile);
  return 0;
}

@@ -163,8 +171,16 @@ int32_t raftCfgCreateFile(SSyncCfg *pCfg, int8_t isStandBy, const char *path) {
  raftCfg.cfg = *pCfg;
  raftCfg.isStandBy = isStandBy;
  char * s = raftCfg2Str(&raftCfg);
  int64_t ret = taosWriteFile(pFile, s, strlen(s) + 1);
  assert(ret == strlen(s) + 1);

  char buf[CONFIG_FILE_LEN];
  memset(buf, 0, sizeof(buf));
  ASSERT(strlen(s) + 1 <= CONFIG_FILE_LEN);
  snprintf(buf, sizeof(buf), "%s", s);
  int64_t ret = taosWriteFile(pFile, buf, sizeof(buf));
  assert(ret == sizeof(buf));

  //int64_t ret = taosWriteFile(pFile, s, strlen(s) + 1);
  //assert(ret == strlen(s) + 1);

  taosMemoryFree(s);
  taosCloseFile(&pFile);
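Both raft-config hunks switch from writing `strlen(s) + 1` bytes to writing a zero-padded buffer of constant `CONFIG_FILE_LEN` bytes. The point of the fixed-length record: every rewrite covers the whole file region, so a shorter new config can never leave trailing bytes of an older, longer one behind. The same pattern with plain stdio (a sketch only; the source uses taosWriteFile/taosFsyncFile):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define CONFIG_FILE_LEN 1024

/* Fixed-length record write in the style of raftCfgPersist: the serialized
 * config is copied into a zero-filled buffer of constant size before the
 * write, so rewrites always overwrite the full record. */
static int persistCfg(FILE *fp, const char *serialized) {
  char buf[CONFIG_FILE_LEN];
  memset(buf, 0, sizeof(buf));
  assert(strlen(serialized) + 1 <= CONFIG_FILE_LEN);
  snprintf(buf, sizeof(buf), "%s", serialized);

  rewind(fp); /* taosLSeekFile(..., 0, SEEK_SET) in the source */
  size_t n = fwrite(buf, 1, sizeof(buf), fp);
  if (n != sizeof(buf)) return -1;
  return fflush(fp); /* the source additionally fsyncs via taosFsyncFile */
}
```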
@@ -204,7 +204,7 @@ void taosRemoveOldFiles(const char *dirname, int32_t keepDays) {
int32_t taosExpandDir(const char *dirname, char *outname, int32_t maxlen) {
  wordexp_t full_path;
  if (0 != wordexp(dirname, &full_path, 0)) {
    // printf("failed to expand path:%s since %s", dirname, strerror(errno));
    printf("failed to expand path:%s since %s", dirname, strerror(errno));
    wordfree(&full_path);
    return -1;
  }
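`taosExpandDir` relies on POSIX `wordexp()` for shell-style expansion of paths such as `~/data` or `$HOME/taos`. A minimal standalone use of the same API:

```c
#include <stdio.h>
#include <wordexp.h>

/* Expand a path the way taosExpandDir does: wordexp() performs
 * shell-like expansion ("~", "$HOME", ...) and fills an array of words. */
int expandPath(const char *dirname, char *out, size_t maxlen) {
  wordexp_t full_path;
  if (0 != wordexp(dirname, &full_path, 0)) {
    return -1; /* expansion failed */
  }
  if (full_path.we_wordc > 0) {
    snprintf(out, maxlen, "%s", full_path.we_wordv[0]);
  }
  wordfree(&full_path); /* always release the expansion result */
  return 0;
}
```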
@@ -32,6 +32,700 @@
#pragma warning(disable : 4091)
#include <DbgHelp.h>
#pragma warning(pop)

char *win_tz[139][2]={{"China Standard Time", "Asia/Shanghai"},
    {"AUS Central Standard Time", "Australia/Darwin"},
    {"AUS Eastern Standard Time", "Australia/Sydney"},
    {"Afghanistan Standard Time", "Asia/Kabul"},
    {"Alaskan Standard Time", "America/Anchorage"},
    {"Aleutian Standard Time", "America/Adak"},
    {"Altai Standard Time", "Asia/Barnaul"},
    {"Arab Standard Time", "Asia/Riyadh"},
    {"Arabian Standard Time", "Asia/Dubai"},
    {"Arabic Standard Time", "Asia/Baghdad"},
    {"Argentina Standard Time", "America/Buenos_Aires"},
    {"Astrakhan Standard Time", "Europe/Astrakhan"},
    {"Atlantic Standard Time", "America/Halifax"},
    {"Aus Central W. Standard Time", "Australia/Eucla"},
    {"Azerbaijan Standard Time", "Asia/Baku"},
    {"Azores Standard Time", "Atlantic/Azores"},
    {"Bahia Standard Time", "America/Bahia"},
    {"Bangladesh Standard Time", "Asia/Dhaka"},
    {"Belarus Standard Time", "Europe/Minsk"},
    {"Bougainville Standard Time", "Pacific/Bougainville"},
    {"Canada Central Standard Time", "America/Regina"},
    {"Cape Verde Standard Time", "Atlantic/Cape_Verde"},
    {"Caucasus Standard Time", "Asia/Yerevan"},
    {"Cen. Australia Standard Time", "Australia/Adelaide"},
    {"Central America Standard Time", "America/Guatemala"},
    {"Central Asia Standard Time", "Asia/Almaty"},
    {"Central Brazilian Standard Time", "America/Cuiaba"},
    {"Central Europe Standard Time", "Europe/Budapest"},
    {"Central European Standard Time", "Europe/Warsaw"},
    {"Central Pacific Standard Time", "Pacific/Guadalcanal"},
    {"Central Standard Time", "America/Chicago"},
    {"Central Standard Time (Mexico)", "America/Mexico_City"},
    {"Chatham Islands Standard Time", "Pacific/Chatham"},
    {"Cuba Standard Time", "America/Havana"},
    {"Dateline Standard Time", "Etc/GMT+12"},
    {"E. Africa Standard Time", "Africa/Nairobi"},
    {"E. Australia Standard Time", "Australia/Brisbane"},
    {"E. Europe Standard Time", "Europe/Chisinau"},
    {"E. South America Standard Time", "America/Sao_Paulo"},
    {"Easter Island Standard Time", "Pacific/Easter"},
    {"Eastern Standard Time", "America/New_York"},
    {"Eastern Standard Time (Mexico)", "America/Cancun"},
    {"Egypt Standard Time", "Africa/Cairo"},
    {"Ekaterinburg Standard Time", "Asia/Yekaterinburg"},
    {"FLE Standard Time", "Europe/Kiev"},
    {"Fiji Standard Time", "Pacific/Fiji"},
    {"GMT Standard Time", "Europe/London"},
    {"GTB Standard Time", "Europe/Bucharest"},
    {"Georgian Standard Time", "Asia/Tbilisi"},
    {"Greenland Standard Time", "America/Godthab"},
    {"Greenwich Standard Time", "Atlantic/Reykjavik"},
    {"Haiti Standard Time", "America/Port-au-Prince"},
    {"Hawaiian Standard Time", "Pacific/Honolulu"},
    {"India Standard Time", "Asia/Calcutta"},
    {"Iran Standard Time", "Asia/Tehran"},
    {"Israel Standard Time", "Asia/Jerusalem"},
    {"Jordan Standard Time", "Asia/Amman"},
    {"Kaliningrad Standard Time", "Europe/Kaliningrad"},
    {"Korea Standard Time", "Asia/Seoul"},
    {"Libya Standard Time", "Africa/Tripoli"},
    {"Line Islands Standard Time", "Pacific/Kiritimati"},
    {"Lord Howe Standard Time", "Australia/Lord_Howe"},
    {"Magadan Standard Time", "Asia/Magadan"},
    {"Magallanes Standard Time", "America/Punta_Arenas"},
    {"Marquesas Standard Time", "Pacific/Marquesas"},
    {"Mauritius Standard Time", "Indian/Mauritius"},
    {"Middle East Standard Time", "Asia/Beirut"},
    {"Montevideo Standard Time", "America/Montevideo"},
    {"Morocco Standard Time", "Africa/Casablanca"},
    {"Mountain Standard Time", "America/Denver"},
    {"Mountain Standard Time (Mexico)", "America/Chihuahua"},
    {"Myanmar Standard Time", "Asia/Rangoon"},
    {"N. Central Asia Standard Time", "Asia/Novosibirsk"},
    {"Namibia Standard Time", "Africa/Windhoek"},
    {"Nepal Standard Time", "Asia/Katmandu"},
    {"New Zealand Standard Time", "Pacific/Auckland"},
    {"Newfoundland Standard Time", "America/St_Johns"},
    {"Norfolk Standard Time", "Pacific/Norfolk"},
    {"North Asia East Standard Time", "Asia/Irkutsk"},
    {"North Asia Standard Time", "Asia/Krasnoyarsk"},
    {"North Korea Standard Time", "Asia/Pyongyang"},
    {"Omsk Standard Time", "Asia/Omsk"},
    {"Pacific SA Standard Time", "America/Santiago"},
    {"Pacific Standard Time", "America/Los_Angeles"},
    {"Pacific Standard Time (Mexico)", "America/Tijuana"},
    {"Pakistan Standard Time", "Asia/Karachi"},
    {"Paraguay Standard Time", "America/Asuncion"},
    {"Qyzylorda Standard Time", "Asia/Qyzylorda"},
    {"Romance Standard Time", "Europe/Paris"},
    {"Russia Time Zone 10", "Asia/Srednekolymsk"},
    {"Russia Time Zone 11", "Asia/Kamchatka"},
    {"Russia Time Zone 3", "Europe/Samara"},
    {"Russian Standard Time", "Europe/Moscow"},
    {"SA Eastern Standard Time", "America/Cayenne"},
    {"SA Pacific Standard Time", "America/Bogota"},
    {"SA Western Standard Time", "America/La_Paz"},
    {"SE Asia Standard Time", "Asia/Bangkok"},
    {"Saint Pierre Standard Time", "America/Miquelon"},
    {"Sakhalin Standard Time", "Asia/Sakhalin"},
    {"Samoa Standard Time", "Pacific/Apia"},
    {"Sao Tome Standard Time", "Africa/Sao_Tome"},
    {"Saratov Standard Time", "Europe/Saratov"},
    {"Singapore Standard Time", "Asia/Singapore"},
    {"South Africa Standard Time", "Africa/Johannesburg"},
    {"South Sudan Standard Time", "Africa/Juba"},
    {"Sri Lanka Standard Time", "Asia/Colombo"},
    {"Sudan Standard Time", "Africa/Khartoum"},
    {"Syria Standard Time", "Asia/Damascus"},
    {"Taipei Standard Time", "Asia/Taipei"},
    {"Tasmania Standard Time", "Australia/Hobart"},
    {"Tocantins Standard Time", "America/Araguaina"},
    {"Tokyo Standard Time", "Asia/Tokyo"},
    {"Tomsk Standard Time", "Asia/Tomsk"},
    {"Tonga Standard Time", "Pacific/Tongatapu"},
    {"Transbaikal Standard Time", "Asia/Chita"},
    {"Turkey Standard Time", "Europe/Istanbul"},
    {"Turks And Caicos Standard Time", "America/Grand_Turk"},
    {"US Eastern Standard Time", "America/Indianapolis"},
    {"US Mountain Standard Time", "America/Phoenix"},
    {"UTC", "Etc/UTC"},
    {"UTC+12", "Etc/GMT-12"},
    {"UTC+13", "Etc/GMT-13"},
    {"UTC-02", "Etc/GMT+2"},
    {"UTC-08", "Etc/GMT+8"},
    {"UTC-09", "Etc/GMT+9"},
    {"UTC-11", "Etc/GMT+11"},
    {"Ulaanbaatar Standard Time", "Asia/Ulaanbaatar"},
    {"Venezuela Standard Time", "America/Caracas"},
    {"Vladivostok Standard Time", "Asia/Vladivostok"},
    {"Volgograd Standard Time", "Europe/Volgograd"},
    {"W. Australia Standard Time", "Australia/Perth"},
    {"W. Central Africa Standard Time", "Africa/Lagos"},
    {"W. Europe Standard Time", "Europe/Berlin"},
    {"W. Mongolia Standard Time", "Asia/Hovd"},
    {"West Asia Standard Time", "Asia/Tashkent"},
    {"West Bank Standard Time", "Asia/Hebron"},
    {"West Pacific Standard Time", "Pacific/Port_Moresby"},
    {"Yakutsk Standard Time", "Asia/Yakutsk"},
    {"Yukon Standard Time", "America/Whitehorse"}};
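The `win_tz` table maps Windows timezone names to IANA names; it is small and consulted rarely (at timezone initialization), so a linear scan is sufficient. A hypothetical lookup helper over it:

```c
#include <string.h>

extern char *win_tz[139][2]; /* [i][0] = Windows name, [i][1] = IANA name */

/* Hypothetical helper, not part of the diff: resolve a Windows timezone
 * name to its IANA equivalent with a simple linear scan. */
const char *winToIana(const char *winName) {
  for (int i = 0; i < 139; ++i) {
    if (strcmp(win_tz[i][0], winName) == 0) {
      return win_tz[i][1]; /* e.g. "Asia/Shanghai" */
    }
  }
  return NULL; /* unknown Windows timezone name */
}
```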
char *tz_win[554][2]={{"Asia/Shanghai", "China Standard Time"},
    {"Africa/Abidjan", "Greenwich Standard Time"},
    {"Africa/Accra", "Greenwich Standard Time"},
    {"Africa/Addis_Ababa", "E. Africa Standard Time"},
    {"Africa/Algiers", "W. Central Africa Standard Time"},
    {"Africa/Asmera", "E. Africa Standard Time"},
    {"Africa/Bamako", "Greenwich Standard Time"},
    {"Africa/Bangui", "W. Central Africa Standard Time"},
    {"Africa/Banjul", "Greenwich Standard Time"},
    {"Africa/Bissau", "Greenwich Standard Time"},
    {"Africa/Blantyre", "South Africa Standard Time"},
    {"Africa/Brazzaville", "W. Central Africa Standard Time"},
    {"Africa/Bujumbura", "South Africa Standard Time"},
    {"Africa/Cairo", "Egypt Standard Time"},
    {"Africa/Casablanca", "Morocco Standard Time"},
    {"Africa/Ceuta", "Romance Standard Time"},
    {"Africa/Conakry", "Greenwich Standard Time"},
    {"Africa/Dakar", "Greenwich Standard Time"},
    {"Africa/Dar_es_Salaam", "E. Africa Standard Time"},
    {"Africa/Djibouti", "E. Africa Standard Time"},
    {"Africa/Douala", "W. Central Africa Standard Time"},
    {"Africa/El_Aaiun", "Morocco Standard Time"},
    {"Africa/Freetown", "Greenwich Standard Time"},
    {"Africa/Gaborone", "South Africa Standard Time"},
    {"Africa/Harare", "South Africa Standard Time"},
    {"Africa/Johannesburg", "South Africa Standard Time"},
    {"Africa/Juba", "South Sudan Standard Time"},
    {"Africa/Kampala", "E. Africa Standard Time"},
    {"Africa/Khartoum", "Sudan Standard Time"},
    {"Africa/Kigali", "South Africa Standard Time"},
    {"Africa/Kinshasa", "W. Central Africa Standard Time"},
    {"Africa/Lagos", "W. Central Africa Standard Time"},
    {"Africa/Libreville", "W. Central Africa Standard Time"},
    {"Africa/Lome", "Greenwich Standard Time"},
    {"Africa/Luanda", "W. Central Africa Standard Time"},
    {"Africa/Lubumbashi", "South Africa Standard Time"},
    {"Africa/Lusaka", "South Africa Standard Time"},
    {"Africa/Malabo", "W. Central Africa Standard Time"},
    {"Africa/Maputo", "South Africa Standard Time"},
    {"Africa/Maseru", "South Africa Standard Time"},
    {"Africa/Mbabane", "South Africa Standard Time"},
    {"Africa/Mogadishu", "E. Africa Standard Time"},
    {"Africa/Monrovia", "Greenwich Standard Time"},
    {"Africa/Nairobi", "E. Africa Standard Time"},
    {"Africa/Ndjamena", "W. Central Africa Standard Time"},
    {"Africa/Niamey", "W. Central Africa Standard Time"},
    {"Africa/Nouakchott", "Greenwich Standard Time"},
    {"Africa/Ouagadougou", "Greenwich Standard Time"},
    {"Africa/Porto-Novo", "W. Central Africa Standard Time"},
    {"Africa/Sao_Tome", "Sao Tome Standard Time"},
    {"Africa/Timbuktu", "Greenwich Standard Time"},
    {"Africa/Tripoli", "Libya Standard Time"},
    {"Africa/Tunis", "W. Central Africa Standard Time"},
    {"Africa/Windhoek", "Namibia Standard Time"},
    {"America/Adak", "Aleutian Standard Time"},
    {"America/Anchorage", "Alaskan Standard Time"},
    {"America/Anguilla", "SA Western Standard Time"},
    {"America/Antigua", "SA Western Standard Time"},
    {"America/Araguaina", "Tocantins Standard Time"},
    {"America/Argentina/La_Rioja", "Argentina Standard Time"},
    {"America/Argentina/Rio_Gallegos", "Argentina Standard Time"},
    {"America/Argentina/Salta", "Argentina Standard Time"},
    {"America/Argentina/San_Juan", "Argentina Standard Time"},
    {"America/Argentina/San_Luis", "Argentina Standard Time"},
    {"America/Argentina/Tucuman", "Argentina Standard Time"},
    {"America/Argentina/Ushuaia", "Argentina Standard Time"},
    {"America/Aruba", "SA Western Standard Time"},
    {"America/Asuncion", "Paraguay Standard Time"},
    {"America/Atka", "Aleutian Standard Time"},
    {"America/Bahia", "Bahia Standard Time"},
    {"America/Bahia_Banderas", "Central Standard Time (Mexico)"},
    {"America/Barbados", "SA Western Standard Time"},
    {"America/Belem", "SA Eastern Standard Time"},
    {"America/Belize", "Central America Standard Time"},
    {"America/Blanc-Sablon", "SA Western Standard Time"},
    {"America/Boa_Vista", "SA Western Standard Time"},
    {"America/Bogota", "SA Pacific Standard Time"},
    {"America/Boise", "Mountain Standard Time"},
    {"America/Buenos_Aires", "Argentina Standard Time"},
    {"America/Cambridge_Bay", "Mountain Standard Time"},
    {"America/Campo_Grande", "Central Brazilian Standard Time"},
    {"America/Cancun", "Eastern Standard Time (Mexico)"},
    {"America/Caracas", "Venezuela Standard Time"},
    {"America/Catamarca", "Argentina Standard Time"},
    {"America/Cayenne", "SA Eastern Standard Time"},
    {"America/Cayman", "SA Pacific Standard Time"},
    {"America/Chicago", "Central Standard Time"},
    {"America/Chihuahua", "Mountain Standard Time (Mexico)"},
    {"America/Coral_Harbour", "SA Pacific Standard Time"},
    {"America/Cordoba", "Argentina Standard Time"},
    {"America/Costa_Rica", "Central America Standard Time"},
    {"America/Creston", "US Mountain Standard Time"},
    {"America/Cuiaba", "Central Brazilian Standard Time"},
    {"America/Curacao", "SA Western Standard Time"},
    {"America/Danmarkshavn", "Greenwich Standard Time"},
    {"America/Dawson", "Yukon Standard Time"},
    {"America/Dawson_Creek", "US Mountain Standard Time"},
    {"America/Denver", "Mountain Standard Time"},
    {"America/Detroit", "Eastern Standard Time"},
    {"America/Dominica", "SA Western Standard Time"},
    {"America/Edmonton", "Mountain Standard Time"},
    {"America/Eirunepe", "SA Pacific Standard Time"},
    {"America/El_Salvador", "Central America Standard Time"},
    {"America/Ensenada", "Pacific Standard Time (Mexico)"},
    {"America/Fort_Nelson", "US Mountain Standard Time"},
    {"America/Fortaleza", "SA Eastern Standard Time"},
    {"America/Glace_Bay", "Atlantic Standard Time"},
    {"America/Godthab", "Greenland Standard Time"},
    {"America/Goose_Bay", "Atlantic Standard Time"},
    {"America/Grand_Turk", "Turks And Caicos Standard Time"},
    {"America/Grenada", "SA Western Standard Time"},
    {"America/Guadeloupe", "SA Western Standard Time"},
    {"America/Guatemala", "Central America Standard Time"},
    {"America/Guayaquil", "SA Pacific Standard Time"},
    {"America/Guyana", "SA Western Standard Time"},
    {"America/Halifax", "Atlantic Standard Time"},
    {"America/Havana", "Cuba Standard Time"},
    {"America/Hermosillo", "US Mountain Standard Time"},
    {"America/Indiana/Knox", "Central Standard Time"},
    {"America/Indiana/Marengo", "US Eastern Standard Time"},
    {"America/Indiana/Petersburg", "Eastern Standard Time"},
    {"America/Indiana/Tell_City", "Central Standard Time"},
    {"America/Indiana/Vevay", "US Eastern Standard Time"},
    {"America/Indiana/Vincennes", "Eastern Standard Time"},
    {"America/Indiana/Winamac", "Eastern Standard Time"},
    {"America/Indianapolis", "US Eastern Standard Time"},
    {"America/Inuvik", "Mountain Standard Time"},
    {"America/Iqaluit", "Eastern Standard Time"},
    {"America/Jamaica", "SA Pacific Standard Time"},
    {"America/Jujuy", "Argentina Standard Time"},
    {"America/Juneau", "Alaskan Standard Time"},
    {"America/Kentucky/Monticello", "Eastern Standard Time"},
    {"America/Knox_IN", "Central Standard Time"},
    {"America/Kralendijk", "SA Western Standard Time"},
    {"America/La_Paz", "SA Western Standard Time"},
    {"America/Lima", "SA Pacific Standard Time"},
    {"America/Los_Angeles", "Pacific Standard Time"},
    {"America/Louisville", "Eastern Standard Time"},
    {"America/Lower_Princes", "SA Western Standard Time"},
    {"America/Maceio", "SA Eastern Standard Time"},
    {"America/Managua", "Central America Standard Time"},
    {"America/Manaus", "SA Western Standard Time"},
    {"America/Marigot", "SA Western Standard Time"},
    {"America/Martinique", "SA Western Standard Time"},
    {"America/Matamoros", "Central Standard Time"},
    {"America/Mazatlan", "Mountain Standard Time (Mexico)"},
    {"America/Mendoza", "Argentina Standard Time"},
    {"America/Menominee", "Central Standard Time"},
    {"America/Merida", "Central Standard Time (Mexico)"},
    {"America/Metlakatla", "Alaskan Standard Time"},
    {"America/Mexico_City", "Central Standard Time (Mexico)"},
    {"America/Miquelon", "Saint Pierre Standard Time"},
    {"America/Moncton", "Atlantic Standard Time"},
    {"America/Monterrey", "Central Standard Time (Mexico)"},
    {"America/Montevideo", "Montevideo Standard Time"},
    {"America/Montreal", "Eastern Standard Time"},
    {"America/Montserrat", "SA Western Standard Time"},
    {"America/Nassau", "Eastern Standard Time"},
    {"America/New_York", "Eastern Standard Time"},
    {"America/Nipigon", "Eastern Standard Time"},
    {"America/Nome", "Alaskan Standard Time"},
    {"America/Noronha", "UTC-02"},
    {"America/North_Dakota/Beulah", "Central Standard Time"},
    {"America/North_Dakota/Center", "Central Standard Time"},
    {"America/North_Dakota/New_Salem", "Central Standard Time"},
    {"America/Ojinaga", "Mountain Standard Time"},
    {"America/Panama", "SA Pacific Standard Time"},
    {"America/Pangnirtung", "Eastern Standard Time"},
    {"America/Paramaribo", "SA Eastern Standard Time"},
    {"America/Phoenix", "US Mountain Standard Time"},
    {"America/Port-au-Prince", "Haiti Standard Time"},
    {"America/Port_of_Spain", "SA Western Standard Time"},
    {"America/Porto_Acre", "SA Pacific Standard Time"},
    {"America/Porto_Velho", "SA Western Standard Time"},
    {"America/Puerto_Rico", "SA Western Standard Time"},
    {"America/Punta_Arenas", "Magallanes Standard Time"},
    {"America/Rainy_River", "Central Standard Time"},
    {"America/Rankin_Inlet", "Central Standard Time"},
    {"America/Recife", "SA Eastern Standard Time"},
    {"America/Regina", "Canada Central Standard Time"},
    {"America/Resolute", "Central Standard Time"},
    {"America/Rio_Branco", "SA Pacific Standard Time"},
    {"America/Santa_Isabel", "Pacific Standard Time (Mexico)"},
    {"America/Santarem", "SA Eastern Standard Time"},
    {"America/Santiago", "Pacific SA Standard Time"},
    {"America/Santo_Domingo", "SA Western Standard Time"},
    {"America/Sao_Paulo", "E. South America Standard Time"},
    {"America/Scoresbysund", "Azores Standard Time"},
    {"America/Shiprock", "Mountain Standard Time"},
    {"America/Sitka", "Alaskan Standard Time"},
    {"America/St_Barthelemy", "SA Western Standard Time"},
    {"America/St_Johns", "Newfoundland Standard Time"},
    {"America/St_Kitts", "SA Western Standard Time"},
    {"America/St_Lucia", "SA Western Standard Time"},
    {"America/St_Thomas", "SA Western Standard Time"},
    {"America/St_Vincent", "SA Western Standard Time"},
    {"America/Swift_Current", "Canada Central Standard Time"},
    {"America/Tegucigalpa", "Central America Standard Time"},
    {"America/Thule", "Atlantic Standard Time"},
    {"America/Thunder_Bay", "Eastern Standard Time"},
    {"America/Tijuana", "Pacific Standard Time (Mexico)"},
    {"America/Toronto", "Eastern Standard Time"},
    {"America/Tortola", "SA Western Standard Time"},
    {"America/Vancouver", "Pacific Standard Time"},
    {"America/Virgin", "SA Western Standard Time"},
    {"America/Whitehorse", "Yukon Standard Time"},
    {"America/Winnipeg", "Central Standard Time"},
    {"America/Yakutat", "Alaskan Standard Time"},
    {"America/Yellowknife", "Mountain Standard Time"},
    {"Antarctica/Casey", "Central Pacific Standard Time"},
    {"Antarctica/Davis", "SE Asia Standard Time"},
    {"Antarctica/DumontDUrville", "West Pacific Standard Time"},
    {"Antarctica/Macquarie", "Tasmania Standard Time"},
    {"Antarctica/Mawson", "West Asia Standard Time"},
    {"Antarctica/McMurdo", "New Zealand Standard Time"},
    {"Antarctica/Palmer", "SA Eastern Standard Time"},
    {"Antarctica/Rothera", "SA Eastern Standard Time"},
    {"Antarctica/South_Pole", "New Zealand Standard Time"},
    {"Antarctica/Syowa", "E. Africa Standard Time"},
    {"Antarctica/Vostok", "Central Asia Standard Time"},
    {"Arctic/Longyearbyen", "W. Europe Standard Time"},
    {"Asia/Aden", "Arab Standard Time"},
    {"Asia/Almaty", "Central Asia Standard Time"},
    {"Asia/Amman", "Jordan Standard Time"},
    {"Asia/Anadyr", "Russia Time Zone 11"},
    {"Asia/Aqtau", "West Asia Standard Time"},
    {"Asia/Aqtobe", "West Asia Standard Time"},
    {"Asia/Ashgabat", "West Asia Standard Time"},
    {"Asia/Ashkhabad", "West Asia Standard Time"},
    {"Asia/Atyrau", "West Asia Standard Time"},
    {"Asia/Baghdad", "Arabic Standard Time"},
    {"Asia/Bahrain", "Arab Standard Time"},
    {"Asia/Baku", "Azerbaijan Standard Time"},
    {"Asia/Bangkok", "SE Asia Standard Time"},
    {"Asia/Barnaul", "Altai Standard Time"},
    {"Asia/Beirut", "Middle East Standard Time"},
    {"Asia/Bishkek", "Central Asia Standard Time"},
    {"Asia/Brunei", "Singapore Standard Time"},
    {"Asia/Calcutta", "India Standard Time"},
    {"Asia/Chita", "Transbaikal Standard Time"},
    {"Asia/Choibalsan", "Ulaanbaatar Standard Time"},
    {"Asia/Chongqing", "China Standard Time"},
    {"Asia/Chungking", "China Standard Time"},
    {"Asia/Colombo", "Sri Lanka Standard Time"},
    {"Asia/Dacca", "Bangladesh Standard Time"},
    {"Asia/Damascus", "Syria Standard Time"},
    {"Asia/Dhaka", "Bangladesh Standard Time"},
    {"Asia/Dili", "Tokyo Standard Time"},
    {"Asia/Dubai", "Arabian Standard Time"},
    {"Asia/Dushanbe", "West Asia Standard Time"},
    {"Asia/Famagusta", "GTB Standard Time"},
    {"Asia/Gaza", "West Bank Standard Time"},
    {"Asia/Harbin", "China Standard Time"},
    {"Asia/Hebron", "West Bank Standard Time"},
    {"Asia/Hong_Kong", "China Standard Time"},
    {"Asia/Hovd", "W. Mongolia Standard Time"},
    {"Asia/Irkutsk", "North Asia East Standard Time"},
    {"Asia/Jakarta", "SE Asia Standard Time"},
    {"Asia/Jayapura", "Tokyo Standard Time"},
    {"Asia/Jerusalem", "Israel Standard Time"},
    {"Asia/Kabul", "Afghanistan Standard Time"},
    {"Asia/Kamchatka", "Russia Time Zone 11"},
    {"Asia/Karachi", "Pakistan Standard Time"},
    {"Asia/Kashgar", "Central Asia Standard Time"},
    {"Asia/Katmandu", "Nepal Standard Time"},
    {"Asia/Khandyga", "Yakutsk Standard Time"},
    {"Asia/Krasnoyarsk", "North Asia Standard Time"},
    {"Asia/Kuala_Lumpur", "Singapore Standard Time"},
    {"Asia/Kuching", "Singapore Standard Time"},
    {"Asia/Kuwait", "Arab Standard Time"},
    {"Asia/Macao", "China Standard Time"},
    {"Asia/Macau", "China Standard Time"},
    {"Asia/Magadan", "Magadan Standard Time"},
    {"Asia/Makassar", "Singapore Standard Time"},
    {"Asia/Manila", "Singapore Standard Time"},
    {"Asia/Muscat", "Arabian Standard Time"},
    {"Asia/Nicosia", "GTB Standard Time"},
    {"Asia/Novokuznetsk", "North Asia Standard Time"},
    {"Asia/Novosibirsk", "N. Central Asia Standard Time"},
    {"Asia/Omsk", "Omsk Standard Time"},
    {"Asia/Oral", "West Asia Standard Time"},
    {"Asia/Phnom_Penh", "SE Asia Standard Time"},
    {"Asia/Pontianak", "SE Asia Standard Time"},
    {"Asia/Pyongyang", "North Korea Standard Time"},
    {"Asia/Qatar", "Arab Standard Time"},
    {"Asia/Qostanay", "Central Asia Standard Time"},
    {"Asia/Qyzylorda", "Qyzylorda Standard Time"},
    {"Asia/Rangoon", "Myanmar Standard Time"},
    {"Asia/Riyadh", "Arab Standard Time"},
    {"Asia/Saigon", "SE Asia Standard Time"},
    {"Asia/Sakhalin", "Sakhalin Standard Time"},
    {"Asia/Samarkand", "West Asia Standard Time"},
    {"Asia/Seoul", "Korea Standard Time"},
    {"Asia/Singapore", "Singapore Standard Time"},
    {"Asia/Srednekolymsk", "Russia Time Zone 10"},
    {"Asia/Taipei", "Taipei Standard Time"},
    {"Asia/Tashkent", "West Asia Standard Time"},
    {"Asia/Tbilisi", "Georgian Standard Time"},
    {"Asia/Tehran", "Iran Standard Time"},
    {"Asia/Tel_Aviv", "Israel Standard Time"},
    {"Asia/Thimbu", "Bangladesh Standard Time"},
    {"Asia/Thimphu", "Bangladesh Standard Time"},
    {"Asia/Tokyo", "Tokyo Standard Time"},
    {"Asia/Tomsk", "Tomsk Standard Time"},
    {"Asia/Ujung_Pandang", "Singapore Standard Time"},
    {"Asia/Ulaanbaatar", "Ulaanbaatar Standard Time"},
    {"Asia/Ulan_Bator", "Ulaanbaatar Standard Time"},
    {"Asia/Urumqi", "Central Asia Standard Time"},
    {"Asia/Ust-Nera", "Vladivostok Standard Time"},
    {"Asia/Vientiane", "SE Asia Standard Time"},
    {"Asia/Vladivostok", "Vladivostok Standard Time"},
    {"Asia/Yakutsk", "Yakutsk Standard Time"},
    {"Asia/Yekaterinburg", "Ekaterinburg Standard Time"},
    {"Asia/Yerevan", "Caucasus Standard Time"},
    {"Atlantic/Azores", "Azores Standard Time"},
    {"Atlantic/Bermuda", "Atlantic Standard Time"},
    {"Atlantic/Canary", "GMT Standard Time"},
    {"Atlantic/Cape_Verde", "Cape Verde Standard Time"},
    {"Atlantic/Faeroe", "GMT Standard Time"},
    {"Atlantic/Jan_Mayen", "W. Europe Standard Time"},
    {"Atlantic/Madeira", "GMT Standard Time"},
    {"Atlantic/Reykjavik", "Greenwich Standard Time"},
    {"Atlantic/South_Georgia", "UTC-02"},
    {"Atlantic/St_Helena", "Greenwich Standard Time"},
    {"Atlantic/Stanley", "SA Eastern Standard Time"},
    {"Australia/ACT", "AUS Eastern Standard Time"},
    {"Australia/Adelaide", "Cen. Australia Standard Time"},
    {"Australia/Brisbane", "E. Australia Standard Time"},
    {"Australia/Broken_Hill", "Cen. Australia Standard Time"},
    {"Australia/Canberra", "AUS Eastern Standard Time"},
    {"Australia/Currie", "Tasmania Standard Time"},
    {"Australia/Darwin", "AUS Central Standard Time"},
    {"Australia/Eucla", "Aus Central W. Standard Time"},
    {"Australia/Hobart", "Tasmania Standard Time"},
    {"Australia/LHI", "Lord Howe Standard Time"},
    {"Australia/Lindeman", "E. Australia Standard Time"},
    {"Australia/Lord_Howe", "Lord Howe Standard Time"},
    {"Australia/Melbourne", "AUS Eastern Standard Time"},
    {"Australia/NSW", "AUS Eastern Standard Time"},
    {"Australia/North", "AUS Central Standard Time"},
    {"Australia/Perth", "W. Australia Standard Time"},
    {"Australia/Queensland", "E. Australia Standard Time"},
    {"Australia/South", "Cen. Australia Standard Time"},
    {"Australia/Sydney", "AUS Eastern Standard Time"},
    {"Australia/Tasmania", "Tasmania Standard Time"},
    {"Australia/Victoria", "AUS Eastern Standard Time"},
    {"Australia/West", "W. Australia Standard Time"},
    {"Australia/Yancowinna", "Cen. Australia Standard Time"},
    {"Brazil/Acre", "SA Pacific Standard Time"},
    {"Brazil/DeNoronha", "UTC-02"},
    {"Brazil/East", "E. South America Standard Time"},
    {"Brazil/West", "SA Western Standard Time"},
    {"CST6CDT", "Central Standard Time"},
    {"Canada/Atlantic", "Atlantic Standard Time"},
    {"Canada/Central", "Central Standard Time"},
    {"Canada/Eastern", "Eastern Standard Time"},
    {"Canada/Mountain", "Mountain Standard Time"},
    {"Canada/Newfoundland", "Newfoundland Standard Time"},
    {"Canada/Pacific", "Pacific Standard Time"},
    {"Canada/Saskatchewan", "Canada Central Standard Time"},
    {"Canada/Yukon", "Yukon Standard Time"},
    {"Chile/Continental", "Pacific SA Standard Time"},
    {"Chile/EasterIsland", "Easter Island Standard Time"},
    {"Cuba", "Cuba Standard Time"},
    {"EST5EDT", "Eastern Standard Time"},
    {"Egypt", "Egypt Standard Time"},
    {"Eire", "GMT Standard Time"},
    {"Etc/GMT", "UTC"},
    {"Etc/GMT+1", "Cape Verde Standard Time"},
    {"Etc/GMT+10", "Hawaiian Standard Time"},
    {"Etc/GMT+11", "UTC-11"},
    {"Etc/GMT+12", "Dateline Standard Time"},
    {"Etc/GMT+2", "UTC-02"},
    {"Etc/GMT+3", "SA Eastern Standard Time"},
    {"Etc/GMT+4", "SA Western Standard Time"},
    {"Etc/GMT+5", "SA Pacific Standard Time"},
    {"Etc/GMT+6", "Central America Standard Time"},
    {"Etc/GMT+7", "US Mountain Standard Time"},
    {"Etc/GMT+8", "UTC-08"},
    {"Etc/GMT+9", "UTC-09"},
    {"Etc/GMT-1", "W. Central Africa Standard Time"},
    {"Etc/GMT-10", "West Pacific Standard Time"},
    {"Etc/GMT-11", "Central Pacific Standard Time"},
    {"Etc/GMT-12", "UTC+12"},
    {"Etc/GMT-13", "UTC+13"},
    {"Etc/GMT-14", "Line Islands Standard Time"},
    {"Etc/GMT-2", "South Africa Standard Time"},
    {"Etc/GMT-3", "E. Africa Standard Time"},
    {"Etc/GMT-4", "Arabian Standard Time"},
    {"Etc/GMT-5", "West Asia Standard Time"},
    {"Etc/GMT-6", "Central Asia Standard Time"},
    {"Etc/GMT-7", "SE Asia Standard Time"},
    {"Etc/GMT-8", "Singapore Standard Time"},
    {"Etc/GMT-9", "Tokyo Standard Time"},
    {"Etc/UCT", "UTC"},
    {"Etc/UTC", "UTC"},
    {"Europe/Amsterdam", "W. Europe Standard Time"},
    {"Europe/Andorra", "W. Europe Standard Time"},
    {"Europe/Astrakhan", "Astrakhan Standard Time"},
    {"Europe/Athens", "GTB Standard Time"},
    {"Europe/Belfast", "GMT Standard Time"},
    {"Europe/Belgrade", "Central Europe Standard Time"},
    {"Europe/Berlin", "W. Europe Standard Time"},
    {"Europe/Bratislava", "Central Europe Standard Time"},
    {"Europe/Brussels", "Romance Standard Time"},
    {"Europe/Bucharest", "GTB Standard Time"},
    {"Europe/Budapest", "Central Europe Standard Time"},
    {"Europe/Busingen", "W. Europe Standard Time"},
    {"Europe/Chisinau", "E. Europe Standard Time"},
    {"Europe/Copenhagen", "Romance Standard Time"},
    {"Europe/Dublin", "GMT Standard Time"},
    {"Europe/Gibraltar", "W. Europe Standard Time"},
    {"Europe/Guernsey", "GMT Standard Time"},
    {"Europe/Helsinki", "FLE Standard Time"},
    {"Europe/Isle_of_Man", "GMT Standard Time"},
    {"Europe/Istanbul", "Turkey Standard Time"},
    {"Europe/Jersey", "GMT Standard Time"},
    {"Europe/Kaliningrad", "Kaliningrad Standard Time"},
    {"Europe/Kiev", "FLE Standard Time"},
    {"Europe/Kirov", "Russian Standard Time"},
    {"Europe/Lisbon", "GMT Standard Time"},
    {"Europe/Ljubljana", "Central Europe Standard Time"},
    {"Europe/London", "GMT Standard Time"},
    {"Europe/Luxembourg", "W. Europe Standard Time"},
    {"Europe/Madrid", "Romance Standard Time"},
    {"Europe/Malta", "W. Europe Standard Time"},
    {"Europe/Mariehamn", "FLE Standard Time"},
    {"Europe/Minsk", "Belarus Standard Time"},
    {"Europe/Monaco", "W. Europe Standard Time"},
    {"Europe/Moscow", "Russian Standard Time"},
    {"Europe/Oslo", "W. Europe Standard Time"},
    {"Europe/Paris", "Romance Standard Time"},
    {"Europe/Podgorica", "Central Europe Standard Time"},
    {"Europe/Prague", "Central Europe Standard Time"},
    {"Europe/Riga", "FLE Standard Time"},
    {"Europe/Rome", "W. Europe Standard Time"},
    {"Europe/Samara", "Russia Time Zone 3"},
    {"Europe/San_Marino", "W. Europe Standard Time"},
    {"Europe/Sarajevo", "Central European Standard Time"},
    {"Europe/Saratov", "Saratov Standard Time"},
|
||||
{"Europe/Simferopol", "Russian Standard Time"},
|
||||
{"Europe/Skopje", "Central European Standard Time"},
|
||||
{"Europe/Sofia", "FLE Standard Time"},
|
||||
{"Europe/Stockholm", "W. Europe Standard Time"},
|
||||
{"Europe/Tallinn", "FLE Standard Time"},
|
||||
{"Europe/Tirane", "Central Europe Standard Time"},
|
||||
{"Europe/Tiraspol", "E. Europe Standard Time"},
|
||||
{"Europe/Ulyanovsk", "Astrakhan Standard Time"},
|
||||
{"Europe/Uzhgorod", "FLE Standard Time"},
|
||||
{"Europe/Vaduz", "W. Europe Standard Time"},
|
||||
{"Europe/Vatican", "W. Europe Standard Time"},
|
||||
{"Europe/Vienna", "W. Europe Standard Time"},
|
||||
{"Europe/Vilnius", "FLE Standard Time"},
|
||||
{"Europe/Volgograd", "Volgograd Standard Time"},
|
||||
{"Europe/Warsaw", "Central European Standard Time"},
|
||||
{"Europe/Zagreb", "Central European Standard Time"},
|
||||
{"Europe/Zaporozhye", "FLE Standard Time"},
|
||||
{"Europe/Zurich", "W. Europe Standard Time"},
|
||||
{"GB", "GMT Standard Time"},
|
||||
{"GB-Eire", "GMT Standard Time"},
|
||||
{"GMT+0", "UTC"},
|
||||
{"GMT-0", "UTC"},
|
||||
{"GMT0", "UTC"},
|
||||
{"Greenwich", "UTC"},
|
||||
{"Hongkong", "China Standard Time"},
|
||||
{"Iceland", "Greenwich Standard Time"},
|
||||
{"Indian/Antananarivo", "E. Africa Standard Time"},
|
||||
{"Indian/Chagos", "Central Asia Standard Time"},
|
||||
{"Indian/Christmas", "SE Asia Standard Time"},
|
||||
{"Indian/Cocos", "Myanmar Standard Time"},
|
||||
{"Indian/Comoro", "E. Africa Standard Time"},
|
||||
{"Indian/Kerguelen", "West Asia Standard Time"},
|
||||
{"Indian/Mahe", "Mauritius Standard Time"},
|
||||
{"Indian/Maldives", "West Asia Standard Time"},
|
||||
{"Indian/Mauritius", "Mauritius Standard Time"},
|
||||
{"Indian/Mayotte", "E. Africa Standard Time"},
|
||||
{"Indian/Reunion", "Mauritius Standard Time"},
|
||||
{"Iran", "Iran Standard Time"},
|
||||
{"Israel", "Israel Standard Time"},
|
||||
{"Jamaica", "SA Pacific Standard Time"},
|
||||
{"Japan", "Tokyo Standard Time"},
|
||||
{"Kwajalein", "UTC+12"},
|
||||
{"Libya", "Libya Standard Time"},
|
||||
{"MST7MDT", "Mountain Standard Time"},
|
||||
{"Mexico/BajaNorte", "Pacific Standard Time (Mexico)"},
|
||||
{"Mexico/BajaSur", "Mountain Standard Time (Mexico)"},
|
||||
{"Mexico/General", "Central Standard Time (Mexico)"},
|
||||
{"NZ", "New Zealand Standard Time"},
|
||||
{"NZ-CHAT", "Chatham Islands Standard Time"},
|
||||
{"Navajo", "Mountain Standard Time"},
|
||||
{"PRC", "China Standard Time"},
|
||||
{"PST8PDT", "Pacific Standard Time"},
|
||||
{"Pacific/Apia", "Samoa Standard Time"},
|
||||
{"Pacific/Auckland", "New Zealand Standard Time"},
|
||||
{"Pacific/Bougainville", "Bougainville Standard Time"},
|
||||
{"Pacific/Chatham", "Chatham Islands Standard Time"},
|
||||
{"Pacific/Easter", "Easter Island Standard Time"},
|
||||
{"Pacific/Efate", "Central Pacific Standard Time"},
|
||||
{"Pacific/Enderbury", "UTC+13"},
|
||||
{"Pacific/Fakaofo", "UTC+13"},
|
||||
{"Pacific/Fiji", "Fiji Standard Time"},
|
||||
{"Pacific/Funafuti", "UTC+12"},
|
||||
{"Pacific/Galapagos", "Central America Standard Time"},
|
||||
{"Pacific/Gambier", "UTC-09"},
|
||||
{"Pacific/Guadalcanal", "Central Pacific Standard Time"},
|
||||
{"Pacific/Guam", "West Pacific Standard Time"},
|
||||
{"Pacific/Honolulu", "Hawaiian Standard Time"},
|
||||
{"Pacific/Johnston", "Hawaiian Standard Time"},
|
||||
{"Pacific/Kiritimati", "Line Islands Standard Time"},
|
||||
{"Pacific/Kosrae", "Central Pacific Standard Time"},
|
||||
{"Pacific/Kwajalein", "UTC+12"},
|
||||
{"Pacific/Majuro", "UTC+12"},
|
||||
{"Pacific/Marquesas", "Marquesas Standard Time"},
|
||||
{"Pacific/Midway", "UTC-11"},
|
||||
{"Pacific/Nauru", "UTC+12"},
|
||||
{"Pacific/Niue", "UTC-11"},
|
||||
{"Pacific/Norfolk", "Norfolk Standard Time"},
|
||||
{"Pacific/Noumea", "Central Pacific Standard Time"},
|
||||
{"Pacific/Pago_Pago", "UTC-11"},
|
||||
{"Pacific/Palau", "Tokyo Standard Time"},
|
||||
{"Pacific/Pitcairn", "UTC-08"},
|
||||
{"Pacific/Ponape", "Central Pacific Standard Time"},
|
||||
{"Pacific/Port_Moresby", "West Pacific Standard Time"},
|
||||
{"Pacific/Rarotonga", "Hawaiian Standard Time"},
|
||||
{"Pacific/Saipan", "West Pacific Standard Time"},
|
||||
{"Pacific/Samoa", "UTC-11"},
|
||||
{"Pacific/Tahiti", "Hawaiian Standard Time"},
|
||||
{"Pacific/Tarawa", "UTC+12"},
|
||||
{"Pacific/Tongatapu", "Tonga Standard Time"},
|
||||
{"Pacific/Truk", "West Pacific Standard Time"},
|
||||
{"Pacific/Wake", "UTC+12"},
|
||||
{"Pacific/Wallis", "UTC+12"},
|
||||
{"Poland", "Central European Standard Time"},
|
||||
{"Portugal", "GMT Standard Time"},
|
||||
{"ROC", "Taipei Standard Time"},
|
||||
{"ROK", "Korea Standard Time"},
|
||||
{"Singapore", "Singapore Standard Time"},
|
||||
{"Turkey", "Turkey Standard Time"},
|
||||
{"UCT", "UTC"},
|
||||
{"US/Alaska", "Alaskan Standard Time"},
|
||||
{"US/Aleutian", "Aleutian Standard Time"},
|
||||
{"US/Arizona", "US Mountain Standard Time"},
|
||||
{"US/Central", "Central Standard Time"},
|
||||
{"US/Eastern", "Eastern Standard Time"},
|
||||
{"US/Hawaii", "Hawaiian Standard Time"},
|
||||
{"US/Indiana-Starke", "Central Standard Time"},
|
||||
{"US/Michigan", "Eastern Standard Time"},
|
||||
{"US/Mountain", "Mountain Standard Time"},
|
||||
{"US/Pacific", "Pacific Standard Time"},
|
||||
{"US/Samoa", "UTC-11"},
|
||||
{"UTC", "UTC"},
|
||||
{"Universal", "UTC"},
|
||||
{"W-SU", "Russian Standard Time"},
|
||||
{"Zulu", "UTC"}};
|
||||
#elif defined(_TD_DARWIN_64)
|
||||
#include <errno.h>
|
||||
#include <libproc.h>
|
||||
|
@ -61,19 +755,33 @@ void taosSetSystemTimezone(const char *inTimezoneStr, char *outTimezoneStr, int8
#ifdef WINDOWS
char winStr[TD_LOCALE_LEN * 2];
sprintf(winStr, "TZ=%s", buf);
putenv(winStr);
tzset();
/*
* get CURRENT time zone.
* system current time zone is affected by daylight saving time(DST)
*
* e.g., the local time zone of London in DST is GMT+01:00,
* otherwise is GMT+00:00
*/
memset(winStr, 0, sizeof(winStr));
for (size_t i = 0; i < 554; i++) {
if (strcmp(tz_win[i][0],buf) == 0) {
char keyPath[100];
char keyValue[100];
DWORD keyValueSize = sizeof(keyValue);
sprintf(keyPath, "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Time Zones\\%s",tz_win[i][1]);
RegGetValue(HKEY_LOCAL_MACHINE, keyPath, "Display", RRF_RT_ANY, NULL, (PVOID)&keyValue, &keyValueSize);
if (keyValueSize > 0) {
keyValue[4] = (keyValue[4] == '+' ? '-' : '+');
keyValue[10] = 0;
sprintf(winStr, "TZ=%s:00", &(keyValue[1]));
}
break;
}
}
char *p = strchr(inTimezoneStr, '+');
if (p == NULL) p = strchr(inTimezoneStr, '-');
if (p == NULL) {
sprintf(winStr, "TZ=UTC+00:00:00");
} else {
sprintf(winStr, "TZ=UTC%c%c%c:%c%c:00", (p[0] == '+' ? '-' : '+'), p[1], p[2], p[3], p[4]);
}
_putenv(winStr);
_tzset();
#ifdef _MSC_VER
#if _MSC_VER >= 1900
// see https://docs.microsoft.com/en-us/cpp/c-runtime-library/daylight-dstbias-timezone-and-tzname?view=vs-2019
int64_t timezone = _timezone;
int32_t daylight = _daylight;
char **tzname = _tzname;
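The sign flip above (`keyValue[4] == '+' ? '-' : '+'`) is the subtle part: POSIX `TZ` strings encode offsets west-positive, the opposite of the ISO-style `UTC+08:00` text read from the registry's Display value. A minimal Python sketch of the same conversion (the display strings here are illustrative, not read from the registry):

```python
# Minimal sketch: convert an ISO-style display offset ("UTC+08:00") to a
# POSIX TZ string ("TZ=UTC-08:00:00"). POSIX offsets are west-positive,
# so the sign read from the display string must be inverted.
def display_to_posix_tz(display: str) -> str:
    # display is assumed to look like "UTC+08:00" or "UTC-05:30"
    sign = '-' if '+' in display else '+'
    hh_mm = display.split('+')[-1].split('-')[-1]  # e.g. "08:00"
    return "TZ=UTC%s%s:00" % (sign, hh_mm)

print(display_to_posix_tz("UTC+08:00"))  # TZ=UTC-08:00:00
print(display_to_posix_tz("UTC-05:30"))  # TZ=UTC+05:30:00
```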
@ -83,11 +791,6 @@ void taosSetSystemTimezone(const char *inTimezoneStr, char *outTimezoneStr, int8
int32_t tz = (int32_t)((-timezone * MILLISECOND_PER_SECOND) / MILLISECOND_PER_HOUR);
*tsTimezone = tz;
tz += daylight;
/*
* format:
* (CST, +0800)
* (BST, +0100)
*/
sprintf(outTimezoneStr, "%s (%s, %s%02d00)", buf, tzname[daylight], tz >= 0 ? "+" : "-", abs(tz));
*outDaylight = daylight;
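For reference, a tiny Python sketch of the `(CST, +0800)` suffix the sprintf above produces, assuming `tz` is the whole-hour offset east of UTC after the DST adjustment:

```python
# Sketch of the "(NAME, +hh00)" suffix built by the sprintf above.
def offset_suffix(name: str, tz: int) -> str:
    return "(%s, %s%02d00)" % (name, "+" if tz >= 0 else "-", abs(tz))

print(offset_suffix("CST", 8))  # (CST, +0800)
print(offset_suffix("BST", 1))  # (BST, +0100)
```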
@ -118,13 +821,35 @@ void taosSetSystemTimezone(const char *inTimezoneStr, char *outTimezoneStr, int8
void taosGetSystemTimezone(char *outTimezoneStr, enum TdTimezone *tsTimezone) {
#ifdef WINDOWS
char *tz = getenv("TZ");
if (tz == NULL || strlen(tz) == 0) {
char value[100];
DWORD bufferSize = sizeof(value);
char *buf = getenv("TZ");
if (buf == NULL || strlen(buf) == 0) {
RegGetValue(HKEY_LOCAL_MACHINE, "SYSTEM\\CurrentControlSet\\Control\\TimeZoneInformation", "TimeZoneKeyName", RRF_RT_ANY, NULL, (PVOID)&value, &bufferSize);
strcpy(outTimezoneStr, "not configured");
if (bufferSize > 0) {
for (size_t i = 0; i < 139; i++) {
if (strcmp(win_tz[i][0],value) == 0) {
strcpy(outTimezoneStr, win_tz[i][1]);
break;
}
}
}
} else {
strcpy(outTimezoneStr, tz);
strcpy(outTimezoneStr, buf);
}

#ifdef _MSC_VER
#if _MSC_VER >= 1900
// see https://docs.microsoft.com/en-us/cpp/c-runtime-library/daylight-dstbias-timezone-and-tzname?view=vs-2019
int64_t timezone = _timezone;
int32_t daylight = _daylight;
char **tzname = _tzname;
#endif
#endif
int32_t tz = (int32_t)((-timezone * MILLISECOND_PER_SECOND) / MILLISECOND_PER_HOUR);
*tsTimezone = tz;
tz += daylight;
sprintf(outTimezoneStr, "%s (%s, %s%02d00)", outTimezoneStr, tzname[daylight], tz >= 0 ? "+" : "-", abs(tz));
#elif defined(_TD_DARWIN_64)
char buf[4096] = {0};
char *tz = NULL;
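When `TZ` is unset, the active Windows zone name comes from the `TimeZoneInformation` registry key, which is what the `RegGetValue` call above reads before mapping it back to an IANA name through `win_tz`. A rough Python equivalent of just the registry read, using the standard-library `winreg` module (Windows-only, for illustration):

```python
import winreg  # Windows-only standard library module

def windows_tz_key_name() -> str:
    # Mirrors the RegGetValue() lookup above: returns the active Windows
    # time zone name, e.g. "China Standard Time".
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation")
    try:
        value, _type = winreg.QueryValueEx(key, "TimeZoneKeyName")
        return value
    finally:
        winreg.CloseKey(key)
```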
@ -92,13 +92,13 @@ class Node:
self.conn.run("yes|./install.sh")

def configTaosd(self, taosConfigKey, taosConfigValue):
self.conn.run("sudo echo '%s %s' >> %s" % (taosConfigKey, taosConfigValue, "/etc/taos/taos.cfg"))
self.conn.run("sudo echo %s %s >> %s" % (taosConfigKey, taosConfigValue, "/etc/taos/taos.cfg"))

def removeTaosConfig(self, taosConfigKey, taosConfigValue):
self.conn.run("sudo sed -in-place -e '/%s %s/d' %s" % (taosConfigKey, taosConfigValue, "/etc/taos/taos.cfg"))

def configHosts(self, ip, name):
self.conn.run("echo '%s %s' >> %s" % (ip, name, '/etc/hosts'))
self.conn.run("echo %s %s >> %s" % (ip, name, '/etc/hosts'))

def removeData(self):
try:
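The quoted `echo '...'` form is the one being removed throughout these hunks: on Windows shells, cmd keeps the single quotes literally and they end up inside the file, so dropping the quotes keeps one command string working everywhere. A hedged alternative, sketched here as an assumption rather than what the suite does, is to skip the shell entirely when the test process itself can write the file (the remote fabric calls above still need echo because they run under sudo on another host):

```python
# Hypothetical alternative: append a config entry without going through
# echo at all, so shell quoting rules never come into play.
def append_config(path: str, key: str, value: str) -> None:
    with open(path, "a") as f:
        f.write("%s %s\n" % (key, value))

# append_config("/etc/taos/taos.cfg", "firstEp", "tdnode1:6030")
```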
@ -113,7 +113,7 @@ class BuildDockerCluser:

def cfg(self, option, value, nodeIndex):
cfgPath = "%s/node%d/cfg/taos.cfg" % (self.dockerDir, nodeIndex)
cmd = "echo '%s %s' >> %s" % (option, value, cfgPath)
cmd = "echo %s %s >> %s" % (option, value, cfgPath)
self.execCmd(cmd)

def updateLocalhosts(self):

@ -122,7 +122,7 @@ class BuildDockerCluser
print(result)
if result is None or result.isspace():
print("==========")
cmd = "echo '172.27.0.7 tdnode1' >> /etc/hosts"
cmd = "echo 172.27.0.7 tdnode1 >> /etc/hosts"
display = "echo %s" % cmd
self.execCmd(display)
self.execCmd(cmd)
@ -1,2 +1,22 @@

python .\test.py -f insert\basic.py
python .\test.py -f insert\int.py
python .\test.py -f insert\float.py
python .\test.py -f insert\bigint.py
python .\test.py -f insert\bool.py
python .\test.py -f insert\double.py
python .\test.py -f insert\smallint.py
python .\test.py -f insert\tinyint.py
python .\test.py -f insert\date.py
python .\test.py -f insert\binary.py
python .\test.py -f insert\nchar.py

python .\test.py -f query\filter.py
python .\test.py -f query\filterCombo.py
python .\test.py -f query\queryNormal.py
python .\test.py -f query\queryError.py
python .\test.py -f query\filterAllIntTypes.py
python .\test.py -f query\filterFloatAndDouble.py
python .\test.py -f query\filterOtherTypes.py
python .\test.py -f query\querySort.py
python .\test.py -f query\queryJoin.py
@ -38,7 +38,7 @@ class Node:
def buildTaosd(self):
try:
print(self.conn)
# self.conn.run('echo "1234" > /home/chr/installtest/test.log')
# self.conn.run('echo 1234 > /home/chr/installtest/test.log')
self.conn.run("cd /home/chr/installtest/ && tar -xvf %s " %self.verName)
self.conn.run("cd /home/chr/installtest/%s && ./install.sh " % self.installPath)
except Exception as e:

@ -49,7 +49,7 @@ class Node:
def rebuildTaosd(self):
try:
print(self.conn)
# self.conn.run('echo "1234" > /home/chr/installtest/test.log')
# self.conn.run('echo 1234 > /home/chr/installtest/test.log')
self.conn.run("cd /home/chr/installtest/%s && ./install.sh " % self.installPath)
except Exception as e:
print("Build Taosd error for node %d " % self.index)

@ -108,7 +108,7 @@ class oneNode:
# install TDengine at 192.168.103/104/141
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "start taosd"')
node.conn.run('echo start taosd')
node.buildTaosd()
# clear DataPath , if need clear data
node.clearData()

@ -128,7 +128,7 @@ class oneNode:
# start TDengine
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "restart taosd"')
node.conn.run('echo restart taosd')
# clear DataPath , if need clear data
node.clearData()
node.restartTaosd()

@ -149,14 +149,14 @@ class oneNode:
verName = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % version
# installPath = "TDengine-enterprise-server-%s" % self.version
node131 = Node(131, 'ubuntu', '192.168.1.131', 'tbase125!', '2.0.20.0')
node131.conn.run('echo "upgrade cluster"')
node131.conn.run('echo upgrade cluster')
node131.conn.run('sshpass -p tbase125! scp /nas/TDengine/v%s/enterprise/%s root@192.168.1.%d:/home/chr/installtest/' % (version,verName,id))
node131.conn.close()
# upgrade TDengine at 192.168.103/104/141
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "start taosd"')
node.conn.run('echo "1234" > /home/chr/test.log')
node.conn.run('echo start taosd')
node.conn.run('echo 1234 > /home/chr/test.log')
node.buildTaosd()
time.sleep(5)
node.startTaosd()

@ -176,7 +176,7 @@ class oneNode:
# backCluster TDengine at 192.168.103/104/141
try:
node = Node(id, username, IP, passwd, version)
node.conn.run('echo "rollback taos"')
node.conn.run('echo rollback taos')
node.rebuildTaosd()
time.sleep(5)
node.startTaosd()
@ -14,12 +14,14 @@ for /F "usebackq tokens=*" %%i in (fulltest.bat) do (
echo Processing %%i
set /a a+=1
call %%i ARG1 -w 1 -m %1 > result_!a!.txt 2>error_!a!.txt
if errorlevel 1 ( call :colorEcho 0c "failed" &echo. && exit 8 ) else ( call :colorEcho 0a "Success" &echo. )
if errorlevel 1 ( call :colorEcho 0c "failed" &echo. && goto :end ) else ( call :colorEcho 0a "Success" &echo. )
)
exit
goto :end

:colorEcho
echo off
<nul set /p ".=%DEL%" > "%~2"
findstr /v /a:%1 /R "^$" "%~2" nul
del "%~2" > nul 2>&1i

:end
@ -18,6 +18,7 @@ import getopt
import subprocess
import time
from distutils.log import warn as printf
import platform

from util.log import *
from util.dnodes import *

@ -36,8 +37,10 @@ if __name__ == "__main__":
stop = 0
restart = False
windows = 0
opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrw', [
'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'windows'])
if platform.system().lower() == 'windows':
windows = 1
opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghr', [
'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'restart'])
for key, value in opts:
if key in ['-h', '--help']:
tdLog.printNoPrefix(
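The change above retires the manual `-w` switch: instead of the caller declaring the OS, the driver detects it at startup. The core check is just this (a restatement of the diff, not new behavior):

```python
import platform

# The test driver no longer needs a -w flag: the host OS is detected directly.
windows = 1 if platform.system().lower() == 'windows' else 0
print("running on windows:", bool(windows))
```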
@ -64,9 +67,6 @@ if __name__ == "__main__":
if key in ['-m', '--master']:
masterIp = value

if key in ['-w', '--windows']:
windows = 1

if key in ['-l', '--logSql']:
if (value.upper() == "TRUE"):
logSql = True

@ -146,7 +146,7 @@ if __name__ == "__main__":
else:
pass
tdDnodes.deploy(1,{})
tdDnodes.startWin(1)
tdDnodes.start(1)
else:
remote_conn = Connection("root@%s"%host)
with remote_conn.cd('/var/lib/jenkins/workspace/TDinternal/community/tests/pytest'):
@ -247,7 +247,7 @@ class TDDnode:

paths = []
for root, dirs, files in os.walk(projPath):
if ((tool) in files):
if ((tool) in files or ("%s.exe"%tool) in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
paths.append(os.path.join(root, tool))
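The added `"%s.exe"%tool` clause is what lets the same `os.walk` scan locate Windows builds. A compact standalone version of that search, with names assumed for illustration:

```python
import os

def find_tool(proj_path: str, tool: str) -> list:
    # Collect candidate paths for `tool`, accepting the Windows .exe
    # variant, and skipping anything under a "packaging" directory.
    paths = []
    for root, dirs, files in os.walk(proj_path):
        if tool in files or ("%s.exe" % tool) in files:
            real = os.path.dirname(os.path.realpath(root))
            if "packaging" not in real:
                paths.append(os.path.join(root, tool))
    return paths

# find_tool("/path/to/TDengine", "taosd")
```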
@ -333,7 +333,7 @@ class TDDnode:
if self.deployed == 0:
tdLog.exit("dnode:%d is not deployed" % (self.index))

cmd = "mintty -h never %s -c %s" % (
cmd = "mintty -h never -w hide %s -c %s" % (
binPath, self.cfgDir)

if (taosadapterBinPath != ""):

@ -424,9 +424,10 @@ class TDDnode:
time.sleep(1)
processID = subprocess.check_output(
psCmd, shell=True).decode("utf-8")
for port in range(6030, 6041):
fuserCmd = "fuser -k -n tcp %d" % port
os.system(fuserCmd)
if not platform.system().lower() == 'windows':
for port in range(6030, 6041):
fuserCmd = "fuser -k -n tcp %d" % port
os.system(fuserCmd)
if self.valgrind:
time.sleep(2)
@ -571,11 +572,10 @@ class TDDnodes:

def start(self, index):
self.check(index)
self.dnodes[index - 1].start()

def startWin(self, index):
self.check(index)
self.dnodes[index - 1].startWin()
if platform.system().lower() == 'windows':
self.dnodes[index - 1].startWin()
else:
self.dnodes[index - 1].start()

def startWithoutSleep(self, index):
self.check(index)
@ -31,7 +31,7 @@ class TDTestCase:

def createOldDirAndAddWal(self):
oldDir = tdDnodes.getDnodesRootDir() + "dnode1/data/vnode/vnode2/wal/old"
os.system("sudo echo 'test' >> %s/wal" % oldDir)
os.system("sudo echo test >> %s/wal" % oldDir)

def run(self):
@ -118,7 +118,7 @@
#./test.sh -f tsim/mnode/basic1.sim -m

# --- sma
./test.sh -f tsim/sma/tsmaCreateInsertData.sim
#./test.sh -f tsim/sma/tsmaCreateInsertData.sim
./test.sh -f tsim/sma/rsmaCreateInsertQuery.sim

# --- valgrind
@ -36,13 +36,14 @@ if $data(2)[4] != ready then
goto step1
endi

print =============== create drop mnode 1
sql_error create mnode on dnode 1
sql_error drop mnode on dnode 1

print =============== create mnode 2
sql create mnode on dnode 2

$x = 0
step1:
step2:
$x = $x + 1
sleep 1000
if $x == 20 then

@ -65,11 +66,11 @@ if $data(2)[0] != 2 then
return -1
endi
if $data(2)[2] != FOLLOWER then
goto step1
goto step2
endi

sleep 2000
print ============ drop mnodes
print ============ drop mnode 2
sql drop mnode on dnode 2
sql show mnodes
if $rows != 1 then
@ -37,6 +37,15 @@ if $rows > 2 then
print retention level 2 file rows $rows > 2
return -1
endi

if $data01 != 1 then
if $data01 != 10 then
print retention level 2 file result $data01 != 1 or 10
return -1
endi
endi

print =============== select * from retention level 1 from memory
sql select * from ct1 where ts > now-8d;
print $data00 $data01

@ -44,15 +53,30 @@ if $rows > 2 then
print retention level 1 file rows $rows > 2
return -1
endi

if $data01 != 1 then
if $data01 != 10 then
print retention level 1 file result $data01 != 1 or 10
return -1
endi
endi

print =============== select * from retention level 0 from memory
sql select * from ct1 where ts > now-3d;
print $data00 $data01
print $data10 $data11
print $data20 $data21

if $rows < 1 then
print retention level 0 file rows $rows < 1
return -1
endi

if $data01 != 10 then
print retention level 0 file result $data01 != 10
return -1
endi

#===================================================================

@ -68,6 +92,13 @@ if $rows > 2 then
return -1
endi

if $data01 != 1 then
if $data01 != 10 then
print retention level 2 file result $data01 != 1 or 10
return -1
endi
endi

print =============== select * from retention level 1 from file
sql select * from ct1 where ts > now-8d;
print $data00 $data01

@ -76,6 +107,13 @@ if $rows > 2 then
return -1
endi

if $data01 != 1 then
if $data01 != 10 then
print retention level 1 file result $data01 != 1 or 10
return -1
endi
endi

print =============== select * from retention level 0 from file
sql select * from ct1 where ts > now-3d;
print $data00 $data01

@ -86,4 +124,9 @@ if $rows < 1 then
return -1
endi

if $data01 != 10 then
print retention level 0 file result $data01 != 10
return -1
endi

system sh/exec.sh -n dnode1 -s stop -x SIGINT
@ -37,5 +37,12 @@ print =============== trigger stream to execute sma aggr task and insert sma dat
sql insert into ct1 values(now+5s, 20, 20.0, 30.0)
#===================================================================

print =============== select * from ct1 from memory
sql select * from ct1;
print $data00 $data01
if $rows != 5 then
print rows $rows != 5
return -1
endi

system sh/exec.sh -n dnode1 -s stop -x SIGINT
@ -141,6 +141,8 @@ sql connect

sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from d1.tb;
sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from d1.tb;

sql use d1
sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from tb
if $data00 != 24 then
return -1
@ -3,8 +3,12 @@ import taos
import sys
import time
import socket
import pexpect
import os
import platform
if platform.system().lower() == 'windows':
    import wexpect as taosExpect
else:
    import pexpect as taosExpect

from util.log import *
from util.sql import *
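wexpect is a Windows port of pexpect with a near-identical `spawn()`/`expect()` surface, which is why a single `taosExpect` alias covers both branches. One behavioral difference this diff handles further down is that pexpect buffers `child.before` as bytes while wexpect already yields str; a sketch of the alias plus that normalization:

```python
import platform

# Alias whichever expect implementation fits the host OS; both expose
# spawn()/expect() with compatible signatures for this usage.
if platform.system().lower() == 'windows':
    import wexpect as taosExpect
else:
    import pexpect as taosExpect

def before_text(child) -> str:
    # pexpect buffers bytes, wexpect buffers str; normalize to str.
    data = child.before
    return data if isinstance(data, str) else data.decode()
```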
@ -15,7 +19,11 @@ def taos_command (buildPath, key, value, expectString, cfgDir, sqlString='', key
if len(key) == 0:
    tdLog.exit("taos test key is null!")

taosCmd = buildPath + '/build/bin/taos '
if platform.system().lower() == 'windows':
    taosCmd = buildPath + '\\build\\bin\\taos.exe '
    taosCmd = taosCmd.replace('\\','\\\\')
else:
    taosCmd = buildPath + '/build/bin/taos '
if len(cfgDir) != 0:
    taosCmd = taosCmd + '-c ' + cfgDir

@ -36,25 +44,30 @@ def taos_command (buildPath, key, value, expectString, cfgDir, sqlString='', key

tdLog.info ("taos cmd: %s" % taosCmd)

child = pexpect.spawn(taosCmd, timeout=3)
child = taosExpect.spawn(taosCmd, timeout=3)
#output = child.readline()
#print (output.decode())
if len(expectString) != 0:
    i = child.expect([expectString, pexpect.TIMEOUT, pexpect.EOF], timeout=6)
    i = child.expect([expectString, taosExpect.TIMEOUT, taosExpect.EOF], timeout=6)
else:
    i = child.expect([pexpect.TIMEOUT, pexpect.EOF], timeout=6)
    i = child.expect([taosExpect.TIMEOUT, taosExpect.EOF], timeout=6)

retResult = child.before.decode()
if platform.system().lower() == 'windows':
    retResult = child.before
else:
    retResult = child.before.decode()
print(retResult)
#print(child.after.decode())
if i == 0:
    print ('taos login success! Here can run sql, taos> ')
    if len(sqlString) != 0:
        child.sendline (sqlString)
        w = child.expect(["Query OK", pexpect.TIMEOUT, pexpect.EOF], timeout=1)
        w = child.expect(["Query OK", taosExpect.TIMEOUT, taosExpect.EOF], timeout=1)
        if w == 0:
            return "TAOS_OK"
        else:
            print(1)
            print(retResult)
            return "TAOS_FAIL"
else:
    if key == 'A' or key1 == 'A' or key == 'C' or key1 == 'C' or key == 'V' or key1 == 'V':

@ -102,7 +115,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]

for root, dirs, files in os.walk(projPath):
    if ("taosd" in files):
    if ("taosd" in files or "taosd.exe" in files):
        rootRealPath = os.path.dirname(os.path.realpath(root))
        if ("packaging" not in rootRealPath):
            buildPath = root[:len(root) - len("/build/bin")]
@ -275,11 +288,15 @@ class TDTestCase:
pwd=os.getcwd()
newDbName="dbf"
sqlFile = pwd + "/0-others/sql.txt"
sql1 = "echo 'create database " + newDbName + "' > " + sqlFile
sql2 = "echo 'use " + newDbName + "' >> " + sqlFile
sql3 = "echo 'create table ntbf (ts timestamp, c binary(40))' >> " + sqlFile
sql4 = "echo 'insert into ntbf values (\"2021-04-01 08:00:00.000\", \"test taos -f1\")(\"2021-04-01 08:00:01.000\", \"test taos -f2\")' >> " + sqlFile
sql5 = "echo 'show databases' >> " + sqlFile
sql1 = "echo create database " + newDbName + " > " + sqlFile
sql2 = "echo use " + newDbName + " >> " + sqlFile
if platform.system().lower() == 'windows':
    sql3 = "echo create table ntbf (ts timestamp, c binary(40)) >> " + sqlFile
    sql4 = "echo insert into ntbf values (\"2021-04-01 08:00:00.000\", \"test taos -f1\")(\"2021-04-01 08:00:01.000\", \"test taos -f2\") >> " + sqlFile
else:
    sql3 = "echo 'create table ntbf (ts timestamp, c binary(40))' >> " + sqlFile
    sql4 = "echo 'insert into ntbf values (\"2021-04-01 08:00:00.000\", \"test taos -f1\")(\"2021-04-01 08:00:01.000\", \"test taos -f2\")' >> " + sqlFile
sql5 = "echo show databases >> " + sqlFile
os.system(sql1)
os.system(sql2)
os.system(sql3)
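The per-platform split exists because cmd's `echo 'use dbf'` would write the quotes into sql.txt while a POSIX shell strips them. A hedged alternative that avoids the divergence entirely, sketched under the assumption that writing the file directly is acceptable here (the statements mirror sql1..sql5 above):

```python
# Sketch: build sql.txt without shell echo, so one code path works on
# every platform.
statements = [
    "create database dbf",
    "use dbf",
    "create table ntbf (ts timestamp, c binary(40))",
    'insert into ntbf values ("2021-04-01 08:00:00.000", "test taos -f1")'
    '("2021-04-01 08:00:01.000", "test taos -f2")',
    "show databases",
]
with open("sql.txt", "w") as f:
    f.write("\n".join(statements) + "\n")
```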
@ -3,7 +3,11 @@ import taos
import sys
import time
import socket
import pexpect
import platform
if platform.system().lower() == 'windows':
    import wexpect as taosExpect
else:
    import pexpect as taosExpect
import os

from util.log import *

@ -15,7 +19,11 @@ def taos_command (buildPath, key, value, expectString, cfgDir, sqlString='', key
if len(key) == 0:
    tdLog.exit("taos test key is null!")

taosCmd = buildPath + '/build/bin/taos '
if platform.system().lower() == 'windows':
    taosCmd = buildPath + '\\build\\bin\\taos.exe '
    taosCmd = taosCmd.replace('\\','\\\\')
else:
    taosCmd = buildPath + '/build/bin/taos '
if len(cfgDir) != 0:
    taosCmd = taosCmd + '-c ' + cfgDir

@ -36,23 +44,29 @@ def taos_command (buildPath, key, value, expectString, cfgDir, sqlString='', key

tdLog.info ("taos cmd: %s" % taosCmd)

child = pexpect.spawn(taosCmd, timeout=3)
child = taosExpect.spawn(taosCmd, timeout=3)
#output = child.readline()
#print (output.decode())
if len(expectString) != 0:
    i = child.expect([expectString, pexpect.TIMEOUT, pexpect.EOF], timeout=6)
    i = child.expect([expectString, taosExpect.TIMEOUT, taosExpect.EOF], timeout=6)
else:
    i = child.expect([pexpect.TIMEOUT, pexpect.EOF], timeout=6)
    i = child.expect([taosExpect.TIMEOUT, taosExpect.EOF], timeout=6)

retResult = child.before.decode()
if platform.system().lower() == 'windows':
    retResult = child.before
else:
    retResult = child.before.decode()
print("cmd return result:\n%s\n"%retResult)
#print(child.after.decode())
if i == 0:
    print ('taos login success! Here can run sql, taos> ')
    if len(sqlString) != 0:
        child.sendline (sqlString)
        w = child.expect(["Query OK", pexpect.TIMEOUT, pexpect.EOF], timeout=1)
        retResult = child.before.decode()
        w = child.expect(["Query OK", taosExpect.TIMEOUT, taosExpect.EOF], timeout=1)
        if platform.system().lower() == 'windows':
            retResult = child.before
        else:
            retResult = child.before.decode()
        if w == 0:
            return "TAOS_OK", retResult
        else:

@ -103,7 +117,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]

for root, dirs, files in os.walk(projPath):
    if ("taosd" in files):
    if ("taosd" in files or "taosd.exe" in files):
        rootRealPath = os.path.dirname(os.path.realpath(root))
        if ("packaging" not in rootRealPath):
            buildPath = root[:len(root) - len("/build/bin")]

@ -216,11 +230,15 @@ class TDTestCase:
pwd=os.getcwd()
newDbName="dbf"
sqlFile = pwd + "/0-others/sql.txt"
sql1 = "echo 'create database " + newDbName + "' > " + sqlFile
sql2 = "echo 'use " + newDbName + "' >> " + sqlFile
sql3 = "echo 'create table ntbf (ts timestamp, c binary(40)) no this item' >> " + sqlFile
sql4 = "echo 'insert into ntbf values (\"2021-04-01 08:00:00.000\", \"test taos -f1\")(\"2021-04-01 08:00:01.000\", \"test taos -f2\")' >> " + sqlFile
sql5 = "echo 'show databases' >> " + sqlFile
sql1 = "echo create database " + newDbName + " > " + sqlFile
sql2 = "echo use " + newDbName + " >> " + sqlFile
if platform.system().lower() == 'windows':
    sql3 = "echo create table ntbf (ts timestamp, c binary(40)) no this item >> " + sqlFile
    sql4 = "echo insert into ntbf values (\"2021-04-01 08:00:00.000\", \"test taos -f1\")(\"2021-04-01 08:00:01.000\", \"test taos -f2\") >> " + sqlFile
else:
    sql3 = "echo 'create table ntbf (ts timestamp, c binary(40)) no this item' >> " + sqlFile
    sql4 = "echo 'insert into ntbf values (\"2021-04-01 08:00:00.000\", \"test taos -f1\")(\"2021-04-01 08:00:01.000\", \"test taos -f2\")' >> " + sqlFile
sql5 = "echo show databases >> " + sqlFile
os.system(sql1)
os.system(sql2)
os.system(sql3)
@ -3,7 +3,11 @@ import taos
import sys
import time
import socket
import pexpect
import platform
if platform.system().lower() == 'windows':
    import wexpect as taosExpect
else:
    import pexpect as taosExpect
import os

from util.log import *

@ -15,7 +19,11 @@ def taos_command (buildPath, key, value, expectString, cfgDir, sqlString='', key
if len(key) == 0:
    tdLog.exit("taos test key is null!")

taosCmd = buildPath + '/build/bin/taos '
if platform.system().lower() == 'windows':
    taosCmd = buildPath + '\\build\\bin\\taos.exe '
    taosCmd = taosCmd.replace('\\','\\\\')
else:
    taosCmd = buildPath + '/build/bin/taos '
if len(cfgDir) != 0:
    taosCmd = taosCmd + '-c ' + cfgDir

@ -36,23 +44,29 @@ def taos_command (buildPath, key, value, expectString, cfgDir, sqlString='', key

tdLog.info ("taos cmd: %s" % taosCmd)

child = pexpect.spawn(taosCmd, timeout=3)
child = taosExpect.spawn(taosCmd, timeout=3)
#output = child.readline()
#print (output.decode())
if len(expectString) != 0:
    i = child.expect([expectString, pexpect.TIMEOUT, pexpect.EOF], timeout=6)
    i = child.expect([expectString, taosExpect.TIMEOUT, taosExpect.EOF], timeout=6)
else:
    i = child.expect([pexpect.TIMEOUT, pexpect.EOF], timeout=6)
    i = child.expect([taosExpect.TIMEOUT, taosExpect.EOF], timeout=6)

retResult = child.before.decode()
if platform.system().lower() == 'windows':
    retResult = child.before
else:
    retResult = child.before.decode()
print("expect() return code: %d, content:\n %s\n"%(i, retResult))
#print(child.after.decode())
if i == 0:
    print ('taos login success! Here can run sql, taos> ')
    if len(sqlString) != 0:
        child.sendline (sqlString)
        w = child.expect(["Query OK", pexpect.TIMEOUT, pexpect.EOF], timeout=1)
        retResult = child.before.decode()
        w = child.expect(["Query OK", taosExpect.TIMEOUT, taosExpect.EOF], timeout=1)
        if platform.system().lower() == 'windows':
            retResult = child.before
        else:
            retResult = child.before.decode()
        if w == 0:
            return "TAOS_OK", retResult
        else:

@ -103,7 +117,7 @@ class TDTestCase:
projPath = selfPath[:selfPath.find("tests")]

for root, dirs, files in os.walk(projPath):
    if ("taosd" in files):
    if ("taosd" in files or "taosd.exe" in files):
        rootRealPath = os.path.dirname(os.path.realpath(root))
        if ("packaging" not in rootRealPath):
            buildPath = root[:len(root) - len("/build/bin")]

@ -168,21 +182,33 @@ class TDTestCase:
tdDnodes.stop(1)

role = 'server'
taosCmd = 'nohup ' + buildPath + '/build/bin/taos -c ' + keyDict['c']
taosCmd = taosCmd + ' -n ' + role + ' > /dev/null 2>&1 &'
if platform.system().lower() == 'windows':
    taosCmd = 'mintty -h never -w hide ' + buildPath + '\\build\\bin\\taos.exe -c ' + keyDict['c']
    taosCmd = taosCmd.replace('\\','\\\\')
    taosCmd = taosCmd + ' -n ' + role
else:
    taosCmd = 'nohup ' + buildPath + '/build/bin/taos -c ' + keyDict['c']
    taosCmd = taosCmd + ' -n ' + role + ' > /dev/null 2>&1 &'
print (taosCmd)
os.system(taosCmd)

pktLen = '2000'
pktNum = '10'
role = 'client'
taosCmd = buildPath + '/build/bin/taos -c ' + keyDict['c']
if platform.system().lower() == 'windows':
    taosCmd = buildPath + '\\build\\bin\\taos.exe -c ' + keyDict['c']
    taosCmd = taosCmd.replace('\\','\\\\')
else:
    taosCmd = buildPath + '/build/bin/taos -c ' + keyDict['c']
taosCmd = taosCmd + ' -n ' + role + ' -l ' + pktLen + ' -N ' + pktNum
print (taosCmd)
child = pexpect.spawn(taosCmd, timeout=3)
i = child.expect([pexpect.TIMEOUT, pexpect.EOF], timeout=6)
child = taosExpect.spawn(taosCmd, timeout=3)
i = child.expect([taosExpect.TIMEOUT, taosExpect.EOF], timeout=6)

retResult = child.before.decode()
if platform.system().lower() == 'windows':
    retResult = child.before
else:
    retResult = child.before.decode()
print("expect() return code: %d, content:\n %s\n"%(i, retResult))
#print(child.after.decode())
if i == 0:

@ -195,7 +221,10 @@ class TDTestCase:
else:
    tdLog.exit('taos -n client fail!')

os.system('pkill taos')
if platform.system().lower() == 'windows':
    os.system('ps -a | grep taos | awk \'{print $2}\' | xargs kill -9')
else:
    os.system('pkill taos')

def stop(self):
    tdSql.close()
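`pkill` does not exist on native Windows, hence the `ps | grep | awk | xargs kill` pipeline above, which assumes an msys-style shell is available. A standard-library-only sketch of the same intent, with the taskkill path labeled an assumption (it presumes the process runs as taos.exe and a native Windows environment):

```python
import os
import platform

def kill_taos():
    # pkill exists on Linux/macOS; on native Windows, taskkill is the
    # closest stock equivalent (assumption: the process is taos.exe).
    if platform.system().lower() == 'windows':
        os.system('taskkill /F /IM taos.exe')
    else:
        os.system('pkill taos')
```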
@ -2,7 +2,7 @@ import taos
import sys
import time
import socket
import pexpect
# import pexpect
import os
import http.server
import gzip

@ -2,7 +2,7 @@ import taos
import sys
import time
import socket
import pexpect
# import pexpect
import os
import http.server
import gzip
@ -22,8 +22,8 @@ class TDTestCase:

def init(self, conn, logSql):
    tdLog.debug(f"start to excute {__file__}")
    #tdSql.init(conn.cursor())
    tdSql.init(conn.cursor(), logSql)  # output sql.txt file
    tdSql.init(conn.cursor())
    #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

def getBuildPath(self):
    selfPath = os.path.dirname(os.path.realpath(__file__))

@ -186,7 +186,7 @@ class TDTestCase:
time.sleep(1)

tdLog.info("start consume processor")
pollDelay = 5
pollDelay = 100
showMsg = 1
showRow = 1

@ -228,7 +228,7 @@ class TDTestCase:
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 10000, \
'batchNum': 100, \
'batchNum': 200, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath

@ -300,7 +300,7 @@ class TDTestCase:
time.sleep(1)

tdLog.info("start consume processor")
pollDelay = 5
pollDelay = 100
showMsg = 1
showRow = 1

@ -349,8 +349,8 @@ class TDTestCase:
'vgroups': 1, \
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 30000, \
'batchNum': 100, \
'rowsPerTbl': 10000, \
'batchNum': 200, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath

@ -381,8 +381,8 @@ class TDTestCase:
'vgroups': 1, \
'stbName': 'stb2', \
'ctbNum': 10, \
'rowsPerTbl': 30000, \
'batchNum': 100, \
'rowsPerTbl': 10000, \
'batchNum': 200, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict2['cfg'] = cfgPath
tdSql.execute("create stable if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(parameterDict2['dbName'], parameterDict2['stbName']))

@ -432,7 +432,7 @@ class TDTestCase:
time.sleep(1)

tdLog.info("start consume processor")
pollDelay = 5
pollDelay = 100
showMsg = 1
showRow = 1

@ -22,8 +22,8 @@ class TDTestCase:

def init(self, conn, logSql):
    tdLog.debug(f"start to excute {__file__}")
    #tdSql.init(conn.cursor())
    tdSql.init(conn.cursor(), logSql)  # output sql.txt file
    tdSql.init(conn.cursor())
    #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

def getBuildPath(self):
    selfPath = os.path.dirname(os.path.realpath(__file__))

@ -167,7 +167,7 @@ class TDTestCase:
'vgroups': 4, \
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 10000, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath

@ -197,7 +197,7 @@ class TDTestCase:
event.wait()

tdLog.info("start consume processor")
pollDelay = 5
pollDelay = 100
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@ -236,7 +236,7 @@ class TDTestCase:
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

tdLog.info("start consume processor")
pollDelay = 5
pollDelay = 20
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@ -264,7 +264,7 @@ class TDTestCase:
'vgroups': 4, \
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 10000, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath

@ -298,7 +298,7 @@ class TDTestCase:
event.wait()

tdLog.info("start consume processor")
pollDelay = 5
pollDelay = 20
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@ -313,8 +313,8 @@ class TDTestCase:
for i in range(expectRows):
    totalConsumeRows += resultList[i]

if totalConsumeRows != expectrowcnt:
    tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
if not (totalConsumeRows >= expectrowcnt):
    tdLog.exit("tmq consume rows error!")

tdSql.query("drop topic %s"%topicName1)

@ -330,7 +330,7 @@ class TDTestCase:
'vgroups': 4, \
'stbName': 'stb1', \
'ctbNum': 10, \
'rowsPerTbl': 10000, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath

@ -364,7 +364,7 @@ class TDTestCase:
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

tdLog.info("start consume processor")
pollDelay = 10
pollDelay = 100
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

@ -399,7 +399,7 @@ class TDTestCase:
'vgroups': 4, \
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 10000, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath

@ -416,7 +416,7 @@ class TDTestCase:
'vgroups': 4, \
'stbName': 'stb2', \
'ctbNum': 10, \
'rowsPerTbl': 10000, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath

@ -446,7 +446,7 @@ class TDTestCase:
event.wait()

tdLog.info("start consume processor")
pollDelay = 10
pollDelay = 100
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
@ -470,336 +470,6 @@ class TDTestCase:
|
|||
|
||||
tdLog.printNoPrefix("======== test case 3 end ...... ")
|
||||
|
||||
def tmqCase4(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 4: Produce while two consumers to subscribe one db, include 2 stb")
|
||||
tdLog.info("step 1: create database, stb, ctb and insert data")
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'dbName': 'db4', \
|
||||
'vgroups': 4, \
|
||||
'stbName': 'stb', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))
|
||||
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
|
||||
parameterDict2 = {'cfg': '', \
|
||||
'dbName': 'db4', \
|
||||
'vgroups': 4, \
|
||||
'stbName': 'stb2', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
|
||||
prepareEnvThread2.start()
|
||||
|
||||
tdLog.info("create topics from db")
|
||||
topicName1 = 'topic_db1'
|
||||
|
||||
tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
|
||||
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
|
||||
topicList = topicName1
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 1
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:earliest'
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
consumerId = 1
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
event.wait()
|
||||
|
||||
tdLog.info("start consume processor")
|
||||
pollDelay = 5
|
||||
showMsg = 1
|
||||
showRow = 1
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
# wait for data ready
|
||||
prepareEnvThread.join()
|
||||
prepareEnvThread2.join()
|
||||
|
||||
tdLog.info("insert process end, and start to check consume result")
|
||||
expectRows = 2
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdSql.query("drop topic %s"%topicName1)
|
||||
|
||||
tdLog.printNoPrefix("======== test case 4 end ...... ")
|
||||
|
||||
def tmqCase5(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 5: Produce while two consumers to subscribe one db, firstly create one stb, after start consume create other stb")
|
||||
tdLog.info("step 1: create database, stb, ctb and insert data")
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'dbName': 'db5', \
|
||||
'vgroups': 4, \
|
||||
'stbName': 'stb', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))
|
||||
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
|
||||
parameterDict2 = {'cfg': '', \
|
||||
'dbName': 'db5', \
|
||||
'vgroups': 4, \
|
||||
'stbName': 'stb2', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
tdLog.info("create topics from db")
|
||||
topicName1 = 'topic_db1'
|
||||
|
||||
tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
|
||||
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
|
||||
topicList = topicName1
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 1
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:earliest'
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
consumerId = 1
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
event.wait()
|
||||
|
||||
tdLog.info("start consume processor")
|
||||
pollDelay = 5
|
||||
showMsg = 1
|
||||
showRow = 1
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
|
||||
prepareEnvThread2.start()
|
||||
|
||||
# wait for data ready
|
||||
prepareEnvThread.join()
|
||||
prepareEnvThread2.join()
|
||||
|
||||
tdLog.info("insert process end, and start to check consume result")
|
||||
expectRows = 2
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows < expectrowcnt:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdSql.query("drop topic %s"%topicName1)
|
||||
|
||||
tdLog.printNoPrefix("======== test case 5 end ...... ")
|
||||
|
||||
def tmqCase6(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 6: Produce while one consumers to subscribe tow topic, Each contains one db")
|
||||
tdLog.info("step 1: create database, stb, ctb and insert data")
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'dbName': 'db60', \
|
||||
'vgroups': 4, \
|
||||
'stbName': 'stb', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))
|
||||
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
|
||||
parameterDict2 = {'cfg': '', \
|
||||
'dbName': 'db61', \
|
||||
'vgroups': 4, \
|
||||
'stbName': 'stb2', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))
|
||||
|
||||
prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
|
||||
prepareEnvThread2.start()
|
||||
|
||||
tdLog.info("create topics from db")
|
||||
topicName1 = 'topic_db60'
|
||||
topicName2 = 'topic_db61'
|
||||
|
||||
tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
|
||||
tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))
|
||||
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
|
||||
topicList = topicName1 + ',' + topicName2
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 0
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:earliest'
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
#consumerId = 1
|
||||
#self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
event.wait()
|
||||
|
||||
tdLog.info("start consume processor")
|
||||
pollDelay = 5
|
||||
showMsg = 1
|
||||
showRow = 1
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
# wait for data ready
|
||||
prepareEnvThread.join()
|
||||
prepareEnvThread2.join()
|
||||
|
||||
tdLog.info("insert process end, and start to check consume result")
|
||||
expectRows = 1
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdSql.query("drop topic %s"%topicName1)
|
||||
tdSql.query("drop topic %s"%topicName2)
|
||||
|
||||
tdLog.printNoPrefix("======== test case 6 end ...... ")
|
||||
|
||||
    def tmqCase7(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 7: Produce while two consumers subscribe to two topics, each containing one db")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db70', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db71', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 10000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict2['cfg'] = cfgPath

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db60'
        topicName2 = 'topic_db61'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
        tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1 + ',' + topicName2
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)
        tdSql.query("drop topic %s"%topicName2)

        tdLog.printNoPrefix("======== test case 7 end ...... ")

    def run(self):
        tdSql.prepare()

@@ -815,11 +485,6 @@ class TDTestCase:
        self.tmqCase2(cfgPath, buildPath)
        self.tmqCase2a(cfgPath, buildPath)
        self.tmqCase3(cfgPath, buildPath)
        self.tmqCase4(cfgPath, buildPath)
        self.tmqCase5(cfgPath, buildPath)
        self.tmqCase6(cfgPath, buildPath)
        self.tmqCase7(cfgPath, buildPath)

    def stop(self):
        tdSql.close()

@@ -0,0 +1,515 @@

import taos
import sys
import time
import socket
import os
import threading

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *

class TDTestCase:
    hostname = socket.gethostname()
    #rpcDebugFlagVal = '143'
    #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #print ("===================: ", updatecfgDict)

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""  # default so run() can detect "taosd not found" instead of raising UnboundLocalError
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def newcur(self, cfg, host, port):
        user = "root"
        password = "taosdata"
        con = taos.connect(host=host, user=user, password=password, config=cfg, port=port)
        cur = con.cursor()
        print(cur)
        return cur
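
    # newcur() builds a standalone taos connection (root/taosdata against
    # localhost:6030) so that each prepareEnv() worker thread gets its own
    # cursor instead of sharing the module-level tdSql cursor across threads.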
    def initConsumerTable(self, cdbName='cdb'):
        tdLog.info("create consume database, and consume info table, and consume result table")
        tdSql.query("create database if not exists %s vgroups 1"%(cdbName))
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("drop table if exists %s.consumeresult "%(cdbName))

        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
        tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)
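
    # The 'cdb' database serves as a small control channel between this script
    # and the external tmq_sim consumer process: rows in consumeinfo tell the
    # consumer what to subscribe to and how, and the consumer reports one
    # consumeresult row per finished consumer, which selectConsumeResult() polls.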
    def insertConsumerInfo(self, consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifmanualcommit, cdbName='cdb'):
        sql = "insert into %s.consumeinfo values "%cdbName
        sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
        tdLog.info("consume info sql: %s"%sql)
        tdSql.query(sql)
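
    # Shape of the row written above (values below are illustrative only):
    #   insert into cdb.consumeinfo values
    #       (now, 0, 'topic_db60,topic_db61', 'group.id:cgrp1,...', 100000, 0, 1)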
    def selectConsumeResult(self, expectRows, cdbName='cdb'):
        resultList = []
        while 1:
            tdSql.query("select * from %s.consumeresult"%cdbName)
            #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3))
            if tdSql.getRows() == expectRows:
                break
            else:
                time.sleep(5)

        for i in range(expectRows):
            tdLog.info("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i, 1), tdSql.getData(i, 2), tdSql.getData(i, 3)))
            resultList.append(tdSql.getData(i, 3))

        return resultList
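
    # selectConsumeResult() blocks indefinitely, re-querying every 5 seconds
    # until exactly expectRows consumers have reported; there is no timeout, so
    # a hung consumer stalls the whole test run.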
    def startTmqSimProcess(self, buildPath, cfgPath, pollDelay, dbName, showMsg=1, showRow=1, cdbName='cdb', valgrind=0):
        shellCmd = 'nohup '
        if valgrind == 1:
            logFile = cfgPath + '/../log/valgrind-tmq.log'
            # trailing space added so the option below is not fused onto the log file name
            shellCmd = 'nohup valgrind --log-file=' + logFile + ' '
            shellCmd += '--tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '

        shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath
        shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
        shellCmd += "> /dev/null 2>&1 &"
        tdLog.info(shellCmd)
        os.system(shellCmd)
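
    # As invoked here, tmq_sim receives: -y poll delay, -d database name,
    # -g show-message flag, -r show-row flag, -w control database ('cdb').
    # The trailing "&" backgrounds the process, so the script only synchronizes
    # with it again through selectConsumeResult().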
    def create_tables(self, tsql, dbName, vgroups, stbName, ctbNum):
        tsql.execute("create database if not exists %s vgroups %d"%(dbName, vgroups))
        tsql.execute("use %s" %dbName)
        tsql.execute("create table if not exists %s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%stbName)
        pre_create = "create table"
        sql = pre_create
        #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
        for i in range(ctbNum):
            sql += " %s_%d using %s tags(%d)"%(stbName, i, stbName, i+1)
            if (i > 0) and (i%100 == 0):
                tsql.execute(sql)
                sql = pre_create
        if sql != pre_create:
            tsql.execute(sql)

        event.set()
        tdLog.debug("complete to create database[%s], stable[%s] and %d child tables" %(dbName, stbName, ctbNum))
        return
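
    # Child tables are created in multi-table batches of roughly 100 per
    # "create table" statement; event.set() is the "schema is ready" signal
    # that the test cases block on via event.wait() before launching tmq_sim.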
    def insert_data(self, tsql, dbName, stbName, ctbNum, rowsPerTbl, batchNum, startTs):
        tdLog.debug("start to insert data ............")
        tsql.execute("use %s" %dbName)
        pre_insert = "insert into "
        sql = pre_insert

        t = time.time()
        startTs = int(round(t * 1000))
        #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
        for i in range(ctbNum):
            sql += " %s_%d values "%(stbName, i)
            for j in range(rowsPerTbl):
                sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
                if (j > 0) and ((j%batchNum == 0) or (j == rowsPerTbl - 1)):
                    tsql.execute(sql)
                    if j < rowsPerTbl - 1:
                        sql = "insert into %s_%d values " %(stbName, i)
                    else:
                        sql = "insert into "
        #end sql
        if sql != pre_insert:
            #print("insert sql:%s"%sql)
            tsql.execute(sql)
        tdLog.debug("insert data ............ [OK]")
        return
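
    # Note: the startTs argument is immediately overwritten with the current
    # wall-clock time, so the 1640966400000 values in the parameter dicts are
    # effectively unused here; each row's timestamp is startTs + j milliseconds.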
    def prepareEnv(self, **parameterDict):
        print ("input parameters:")
        print (parameterDict)
        # create new connector for my thread
        tsql = self.newcur(parameterDict['cfg'], 'localhost', 6030)
        self.create_tables(tsql, \
                           parameterDict["dbName"], \
                           parameterDict["vgroups"], \
                           parameterDict["stbName"], \
                           parameterDict["ctbNum"])

        self.insert_data(tsql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"], \
                         parameterDict["startTs"])
        return
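
    # prepareEnv() is always run on a worker thread, e.g.:
    #   t = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
    #   t.start(); ...; t.join()
    # so table creation and inserts proceed while the main thread registers
    # consumers and launches tmq_sim.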
    def tmqCase4(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 4: Produce while two consumers subscribe to one db that includes 2 stbs")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db4', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db4', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict2['cfg'] = cfgPath

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db1'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
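        # Here that works out to 5000 rows × 10 child tables for each of the
        # two supertables, i.e. 50000 + 50000 = 100000 rows expected per consumer.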
        topicList = topicName1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)

        tdLog.printNoPrefix("======== test case 4 end ...... ")

    def tmqCase5(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 5: Produce while two consumers subscribe to one db; create one stb first, then create the other stb after consumption starts")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db5', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db5', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict2['cfg'] = cfgPath

        tdLog.info("create topics from db")
        topicName1 = 'topic_db1'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows < expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)

        tdLog.printNoPrefix("======== test case 5 end ...... ")

    def tmqCase6(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 6: Produce while one consumer subscribes to two topics, each containing one db")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db60', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db61', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict2['cfg'] = cfgPath

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db60'
        topicName2 = 'topic_db61'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
        tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1 + ',' + topicName2
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        #consumerId = 1
        #self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)
        tdSql.query("drop topic %s"%topicName2)

        tdLog.printNoPrefix("======== test case 6 end ...... ")

    def tmqCase7(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 7: Produce while two consumers subscribe to two topics, each containing one db")
        tdLog.info("step 1: create database, stb, ctb and insert data")
        # create and start thread
        parameterDict = {'cfg': '', \
                         'dbName': 'db70', \
                         'vgroups': 4, \
                         'stbName': 'stb', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 5000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.initConsumerTable()

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict['dbName'], parameterDict['vgroups']))

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()

        parameterDict2 = {'cfg': '', \
                          'dbName': 'db71', \
                          'vgroups': 4, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 5000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict2['cfg'] = cfgPath

        tdSql.execute("create database if not exists %s vgroups %d" %(parameterDict2['dbName'], parameterDict2['vgroups']))

        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        tdLog.info("create topics from db")
        topicName1 = 'topic_db60'
        topicName2 = 'topic_db61'

        tdSql.execute("create topic %s as %s" %(topicName1, parameterDict['dbName']))
        tdSql.execute("create topic %s as %s" %(topicName2, parameterDict2['dbName']))

        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
        topicList = topicName1 + ',' + topicName2
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        # wait for data ready
        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicName1)
        tdSql.query("drop topic %s"%topicName2)

        tdLog.printNoPrefix("======== test case 7 end ...... ")

    def run(self):
        tdSql.prepare()

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        self.tmqCase4(cfgPath, buildPath)
        self.tmqCase5(cfgPath, buildPath)
        self.tmqCase6(cfgPath, buildPath)
        self.tmqCase7(cfgPath, buildPath)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

event = threading.Event()
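
# The module-level event coordinates producer and consumer startup: each test
# case calls event.wait() before launching tmq_sim, and create_tables() calls
# event.set() once the schema exists. In the code shown here the event is
# never cleared between cases, so only the first case's wait can actually block.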

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

@@ -22,8 +22,8 @@ class TDTestCase:

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        #tdSql.init(conn.cursor())
        tdSql.init(conn.cursor(), logSql)  # output sql.txt file
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

@@ -198,7 +198,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

@@ -276,7 +276,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

@@ -354,7 +354,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 15
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

@@ -425,7 +425,7 @@ class TDTestCase:
        event.wait()

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

File diff suppressed because it is too large
@@ -29,8 +29,8 @@ class TDTestCase:

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        #tdSql.init(conn.cursor())
        tdSql.init(conn.cursor(), logSql)  # output sql.txt file
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

@@ -183,169 +183,6 @@ class TDTestCase:

        return

    def tmqCase1(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 1: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db1', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        time.sleep(5)
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 1 end ...... ")

    def tmqCase2(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 2: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db2', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        parameterDict2 = {'cfg': '', \
                          'actionType': 0, \
                          'dbName': 'db2', \
                          'dropFlag': 1, \
                          'vgroups': 4, \
                          'replica': 1, \
                          'stbName': 'stb2', \
                          'ctbNum': 10, \
                          'rowsPerTbl': 10000, \
                          'batchNum': 100, \
                          'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict2['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_stable(tdSql, parameterDict2["dbName"], parameterDict2["stbName"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("start create child tables of stb1 and stb2")
        parameterDict['actionType'] = actionType.CREATE_CTABLE
        parameterDict2['actionType'] = actionType.CREATE_CTABLE

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("start insert data into child tables of stb1 and stb2")
        parameterDict['actionType'] = actionType.INSERT_DATA
        parameterDict2['actionType'] = actionType.INSERT_DATA

        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
        prepareEnvThread2.start()

        prepareEnvThread.join()
        prepareEnvThread2.join()

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 2 end ...... ")

    def tmqCase3(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 3: ")

@@ -386,7 +223,7 @@ class TDTestCase:
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

@@ -461,7 +298,7 @@ class TDTestCase:
        self.insertConsumerInfo(consumerId, expectrowcnt/4, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

@@ -544,7 +381,7 @@ class TDTestCase:
        self.insertConsumerInfo(consumerId, expectrowcnt/4, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

@@ -582,788 +419,6 @@ class TDTestCase:

        tdLog.printNoPrefix("======== test case 5 end ...... ")

    def tmqCase6(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 6: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db6', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt/4, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:latest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 6 end ...... ")

    def tmqCase7(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 7: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db7', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:latest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 7 end ...... ")

    def tmqCase8(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 8: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db8', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}  # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:latest'
        self.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)

        tdLog.info("start consume 0 processor")
        pollDelay = 10
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("start to check consume 0 result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 1 processor")
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 2 processor")
        self.startTmqSimProcess(buildPath, cfgPath, pollDelay, parameterDict["dbName"], showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 and 2 result")
        expectRows = 3
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt*2:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 8 end ...... ")

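    # A note on the pattern above and below (a reading of the test logic, not
    # taken from the source): with auto.offset.reset set to 'latest' the first
    # consumer attaches after the initial bulk insert and is therefore expected
    # to see 0 rows; data is only consumed once a fresh insert thread runs
    # while a consumer is polling.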
def tmqCase9(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 9: ")
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'actionType': 0, \
|
||||
'dbName': 'db9', \
|
||||
'dropFlag': 1, \
|
||||
'vgroups': 4, \
|
||||
'replica': 1, \
|
||||
'stbName': 'stb1', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
self.create_database(tdSql, parameterDict["dbName"])
|
||||
self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
|
||||
self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
|
||||
self.insert_data(tdSql,\
|
||||
parameterDict["dbName"],\
|
||||
parameterDict["stbName"],\
|
||||
parameterDict["ctbNum"],\
|
||||
parameterDict["rowsPerTbl"],\
|
||||
parameterDict["batchNum"])
|
||||
|
||||
tdLog.info("create topics from stb1")
|
||||
topicFromStb1 = 'topic_stb1'
|
||||
|
||||
tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
|
||||
topicList = topicFromStb1
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 1
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:latest'
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
tdLog.info("start consume 0 processor")
|
||||
pollDelay = 10
|
||||
showMsg = 1
|
||||
showRow = 1
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start to check consume 0 result")
|
||||
expectRows = 1
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != 0:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdLog.info("start consume 1 processor")
|
||||
self.initConsumerInfoTable()
|
||||
consumerId = 1
|
||||
ifManualCommit = 0
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start one new thread to insert data")
|
||||
parameterDict['actionType'] = actionType.INSERT_DATA
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
prepareEnvThread.join()
|
||||
|
||||
tdLog.info("start to check consume 0 and 1 result")
|
||||
expectRows = 2
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdLog.info("start consume 2 processor")
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start one new thread to insert data")
|
||||
parameterDict['actionType'] = actionType.INSERT_DATA
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
prepareEnvThread.join()
|
||||
|
||||
tdLog.info("start to check consume 0 and 1 and 2 result")
|
||||
expectRows = 3
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt*2:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdSql.query("drop topic %s"%topicFromStb1)
|
||||
|
||||
tdLog.printNoPrefix("======== test case 9 end ...... ")
|
||||
|
||||
def tmqCase10(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 10: ")
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'actionType': 0, \
|
||||
'dbName': 'db10', \
|
||||
'dropFlag': 1, \
|
||||
'vgroups': 4, \
|
||||
'replica': 1, \
|
||||
'stbName': 'stb1', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
self.create_database(tdSql, parameterDict["dbName"])
|
||||
self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
|
||||
self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
|
||||
self.insert_data(tdSql,\
|
||||
parameterDict["dbName"],\
|
||||
parameterDict["stbName"],\
|
||||
parameterDict["ctbNum"],\
|
||||
parameterDict["rowsPerTbl"],\
|
||||
parameterDict["batchNum"])
|
||||
|
||||
tdLog.info("create topics from stb1")
|
||||
topicFromStb1 = 'topic_stb1'
|
||||
|
||||
tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
|
||||
topicList = topicFromStb1
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 1
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:latest'
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
|
||||
tdLog.info("start consume 0 processor")
|
||||
pollDelay = 10
|
||||
showMsg = 1
|
||||
showRow = 1
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start to check consume 0 result")
|
||||
expectRows = 1
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != 0:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdLog.info("start consume 1 processor")
|
||||
self.initConsumerInfoTable()
|
||||
consumerId = 1
|
||||
ifManualCommit = 1
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt-10000,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start one new thread to insert data")
|
||||
parameterDict['actionType'] = actionType.INSERT_DATA
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
prepareEnvThread.join()
|
||||
|
||||
tdLog.info("start to check consume 0 and 1 result")
|
||||
expectRows = 2
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt-10000:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt-10000))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdLog.info("start consume 2 processor")
|
||||
self.initConsumerInfoTable()
|
||||
consumerId = 2
|
||||
ifManualCommit = 1
|
||||
self.insertConsumerInfo(consumerId, expectrowcnt+10000,topicList,keyList,ifcheckdata,ifManualCommit)
|
||||
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
|
||||
|
||||
tdLog.info("start one new thread to insert data")
|
||||
parameterDict['actionType'] = actionType.INSERT_DATA
|
||||
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
|
||||
prepareEnvThread.start()
|
||||
prepareEnvThread.join()
|
||||
|
||||
tdLog.info("start to check consume 0 and 1 and 2 result")
|
||||
expectRows = 3
|
||||
resultList = self.selectConsumeResult(expectRows)
|
||||
totalConsumeRows = 0
|
||||
for i in range(expectRows):
|
||||
totalConsumeRows += resultList[i]
|
||||
|
||||
if totalConsumeRows != expectrowcnt*2:
|
||||
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
|
||||
tdLog.exit("tmq consume rows error!")
|
||||
|
||||
tdSql.query("drop topic %s"%topicFromStb1)
|
||||
|
||||
tdLog.printNoPrefix("======== test case 10 end ...... ")
|
||||
|
||||
def tmqCase11(self, cfgPath, buildPath):
|
||||
tdLog.printNoPrefix("======== test case 11: ")
|
||||
|
||||
self.initConsumerTable()
|
||||
|
||||
# create and start thread
|
||||
parameterDict = {'cfg': '', \
|
||||
'actionType': 0, \
|
||||
'dbName': 'db11', \
|
||||
'dropFlag': 1, \
|
||||
'vgroups': 4, \
|
||||
'replica': 1, \
|
||||
'stbName': 'stb1', \
|
||||
'ctbNum': 10, \
|
||||
'rowsPerTbl': 10000, \
|
||||
'batchNum': 100, \
|
||||
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
|
||||
parameterDict['cfg'] = cfgPath
|
||||
|
||||
self.create_database(tdSql, parameterDict["dbName"])
|
||||
self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
|
||||
self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
|
||||
self.insert_data(tdSql,\
|
||||
parameterDict["dbName"],\
|
||||
parameterDict["stbName"],\
|
||||
parameterDict["ctbNum"],\
|
||||
parameterDict["rowsPerTbl"],\
|
||||
parameterDict["batchNum"])
|
||||
|
||||
tdLog.info("create topics from stb1")
|
||||
topicFromStb1 = 'topic_stb1'
|
||||
|
||||
tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
|
||||
consumerId = 0
|
||||
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
|
||||
topicList = topicFromStb1
|
||||
ifcheckdata = 0
|
||||
ifManualCommit = 1
|
||||
keyList = 'group.id:cgrp1,\
|
||||
enable.auto.commit:false,\
|
||||
auto.commit.interval.ms:6000,\
|
||||
auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 11 end ...... ")

    def tmqCase12(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 12: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db12', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
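        # consumer 0 reads from the earliest offset but is told to stop after a
        # quarter of the rows; with ifManualCommit = 0 no offset is committed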
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
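        # nothing was committed by consumer 0, so with reset policy "none" the
        # restarted consumer gets no start offset; the total stays at expectrowcnt/4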
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 12 end ...... ")

    def tmqCase13(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 13: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db13', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
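        # consumer 0 consumes the first quarter of the rows and, because
        # ifManualCommit = 1, commits its offset before exiting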
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 5
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
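        # consumer 1 resumes from the committed offset and stops at half of the
        # rows, so the running total after this round is expectrowcnt*(1/2+1/4)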
        self.insertConsumerInfo(consumerId, expectrowcnt/2,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt*(1/2+1/4):
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*(1/2+1/4)))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 2
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
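        # consumer 2 picks up after the committed half and drains the remaining
        # rows, bringing the grand total to exactly expectrowcnt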
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 3
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 13 end ...... ")

    def run(self):
        tdSql.prepare()
@@ -1375,8 +430,6 @@ class TDTestCase:
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        # self.tmqCase1(cfgPath, buildPath)
        # self.tmqCase2(cfgPath, buildPath)
        self.tmqCase3(cfgPath, buildPath)
        self.tmqCase4(cfgPath, buildPath)
        self.tmqCase5(cfgPath, buildPath)
File diff suppressed because it is too large
@@ -29,8 +29,8 @@ class TDTestCase:

     def init(self, conn, logSql):
         tdLog.debug(f"start to execute {__file__}")
-        #tdSql.init(conn.cursor())
-        tdSql.init(conn.cursor(), logSql)  # output sql.txt file
+        tdSql.init(conn.cursor())
+        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

     def getBuildPath(self):
         selfPath = os.path.dirname(os.path.realpath(__file__))
@@ -183,18 +183,15 @@ class TDTestCase:

         return

-    def tmqCase1(self, cfgPath, buildPath):
-        tdLog.printNoPrefix("======== test case 1: ")
+    def tmqCase8(self, cfgPath, buildPath):
+        tdLog.printNoPrefix("======== test case 8: ")

         self.initConsumerTable()

-        auotCtbNum = 5
-        auotCtbPrefix = 'autoCtb'
-
         # create and start thread
         parameterDict = {'cfg': '', \
                          'actionType': 0, \
-                         'dbName': 'db1', \
+                         'dbName': 'db8', \
                          'dropFlag': 1, \
                          'vgroups': 4, \
                          'replica': 1, \
@@ -208,63 +205,98 @@ class TDTestCase:
         self.create_database(tdSql, parameterDict["dbName"])
         self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
         self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
-        self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])
+        self.insert_data(tdSql,\
+                         parameterDict["dbName"],\
+                         parameterDict["stbName"],\
+                         parameterDict["ctbNum"],\
+                         parameterDict["rowsPerTbl"],\
+                         parameterDict["batchNum"])

         tdLog.info("create topics from stb1")
         topicFromStb1 = 'topic_stb1'

         tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
         consumerId = 0
-        expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
+        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
         topicList = topicFromStb1
         ifcheckdata = 0
-        ifManualCommit = 0
+        ifManualCommit = 1
         keyList = 'group.id:cgrp1,\
                    enable.auto.commit:false,\
                    auto.commit.interval.ms:6000,\
-                   auto.offset.reset:earliest'
+                   auto.offset.reset:latest'
         self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

-        tdLog.info("start consume processor")
+        tdLog.info("start consume 0 processor")
         pollDelay = 100
         showMsg = 1
         showRow = 1
         self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

-        # add some new child tables using auto creating mode
-        time.sleep(1)
-        for index in range(auotCtbNum):
-            tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], parameterDict["stbName"], index))
-
-        self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])
-
-        tdLog.info("insert process end, and start to check consume result")
+        tdLog.info("start to check consume 0 result")
         expectRows = 1
         resultList = self.selectConsumeResult(expectRows)
         totalConsumeRows = 0
         for i in range(expectRows):
             totalConsumeRows += resultList[i]

+        if totalConsumeRows != 0:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+            tdLog.exit("tmq consume rows error!")
+
+        tdLog.info("start consume 1 processor")
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start one new thread to insert data")
+        parameterDict['actionType'] = actionType.INSERT_DATA
+        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+        prepareEnvThread.start()
+        prepareEnvThread.join()
+
+        tdLog.info("start to check consume 0 and 1 result")
+        expectRows = 2
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
         if totalConsumeRows != expectrowcnt:
             tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
             tdLog.exit("tmq consume rows error!")

+        tdLog.info("start consume 2 processor")
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start one new thread to insert data")
+        parameterDict['actionType'] = actionType.INSERT_DATA
+        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+        prepareEnvThread.start()
+        prepareEnvThread.join()
+
+        tdLog.info("start to check consume 0 and 1 and 2 result")
+        expectRows = 3
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
+        if totalConsumeRows != expectrowcnt*2:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
+            tdLog.exit("tmq consume rows error!")
+
         tdSql.query("drop topic %s"%topicFromStb1)

-        tdLog.printNoPrefix("======== test case 1 end ...... ")
+        tdLog.printNoPrefix("======== test case 8 end ...... ")

-    def tmqCase2(self, cfgPath, buildPath):
-        tdLog.printNoPrefix("======== test case 2: ")
+    def tmqCase9(self, cfgPath, buildPath):
+        tdLog.printNoPrefix("======== test case 9: ")

         self.initConsumerTable()

-        auotCtbNum = 10
-        auotCtbPrefix = 'autoCtb'
-
         # create and start thread
         parameterDict = {'cfg': '', \
                          'actionType': 0, \
-                         'dbName': 'db2', \
+                         'dbName': 'db9', \
                          'dropFlag': 1, \
                          'vgroups': 4, \
                          'replica': 1, \
@@ -278,54 +310,92 @@ class TDTestCase:
         self.create_database(tdSql, parameterDict["dbName"])
         self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
         self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
-        self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])
+        self.insert_data(tdSql,\
+                         parameterDict["dbName"],\
+                         parameterDict["stbName"],\
+                         parameterDict["ctbNum"],\
+                         parameterDict["rowsPerTbl"],\
+                         parameterDict["batchNum"])

         self.create_stable(tdSql, parameterDict["dbName"], 'stb2')

-        tdLog.info("create topics from stb0/stb1")
+        tdLog.info("create topics from stb1")
         topicFromStb1 = 'topic_stb1'
-        topicFromStb2 = 'topic_stb2'

         tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
-        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb2, parameterDict['dbName'], 'stb2'))
         consumerId = 0
-        expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
-        topicList = '%s, %s'%(topicFromStb1,topicFromStb2)
+        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+        topicList = topicFromStb1
         ifcheckdata = 0
-        ifManualCommit = 0
+        ifManualCommit = 1
         keyList = 'group.id:cgrp1,\
                    enable.auto.commit:false,\
                    auto.commit.interval.ms:6000,\
-                   auto.offset.reset:earliest'
+                   auto.offset.reset:latest'
         self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

-        tdLog.info("start consume processor")
+        tdLog.info("start consume 0 processor")
         pollDelay = 100
         showMsg = 1
         showRow = 1
         self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

-        # add some new child tables using auto creating mode
-        time.sleep(1)
-        for index in range(auotCtbNum):
-            tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], 'stb2', index))
-
-        self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])
-
-        tdLog.info("insert process end, and start to check consume result")
+        tdLog.info("start to check consume 0 result")
         expectRows = 1
         resultList = self.selectConsumeResult(expectRows)
         totalConsumeRows = 0
         for i in range(expectRows):
             totalConsumeRows += resultList[i]

+        if totalConsumeRows != 0:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+            tdLog.exit("tmq consume rows error!")
+
+        tdLog.info("start consume 1 processor")
+        self.initConsumerInfoTable()
+        consumerId = 1
+        ifManualCommit = 0
+        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start one new thread to insert data")
+        parameterDict['actionType'] = actionType.INSERT_DATA
+        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+        prepareEnvThread.start()
+        prepareEnvThread.join()
+
+        tdLog.info("start to check consume 0 and 1 result")
+        expectRows = 2
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
         if totalConsumeRows != expectrowcnt:
             tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
             tdLog.exit("tmq consume rows error!")

+        tdLog.info("start consume 2 processor")
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start one new thread to insert data")
+        parameterDict['actionType'] = actionType.INSERT_DATA
+        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+        prepareEnvThread.start()
+        prepareEnvThread.join()
+
+        tdLog.info("start to check consume 0 and 1 and 2 result")
+        expectRows = 3
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
+        if totalConsumeRows != expectrowcnt*2:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
+            tdLog.exit("tmq consume rows error!")
+
         tdSql.query("drop topic %s"%topicFromStb1)

-        tdLog.printNoPrefix("======== test case 2 end ...... ")
+        tdLog.printNoPrefix("======== test case 9 end ...... ")

     def run(self):
         tdSql.prepare()

@@ -338,8 +408,8 @@ class TDTestCase:
         cfgPath = buildPath + "/../sim/psim/cfg"
         tdLog.info("cfgPath: %s" % cfgPath)

-        self.tmqCase1(cfgPath, buildPath)
-        self.tmqCase2(cfgPath, buildPath)
+        self.tmqCase8(cfgPath, buildPath)
+        self.tmqCase9(cfgPath, buildPath)

     def stop(self):
         tdSql.close()
@@ -0,0 +1,607 @@

import taos
import sys
import time
import socket
import os
import threading
from enum import Enum

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *

class actionType(Enum):
    CREATE_DATABASE = 0
    CREATE_STABLE = 1
    CREATE_CTABLE = 2
    INSERT_DATA = 3

class TDTestCase:
    hostname = socket.gethostname()
    #rpcDebugFlagVal = '143'
    #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #print ("===================: ", updatecfgDict)

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def newcur(self, cfg, host, port):
        user = "root"
        password = "taosdata"
        con = taos.connect(host=host, user=user, password=password, config=cfg, port=port)
        cur = con.cursor()
        print(cur)
        return cur

    def initConsumerTable(self, cdbName='cdb'):
        tdLog.info("create consume database, and consume info table, and consume result table")
        tdSql.query("create database if not exists %s vgroups 1"%(cdbName))
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("drop table if exists %s.consumeresult "%(cdbName))

        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
        tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)

    def initConsumerInfoTable(self, cdbName='cdb'):
        tdLog.info("drop consumeinfo table")
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)

    def insertConsumerInfo(self, consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifmanualcommit, cdbName='cdb'):
        sql = "insert into %s.consumeinfo values "%cdbName
        sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
        tdLog.info("consume info sql: %s"%sql)
        tdSql.query(sql)

    def selectConsumeResult(self, expectRows, cdbName='cdb'):
        resultList = []
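        # poll the consume result table until the expected number of consumer
        # result rows shows up, then collect the per-consumer row counts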
        while 1:
            tdSql.query("select * from %s.consumeresult"%cdbName)
            #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3)))
            if tdSql.getRows() == expectRows:
                break
            else:
                time.sleep(5)

        for i in range(expectRows):
            tdLog.info ("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3)))
            resultList.append(tdSql.getData(i , 3))

        return resultList

    def startTmqSimProcess(self, buildPath, cfgPath, pollDelay, dbName, showMsg=1, showRow=1, cdbName='cdb', valgrind=0):
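        # launch the tmq_sim consumer as a detached background process,
        # optionally under valgrind when memory checking is requested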
        shellCmd = 'nohup '
        if valgrind == 1:
            logFile = cfgPath + '/../log/valgrind-tmq.log'
            shellCmd = 'nohup valgrind --log-file=' + logFile
            shellCmd += ' --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '

        shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath
        shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
        shellCmd += "> /dev/null 2>&1 &"
        tdLog.info(shellCmd)
        os.system(shellCmd)

    def create_database(self, tsql, dbName, dropFlag=1, vgroups=4, replica=1):
        if dropFlag == 1:
            tsql.execute("drop database if exists %s"%(dbName))

        tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica))
        tdLog.debug("complete to create database %s"%(dbName))
        return

    def create_stable(self, tsql, dbName, stbName):
        tsql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(dbName, stbName))
        tdLog.debug("complete to create %s.%s" %(dbName, stbName))
        return

    def create_ctables(self, tsql, dbName, stbName, ctbNum):
        tsql.execute("use %s" %dbName)
        pre_create = "create table"
        sql = pre_create
        #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
        for i in range(ctbNum):
            sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1)
            if (i > 0) and (i%100 == 0):
                tsql.execute(sql)
                sql = pre_create
        if sql != pre_create:
            tsql.execute(sql)

        tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName))
        return

    def insert_data(self, tsql, dbName, stbName, ctbNum, rowsPerTbl, batchNum, startTs=0):
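        # build multi-row INSERT statements and flush one statement to the
        # server every batchNum rows (or at the end of each child table)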
        tdLog.debug("start to insert data ............")
        tsql.execute("use %s" %dbName)
        pre_insert = "insert into "
        sql = pre_insert

        if startTs == 0:
            t = time.time()
            startTs = int(round(t * 1000))

        #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
        rowsOfSql = 0
        for i in range(ctbNum):
            sql += " %s_%d values "%(stbName,i)
            for j in range(rowsPerTbl):
                sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
                rowsOfSql += 1
                if (j > 0) and ((rowsOfSql == batchNum) or (j == rowsPerTbl - 1)):
                    tsql.execute(sql)
                    rowsOfSql = 0
                    if j < rowsPerTbl - 1:
                        sql = "insert into %s_%d values " %(stbName,i)
                    else:
                        sql = "insert into "
        #end sql
        if sql != pre_insert:
            #print("insert sql:%s"%sql)
            tsql.execute(sql)
        tdLog.debug("insert data ............ [OK]")
        return

    def prepareEnv(self, **parameterDict):
        # create a new connection for this thread
        tsql = self.newcur(parameterDict['cfg'], 'localhost', 6030)

        if parameterDict["actionType"] == actionType.CREATE_DATABASE:
            self.create_database(tsql, parameterDict["dbName"])
        elif parameterDict["actionType"] == actionType.CREATE_STABLE:
            self.create_stable(tsql, parameterDict["dbName"], parameterDict["stbName"])
        elif parameterDict["actionType"] == actionType.CREATE_CTABLE:
            self.create_ctables(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        elif parameterDict["actionType"] == actionType.INSERT_DATA:
            self.insert_data(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"],\
                             parameterDict["rowsPerTbl"], parameterDict["batchNum"])
        else:
            tdLog.exit("unsupported action: %s" % parameterDict["actionType"])

        return

    def tmqCase10(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 10: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db10', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:latest'
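        # offset reset policy "latest": consumer 0 starts past the pre-loaded
        # rows, so it is expected to consume nothing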
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume 0 processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume 0 result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 1 processor")
        self.initConsumerInfoTable()
        consumerId = 1
        ifManualCommit = 1
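        # consumer 1 is capped at expectrowcnt-10000 rows; with the "latest"
        # policy it should only see data inserted after it subscribes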
        self.insertConsumerInfo(consumerId, expectrowcnt-10000,topicList,keyList,ifcheckdata,ifManualCommit)
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt-10000:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt-10000))
            tdLog.exit("tmq consume rows error!")

        tdLog.info("start consume 2 processor")
        self.initConsumerInfoTable()
        consumerId = 2
        ifManualCommit = 1
        self.insertConsumerInfo(consumerId, expectrowcnt+10000,topicList,keyList,ifcheckdata,ifManualCommit)
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start one new thread to insert data")
        parameterDict['actionType'] = actionType.INSERT_DATA
        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
        prepareEnvThread.start()
        prepareEnvThread.join()

        tdLog.info("start to check consume 0 and 1 and 2 result")
        expectRows = 3
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt*2:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 10 end ...... ")

    def tmqCase11(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 11: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db11', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != 0:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 11 end ...... ")

    def tmqCase12(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 12: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db12', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 12 end ...... ")

    def tmqCase13(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 13: ")

        self.initConsumerTable()

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db13', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql, \
                         parameterDict["dbName"], \
                         parameterDict["stbName"], \
                         parameterDict["ctbNum"], \
                         parameterDict["rowsPerTbl"], \
                         parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt/4:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 1
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt/2,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 2
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt*(1/2+1/4):
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*(1/2+1/4)))
            tdLog.exit("tmq consume rows error!")

        self.initConsumerInfoTable()
        consumerId = 2
        ifManualCommit = 1
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:none'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("again start consume processor")
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        tdLog.info("again check consume result")
        expectRows = 3
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 13 end ...... ")

    def run(self):
        tdSql.prepare()

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        self.tmqCase10(cfgPath, buildPath)
        self.tmqCase11(cfgPath, buildPath)
        self.tmqCase12(cfgPath, buildPath)
        self.tmqCase13(cfgPath, buildPath)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@@ -0,0 +1,351 @@

import taos
import sys
import time
import socket
import os
import threading
from enum import Enum

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *

class actionType(Enum):
    CREATE_DATABASE = 0
    CREATE_STABLE = 1
    CREATE_CTABLE = 2
    INSERT_DATA = 3

class TDTestCase:
    hostname = socket.gethostname()
    #rpcDebugFlagVal = '143'
    #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
    #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
    #print ("===================: ", updatecfgDict)

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def newcur(self, cfg, host, port):
        user = "root"
        password = "taosdata"
        con = taos.connect(host=host, user=user, password=password, config=cfg, port=port)
        cur = con.cursor()
        print(cur)
        return cur

    def initConsumerTable(self, cdbName='cdb'):
        tdLog.info("create consume database, and consume info table, and consume result table")
        tdSql.query("create database if not exists %s vgroups 1"%(cdbName))
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("drop table if exists %s.consumeresult "%(cdbName))

        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
        tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)

    def initConsumerInfoTable(self, cdbName='cdb'):
        tdLog.info("drop consumeinfo table")
        tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
        tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)

    def insertConsumerInfo(self, consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifmanualcommit, cdbName='cdb'):
        sql = "insert into %s.consumeinfo values "%cdbName
        sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
        tdLog.info("consume info sql: %s"%sql)
        tdSql.query(sql)

    def selectConsumeResult(self, expectRows, cdbName='cdb'):
        resultList = []
        while 1:
            tdSql.query("select * from %s.consumeresult"%cdbName)
            #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3)))
            if tdSql.getRows() == expectRows:
                break
            else:
                time.sleep(5)

        for i in range(expectRows):
            tdLog.info ("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3)))
            resultList.append(tdSql.getData(i , 3))

        return resultList

    def startTmqSimProcess(self, buildPath, cfgPath, pollDelay, dbName, showMsg=1, showRow=1, cdbName='cdb', valgrind=0):
        shellCmd = 'nohup '
        if valgrind == 1:
            logFile = cfgPath + '/../log/valgrind-tmq.log'
            shellCmd = 'nohup valgrind --log-file=' + logFile
            shellCmd += ' --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '

        shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath
        shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
        shellCmd += "> /dev/null 2>&1 &"
        tdLog.info(shellCmd)
        os.system(shellCmd)

    def create_database(self, tsql, dbName, dropFlag=1, vgroups=4, replica=1):
        if dropFlag == 1:
            tsql.execute("drop database if exists %s"%(dbName))

        tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica))
        tdLog.debug("complete to create database %s"%(dbName))
        return

    def create_stable(self, tsql, dbName, stbName):
        tsql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(dbName, stbName))
        tdLog.debug("complete to create %s.%s" %(dbName, stbName))
        return

    def create_ctables(self, tsql, dbName, stbName, ctbNum):
        tsql.execute("use %s" %dbName)
        pre_create = "create table"
        sql = pre_create
        #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
        for i in range(ctbNum):
            sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1)
            if (i > 0) and (i%100 == 0):
                tsql.execute(sql)
                sql = pre_create
        if sql != pre_create:
            tsql.execute(sql)

        tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName))
        return

    def insert_data(self, tsql, dbName, stbName, ctbNum, rowsPerTbl, batchNum, startTs=0):
        tdLog.debug("start to insert data ............")
        tsql.execute("use %s" %dbName)
        pre_insert = "insert into "
        sql = pre_insert

        if startTs == 0:
            t = time.time()
            startTs = int(round(t * 1000))

        #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
        rowsOfSql = 0
        for i in range(ctbNum):
            sql += " %s_%d values "%(stbName,i)
            for j in range(rowsPerTbl):
                sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
                rowsOfSql += 1
                if (j > 0) and ((rowsOfSql == batchNum) or (j == rowsPerTbl - 1)):
                    tsql.execute(sql)
                    rowsOfSql = 0
                    if j < rowsPerTbl - 1:
                        sql = "insert into %s_%d values " %(stbName,i)
                    else:
                        sql = "insert into "
        #end sql
        if sql != pre_insert:
            #print("insert sql:%s"%sql)
            tsql.execute(sql)
        tdLog.debug("insert data ............ [OK]")
        return

    def prepareEnv(self, **parameterDict):
        # create a new connection for this thread
        tsql = self.newcur(parameterDict['cfg'], 'localhost', 6030)

        if parameterDict["actionType"] == actionType.CREATE_DATABASE:
            self.create_database(tsql, parameterDict["dbName"])
        elif parameterDict["actionType"] == actionType.CREATE_STABLE:
            self.create_stable(tsql, parameterDict["dbName"], parameterDict["stbName"])
        elif parameterDict["actionType"] == actionType.CREATE_CTABLE:
            self.create_ctables(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        elif parameterDict["actionType"] == actionType.INSERT_DATA:
            self.insert_data(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"],\
                             parameterDict["rowsPerTbl"], parameterDict["batchNum"])
        else:
            tdLog.exit("unsupported action: %s" % parameterDict["actionType"])

        return

    def tmqCase1(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 1: ")

        self.initConsumerTable()

        auotCtbNum = 5
        auotCtbPrefix = 'autoCtb'

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db1', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
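        # rows from both the 10 pre-created child tables and the auotCtbNum
        # auto-created ones are expected by this consumer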
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # add some new child tables using auto creating mode
        time.sleep(1)
        for index in range(auotCtbNum):
            tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], parameterDict["stbName"], index))

        self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 1 end ...... ")

    def tmqCase2(self, cfgPath, buildPath):
        tdLog.printNoPrefix("======== test case 2: ")

        self.initConsumerTable()

        auotCtbNum = 10
        auotCtbPrefix = 'autoCtb'

        # create and start thread
        parameterDict = {'cfg': '', \
                         'actionType': 0, \
                         'dbName': 'db2', \
                         'dropFlag': 1, \
                         'vgroups': 4, \
                         'replica': 1, \
                         'stbName': 'stb1', \
                         'ctbNum': 10, \
                         'rowsPerTbl': 10000, \
                         'batchNum': 100, \
                         'startTs': 1640966400000}    # 2022-01-01 00:00:00.000
        parameterDict['cfg'] = cfgPath

        self.create_database(tdSql, parameterDict["dbName"])
        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
        self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])

        self.create_stable(tdSql, parameterDict["dbName"], 'stb2')

        tdLog.info("create topics from stb0/stb1")
        topicFromStb1 = 'topic_stb1'
        topicFromStb2 = 'topic_stb2'

        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb2, parameterDict['dbName'], 'stb2'))
        consumerId = 0
        expectrowcnt = parameterDict["rowsPerTbl"] * (auotCtbNum + parameterDict["ctbNum"])
        topicList = '%s, %s'%(topicFromStb1,topicFromStb2)
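        # one consumer subscribes to both topics; stb2 starts with no child
        # tables, so its rows arrive only from tables created after startup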
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)

        # add some new child tables using auto creating mode
        time.sleep(1)
        for index in range(auotCtbNum):
            tdSql.query("create table %s.%s_%d using %s.%s tags(%d)"%(parameterDict["dbName"], auotCtbPrefix, index, parameterDict["dbName"], 'stb2', index))

        self.insert_data(tdSql,parameterDict["dbName"],auotCtbPrefix,auotCtbNum,parameterDict["rowsPerTbl"],parameterDict["batchNum"])

        tdLog.info("insert process end, and start to check consume result")
        expectRows = 1
        resultList = self.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if totalConsumeRows != expectrowcnt:
            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 2 end ...... ")

    def run(self):
        tdSql.prepare()

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        cfgPath = buildPath + "/../sim/psim/cfg"
        tdLog.info("cfgPath: %s" % cfgPath)

        self.tmqCase1(cfgPath, buildPath)
        self.tmqCase2(cfgPath, buildPath)

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
File diff suppressed because it is too large
@@ -29,8 +29,8 @@ class TDTestCase:

     def init(self, conn, logSql):
         tdLog.debug(f"start to execute {__file__}")
-        #tdSql.init(conn.cursor())
-        tdSql.init(conn.cursor(), logSql)  # output sql.txt file
+        tdSql.init(conn.cursor())
+        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

     def getBuildPath(self):
         selfPath = os.path.dirname(os.path.realpath(__file__))
Some files were not shown because too many files have changed in this diff