diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md
index c8be1753a2..6f2343d347 100644
--- a/docs/en/12-taos-sql/14-stream.md
+++ b/docs/en/12-taos-sql/14-stream.md
@@ -67,7 +67,7 @@ If a stream is created with PARTITION BY clause and SUBTABLE clause, the name of
CREATE STREAM avg_vol_s INTO avg_vol SUBTABLE(CONCAT('new-', tname)) AS SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname tname INTERVAL(1m);
```
-IN PARTITION clause, 'tbname', representing each subtable name of source supertable, is given alias 'tname'. And 'tname' is used in SUBTABLE clause. In SUBTABLE clause, each auto created subtable will concat 'new-' and source subtable name as their name. Other expressions are also allowed in SUBTABLE clause, but the output type must be varchar.
+In the PARTITION clause, 'tbname', which represents the subtable name of the source supertable, is given the alias 'tname', and 'tname' is then used in the SUBTABLE clause. Each auto-created subtable concatenates 'new-' with the source subtable name to form its own name (starting from 3.2.3.0, '_groupId' is appended to the subtable name so that different subtables remain distinguishable even when the SUBTABLE expression cannot tell them apart). If the generated name exceeds the TDengine length limit (192), it is truncated. If the generated name is already occupied by another table, the creation of the new subtable and the writing of its data will fail.
diff --git a/docs/en/14-reference/12-config/index.md b/docs/en/14-reference/12-config/index.md
index 65c48f9190..c1abfd3e39 100755
--- a/docs/en/14-reference/12-config/index.md
+++ b/docs/en/14-reference/12-config/index.md
@@ -226,9 +226,10 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo
| Attribute | Description |
| ------------- | --------------------------------------------------------------------------------------------------------------- |
| Applicable | Client only |
-| Meaning | When the Last, First, LastRow function is queried, whether the returned column name contains the function name. |
-| Value Range | 0 means including the function name, 1 means not including the function name. |
-| Default Value | 0 |
+| Meaning | When the Last, First, or LastRow function is queried and no alias is specified, the alias is automatically set to the column name (excluding the function name). Consequently, if the order by clause refers to that column name, it automatically refers to the function applied to the column. |
+| Value Range | 1 means automatically setting the alias to the column name (excluding the function name); 0 means not setting the alias automatically. |
+| Default Value | 0 |
+| Notes | When several of these functions are applied to the same column at the same time and no alias is specified, the resulting columns share the same alias, so an order by clause that refers to the column name raises a column-selection ambiguity error. |

## Locale Parameters

diff --git a/docs/en/14-reference/13-schemaless/13-schemaless.md b/docs/en/14-reference/13-schemaless/13-schemaless.md
index 9b001ee79c..4a259f100c 100644
--- a/docs/en/14-reference/13-schemaless/13-schemaless.md
+++ b/docs/en/14-reference/13-schemaless/13-schemaless.md
@@ -84,18 +84,22 @@ Schemaless writes process row data according to the following principles.

1. You can use the following rules to generate the subtable names: first, combine the measurement name and the key and value of the label into the next string:

-```json
-"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
-```
+ ```json
+ "measurement,tag_key1=tag_value1,tag_key2=tag_value2"
+ ```

-:::tip
-Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol.
-The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t\_" is a fixed prefix that every table generated by this mapping relationship has.
-:::
+ :::tip
+ Note that tag_key1 and tag_key2 are not in the original order in which the user entered the tags; they are the result of sorting the tag names in ascending string order. Therefore, tag_key1 is not necessarily the first tag entered in the line protocol.
+ The string's MD5 hash value "md5_val" is calculated after the sorting is completed. The fixed prefix "t\_" is then combined with it to generate the table name "t_md5_val"; every table created through this mapping has this prefix.
+ :::

-If you do not want to use an automatically generated table name, there are two ways to specify sub table names, the first one has a higher priority.
-You can configure smlAutoChildTableNameDelimiter in taos.cfg(except for `@ # space \r \t \n`), for example, `smlAutoChildTableNameDelimiter=tname`. You can insert `st,t0=cpul,t1=4 c1=3 1626006833639000000` and the table name will be cpu1-4.
-You can configure smlChildTableName in taos.cfg to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpul,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
+ If you do not want to use an automatically generated table name, there are two ways to specify subtable names (the first one has a higher priority).
+
+ 1. You can configure smlAutoChildTableNameDelimiter in taos.cfg (any character except `@ # space \r \t \n`).
+ 1. For example, configure `smlAutoChildTableNameDelimiter=-` and insert `st,t0=cpu1,t1=4 c1=3 1626006833639000000`; the created table name will be cpu1-4.
+
+ 2. You can configure smlChildTableName in taos.cfg to specify table names.
+ 1. For example, configure `smlChildTableName=tname` and insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000`; the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.

2. If the super table obtained by parsing the line protocol does not exist, this super table is created. **Important:** Manually creating supertables for schemaless writing is not supported. Schemaless writing creates appropriate supertables automatically.
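To make the naming rule above concrete, here is a minimal, hypothetical C sketch of the described mapping (measurement plus tags already sorted by key, MD5-hashed, prefixed with "t_"). It uses OpenSSL's MD5() purely for illustration; it is not the actual TDengine implementation, and the hex encoding of the digest is an assumption.

```c
// Hypothetical illustration of the schemaless subtable-name rule documented
// above; OpenSSL's MD5() stands in for whatever hash routine TDengine uses.
#include <openssl/md5.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  // tag keys are already sorted in ascending string order, per the rule above
  const char *key = "measurement,tag_key1=tag_value1,tag_key2=tag_value2";

  unsigned char digest[MD5_DIGEST_LENGTH];
  MD5((const unsigned char *)key, strlen(key), digest);

  // "t_" is the fixed prefix; the rest is the hex form of the MD5 digest
  char name[2 + 2 * MD5_DIGEST_LENGTH + 1] = "t_";
  for (int i = 0; i < MD5_DIGEST_LENGTH; ++i) {
    snprintf(name + 2 + 2 * i, 3, "%02x", digest[i]);
  }
  printf("%s\n", name);  // t_ followed by 32 hex characters
  return 0;
}
```

Compile with `cc md5name.c -lcrypto` to try it; the point is only the sort-concatenate-hash-prefix pipeline, not the exact digest encoding.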
diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md
index 929cf9ee4e..979fc436b9 100644
--- a/docs/zh/12-taos-sql/14-stream.md
+++ b/docs/zh/12-taos-sql/14-stream.md
@@ -34,7 +34,7 @@ subquery: SELECT select_list

stb_name 是保存计算结果的超级表的表名,如果该超级表不存在,会自动创建;如果已存在,则检查列的schema信息。详见 写入已存在的超级表

-TAGS 字句定义了流计算中创建TAG的规则,可以为每个partition对应的子表生成自定义的TAG值,详见 自定义TAG
+TAGS 子句定义了流计算中创建TAG的规则,可以为每个partition对应的子表生成自定义的TAG值,详见 自定义TAG

```sql
create_definition:
    col_name column_definition
@@ -77,7 +77,7 @@ SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(
CREATE STREAM avg_vol_s INTO avg_vol SUBTABLE(CONCAT('new-', tname)) AS SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname tname INTERVAL(1m);
```

-PARTITION 子句中,为 tbname 定义了一个别名 tname, 在PARTITION 子句中的别名可以用于 SUBTABLE 子句中的表达式计算,在上述示例中,流新创建的子表将以前缀 'new-' 连接原表名作为表名。
+PARTITION 子句中,为 tbname 定义了一个别名 tname, 在PARTITION 子句中的别名可以用于 SUBTABLE 子句中的表达式计算,在上述示例中,流新创建的子表将以前缀 'new-' 连接原表名作为表名(从3.2.3.0开始,为了避免 SUBTABLE 中的表达式无法区分各个子表,即误将多个相同时间线写入一个子表,在指定的子表名后面加上 _groupId)。

注意,子表名的长度若超过 TDengine 的限制,将被截断。若要生成的子表名已经存在于另一超级表,由于 TDengine 的子表名是唯一的,因此对应新子表的创建以及数据的写入将会失败。

@@ -212,8 +212,8 @@ CREATE STREAM streams2 trigger at_once INTO st1 TAGS(cc varchar(100)) as select

PARTITION 子句中,为 concat("tag-", tbname)定义了一个别名cc, 对应超级表st1的自定义TAG的名字。在上述示例中,流新创建的子表的TAG将以前缀 'new-' 连接原表名作为TAG的值。

会对TAG信息进行如下检查
-1.检查tag的schema信息是否匹配,对于不匹配的,则自动进行数据类型转换,当前只有数据长度大于4096byte时才报错,其余场景都能进行类型转换。
-2.检查tag的个数是否相同,如果不同,需要显示的指定超级表与subquery的tag的对应关系,否则报错;如果相同,可以指定对应关系,也可以不指定,不指定则按位置顺序对应。
+1. 检查tag的schema信息是否匹配,对于不匹配的,则自动进行数据类型转换,当前只有数据长度大于4096byte时才报错,其余场景都能进行类型转换。
+2. 检查tag的个数是否相同,如果不同,需要显式地指定超级表与subquery的tag的对应关系,否则报错;如果相同,可以指定对应关系,也可以不指定,不指定则按位置顺序对应。

## 清理中间状态

diff --git a/docs/zh/14-reference/12-config/index.md b/docs/zh/14-reference/12-config/index.md
index 27a9618107..4d47f0771c 100755
--- a/docs/zh/14-reference/12-config/index.md
+++ b/docs/zh/14-reference/12-config/index.md
@@ -215,9 +215,10 @@ taos -C
| 属性 | 说明 |
| -------- | ----------------------------------------------------------- |
| 适用范围 | 仅客户端适用 |
-| 含义 | Last、First、LastRow 函数查询时,返回的列名是否包含函数名。 |
-| 取值范围 | 0 表示包含函数名,1 表示不包含函数名。 |
-| 缺省值 | 0 |
+| 含义 | Last、First、LastRow 函数查询且未指定别名时,自动设置别名为列名(不含函数名),因此 order by 子句如果引用了该列名将自动引用该列对应的函数 |
+| 取值范围 | 1 表示自动设置别名为列名(不包含函数名),0 表示不自动设置别名。 |
+| 缺省值 | 0 |
+| 补充说明 | 当同时出现多个上述函数作用于同一列且未指定别名时,如果 order by 子句引用了该列名,将会因为多列别名相同引发列选择冲突 |

### countAlwaysReturnValue

diff --git a/docs/zh/14-reference/13-schemaless/13-schemaless.md b/docs/zh/14-reference/13-schemaless/13-schemaless.md
index 723531676c..47737ab7ee 100644
--- a/docs/zh/14-reference/13-schemaless/13-schemaless.md
+++ b/docs/zh/14-reference/13-schemaless/13-schemaless.md
@@ -87,19 +87,20 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000

1. 将使用如下规则来生成子表名:首先将 measurement 的名称和标签的 key 和 value 组合成为如下的字符串

-```json
-"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
-```
+ ```json
+ "measurement,tag_key1=tag_value1,tag_key2=tag_value2"
+ ```

-:::tip
-需要注意的是,这里的 tag_key1, tag_key2 并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。
-排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与字符串组合生成表名:“t_md5_val”。其中的 “t_” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。
-:::tip
-如果不想用自动生成的表名,有两种指定子表名的方式,第一种优先级更高:
-通过在taos.cfg里配置 smlAutoChildTableNameDelimiter 参数来指定(`@ # 空格 回车 换行 制表符`除外)。
-举例如下:配置 smlAutoChildTableNameDelimiter=- 插入数据为 st,t0=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1-4。
-通过在taos.cfg里配置 smlChildTableName 参数来指定。
-举例如下:配置 smlChildTableName=tname 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1,注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。
+ :::tip
+ 需要注意的是,这里的 tag_key1, tag_key2 并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。
+ 排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与固定前缀组合生成表名:“t_md5_val”。其中的 “t_” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。
+ :::
+
+ 如果不想用自动生成的表名,有两种指定子表名的方式(第一种优先级更高)。
+ 1. 通过在taos.cfg里配置 smlAutoChildTableNameDelimiter 参数来指定(`@ # 空格 回车 换行 制表符`除外)。
+ 1. 举例如下:配置 smlAutoChildTableNameDelimiter=- 插入数据为 st,t0=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1-4。
+ 2. 通过在taos.cfg里配置 smlChildTableName 参数来指定。
+ 1. 举例如下:配置 smlChildTableName=tname 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的表名为 cpu1,注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略。

2. 如果解析行协议获得的超级表不存在,则会创建这个超级表(不建议手动创建超级表,不然插入数据可能异常)。
3. 如果解析行协议获得子表不存在,则 Schemaless 会按照步骤 1 或 2 确定的子表名来创建子表。
diff --git a/include/common/tmsg.h b/include/common/tmsg.h
index 084a445423..be7ffdd105 100644
--- a/include/common/tmsg.h
+++ b/include/common/tmsg.h
@@ -1983,6 +1983,14 @@ typedef struct {
  char data[];
} SRetrieveTableRsp;

+typedef struct {
+  int64_t version;
+  int64_t numOfRows;
+  int8_t  compressed;
+  int8_t  precision;
+  char    data[];
+} SRetrieveTableRspForTmq;
+
typedef struct {
  int64_t handle;
  int64_t useconds;
diff --git a/include/dnode/vnode/tqCommon.h b/include/dnode/vnode/tqCommon.h
index 39e0c07406..50458d684f 100644
--- a/include/dnode/vnode/tqCommon.h
+++ b/include/dnode/vnode/tqCommon.h
@@ -32,8 +32,9 @@ int32_t tqStreamTaskProcessDeployReq(SStreamMeta* pMeta, SMsgCb* cb, int64_t sve
                                     bool isLeader, bool restored);
int32_t tqStreamTaskProcessDropReq(SStreamMeta* pMeta, char* msg, int32_t msgLen);
int32_t tqStreamTaskProcessRunReq(SStreamMeta* pMeta, SRpcMsg* pMsg, bool isLeader);
-int32_t tqStreamTaskResetStatus(SStreamMeta* pMeta, int32_t* numOfTasks);
+int32_t tqStreamTaskResetStatus(SStreamMeta* pMeta);
int32_t tqStartTaskCompleteCallback(SStreamMeta* pMeta);
+int32_t tqStreamTasksGetTotalNum(SStreamMeta* pMeta);
int32_t tqStreamTaskProcessTaskResetReq(SStreamMeta* pMeta, SRpcMsg* pMsg);
int32_t tqStreamTaskProcessTaskPauseReq(SStreamMeta* pMeta, char* pMsg);
int32_t tqStreamTaskProcessTaskResumeReq(void* handle, int64_t sversion, char* pMsg, bool fromVnode);
diff --git a/include/libs/function/function.h b/include/libs/function/function.h
index ffaad69373..8863201094 100644
--- a/include/libs/function/function.h
+++ b/include/libs/function/function.h
@@ -249,6 +249,10 @@ typedef struct SPoint {
int32_t taosGetLinearInterpolationVal(SPoint *point, int32_t outputType, SPoint *point1, SPoint *point2,
                                      int32_t inputType);

+#define LEASTSQUARES_DOUBLE_ITEM_LENGTH 25
+#define LEASTSQUARES_BUFF_LENGTH        128
+#define DOUBLE_PRECISION_DIGITS         "16e"
+
#ifdef __cplusplus
}
#endif
diff --git
a/include/libs/nodes/plannodes.h b/include/libs/nodes/plannodes.h index 6cb83ebb51..8e375e9da8 100644 --- a/include/libs/nodes/plannodes.h +++ b/include/libs/nodes/plannodes.h @@ -156,6 +156,7 @@ typedef struct SProjectLogicNode { SNodeList* pProjections; char stmtName[TSDB_TABLE_NAME_LEN]; bool ignoreGroupId; + bool inputIgnoreGroup; } SProjectLogicNode; typedef struct SIndefRowsFuncLogicNode { @@ -449,6 +450,7 @@ typedef struct SProjectPhysiNode { SNodeList* pProjections; bool mergeDataBlock; bool ignoreGroupId; + bool inputIgnoreGroup; } SProjectPhysiNode; typedef struct SIndefRowsFuncPhysiNode { diff --git a/include/libs/parser/parser.h b/include/libs/parser/parser.h index 177607b178..bb206b5a02 100644 --- a/include/libs/parser/parser.h +++ b/include/libs/parser/parser.h @@ -150,7 +150,7 @@ int32_t smlBindData(SQuery* handle, bool dataFormat, SArray* tags, SArray* colsS STableMeta* pTableMeta, char* tableName, const char* sTableName, int32_t sTableNameLen, int32_t ttl, char* msgBuf, int32_t msgBufLen); int32_t smlBuildOutput(SQuery* handle, SHashObj* pVgHash); -int rawBlockBindData(SQuery *query, STableMeta* pTableMeta, void* data, SVCreateTbReq* pCreateTb, TAOS_FIELD *fields, int numFields, bool needChangeLength); +int rawBlockBindData(SQuery *query, STableMeta* pTableMeta, void* data, SVCreateTbReq** pCreateTb, TAOS_FIELD *fields, int numFields, bool needChangeLength); int32_t rewriteToVnodeModifyOpStmt(SQuery* pQuery, SArray* pBufArray); SArray* serializeVgroupsCreateTableBatch(SHashObj* pVgroupHashmap); diff --git a/include/libs/stream/tstream.h b/include/libs/stream/tstream.h index 4e87e71137..c6923a2233 100644 --- a/include/libs/stream/tstream.h +++ b/include/libs/stream/tstream.h @@ -690,9 +690,9 @@ typedef struct STaskStatusEntry { int64_t verStart; // start version in WAL, only valid for source task int64_t verEnd; // end version in WAL, only valid for source task int64_t processedVer; // only valid for source task - int64_t activeCheckpointId; // current active checkpoint id + int64_t checkpointId; // current active checkpoint id int32_t chkpointTransId; // checkpoint trans id - bool checkpointFailed; // denote if the checkpoint is failed or not + int8_t checkpointFailed; // denote if the checkpoint is failed or not bool inputQChanging; // inputQ is changing or not int64_t inputQUnchangeCounter; double inputQUsed; // in MiB @@ -875,6 +875,8 @@ void streamMetaStartHb(SStreamMeta* pMeta); bool streamMetaTaskInTimer(SStreamMeta* pMeta); int32_t streamMetaAddTaskLaunchResult(SStreamMeta* pMeta, int64_t streamId, int32_t taskId, int64_t startTs, int64_t endTs, bool ready); +int32_t streamMetaResetTaskStatus(SStreamMeta* pMeta); + void streamMetaRLock(SStreamMeta* pMeta); void streamMetaRUnLock(SStreamMeta* pMeta); void streamMetaWLock(SStreamMeta* pMeta); @@ -887,6 +889,7 @@ int32_t streamMetaStartAllTasks(SStreamMeta* pMeta); int32_t streamMetaStopAllTasks(SStreamMeta* pMeta); int32_t streamMetaStartOneTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId); bool streamMetaAllTasksReady(const SStreamMeta* pMeta); +tmr_h streamTimerGetInstance(); // checkpoint int32_t streamProcessCheckpointSourceReq(SStreamTask* pTask, SStreamCheckpointSourceReq* pReq); diff --git a/include/util/taoserror.h b/include/util/taoserror.h index 827567904a..47dc8c11b1 100644 --- a/include/util/taoserror.h +++ b/include/util/taoserror.h @@ -748,6 +748,7 @@ int32_t* taosGetErrno(); #define TSDB_CODE_PAR_INVALID_VIEW_QUERY TAOS_DEF_ERROR_CODE(0, 0x266C) #define TSDB_CODE_PAR_COL_QUERY_MISMATCH 
TAOS_DEF_ERROR_CODE(0, 0x266D) #define TSDB_CODE_PAR_VIEW_CONFLICT_WITH_TABLE TAOS_DEF_ERROR_CODE(0, 0x266E) +#define TSDB_CODE_PAR_ORDERBY_AMBIGUOUS TAOS_DEF_ERROR_CODE(0, 0x266F) #define TSDB_CODE_PAR_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0x26FF) //planner diff --git a/source/client/inc/clientInt.h b/source/client/inc/clientInt.h index a6d5039be7..f73d121b15 100644 --- a/source/client/inc/clientInt.h +++ b/source/client/inc/clientInt.h @@ -155,6 +155,7 @@ typedef struct STscObj { int8_t biMode; int32_t acctId; uint32_t connId; + int32_t appHbMgrIdx; int64_t id; // ref ID returned by taosAddRef TdThreadMutex mutex; // used to protect the operation on db int32_t numOfReqs; // number of sqlObj bound to this connection @@ -298,6 +299,8 @@ void doSetOneRowPtr(SReqResultInfo* pResultInfo); void setResPrecision(SReqResultInfo* pResInfo, int32_t precision); int32_t setQueryResultFromRsp(SReqResultInfo* pResultInfo, const SRetrieveTableRsp* pRsp, bool convertUcs4, bool freeAfterUse); +int32_t setResultDataPtr(SReqResultInfo* pResultInfo, TAOS_FIELD* pFields, int32_t numOfCols, int32_t numOfRows, + bool convertUcs4); void setResSchemaInfo(SReqResultInfo* pResInfo, const SSchema* pSchema, int32_t numOfCols); void doFreeReqResultInfo(SReqResultInfo* pResInfo); int32_t transferTableNameList(const char* tbList, int32_t acctId, char* dbName, SArray** pReq); diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index b6c5701915..f1e9e36433 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -283,6 +283,7 @@ void *createTscObj(const char *user, const char *auth, const char *db, int32_t c pObj->connType = connType; pObj->pAppInfo = pAppInfo; + pObj->appHbMgrIdx = pAppInfo->pAppHbMgr->idx; tstrncpy(pObj->user, user, sizeof(pObj->user)); memcpy(pObj->pass, auth, TSDB_PASSWORD_LEN); diff --git a/source/client/src/clientHb.c b/source/client/src/clientHb.c index b3d9a9ced5..63a65d7c95 100644 --- a/source/client/src/clientHb.c +++ b/source/client/src/clientHb.c @@ -30,7 +30,7 @@ typedef struct { }; } SHbParam; -static SClientHbMgr clientHbMgr = {0}; +SClientHbMgr clientHbMgr = {0}; static int32_t hbCreateThread(); static void hbStopThread(); @@ -1294,9 +1294,8 @@ void hbMgrCleanUp() { taosThreadMutexLock(&clientHbMgr.lock); appHbMgrCleanup(); - taosArrayDestroy(clientHbMgr.appHbMgrs); + clientHbMgr.appHbMgrs = taosArrayDestroy(clientHbMgr.appHbMgrs); taosThreadMutexUnlock(&clientHbMgr.lock); - clientHbMgr.appHbMgrs = NULL; } int hbRegisterConnImpl(SAppHbMgr *pAppHbMgr, SClientHbKey connKey, int64_t clusterId) { @@ -1335,19 +1334,18 @@ int hbRegisterConn(SAppHbMgr *pAppHbMgr, int64_t tscRefId, int64_t clusterId, in } void hbDeregisterConn(STscObj *pTscObj, SClientHbKey connKey) { - SAppHbMgr *pAppHbMgr = pTscObj->pAppInfo->pAppHbMgr; - SClientHbReq *pReq = taosHashAcquire(pAppHbMgr->activeInfo, &connKey, sizeof(SClientHbKey)); - if (pReq) { - tFreeClientHbReq(pReq); - taosHashRemove(pAppHbMgr->activeInfo, &connKey, sizeof(SClientHbKey)); - taosHashRelease(pAppHbMgr->activeInfo, pReq); + taosThreadMutexLock(&clientHbMgr.lock); + SAppHbMgr *pAppHbMgr = taosArrayGetP(clientHbMgr.appHbMgrs, pTscObj->appHbMgrIdx); + if (pAppHbMgr) { + SClientHbReq *pReq = taosHashAcquire(pAppHbMgr->activeInfo, &connKey, sizeof(SClientHbKey)); + if (pReq) { + tFreeClientHbReq(pReq); + taosHashRemove(pAppHbMgr->activeInfo, &connKey, sizeof(SClientHbKey)); + taosHashRelease(pAppHbMgr->activeInfo, pReq); + atomic_sub_fetch_32(&pAppHbMgr->connKeyCnt, 1); + } } - - if (NULL == pReq) { - 
return; - } - - atomic_sub_fetch_32(&pAppHbMgr->connKeyCnt, 1); + taosThreadMutexUnlock(&clientHbMgr.lock); } // set heart beat thread quit mode , if quicByKill 1 then kill thread else quit from inner diff --git a/source/client/src/clientImpl.c b/source/client/src/clientImpl.c index d4f89d6b54..9800d233e9 100644 --- a/source/client/src/clientImpl.c +++ b/source/client/src/clientImpl.c @@ -1795,6 +1795,7 @@ int32_t getVersion1BlockMetaSize(const char* p, int32_t numOfCols) { static int32_t estimateJsonLen(SReqResultInfo* pResultInfo, int32_t numOfCols, int32_t numOfRows) { char* p = (char*)pResultInfo->pData; + int32_t blockVersion = *(int32_t*)p; // | version | total length | total rows | total columns | flag seg| block group id | column schema | each column // length | @@ -1810,7 +1811,7 @@ static int32_t estimateJsonLen(SReqResultInfo* pResultInfo, int32_t numOfCols, i char* pStart = p + len; for (int32_t i = 0; i < numOfCols; ++i) { - int32_t colLen = htonl(colLength[i]); + int32_t colLen = (blockVersion == 1) ? htonl(colLength[i]) : colLength[i]; if (pResultInfo->fields[i].type == TSDB_DATA_TYPE_JSON) { int32_t* offset = (int32_t*)pStart; @@ -1873,6 +1874,7 @@ static int32_t doConvertJson(SReqResultInfo* pResultInfo, int32_t numOfCols, int tscDebug("start to convert form json format string"); char* p = (char*)pResultInfo->pData; + int32_t blockVersion = *(int32_t*)p; int32_t dataLen = estimateJsonLen(pResultInfo, numOfCols, numOfRows); if (dataLen <= 0) { return TSDB_CODE_TSC_INTERNAL_ERROR; @@ -1908,8 +1910,8 @@ static int32_t doConvertJson(SReqResultInfo* pResultInfo, int32_t numOfCols, int char* pStart = p; char* pStart1 = p1; for (int32_t i = 0; i < numOfCols; ++i) { - int32_t colLen = htonl(colLength[i]); - int32_t colLen1 = htonl(colLength1[i]); + int32_t colLen = (blockVersion == 1) ? htonl(colLength[i]) : colLength[i]; + int32_t colLen1 = (blockVersion == 1) ? htonl(colLength1[i]) : colLength1[i]; if (ASSERT(colLen < dataLen)) { tscError("doConvertJson error: colLen:%d >= dataLen:%d", colLen, dataLen); return TSDB_CODE_TSC_INTERNAL_ERROR; @@ -1968,7 +1970,7 @@ static int32_t doConvertJson(SReqResultInfo* pResultInfo, int32_t numOfCols, int } colLen1 = len; totalLen += colLen1; - colLength1[i] = htonl(len); + colLength1[i] = (blockVersion == 1) ? 
htonl(len) : len; } else if (IS_VAR_DATA_TYPE(pResultInfo->fields[i].type)) { len = numOfRows * sizeof(int32_t); memcpy(pStart1, pStart, len); @@ -2057,7 +2059,9 @@ int32_t setResultDataPtr(SReqResultInfo* pResultInfo, TAOS_FIELD* pFields, int32 char* pStart = p; for (int32_t i = 0; i < numOfCols; ++i) { - colLength[i] = htonl(colLength[i]); + if(blockVersion == 1){ + colLength[i] = htonl(colLength[i]); + } if (colLength[i] >= dataLen) { tscError("invalid colLength %d, dataLen %d", colLength[i], dataLen); return TSDB_CODE_TSC_INTERNAL_ERROR; diff --git a/source/client/src/clientMsgHandler.c b/source/client/src/clientMsgHandler.c index e0cedb9924..324b99022b 100644 --- a/source/client/src/clientMsgHandler.c +++ b/source/client/src/clientMsgHandler.c @@ -26,6 +26,8 @@ #include "tname.h" #include "tversion.h" +extern SClientHbMgr clientHbMgr; + static void setErrno(SRequestObj* pRequest, int32_t code) { pRequest->code = code; terrno = code; @@ -63,8 +65,9 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) { STscObj* pTscObj = pRequest->pTscObj; - if (NULL == pTscObj->pAppInfo || NULL == pTscObj->pAppInfo->pAppHbMgr) { - setErrno(pRequest, TSDB_CODE_TSC_DISCONNECTED); + if (NULL == pTscObj->pAppInfo) { + code = TSDB_CODE_TSC_DISCONNECTED; + setErrno(pRequest, code); tsem_post(&pRequest->body.rspSem); goto End; } @@ -95,7 +98,8 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) { } if (connectRsp.epSet.numOfEps == 0) { - setErrno(pRequest, TSDB_CODE_APP_ERROR); + code = TSDB_CODE_APP_ERROR; + setErrno(pRequest, code); tsem_post(&pRequest->body.rspSem); goto End; } @@ -142,7 +146,18 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) { pTscObj->authVer = connectRsp.authVer; pTscObj->whiteListInfo.ver = connectRsp.whiteListVer; - hbRegisterConn(pTscObj->pAppInfo->pAppHbMgr, pTscObj->id, connectRsp.clusterId, connectRsp.connType); + taosThreadMutexLock(&clientHbMgr.lock); + SAppHbMgr* pAppHbMgr = taosArrayGetP(clientHbMgr.appHbMgrs, pTscObj->appHbMgrIdx); + if (pAppHbMgr) { + hbRegisterConn(pAppHbMgr, pTscObj->id, connectRsp.clusterId, connectRsp.connType); + } else { + taosThreadMutexUnlock(&clientHbMgr.lock); + code = TSDB_CODE_TSC_DISCONNECTED; + setErrno(pRequest, code); + tsem_post(&pRequest->body.rspSem); + goto End; + } + taosThreadMutexUnlock(&clientHbMgr.lock); tscDebug("0x%" PRIx64 " clusterId:%" PRId64 ", totalConn:%" PRId64, pRequest->requestId, connectRsp.clusterId, pTscObj->pAppInfo->numOfConns); diff --git a/source/client/src/clientRawBlockWrite.c b/source/client/src/clientRawBlockWrite.c index 1f9d3c6d8c..db8de44f1c 100644 --- a/source/client/src/clientRawBlockWrite.c +++ b/source/client/src/clientRawBlockWrite.c @@ -1553,6 +1553,18 @@ end: return code; } +static void* getRawDataFromRes(void *pRetrieve){ + void* rawData = NULL; + // deal with compatibility + if(*(int64_t*)pRetrieve == 0){ + rawData = ((SRetrieveTableRsp*)pRetrieve)->data; + }else if(*(int64_t*)pRetrieve == 1){ + rawData = ((SRetrieveTableRspForTmq*)pRetrieve)->data; + } + ASSERT(rawData != NULL); + return rawData; +} + static int32_t tmqWriteRawDataImpl(TAOS* taos, void* data, int32_t dataLen) { if(taos == NULL || data == NULL){ terrno = TSDB_CODE_INVALID_PARA; @@ -1607,7 +1619,7 @@ static int32_t tmqWriteRawDataImpl(TAOS* taos, void* data, int32_t dataLen) { } pVgHash = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), true, HASH_NO_LOCK); while (++rspObj.resIter < rspObj.rsp.blockNum) { - SRetrieveTableRsp* pRetrieve = 
(SRetrieveTableRsp*)taosArrayGetP(rspObj.rsp.blockData, rspObj.resIter); + void* pRetrieve = taosArrayGetP(rspObj.rsp.blockData, rspObj.resIter); if (!rspObj.rsp.withSchema) { goto end; } @@ -1653,7 +1665,8 @@ static int32_t tmqWriteRawDataImpl(TAOS* taos, void* data, int32_t dataLen) { fields[i].bytes = pSW->pSchema[i].bytes; tstrncpy(fields[i].name, pSW->pSchema[i].name, tListLen(pSW->pSchema[i].name)); } - code = rawBlockBindData(pQuery, pTableMeta, pRetrieve->data, NULL, fields, pSW->nCols, true); + void* rawData = getRawDataFromRes(pRetrieve); + code = rawBlockBindData(pQuery, pTableMeta, rawData, NULL, fields, pSW->nCols, true); taosMemoryFree(fields); if (code != TSDB_CODE_SUCCESS) { goto end; @@ -1737,7 +1750,7 @@ static int32_t tmqWriteRawMetaDataImpl(TAOS* taos, void* data, int32_t dataLen) uDebug(LOG_ID_TAG" write raw metadata block num:%d", LOG_ID_VALUE, rspObj.rsp.blockNum); while (++rspObj.resIter < rspObj.rsp.blockNum) { - SRetrieveTableRsp* pRetrieve = (SRetrieveTableRsp*)taosArrayGetP(rspObj.rsp.blockData, rspObj.resIter); + void* pRetrieve = taosArrayGetP(rspObj.rsp.blockData, rspObj.resIter); if (!rspObj.rsp.withSchema) { goto end; } @@ -1824,12 +1837,12 @@ static int32_t tmqWriteRawMetaDataImpl(TAOS* taos, void* data, int32_t dataLen) fields[i].bytes = pSW->pSchema[i].bytes; tstrncpy(fields[i].name, pSW->pSchema[i].name, tListLen(pSW->pSchema[i].name)); } - code = rawBlockBindData(pQuery, pTableMeta, pRetrieve->data, pCreateReqDst, fields, pSW->nCols, true); + void* rawData = getRawDataFromRes(pRetrieve); + code = rawBlockBindData(pQuery, pTableMeta, rawData, &pCreateReqDst, fields, pSW->nCols, true); taosMemoryFree(fields); if (code != TSDB_CODE_SUCCESS) { goto end; } - pCreateReqDst = NULL; taosMemoryFreeClear(pTableMeta); } diff --git a/source/client/src/clientTmq.c b/source/client/src/clientTmq.c index 28e269ee10..69681b9ae0 100644 --- a/source/client/src/clientTmq.c +++ b/source/client/src/clientTmq.c @@ -1577,16 +1577,24 @@ SMqRspObj* tmqBuildRspFromWrapper(SMqPollRspWrapper* pWrapper, SMqClientVg* pVg, pRspObj->resInfo.totalRows = 0; pRspObj->resInfo.precision = TSDB_TIME_PRECISION_MILLI; - if (!pWrapper->dataRsp.withSchema) { - setResSchemaInfo(&pRspObj->resInfo, pWrapper->topicHandle->schema.pSchema, pWrapper->topicHandle->schema.nCols); + bool needTransformSchema = !pRspObj->rsp.withSchema; + if (!pRspObj->rsp.withSchema) { // withSchema is false if subscribe subquery, true if subscribe db or stable + pRspObj->rsp.withSchema = true; + pRspObj->rsp.blockSchema = taosArrayInit(pRspObj->rsp.blockNum, sizeof(void*)); } - - // extract the rows in this data packet + // extract the rows in this data packet for (int32_t i = 0; i < pRspObj->rsp.blockNum; ++i) { - SRetrieveTableRsp* pRetrieve = (SRetrieveTableRsp*)taosArrayGetP(pRspObj->rsp.blockData, i); + SRetrieveTableRspForTmq* pRetrieve = (SRetrieveTableRspForTmq*)taosArrayGetP(pRspObj->rsp.blockData, i); int64_t rows = htobe64(pRetrieve->numOfRows); pVg->numOfRows += rows; (*numOfRows) += rows; + + if (needTransformSchema) { //withSchema is false if subscribe subquery, true if subscribe db or stable + SSchemaWrapper *schema = tCloneSSchemaWrapper(&pWrapper->topicHandle->schema); + if(schema){ + taosArrayPush(pRspObj->rsp.blockSchema, &schema); + } + } } return pRspObj; @@ -1603,13 +1611,10 @@ SMqTaosxRspObj* tmqBuildTaosxRspFromWrapper(SMqPollRspWrapper* pWrapper, SMqClie pRspObj->resInfo.totalRows = 0; pRspObj->resInfo.precision = TSDB_TIME_PRECISION_MILLI; - if (!pWrapper->taosxRsp.withSchema) { - 
setResSchemaInfo(&pRspObj->resInfo, pWrapper->topicHandle->schema.pSchema, pWrapper->topicHandle->schema.nCols); - } // extract the rows in this data packet for (int32_t i = 0; i < pRspObj->rsp.blockNum; ++i) { - SRetrieveTableRsp* pRetrieve = (SRetrieveTableRsp*)taosArrayGetP(pRspObj->rsp.blockData, i); + SRetrieveTableRspForTmq* pRetrieve = (SRetrieveTableRspForTmq*)taosArrayGetP(pRspObj->rsp.blockData, i); int64_t rows = htobe64(pRetrieve->numOfRows); pVg->numOfRows += rows; (*numOfRows) += rows; @@ -2548,7 +2553,7 @@ SReqResultInfo* tmqGetNextResInfo(TAOS_RES* res, bool convertUcs4) { pRspObj->resIter++; if (pRspObj->resIter < pRspObj->rsp.blockNum) { - SRetrieveTableRsp* pRetrieve = (SRetrieveTableRsp*)taosArrayGetP(pRspObj->rsp.blockData, pRspObj->resIter); + SRetrieveTableRspForTmq* pRetrieveTmq = (SRetrieveTableRspForTmq*)taosArrayGetP(pRspObj->rsp.blockData, pRspObj->resIter); if (pRspObj->rsp.withSchema) { SSchemaWrapper* pSW = (SSchemaWrapper*)taosArrayGetP(pRspObj->rsp.blockSchema, pRspObj->resIter); setResSchemaInfo(&pRspObj->resInfo, pSW->pSchema, pSW->nCols); @@ -2559,7 +2564,16 @@ SReqResultInfo* tmqGetNextResInfo(TAOS_RES* res, bool convertUcs4) { taosMemoryFreeClear(pRspObj->resInfo.convertJson); } - setQueryResultFromRsp(&pRspObj->resInfo, pRetrieve, convertUcs4, false); + pRspObj->resInfo.pData = (void*)pRetrieveTmq->data; + pRspObj->resInfo.numOfRows = htobe64(pRetrieveTmq->numOfRows); + pRspObj->resInfo.current = 0; + pRspObj->resInfo.precision = pRetrieveTmq->precision; + + // TODO handle the compressed case + pRspObj->resInfo.totalRows += pRspObj->resInfo.numOfRows; + setResultDataPtr(&pRspObj->resInfo, pRspObj->resInfo.fields, pRspObj->resInfo.numOfCols, pRspObj->resInfo.numOfRows, + convertUcs4); + return &pRspObj->resInfo; } diff --git a/source/common/src/tdatablock.c b/source/common/src/tdatablock.c index f0d1630480..5382259899 100644 --- a/source/common/src/tdatablock.c +++ b/source/common/src/tdatablock.c @@ -2196,7 +2196,7 @@ int32_t blockEncode(const SSDataBlock* pBlock, char* data, int32_t numOfCols) { // todo extract method int32_t* version = (int32_t*)data; - *version = 1; + *version = 2; data += sizeof(int32_t); int32_t* actualLen = (int32_t*)data; @@ -2277,7 +2277,7 @@ int32_t blockEncode(const SSDataBlock* pBlock, char* data, int32_t numOfCols) { data += colSizes[col]; } - colSizes[col] = htonl(colSizes[col]); +// colSizes[col] = htonl(colSizes[col]); // uError("blockEncode col bytes:%d, type:%d, size:%d, htonl size:%d", pColRes->info.bytes, pColRes->info.type, // htonl(colSizes[col]), colSizes[col]); } @@ -2293,7 +2293,6 @@ const char* blockDecode(SSDataBlock* pBlock, const char* pData) { int32_t version = *(int32_t*)pStart; pStart += sizeof(int32_t); - ASSERT(version == 1); // total length sizeof(int32_t) int32_t dataLen = *(int32_t*)pStart; @@ -2339,7 +2338,9 @@ const char* blockDecode(SSDataBlock* pBlock, const char* pData) { pStart += sizeof(int32_t) * numOfCols; for (int32_t i = 0; i < numOfCols; ++i) { - colLen[i] = htonl(colLen[i]); + if(version == 1){ + colLen[i] = htonl(colLen[i]); + } ASSERT(colLen[i] >= 0); SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, i); diff --git a/source/dnode/mnode/impl/src/mndStream.c b/source/dnode/mnode/impl/src/mndStream.c index a0670d2382..1d761b93cc 100644 --- a/source/dnode/mnode/impl/src/mndStream.c +++ b/source/dnode/mnode/impl/src/mndStream.c @@ -2378,7 +2378,7 @@ static int32_t mndProcessVgroupChange(SMnode *pMnode, SVgroupChangeInfo *pChange sdbRelease(pSdb, pStream); if (conflict) { - 
mWarn("nodeUpdate trans in progress, current nodeUpdate ignored"); + mError("nodeUpdate conflict with other trans, current nodeUpdate ignored"); sdbCancelFetch(pSdb, pIter); return -1; } @@ -2591,14 +2591,22 @@ int32_t removeExpirednodeEntryAndTask(SArray *pNodeSnapshot) { // kill all trans in the dst DB static void killAllCheckpointTrans(SMnode *pMnode, SVgroupChangeInfo *pChangeInfo) { + mDebug("start to clear checkpoints in all Dbs"); + void *pIter = NULL; while ((pIter = taosHashIterate(pChangeInfo->pDBMap, pIter)) != NULL) { char *pDb = (char *)pIter; size_t len = 0; void *pKey = taosHashGetKey(pDb, &len); + char *p = strndup(pKey, len); + + mDebug("clear checkpoint trans in Db:%s", p); doKillCheckpointTrans(pMnode, pKey, len); + taosMemoryFree(p); } + + mDebug("complete clear checkpoints in Dbs"); } // this function runs by only one thread, so it is not multi-thread safe @@ -2653,7 +2661,7 @@ static int32_t mndProcessNodeCheckReq(SRpcMsg *pMsg) { execInfo.pNodeList = pNodeSnapshot; execInfo.ts = ts; } else { - mDebug("unexpect code during create nodeUpdate trans, code:%s", tstrerror(code)); + mError("unexpected code during create nodeUpdate trans, code:%s", tstrerror(code)); taosArrayDestroy(pNodeSnapshot); } } else { @@ -2848,6 +2856,8 @@ void killTransImpl(SMnode *pMnode, int32_t transId, const char *pDbName) { mInfo("kill active transId:%d in Db:%s", transId, pDbName); mndKillTrans(pMnode, pTrans); mndReleaseTrans(pMnode, pTrans); + } else { + mError("failed to acquire trans in Db:%s, transId:%d", pDbName, transId); } } @@ -2873,15 +2883,7 @@ int32_t doKillCheckpointTrans(SMnode *pMnode, const char *pDBName, size_t len) { static int32_t mndResetStatusFromCheckpoint(SMnode *pMnode, int64_t streamId, int32_t transId) { int32_t code = TSDB_CODE_SUCCESS; - - STrans *pTrans = mndAcquireTrans(pMnode, transId); - if (pTrans != NULL) { - mInfo("kill checkpoint transId:%d to reset task status", transId); - mndKillTrans(pMnode, pTrans); - mndReleaseTrans(pMnode, pTrans); - } else { - mError("failed to acquire checkpoint trans:%d", transId); - } + killTransImpl(pMnode, transId, ""); SStreamObj *pStream = mndGetStreamObj(pMnode, streamId); if (pStream == NULL) { @@ -3037,7 +3039,7 @@ int32_t mndProcessStreamHb(SRpcMsg *pReq) { SStreamHbMsg req = {0}; bool checkpointFailed = false; - int64_t activeCheckpointId = 0; + int64_t checkpointId = 0; int64_t streamId = 0; int32_t transId = 0; @@ -3099,14 +3101,17 @@ int32_t mndProcessStreamHb(SRpcMsg *pReq) { } streamTaskStatusCopy(pTaskEntry, p); - if (p->activeCheckpointId != 0) { - if (activeCheckpointId != 0) { - ASSERT(activeCheckpointId == p->activeCheckpointId); + if (p->checkpointId != 0) { + if (checkpointId != 0) { + ASSERT(checkpointId == p->checkpointId); } else { - activeCheckpointId = p->activeCheckpointId; + checkpointId = p->checkpointId; } if (p->checkpointFailed) { + mError("stream task:0x%" PRIx64 " checkpointId:%" PRIx64 " transId:%d failed, kill it", p->id.taskId, + p->checkpointId, p->chkpointTransId); + checkpointFailed = p->checkpointFailed; streamId = p->id.streamId; transId = p->chkpointTransId; @@ -3128,17 +3133,17 @@ int32_t mndProcessStreamHb(SRpcMsg *pReq) { // current checkpoint is failed, rollback from the checkpoint trans // kill the checkpoint trans and then set all tasks status to be normal - if (checkpointFailed && activeCheckpointId != 0) { + if (checkpointFailed && checkpointId != 0) { bool allReady = true; SArray *p = mndTakeVgroupSnapshot(pMnode, &allReady); taosArrayDestroy(p); if (allReady || snodeChanged) { 
// if the execInfo.activeCheckpoint == 0, the checkpoint is restoring from wal - mInfo("checkpointId:%" PRId64 " failed, issue task-reset trans to reset all tasks status", activeCheckpointId); + mInfo("checkpointId:%" PRId64 " failed, issue task-reset trans to reset all tasks status", checkpointId); mndResetStatusFromCheckpoint(pMnode, streamId, transId); } else { - mInfo("not all vgroups are ready, wait for next HB from stream tasks"); + mInfo("not all vgroups are ready, wait for next HB from stream tasks to reset the task status"); } } @@ -3152,6 +3157,7 @@ void freeCheckpointCandEntry(void *param) { SCheckpointCandEntry *pEntry = param; taosMemoryFreeClear(pEntry->pName); } + SStreamObj *mndGetStreamObj(SMnode *pMnode, int64_t streamId) { void *pIter = NULL; SSdb *pSdb = pMnode->pSdb; diff --git a/source/dnode/mnode/impl/src/mndStreamTrans.c b/source/dnode/mnode/impl/src/mndStreamTrans.c index 847f55a8fd..a6dd1c4856 100644 --- a/source/dnode/mnode/impl/src/mndStreamTrans.c +++ b/source/dnode/mnode/impl/src/mndStreamTrans.c @@ -89,7 +89,7 @@ bool mndStreamTransConflictCheck(SMnode* pMnode, int64_t streamUid, const char* } if (strcmp(tInfo.name, MND_STREAM_CHECKPOINT_NAME) == 0) { - if (strcmp(pTransName, MND_STREAM_DROP_NAME) != 0) { + if ((strcmp(pTransName, MND_STREAM_DROP_NAME) != 0) && (strcmp(pTransName, MND_STREAM_TASK_RESET_NAME) != 0)) { mWarn("conflict with other transId:%d streamUid:0x%" PRIx64 ", trans:%s", tInfo.transId, tInfo.streamUid, tInfo.name); terrno = TSDB_CODE_MND_TRANS_CONFLICT; diff --git a/source/dnode/snode/src/snode.c b/source/dnode/snode/src/snode.c index 4815dfe65c..4284b0838e 100644 --- a/source/dnode/snode/src/snode.c +++ b/source/dnode/snode/src/snode.c @@ -145,8 +145,7 @@ FAIL: } int32_t sndInit(SSnode *pSnode) { - int32_t numOfTasks = 0; - tqStreamTaskResetStatus(pSnode->pMeta, &numOfTasks); + streamMetaResetTaskStatus(pSnode->pMeta); streamMetaStartAllTasks(pSnode->pMeta); return 0; } diff --git a/source/dnode/vnode/src/inc/tq.h b/source/dnode/vnode/src/inc/tq.h index a7fd35773d..cded4ddd7c 100644 --- a/source/dnode/vnode/src/inc/tq.h +++ b/source/dnode/vnode/src/inc/tq.h @@ -107,7 +107,6 @@ struct STQ { TTB* pExecStore; TTB* pCheckStore; SStreamMeta* pStreamMeta; - void* tqTimer; }; int32_t tEncodeSTqHandle(SEncoder* pEncoder, const STqHandle* pHandle); diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c index 1fc7402363..9ae4fa5e19 100644 --- a/source/dnode/vnode/src/tq/tq.c +++ b/source/dnode/vnode/src/tq/tq.c @@ -26,18 +26,6 @@ static FORCE_INLINE bool tqIsHandleExec(STqHandle* pHandle) { return TMQ_HANDLE_ static FORCE_INLINE void tqSetHandleExec(STqHandle* pHandle) { pHandle->status = TMQ_HANDLE_STATUS_EXEC; } static FORCE_INLINE void tqSetHandleIdle(STqHandle* pHandle) { pHandle->status = TMQ_HANDLE_STATUS_IDLE; } -static int32_t tqTimerInit(STQ* pTq) { - pTq->tqTimer = taosTmrInit(100, 100, 1000, "TQ"); - if (pTq->tqTimer == NULL) { - return -1; - } - return 0; -} - -static void tqTimerCleanUp(STQ* pTq) { - taosTmrCleanUp(pTq->tqTimer); -} - void tqDestroyTqHandle(void* data) { STqHandle* pData = (STqHandle*)data; qDestroyTask(pData->execHandle.task); @@ -118,7 +106,6 @@ int32_t tqInitialize(STQ* pTq) { return -1; } - tqTimerInit(pTq); return 0; } @@ -149,7 +136,6 @@ void tqClose(STQ* pTq) { taosMemoryFree(pTq->path); tqMetaClose(pTq); streamMetaClose(pTq->pStreamMeta); - tqTimerCleanUp(pTq); qDebug("end to close tq"); taosMemoryFree(pTq); diff --git a/source/dnode/vnode/src/tq/tqScan.c 
b/source/dnode/vnode/src/tq/tqScan.c index 01866ef893..5432637482 100644 --- a/source/dnode/vnode/src/tq/tqScan.c +++ b/source/dnode/vnode/src/tq/tqScan.c @@ -16,21 +16,20 @@ #include "tq.h" int32_t tqAddBlockDataToRsp(const SSDataBlock* pBlock, SMqDataRsp* pRsp, int32_t numOfCols, int8_t precision) { - int32_t dataStrLen = sizeof(SRetrieveTableRsp) + blockGetEncodeSize(pBlock); + int32_t dataStrLen = sizeof(SRetrieveTableRspForTmq) + blockGetEncodeSize(pBlock); void* buf = taosMemoryCalloc(1, dataStrLen); if (buf == NULL) { return TSDB_CODE_OUT_OF_MEMORY; } - SRetrieveTableRsp* pRetrieve = (SRetrieveTableRsp*)buf; - pRetrieve->useconds = 0; + SRetrieveTableRspForTmq* pRetrieve = (SRetrieveTableRspForTmq*)buf; + pRetrieve->version = 1; pRetrieve->precision = precision; pRetrieve->compressed = 0; - pRetrieve->completed = 1; pRetrieve->numOfRows = htobe64((int64_t)pBlock->info.rows); int32_t actualLen = blockEncode(pBlock, pRetrieve->data, numOfCols); - actualLen += sizeof(SRetrieveTableRsp); + actualLen += sizeof(SRetrieveTableRspForTmq); taosArrayPush(pRsp->blockDataLen, &actualLen); taosArrayPush(pRsp->blockData, &buf); diff --git a/source/dnode/vnode/src/tq/tqStreamTask.c b/source/dnode/vnode/src/tq/tqStreamTask.c index 2a600d08ef..cdb5cc26f8 100644 --- a/source/dnode/vnode/src/tq/tqStreamTask.c +++ b/source/dnode/vnode/src/tq/tqStreamTask.c @@ -95,10 +95,14 @@ int32_t tqScanWalInFuture(STQ* pTq, int32_t numOfTasks, int32_t idleDuration) { pParam->pTq = pTq; pParam->numOfTasks = numOfTasks; + + tmr_h pTimer = streamTimerGetInstance(); + ASSERT(pTimer); + if (pMeta->scanInfo.scanTimer == NULL) { - pMeta->scanInfo.scanTimer = taosTmrStart(doStartScanWal, idleDuration, pParam, pTq->tqTimer); + pMeta->scanInfo.scanTimer = taosTmrStart(doStartScanWal, idleDuration, pParam, pTimer); } else { - taosTmrReset(doStartScanWal, idleDuration, pParam, pTq->tqTimer, &pMeta->scanInfo.scanTimer); + taosTmrReset(doStartScanWal, idleDuration, pParam, pTimer, &pMeta->scanInfo.scanTimer); } return TSDB_CODE_SUCCESS; @@ -175,11 +179,11 @@ int32_t tqStopStreamTasksAsync(STQ* pTq) { SStreamTaskRunReq* pRunReq = rpcMallocCont(sizeof(SStreamTaskRunReq)); if (pRunReq == NULL) { terrno = TSDB_CODE_OUT_OF_MEMORY; - tqError("vgId:%d failed to create msg to stop tasks, code:%s", vgId, terrstr()); + tqError("vgId:%d failed to create msg to stop tasks async, code:%s", vgId, terrstr()); return -1; } - tqDebug("vgId:%d create msg to stop tasks", vgId); + tqDebug("vgId:%d create msg to stop all tasks async", vgId); pRunReq->head.vgId = vgId; pRunReq->streamId = 0; diff --git a/source/dnode/vnode/src/tqCommon/tqCommon.c b/source/dnode/vnode/src/tqCommon/tqCommon.c index e2dc9fa679..00b3860565 100644 --- a/source/dnode/vnode/src/tqCommon/tqCommon.c +++ b/source/dnode/vnode/src/tqCommon/tqCommon.c @@ -103,8 +103,7 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM STaskId id = {.streamId = req.streamId, .taskId = req.taskId}; SStreamTask** ppTask = (SStreamTask**)taosHashGet(pMeta->pTasksMap, &id, sizeof(id)); if (ppTask == NULL || *ppTask == NULL) { - tqError("vgId:%d failed to acquire task:0x%x when handling update, it may have been dropped already", vgId, - req.taskId); + tqError("vgId:%d failed to acquire task:0x%x when handling update, it may have been dropped", vgId, req.taskId); rsp.code = TSDB_CODE_SUCCESS; streamMetaWUnLock(pMeta); @@ -124,6 +123,7 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM tqDebug("s-task:%s recv trans to update nodeEp from 
mnode, transId:%d", idstr, req.transId); } + // duplicate update epset msg received, discard this redundant message STaskUpdateEntry entry = {.streamId = req.streamId, .taskId = req.taskId, .transId = req.transId}; void* pReqTask = taosHashGet(pMeta->updateInfo.pTasks, &entry, sizeof(STaskUpdateEntry)); @@ -135,7 +135,6 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM return rsp.code; } - // the following two functions should not be executed within the scope of meta lock to avoid deadlock streamTaskUpdateEpsetInfo(pTask, req.pNodeList); streamTaskResetStatus(pTask); @@ -159,11 +158,6 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM streamMetaSaveTask(pMeta, *ppHTask); } -#if 0 - if (streamMetaCommit(pMeta) < 0) { - // persist to disk - } -#endif } else { tqDebug("s-task:%s vgId:%d not save since restore not finish", idstr, vgId); } @@ -171,7 +165,7 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM tqDebug("s-task:%s vgId:%d start to stop task after save task", idstr, vgId); streamTaskStop(pTask); - // keep the already handled info + // keep the already updated info taosHashPut(pMeta->updateInfo.pTasks, &entry, sizeof(entry), NULL, 0); if (ppHTask != NULL) { @@ -200,6 +194,10 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM updateTasks, (numOfTasks - updateTasks)); streamMetaWUnLock(pMeta); } else { + if (streamMetaCommit(pMeta) < 0) { + // persist to disk + } + if (!restored) { tqDebug("vgId:%d vnode restore not completed, not start the tasks, clear the start after nodeUpdate flag", vgId); pMeta->startInfo.tasksWillRestart = 0; @@ -686,16 +684,16 @@ int32_t tqStreamTaskProcessDropReq(SStreamMeta* pMeta, char* msg, int32_t msgLen return 0; } -int32_t tqStreamTaskResetStatus(SStreamMeta* pMeta, int32_t* numOfTasks) { +int32_t tqStreamTaskResetStatus(SStreamMeta* pMeta) { int32_t vgId = pMeta->vgId; - *numOfTasks = taosArrayGetSize(pMeta->pTaskList); + int32_t numOfTasks = taosArrayGetSize(pMeta->pTaskList); - tqDebug("vgId:%d reset all %d stream task(s) status to be uninit", vgId, *numOfTasks); - if (*numOfTasks == 0) { + tqDebug("vgId:%d reset all %d stream task(s) status to be uninit", vgId, numOfTasks); + if (numOfTasks == 0) { return TSDB_CODE_SUCCESS; } - for (int32_t i = 0; i < (*numOfTasks); ++i) { + for (int32_t i = 0; i < numOfTasks; ++i) { SStreamTaskId* pTaskId = taosArrayGet(pMeta->pTaskList, i); STaskId id = {.streamId = pTaskId->streamId, .taskId = pTaskId->taskId}; @@ -754,8 +752,7 @@ static int32_t restartStreamTasks(SStreamMeta* pMeta, bool isLeader) { } if (isLeader && !tsDisableStream) { - int32_t numOfTasks = 0; - tqStreamTaskResetStatus(pMeta, &numOfTasks); + streamMetaResetTaskStatus(pMeta); streamMetaWUnLock(pMeta); streamMetaStartAllTasks(pMeta); @@ -965,8 +962,7 @@ static int32_t tqProcessTaskResumeImpl(void* handle, SStreamTask* pTask, int64_t } else if (status == TASK_STATUS__UNINIT) { // todo: fill-history task init ? if (pTask->info.fillHistory == 0) { - EStreamTaskEvent event = /*HAS_RELATED_FILLHISTORY_TASK(pTask) ? 
TASK_EVENT_INIT_STREAM_SCANHIST : */TASK_EVENT_INIT; - streamTaskHandleEvent(pTask->status.pSM, event); + streamTaskHandleEvent(pTask->status.pSM, TASK_EVENT_INIT); } } @@ -990,4 +986,8 @@ int32_t tqStreamTaskProcessTaskResumeReq(void* handle, int64_t sversion, char* m } return code; +} + +int32_t tqStreamTasksGetTotalNum(SStreamMeta* pMeta) { + return taosArrayGetSize(pMeta->pTaskList); } \ No newline at end of file diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index d86219542f..cc0bf2b774 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -941,7 +941,7 @@ static int32_t tsdbCacheLoadFromRaw(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArr } if(lastrowTmpIndexArray != NULL) { - mergeLastCid(uid, pTsdb, &lastrowTmpColArray, pr, lastrowColIds, lastrowIndex, lastrowSlotIds); + mergeLastRowCid(uid, pTsdb, &lastrowTmpColArray, pr, lastrowColIds, lastrowIndex, lastrowSlotIds); for(int i = 0; i < taosArrayGetSize(lastrowTmpColArray); i++) { taosArrayInsert(pTmpColArray, *(int32_t*)taosArrayGet(lastrowTmpIndexArray, i), taosArrayGet(lastrowTmpColArray, i)); } diff --git a/source/dnode/vnode/src/tsdb/tsdbRead2.c b/source/dnode/vnode/src/tsdb/tsdbRead2.c index e958600093..609f38cead 100644 --- a/source/dnode/vnode/src/tsdb/tsdbRead2.c +++ b/source/dnode/vnode/src/tsdb/tsdbRead2.c @@ -4120,8 +4120,10 @@ void tsdbReaderClose2(STsdbReader* pReader) { taosMemoryFreeClear(pReader->status.uidList.tableUidList); qTrace("tsdb/reader-close: %p, untake snapshot", pReader); - tsdbUntakeReadSnap2(pReader, pReader->pReadSnap, true); - pReader->pReadSnap = NULL; + void* p = pReader->pReadSnap; + if ((p == atomic_val_compare_exchange_ptr((void**)&pReader->pReadSnap, p, NULL)) && (p != NULL)) { + tsdbUntakeReadSnap2(pReader, p, true); + } tsem_destroy(&pReader->resumeAfterSuspend); tsdbReleaseReader(pReader); @@ -4195,8 +4197,12 @@ int32_t tsdbReaderSuspend2(STsdbReader* pReader) { doSuspendCurrentReader(pReader); } - tsdbUntakeReadSnap2(pReader, pReader->pReadSnap, false); - pReader->pReadSnap = NULL; + // make sure only release once + void* p = pReader->pReadSnap; + if ((p == atomic_val_compare_exchange_ptr((void**)&pReader->pReadSnap, p, NULL)) && (p != NULL)) { + tsdbUntakeReadSnap2(pReader, p, false); + } + if (pReader->bFilesetDelimited) { pReader->status.memTableMinKey = INT64_MAX; pReader->status.memTableMaxKey = INT64_MIN; diff --git a/source/dnode/vnode/src/vnd/vnodeSync.c b/source/dnode/vnode/src/vnd/vnodeSync.c index e6a80757ff..8844e358d5 100644 --- a/source/dnode/vnode/src/vnd/vnodeSync.c +++ b/source/dnode/vnode/src/vnd/vnodeSync.c @@ -570,9 +570,7 @@ static void vnodeRestoreFinish(const SSyncFSM *pFsm, const SyncIndex commitIdx) vInfo("vgId:%d, sync restore finished, not launch stream tasks, since stream tasks are disabled", vgId); } else { vInfo("vgId:%d sync restore finished, start to launch stream task(s)", pVnode->config.vgId); - int32_t numOfTasks = 0; - tqStreamTaskResetStatus(pMeta, &numOfTasks); - + int32_t numOfTasks = tqStreamTasksGetTotalNum(pMeta); if (numOfTasks > 0) { if (pMeta->startInfo.taskStarting == 1) { pMeta->startInfo.restartCount += 1; @@ -580,6 +578,7 @@ static void vnodeRestoreFinish(const SSyncFSM *pFsm, const SyncIndex commitIdx) pMeta->startInfo.restartCount); } else { pMeta->startInfo.taskStarting = 1; + streamMetaWUnLock(pMeta); tqStreamTaskStartAsync(pMeta, &pVnode->msgCb, false); return; diff --git a/source/libs/executor/inc/executorInt.h 
b/source/libs/executor/inc/executorInt.h index c3f47cde9d..64c14456b6 100644 --- a/source/libs/executor/inc/executorInt.h +++ b/source/libs/executor/inc/executorInt.h @@ -200,6 +200,7 @@ typedef struct SExchangeInfo { uint64_t self; SLimitInfo limitInfo; int64_t openedTs; // start exec time stamp, todo: move to SLoadRemoteDataInfo + char* pTaskId; } SExchangeInfo; typedef struct SScanInfo { @@ -272,7 +273,8 @@ typedef struct STableScanInfo { SSampleExecInfo sample; // sample execution info int32_t tableStartIndex; // current group scan start int32_t tableEndIndex; // current group scan end - int32_t currentGroupIndex; // current group index of groupOffset + int32_t currentGroupId; + int32_t currentTable; int8_t scanMode; int8_t assignBlockUid; uint8_t countState; // empty table count state diff --git a/source/libs/executor/src/exchangeoperator.c b/source/libs/executor/src/exchangeoperator.c index 2797cb2d82..06dd43e170 100644 --- a/source/libs/executor/src/exchangeoperator.c +++ b/source/libs/executor/src/exchangeoperator.c @@ -260,14 +260,17 @@ static int32_t initDataSource(int32_t numOfSources, SExchangeInfo* pInfo, const return TSDB_CODE_SUCCESS; } + int32_t len = strlen(id) + 1; + pInfo->pTaskId = taosMemoryCalloc(1, len); + strncpy(pInfo->pTaskId, id, len); for (int32_t i = 0; i < numOfSources; ++i) { SSourceDataInfo dataInfo = {0}; dataInfo.status = EX_SOURCE_DATA_NOT_READY; - dataInfo.taskId = id; + dataInfo.taskId = pInfo->pTaskId; dataInfo.index = i; SSourceDataInfo* pDs = taosArrayPush(pInfo->pSourceDataInfo, &dataInfo); if (pDs == NULL) { - taosArrayDestroy(pInfo->pSourceDataInfo); + taosArrayDestroyEx(pInfo->pSourceDataInfo, freeSourceDataInfo); return TSDB_CODE_OUT_OF_MEMORY; } } @@ -383,6 +386,8 @@ void doDestroyExchangeOperatorInfo(void* param) { tSimpleHashCleanup(pExInfo->pHashSources); tsem_destroy(&pExInfo->ready); + taosMemoryFreeClear(pExInfo->pTaskId); + taosMemoryFreeClear(param); } @@ -782,7 +787,7 @@ int32_t addSingleExchangeSource(SOperatorInfo* pOperator, SExchangeOperatorBasic if (pIdx->inUseIdx < 0) { SSourceDataInfo dataInfo = {0}; dataInfo.status = EX_SOURCE_DATA_NOT_READY; - dataInfo.taskId = GET_TASKID(pOperator->pTaskInfo); + dataInfo.taskId = pExchangeInfo->pTaskId; dataInfo.index = pIdx->srcIdx; dataInfo.pSrcUidList = taosArrayDup(pBasicParam->uidList, NULL); dataInfo.srcOpType = pBasicParam->srcOpType; diff --git a/source/libs/executor/src/executil.c b/source/libs/executor/src/executil.c index 1da7818c3c..033cda43f2 100644 --- a/source/libs/executor/src/executil.c +++ b/source/libs/executor/src/executil.c @@ -646,10 +646,6 @@ int32_t getColInfoResultForGroupby(void* pVnode, SNodeList* group, STableListInf } } - if (initRemainGroups) { - pTableListInfo->numOfOuputGroups = taosHashGetSize(pTableListInfo->remainGroups); - } - if (tsTagFilterCache) { tableList = taosArrayDup(pTableListInfo->pTableList, NULL); pAPI->metaFn.metaPutTbGroupToCache(pVnode, pTableListInfo->idInfo.suid, context.digest, tListLen(context.digest), @@ -2142,6 +2138,8 @@ int32_t buildGroupIdMapForAllTables(STableListInfo* pTableListInfo, SReadHandle* pTableListInfo->numOfOuputGroups = numOfTables; } else if (groupByTbname && pScanNode->groupOrderScan){ pTableListInfo->numOfOuputGroups = numOfTables; + } else if (groupByTbname && tsCountAlwaysReturnValue && ((STableScanPhysiNode*)pScanNode)->needCountEmptyTable) { + pTableListInfo->numOfOuputGroups = numOfTables; } else { pTableListInfo->numOfOuputGroups = 1; } @@ -2159,6 +2157,8 @@ int32_t buildGroupIdMapForAllTables(STableListInfo* 
pTableListInfo, SReadHandle* return code; } + if (pScanNode->groupOrderScan) pTableListInfo->numOfOuputGroups = taosArrayGetSize(pTableListInfo->pTableList); + if (groupSort || pScanNode->groupOrderScan) { code = sortTableGroup(pTableListInfo); } diff --git a/source/libs/executor/src/executor.c b/source/libs/executor/src/executor.c index eb84cb0639..e872c87237 100644 --- a/source/libs/executor/src/executor.c +++ b/source/libs/executor/src/executor.c @@ -1264,7 +1264,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT STableKeyInfo* pTableInfo = tableListGetInfo(pTableListInfo, 0); uid = pTableInfo->uid; ts = INT64_MIN; - pScanInfo->tableEndIndex = 0; + pScanInfo->currentTable = 0; } else { taosRUnLockLatch(&pTaskInfo->lock); qError("no table in table list, %s", id); @@ -1278,16 +1278,16 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT pInfo->pTableScanOp->resultInfo.totalRows = 0; // start from current accessed position - // we cannot start from the pScanInfo->tableEndIndex, since the commit offset may cause the rollback of the start + // we cannot start from the pScanInfo->currentTable, since the commit offset may cause the rollback of the start // position, let's find it from the beginning. index = tableListFind(pTableListInfo, uid, 0); taosRUnLockLatch(&pTaskInfo->lock); if (index >= 0) { - pScanInfo->tableEndIndex = index; + pScanInfo->currentTable = index; } else { qError("vgId:%d uid:%" PRIu64 " not found in table list, total:%d, index:%d %s", pTaskInfo->id.vgId, uid, - numOfTables, pScanInfo->tableEndIndex, id); + numOfTables, pScanInfo->currentTable, id); terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return -1; } @@ -1310,12 +1310,12 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT } qDebug("tsdb reader created with offset(snapshot) uid:%" PRId64 " ts:%" PRId64 " table index:%d, total:%d, %s", - uid, pScanBaseInfo->cond.twindows.skey, pScanInfo->tableEndIndex, numOfTables, id); + uid, pScanBaseInfo->cond.twindows.skey, pScanInfo->currentTable, numOfTables, id); } else { pTaskInfo->storageAPI.tsdReader.tsdSetQueryTableList(pScanBaseInfo->dataReader, &keyInfo, 1); pTaskInfo->storageAPI.tsdReader.tsdReaderResetStatus(pScanBaseInfo->dataReader, &pScanBaseInfo->cond); qDebug("tsdb reader offset seek snapshot to uid:%" PRId64 " ts %" PRId64 " table index:%d numOfTable:%d, %s", - uid, pScanBaseInfo->cond.twindows.skey, pScanInfo->tableEndIndex, numOfTables, id); + uid, pScanBaseInfo->cond.twindows.skey, pScanInfo->currentTable, numOfTables, id); } // restore the key value diff --git a/source/libs/executor/src/projectoperator.c b/source/libs/executor/src/projectoperator.c index d965f16862..cc83ecd84e 100644 --- a/source/libs/executor/src/projectoperator.c +++ b/source/libs/executor/src/projectoperator.c @@ -27,6 +27,7 @@ typedef struct SProjectOperatorInfo { SLimitInfo limitInfo; bool mergeDataBlocks; SSDataBlock* pFinalRes; + bool inputIgnoreGroup; } SProjectOperatorInfo; typedef struct SIndefOperatorInfo { @@ -109,7 +110,8 @@ SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhys pInfo->pFinalRes = createOneDataBlock(pResBlock, false); pInfo->binfo.inputTsOrder = pProjPhyNode->node.inputTsOrder; pInfo->binfo.outputTsOrder = pProjPhyNode->node.outputTsOrder; - + pInfo->inputIgnoreGroup = pProjPhyNode->inputIgnoreGroup; + if (pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM || pTaskInfo->execModel == OPTR_EXEC_MODEL_QUEUE) { pInfo->mergeDataBlocks = false; } else { @@ 
-300,6 +302,10 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) { return pBlock; } + if (pProjectInfo->inputIgnoreGroup) { + pBlock->info.id.groupId = 0; + } + int32_t status = discardGroupDataBlock(pBlock, pLimitInfo); if (status == PROJECT_RETRIEVE_CONTINUE) { continue; diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index 3ed5128858..09f4f6ac3c 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -657,33 +657,17 @@ void setTbNameColData(const SSDataBlock* pBlock, SColumnInfoData* pColInfoData, static void initNextGroupScan(STableScanInfo* pInfo, STableKeyInfo** pKeyInfo, int32_t* size) { - pInfo->tableStartIndex = pInfo->tableEndIndex + 1; + tableListGetGroupList(pInfo->base.pTableListInfo, pInfo->currentGroupId, pKeyInfo, size); - STableListInfo* pTableListInfo = pInfo->base.pTableListInfo; - int32_t numOfTables = tableListGetSize(pTableListInfo); - STableKeyInfo* pStart = (STableKeyInfo*)tableListGetInfo(pTableListInfo, pInfo->tableStartIndex); + pInfo->tableStartIndex = TARRAY_ELEM_IDX(pInfo->base.pTableListInfo->pTableList, *pKeyInfo); - if (pTableListInfo->oneTableForEachGroup) { - pInfo->tableEndIndex = pInfo->tableStartIndex; - } else if (pTableListInfo->groupOffset) { - pInfo->currentGroupIndex++; - if (pInfo->currentGroupIndex + 1 < pTableListInfo->numOfOuputGroups) { - pInfo->tableEndIndex = pTableListInfo->groupOffset[pInfo->currentGroupIndex + 1] - 1; - } else { - pInfo->tableEndIndex = numOfTables - 1; - } - } else { - pInfo->tableEndIndex = numOfTables - 1; - } + pInfo->tableEndIndex = (pInfo->tableStartIndex + (*size) - 1); if (!pInfo->needCountEmptyTable) { pInfo->countState = TABLE_COUNT_STATE_END; } else { pInfo->countState = TABLE_COUNT_STATE_SCAN; } - - *pKeyInfo = pStart; - *size = pInfo->tableEndIndex - pInfo->tableStartIndex + 1; } void markGroupProcessed(STableScanInfo* pInfo, uint64_t groupId) { @@ -939,7 +923,7 @@ static SSDataBlock* startNextGroupScan(SOperatorInfo* pOperator) { SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; SStorageAPI* pAPI = &pTaskInfo->storageAPI; int32_t numOfTables = tableListGetSize(pInfo->base.pTableListInfo); - if (pInfo->tableEndIndex + 1 >= numOfTables) { + if ((++pInfo->currentGroupId) >= tableListGetOutputGroups(pInfo->base.pTableListInfo)) { setOperatorCompleted(pOperator); if (pOperator->dynamicTask) { taosArrayClear(pInfo->base.pTableListInfo->pTableList); @@ -978,13 +962,14 @@ static SSDataBlock* groupSeqTableScan(SOperatorInfo* pOperator) { int32_t num = 0; STableKeyInfo* pList = NULL; - if (pInfo->tableEndIndex == -1) { + if (pInfo->currentGroupId == -1) { int32_t numOfTables = tableListGetSize(pInfo->base.pTableListInfo); - if (pInfo->tableEndIndex + 1 == numOfTables) { + if ((++pInfo->currentGroupId) >= tableListGetOutputGroups(pInfo->base.pTableListInfo)) { setOperatorCompleted(pOperator); return NULL; } - + + initNextGroupScan(pInfo, &pList, &num); ASSERT(pInfo->base.dataReader == NULL); @@ -1034,7 +1019,7 @@ static SSDataBlock* doTableScan(SOperatorInfo* pOperator) { T_LONG_JMP(pTaskInfo->env, code); } if (pOperator->status == OP_EXEC_DONE) { - pInfo->tableEndIndex = -1; + pInfo->currentGroupId = -1; pOperator->status = OP_OPENED; SSDataBlock* result = NULL; while (true) { @@ -1059,23 +1044,23 @@ static SSDataBlock* doTableScan(SOperatorInfo* pOperator) { } // if no data, switch to next table and continue scan - pInfo->tableEndIndex++; + pInfo->currentTable++; taosRLockLatch(&pTaskInfo->lock); numOfTables = 
tableListGetSize(pInfo->base.pTableListInfo); - if (pInfo->tableEndIndex >= numOfTables) { + if (pInfo->currentTable >= numOfTables) { qDebug("all table checked in table list, total:%d, return NULL, %s", numOfTables, GET_TASKID(pTaskInfo)); taosRUnLockLatch(&pTaskInfo->lock); return NULL; } - tInfo = *(STableKeyInfo*)tableListGetInfo(pInfo->base.pTableListInfo, pInfo->tableEndIndex); + tInfo = *(STableKeyInfo*)tableListGetInfo(pInfo->base.pTableListInfo, pInfo->currentTable); taosRUnLockLatch(&pTaskInfo->lock); pAPI->tsdReader.tsdSetQueryTableList(pInfo->base.dataReader, &tInfo, 1); qDebug("set uid:%" PRIu64 " into scanner, total tables:%d, index:%d/%d %s", tInfo.uid, numOfTables, - pInfo->tableEndIndex, numOfTables, GET_TASKID(pTaskInfo)); + pInfo->currentTable, numOfTables, GET_TASKID(pTaskInfo)); pAPI->tsdReader.tsdReaderResetStatus(pInfo->base.dataReader, &pInfo->base.cond); pInfo->scanTimes = 0; @@ -1167,9 +1152,10 @@ SOperatorInfo* createTableScanOperatorInfo(STableScanPhysiNode* pTableScanNode, if (code != TSDB_CODE_SUCCESS) { goto _error; } + + pInfo->currentGroupId = -1; pInfo->tableEndIndex = -1; - pInfo->currentGroupIndex = -1; pInfo->assignBlockUid = pTableScanNode->assignBlockUid; pInfo->hasGroupByTag = pTableScanNode->pGroupTags ? true : false; @@ -1264,6 +1250,7 @@ void resetTableScanInfo(STableScanInfo* pTableScanInfo, STimeWindow* pWin, uint6 pTableScanInfo->base.cond.startVersion = 0; pTableScanInfo->base.cond.endVersion = ver; pTableScanInfo->scanTimes = 0; + pTableScanInfo->currentGroupId = -1; pTableScanInfo->tableEndIndex = -1; pTableScanInfo->base.readerAPI.tsdReaderClose(pTableScanInfo->base.dataReader); pTableScanInfo->base.dataReader = NULL; @@ -2167,7 +2154,7 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) { pInfo->pTableScanOp->status = OP_OPENED; pTSInfo->scanTimes = 0; - pTSInfo->tableEndIndex = -1; + pTSInfo->currentGroupId = -1; } if (pStreamInfo->recoverStep == STREAM_RECOVER_STEP__SCAN1) { diff --git a/source/libs/function/inc/fnLog.h b/source/libs/function/inc/fnLog.h index 9658b6b782..150d145f50 100644 --- a/source/libs/function/inc/fnLog.h +++ b/source/libs/function/inc/fnLog.h @@ -1,6 +1,17 @@ -// -// Created by slzhou on 22-4-20. -// +/* + * Copyright (c) 2019 TAOS Data, Inc. + * + * This program is free software: you can use, redistribute, and/or modify + * it under the terms of the GNU Affero General Public License, version 3 + * or later ("AGPL"), as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. + * + * You should have received a copy of the GNU Affero General Public License + * along with this program. If not, see . 
+ */ #ifndef TDENGINE_FNLOG_H #define TDENGINE_FNLOG_H diff --git a/source/libs/function/src/builtins.c b/source/libs/function/src/builtins.c index 5424a5c704..6f5b28f366 100644 --- a/source/libs/function/src/builtins.c +++ b/source/libs/function/src/builtins.c @@ -952,7 +952,7 @@ static int32_t translateLeastSQR(SFunctionNode* pFunc, char* pErrBuf, int32_t le } } - pFunc->node.resType = (SDataType){.bytes = 64, .type = TSDB_DATA_TYPE_BINARY}; + pFunc->node.resType = (SDataType){.bytes = LEASTSQUARES_BUFF_LENGTH, .type = TSDB_DATA_TYPE_BINARY}; return TSDB_CODE_SUCCESS; } diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index 1e71954278..000f634fe5 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -1576,9 +1576,19 @@ int32_t leastSQRFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { param12 /= param[1][1]; - char buf[512] = {0}; + char buf[LEASTSQUARES_BUFF_LENGTH] = {0}; + char slopBuf[64] = {0}; + char interceptBuf[64] = {0}; + int n = snprintf(slopBuf, 64, "%.6lf", param02); + if (n > LEASTSQUARES_DOUBLE_ITEM_LENGTH) { + snprintf(slopBuf, 64, "%." DOUBLE_PRECISION_DIGITS, param02); + } + n = snprintf(interceptBuf, 64, "%.6lf", param12); + if (n > LEASTSQUARES_DOUBLE_ITEM_LENGTH) { + snprintf(interceptBuf, 64, "%." DOUBLE_PRECISION_DIGITS, param12); + } size_t len = - snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{slop:%.6lf, intercept:%.6lf}", param02, param12); + snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{slop:%s, intercept:%s}", slopBuf, interceptBuf); varDataSetLen(buf, len); colDataSetVal(pCol, currentRow, buf, pResInfo->isNullRes); diff --git a/source/libs/nodes/src/nodesCloneFuncs.c b/source/libs/nodes/src/nodesCloneFuncs.c index c090cb4b63..bc9839792c 100644 --- a/source/libs/nodes/src/nodesCloneFuncs.c +++ b/source/libs/nodes/src/nodesCloneFuncs.c @@ -486,6 +486,7 @@ static int32_t logicProjectCopy(const SProjectLogicNode* pSrc, SProjectLogicNode CLONE_NODE_LIST_FIELD(pProjections); COPY_CHAR_ARRAY_FIELD(stmtName); COPY_SCALAR_FIELD(ignoreGroupId); + COPY_SCALAR_FIELD(inputIgnoreGroup); return TSDB_CODE_SUCCESS; } @@ -728,6 +729,15 @@ static int32_t physiPartitionCopy(const SPartitionPhysiNode* pSrc, SPartitionPhy return TSDB_CODE_SUCCESS; } +static int32_t physiProjectCopy(const SProjectPhysiNode* pSrc, SProjectPhysiNode* pDst) { + COPY_BASE_OBJECT_FIELD(node, physiNodeCopy); + CLONE_NODE_LIST_FIELD(pProjections); + COPY_SCALAR_FIELD(mergeDataBlock); + COPY_SCALAR_FIELD(ignoreGroupId); + COPY_SCALAR_FIELD(inputIgnoreGroup); + return TSDB_CODE_SUCCESS; +} + static int32_t dataBlockDescCopy(const SDataBlockDescNode* pSrc, SDataBlockDescNode* pDst) { COPY_SCALAR_FIELD(dataBlockId); CLONE_NODE_LIST_FIELD(pSlots); @@ -959,6 +969,9 @@ SNode* nodesCloneNode(const SNode* pNode) { case QUERY_NODE_PHYSICAL_PLAN_STREAM_PARTITION: code = physiPartitionCopy((const SPartitionPhysiNode*)pNode, (SPartitionPhysiNode*)pDst); break; + case QUERY_NODE_PHYSICAL_PLAN_PROJECT: + code = physiProjectCopy((const SProjectPhysiNode*)pNode, (SProjectPhysiNode*)pDst); + break; default: break; } diff --git a/source/libs/nodes/src/nodesCodeFuncs.c b/source/libs/nodes/src/nodesCodeFuncs.c index 254a892546..e2dc7c6d28 100644 --- a/source/libs/nodes/src/nodesCodeFuncs.c +++ b/source/libs/nodes/src/nodesCodeFuncs.c @@ -792,6 +792,7 @@ static int32_t jsonToLogicScanNode(const SJson* pJson, void* pObj) { static const char* jkProjectLogicPlanProjections = "Projections"; static const 
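The leastsquares changes in builtins.c and builtinsimpl.c above fix a truncation risk: "%.6lf" applied to a double of very large magnitude can need over 300 characters, which the old fixed 64-byte scratch buffers could not hold. The new code formats slope and intercept into their own buffers and falls back to a shorter representation whenever the fixed-point rendering exceeds the per-item budget. A standalone sketch of that fallback follows, with placeholder values for LEASTSQUARES_DOUBLE_ITEM_LENGTH and DOUBLE_PRECISION_DIGITS (the real macros live elsewhere in the tree and may differ):

```c
#include <stdio.h>

// Placeholder values for illustration only; the real macros
// (LEASTSQUARES_DOUBLE_ITEM_LENGTH, DOUBLE_PRECISION_DIGITS) are defined
// in the TDengine headers and may differ.
#define ITEM_LIMIT 25
#define PRECISION  "15e"

// Try the fixed-point rendering first; if it would be longer than the
// per-item budget (huge magnitudes), fall back to a form that always fits.
static void formatBounded(char *dst, size_t cap, double v) {
  int n = snprintf(dst, cap, "%.6lf", v);
  if (n > ITEM_LIMIT) {
    snprintf(dst, cap, "%." PRECISION, v);
  }
}

int main(void) {
  char slop[64], intercept[64];
  formatBounded(slop, sizeof(slop), 1.5);                // "1.500000"
  formatBounded(intercept, sizeof(intercept), 1.5e300);  // scientific fallback
  printf("{slop:%s, intercept:%s}\n", slop, intercept);
  return 0;
}
```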
char* jkProjectLogicPlanIgnoreGroupId = "IgnoreGroupId"; +static const char* jkProjectLogicPlanInputIgnoreGroup= "InputIgnoreGroup"; static int32_t logicProjectNodeToJson(const void* pObj, SJson* pJson) { const SProjectLogicNode* pNode = (const SProjectLogicNode*)pObj; @@ -803,6 +804,9 @@ static int32_t logicProjectNodeToJson(const void* pObj, SJson* pJson) { if (TSDB_CODE_SUCCESS == code) { code = tjsonAddBoolToObject(pJson, jkProjectLogicPlanIgnoreGroupId, pNode->ignoreGroupId); } + if (TSDB_CODE_SUCCESS == code) { + code = tjsonAddBoolToObject(pJson, jkProjectLogicPlanInputIgnoreGroup, pNode->inputIgnoreGroup); + } return code; } @@ -817,7 +821,9 @@ static int32_t jsonToLogicProjectNode(const SJson* pJson, void* pObj) { if (TSDB_CODE_SUCCESS == code) { code = tjsonGetBoolValue(pJson, jkProjectLogicPlanIgnoreGroupId, &pNode->ignoreGroupId); } - + if (TSDB_CODE_SUCCESS == code) { + code = tjsonGetBoolValue(pJson, jkProjectLogicPlanInputIgnoreGroup, &pNode->inputIgnoreGroup); + } return code; } @@ -2075,6 +2081,7 @@ static int32_t jsonToPhysiSysTableScanNode(const SJson* pJson, void* pObj) { static const char* jkProjectPhysiPlanProjections = "Projections"; static const char* jkProjectPhysiPlanMergeDataBlock = "MergeDataBlock"; static const char* jkProjectPhysiPlanIgnoreGroupId = "IgnoreGroupId"; +static const char* jkProjectPhysiPlanInputIgnoreGroup = "InputIgnoreGroup"; static int32_t physiProjectNodeToJson(const void* pObj, SJson* pJson) { const SProjectPhysiNode* pNode = (const SProjectPhysiNode*)pObj; @@ -2089,7 +2096,9 @@ static int32_t physiProjectNodeToJson(const void* pObj, SJson* pJson) { if (TSDB_CODE_SUCCESS == code) { code = tjsonAddBoolToObject(pJson, jkProjectPhysiPlanIgnoreGroupId, pNode->ignoreGroupId); } - + if (TSDB_CODE_SUCCESS == code) { + code = tjsonAddBoolToObject(pJson, jkProjectPhysiPlanInputIgnoreGroup, pNode->inputIgnoreGroup); + } return code; } @@ -2106,7 +2115,9 @@ static int32_t jsonToPhysiProjectNode(const SJson* pJson, void* pObj) { if (TSDB_CODE_SUCCESS == code) { code = tjsonGetBoolValue(pJson, jkProjectPhysiPlanIgnoreGroupId, &pNode->ignoreGroupId); } - + if (TSDB_CODE_SUCCESS == code) { + code = tjsonGetBoolValue(pJson, jkProjectPhysiPlanInputIgnoreGroup, &pNode->inputIgnoreGroup); + } return code; } diff --git a/source/libs/nodes/src/nodesMsgFuncs.c b/source/libs/nodes/src/nodesMsgFuncs.c index 3cff7c76aa..3fd219b8d8 100644 --- a/source/libs/nodes/src/nodesMsgFuncs.c +++ b/source/libs/nodes/src/nodesMsgFuncs.c @@ -2367,7 +2367,8 @@ enum { PHY_PROJECT_CODE_BASE_NODE = 1, PHY_PROJECT_CODE_PROJECTIONS, PHY_PROJECT_CODE_MERGE_DATA_BLOCK, - PHY_PROJECT_CODE_IGNORE_GROUP_ID + PHY_PROJECT_CODE_IGNORE_GROUP_ID, + PHY_PROJECT_CODE_INPUT_IGNORE_GROUP }; static int32_t physiProjectNodeToMsg(const void* pObj, STlvEncoder* pEncoder) { @@ -2383,6 +2384,9 @@ static int32_t physiProjectNodeToMsg(const void* pObj, STlvEncoder* pEncoder) { if (TSDB_CODE_SUCCESS == code) { code = tlvEncodeBool(pEncoder, PHY_PROJECT_CODE_IGNORE_GROUP_ID, pNode->ignoreGroupId); } + if (TSDB_CODE_SUCCESS == code) { + code = tlvEncodeBool(pEncoder, PHY_PROJECT_CODE_INPUT_IGNORE_GROUP, pNode->inputIgnoreGroup); + } return code; } @@ -2406,6 +2410,9 @@ static int32_t msgToPhysiProjectNode(STlvDecoder* pDecoder, void* pObj) { case PHY_PROJECT_CODE_IGNORE_GROUP_ID: code = tlvDecodeBool(pTlv, &pNode->ignoreGroupId); break; + case PHY_PROJECT_CODE_INPUT_IGNORE_GROUP: + code = tlvDecodeBool(pTlv, &pNode->inputIgnoreGroup); + break; default: break; } diff --git 
a/source/libs/parser/src/parInsertUtil.c b/source/libs/parser/src/parInsertUtil.c index a924ed68b0..6b655bfae6 100644 --- a/source/libs/parser/src/parInsertUtil.c +++ b/source/libs/parser/src/parInsertUtil.c @@ -247,13 +247,13 @@ static int32_t createTableDataCxt(STableMeta* pTableMeta, SVCreateTbReq** pCreat if (NULL == pTableCxt->pData) { code = TSDB_CODE_OUT_OF_MEMORY; } else { - pTableCxt->pData->flags = NULL != *pCreateTbReq ? SUBMIT_REQ_AUTO_CREATE_TABLE : 0; + pTableCxt->pData->flags = (pCreateTbReq != NULL && NULL != *pCreateTbReq) ? SUBMIT_REQ_AUTO_CREATE_TABLE : 0; pTableCxt->pData->flags |= colMode ? SUBMIT_REQ_COLUMN_DATA_FORMAT : 0; pTableCxt->pData->suid = pTableMeta->suid; pTableCxt->pData->uid = pTableMeta->uid; pTableCxt->pData->sver = pTableMeta->sversion; - pTableCxt->pData->pCreateTbReq = *pCreateTbReq; - *pCreateTbReq = NULL; + pTableCxt->pData->pCreateTbReq = pCreateTbReq != NULL ? *pCreateTbReq : NULL; + if(pCreateTbReq != NULL) *pCreateTbReq = NULL; if (pTableCxt->pData->flags & SUBMIT_REQ_COLUMN_DATA_FORMAT) { pTableCxt->pData->aCol = taosArrayInit(128, sizeof(SColData)); if (NULL == pTableCxt->pData->aCol) { @@ -640,12 +640,12 @@ static bool findFileds(SSchema* pSchema, TAOS_FIELD* fields, int numFields) { return false; } -int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreateTbReq* pCreateTb, TAOS_FIELD* tFields, +int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreateTbReq** pCreateTb, TAOS_FIELD* tFields, int numFields, bool needChangeLength) { void* tmp = taosHashGet(((SVnodeModifyOpStmt*)(query->pRoot))->pTableBlockHashObj, &pTableMeta->uid, sizeof(pTableMeta->uid)); STableDataCxt* pTableCxt = NULL; int ret = insGetTableDataCxt(((SVnodeModifyOpStmt*)(query->pRoot))->pTableBlockHashObj, &pTableMeta->uid, - sizeof(pTableMeta->uid), pTableMeta, &pCreateTb, &pTableCxt, true, false); + sizeof(pTableMeta->uid), pTableMeta, pCreateTb, &pTableCxt, true, false); if (ret != TSDB_CODE_SUCCESS) { uError("insGetTableDataCxt error"); goto end; @@ -662,6 +662,7 @@ int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreate char* p = (char*)data; // | version | total length | total rows | total columns | flag seg| block group id | column schema | each column // length | + int32_t version = *(int32_t*)data; p += sizeof(int32_t); p += sizeof(int32_t); @@ -717,7 +718,7 @@ int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreate goto end; } fields += sizeof(int8_t) + sizeof(int32_t); - if (needChangeLength) { + if (needChangeLength && version == 1) { pStart += htonl(colLength[j]); } else { pStart += colLength[j]; @@ -748,7 +749,7 @@ int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreate goto end; } fields += sizeof(int8_t) + sizeof(int32_t); - if (needChangeLength) { + if (needChangeLength && version == 1) { pStart += htonl(colLength[i]); } else { pStart += colLength[i]; diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index c5efdab533..53597286f2 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -1103,21 +1103,31 @@ static EDealRes translateColumnWithoutPrefix(STranslateContext* pCxt, SColumnNod static EDealRes translateColumnUseAlias(STranslateContext* pCxt, SColumnNode** pCol, bool* pFound) { SNodeList* pProjectionList = getProjectListFromCurrStmt(pCxt->pCurrStmt); SNode* pNode; + SNode* pFoundNode = NULL; + *pFound = false; FOREACH(pNode, pProjectionList) { 
SExprNode* pExpr = (SExprNode*)pNode; if (0 == strcmp((*pCol)->colName, pExpr->userAlias)) { - SNode* pNew = nodesCloneNode(pNode); - if (NULL == pNew) { - pCxt->errCode = TSDB_CODE_OUT_OF_MEMORY; - return DEAL_RES_ERROR; + if (true == *pFound) { + if(nodesEqualNode(pFoundNode, pNode)) { + continue; } - nodesDestroyNode(*(SNode**)pCol); - *(SNode**)pCol = (SNode*)pNew; - *pFound = true; - return DEAL_RES_CONTINUE; - } + pCxt->errCode = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_ORDERBY_AMBIGUOUS, (*pCol)->colName); + return DEAL_RES_ERROR; + } + *pFound = true; + pFoundNode = pNode; + } + } + if (*pFound) { + SNode* pNew = nodesCloneNode(pFoundNode); + if (NULL == pNew) { + pCxt->errCode = TSDB_CODE_OUT_OF_MEMORY; + return DEAL_RES_ERROR; + } + nodesDestroyNode(*(SNode**)pCol); + *(SNode**)pCol = (SNode*)pNew; } - *pFound = false; return DEAL_RES_CONTINUE; } @@ -4517,13 +4527,12 @@ typedef struct SReplaceOrderByAliasCxt { static EDealRes replaceOrderByAliasImpl(SNode** pNode, void* pContext) { SReplaceOrderByAliasCxt* pCxt = pContext; - SNodeList* pProjectionList = pCxt->pProjectionList; - SNode* pProject = NULL; + SNodeList* pProjectionList = pCxt->pProjectionList; + SNode* pProject = NULL; if (QUERY_NODE_COLUMN == nodeType(*pNode)) { FOREACH(pProject, pProjectionList) { SExprNode* pExpr = (SExprNode*)pProject; - if (0 == strcmp(((SColumnRefNode*)*pNode)->colName, pExpr->userAlias) - && nodeType(*pNode) == nodeType(pProject)) { + if (0 == strcmp(((SColumnRefNode*)*pNode)->colName, pExpr->userAlias) && nodeType(*pNode) == nodeType(pProject)) { SNode* pNew = nodesCloneNode(pProject); if (NULL == pNew) { pCxt->pTranslateCxt->errCode = TSDB_CODE_OUT_OF_MEMORY; @@ -4537,14 +4546,14 @@ static EDealRes replaceOrderByAliasImpl(SNode** pNode, void* pContext) { } } else if (QUERY_NODE_ORDER_BY_EXPR == nodeType(*pNode)) { STranslateContext* pTransCxt = pCxt->pTranslateCxt; - SNode* pExpr = ((SOrderByExprNode*)*pNode)->pExpr; - if (QUERY_NODE_VALUE == nodeType(pExpr)) { + SNode* pExpr = ((SOrderByExprNode*)*pNode)->pExpr; + if (QUERY_NODE_VALUE == nodeType(pExpr)) { SValueNode* pVal = (SValueNode*)pExpr; if (DEAL_RES_ERROR == translateValue(pTransCxt, pVal)) { return pTransCxt->errCode; } int32_t pos = getPositionValue(pVal); - if ( 0 < pos && pos <= LIST_LENGTH(pProjectionList)) { + if (0 < pos && pos <= LIST_LENGTH(pProjectionList)) { SNode* pNew = nodesCloneNode(nodesListGetNode(pProjectionList, pos - 1)); if (NULL == pNew) { pCxt->pTranslateCxt->errCode = TSDB_CODE_OUT_OF_MEMORY; diff --git a/source/libs/parser/src/parUtil.c b/source/libs/parser/src/parUtil.c index 26e6111074..dfe33ce55e 100644 --- a/source/libs/parser/src/parUtil.c +++ b/source/libs/parser/src/parUtil.c @@ -190,6 +190,8 @@ static char* getSyntaxErrFormat(int32_t errCode) { return "invalid ip range"; case TSDB_CODE_OUT_OF_MEMORY: return "Out of memory"; + case TSDB_CODE_PAR_ORDERBY_AMBIGUOUS: + return "ORDER BY \"%s\" is ambiguous"; default: return "Unknown error"; } diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index 754d502a33..dfb9343cc8 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -3122,6 +3122,7 @@ static bool mergeProjectsMayBeOptimized(SLogicNode* pNode) { NULL != pChild->pConditions || NULL != pChild->pLimit || NULL != pChild->pSlimit) { return false; } + return true; } @@ -3160,7 +3161,9 @@ static EDealRes mergeProjectionsExpr(SNode** pNode, void* pContext) { static int32_t 
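The translateColumnUseAlias rewrite above changes ORDER BY alias resolution from first-match to unique-match: the whole projection list is scanned, duplicates that are identical under nodesEqualNode are tolerated, and two different expressions sharing one user alias now raise the new TSDB_CODE_PAR_ORDERBY_AMBIGUOUS error ("ORDER BY \"%s\" is ambiguous") instead of silently binding the first hit. For example, a query such as `select last(c1) v, first(c1) v from t order by v` is now rejected. A reduced sketch of the scan, with strcmp standing in for nodesEqualNode:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Reduced stand-in for a projection entry; the real code walks an SNodeList
// and compares expressions with nodesEqualNode().
typedef struct {
  const char *userAlias;
  const char *expr;
} Proj;

// Resolve `alias` against the projection list: identical duplicates are
// tolerated, but two different expressions under one alias are ambiguous.
static const Proj *resolveAlias(const Proj *list, size_t n, const char *alias,
                                bool *ambiguous) {
  const Proj *found = NULL;
  *ambiguous = false;
  for (size_t i = 0; i < n; ++i) {
    if (strcmp(alias, list[i].userAlias) != 0) continue;
    if (found != NULL) {
      if (strcmp(found->expr, list[i].expr) == 0) continue;  // same node: ok
      *ambiguous = true;  // different expressions share the alias
      return NULL;
    }
    found = &list[i];
  }
  return found;
}
```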
mergeProjectsOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SLogicNode* pSelfNode) { SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pSelfNode->pChildren, 0); - + if (((SProjectLogicNode*)pChild)->ignoreGroupId) { + ((SProjectLogicNode*)pSelfNode)->inputIgnoreGroup = true; + } SMergeProjectionsContext cxt = {.pChildProj = (SProjectLogicNode*)pChild, .errCode = TSDB_CODE_SUCCESS}; nodesRewriteExprs(((SProjectLogicNode*)pSelfNode)->pProjections, mergeProjectionsExpr, &cxt); int32_t code = cxt.errCode; @@ -3325,7 +3328,11 @@ static bool pushDownLimitTo(SLogicNode* pNodeWithLimit, SLogicNode* pNodeLimitPu } case QUERY_NODE_LOGIC_PLAN_SCAN: if (nodeType(pNodeWithLimit) == QUERY_NODE_LOGIC_PLAN_PROJECT && pNodeWithLimit->pLimit) { - swapLimit(pNodeWithLimit, pNodeLimitPushTo); + if (((SProjectLogicNode*)pNodeWithLimit)->inputIgnoreGroup) { + cloneLimit(pNodeWithLimit, pNodeLimitPushTo, CLONE_LIMIT); + } else { + swapLimit(pNodeWithLimit, pNodeLimitPushTo); + } return true; } default: diff --git a/source/libs/planner/src/planPhysiCreater.c b/source/libs/planner/src/planPhysiCreater.c index 21c698fda5..21c637116f 100644 --- a/source/libs/planner/src/planPhysiCreater.c +++ b/source/libs/planner/src/planPhysiCreater.c @@ -1459,6 +1459,7 @@ static int32_t createProjectPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChild pProject->mergeDataBlock = projectCanMergeDataBlock(pProjectLogicNode); pProject->ignoreGroupId = pProjectLogicNode->ignoreGroupId; + pProject->inputIgnoreGroup = pProjectLogicNode->inputIgnoreGroup; int32_t code = TSDB_CODE_SUCCESS; if (0 == LIST_LENGTH(pChildren)) { diff --git a/source/libs/planner/src/planSpliter.c b/source/libs/planner/src/planSpliter.c index ad0031f815..f2258a1d7e 100644 --- a/source/libs/planner/src/planSpliter.c +++ b/source/libs/planner/src/planSpliter.c @@ -1164,8 +1164,10 @@ static int32_t stbSplSplitSortNode(SSplitContext* pCxt, SStableSplitInfo* pInfo) static int32_t stbSplGetSplitNodeForScan(SStableSplitInfo* pInfo, SLogicNode** pSplitNode) { *pSplitNode = pInfo->pSplitNode; - if (NULL != pInfo->pSplitNode->pParent && QUERY_NODE_LOGIC_PLAN_PROJECT == nodeType(pInfo->pSplitNode->pParent) && - NULL == pInfo->pSplitNode->pParent->pLimit && NULL == pInfo->pSplitNode->pParent->pSlimit) { + if (NULL != pInfo->pSplitNode->pParent && + QUERY_NODE_LOGIC_PLAN_PROJECT == nodeType(pInfo->pSplitNode->pParent) && + NULL == pInfo->pSplitNode->pParent->pLimit && NULL == pInfo->pSplitNode->pParent->pSlimit && + !((SProjectLogicNode*)pInfo->pSplitNode->pParent)->inputIgnoreGroup) { *pSplitNode = pInfo->pSplitNode->pParent; if (NULL != pInfo->pSplitNode->pLimit) { (*pSplitNode)->pLimit = nodesCloneNode(pInfo->pSplitNode->pLimit); diff --git a/source/libs/scalar/src/sclfunc.c b/source/libs/scalar/src/sclfunc.c index e713043b80..2e44c75c17 100644 --- a/source/libs/scalar/src/sclfunc.c +++ b/source/libs/scalar/src/sclfunc.c @@ -2427,9 +2427,19 @@ int32_t leastSQRScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarPa matrix12 /= matrix[1][1]; - char buf[64] = {0}; - size_t len = snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{slop:%.6lf, intercept:%.6lf}", matrix02, - matrix12); + char buf[LEASTSQUARES_BUFF_LENGTH] = {0}; + char slopBuf[64] = {0}; + char interceptBuf[64] = {0}; + int n = snprintf(slopBuf, 64, "%.6lf", matrix02); + if (n > LEASTSQUARES_DOUBLE_ITEM_LENGTH) { + snprintf(slopBuf, 64, "%." 
DOUBLE_PRECISION_DIGITS, matrix02); + } + n = snprintf(interceptBuf, 64, "%.6lf", matrix12); + if (n > LEASTSQUARES_DOUBLE_ITEM_LENGTH) { + snprintf(interceptBuf, 64, "%." DOUBLE_PRECISION_DIGITS, matrix12); + } + size_t len = + snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{slop:%s, intercept:%s}", slopBuf, interceptBuf); varDataSetLen(buf, len); colDataSetVal(pOutputData, 0, buf, false); } diff --git a/source/libs/stream/inc/streamInt.h b/source/libs/stream/inc/streamInt.h index 1e4b8996b6..bd5bddcece 100644 --- a/source/libs/stream/inc/streamInt.h +++ b/source/libs/stream/inc/streamInt.h @@ -48,8 +48,8 @@ extern "C" { #define stError(...) do { if (stDebugFlag & DEBUG_ERROR) { taosPrintLog("STM ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }} while(0) #define stWarn(...) do { if (stDebugFlag & DEBUG_WARN) { taosPrintLog("STM WARN ", DEBUG_WARN, 255, __VA_ARGS__); }} while(0) #define stInfo(...) do { if (stDebugFlag & DEBUG_INFO) { taosPrintLog("STM ", DEBUG_INFO, 255, __VA_ARGS__); }} while(0) -#define stDebug(...) do { if (stDebugFlag & DEBUG_DEBUG) { taosPrintLog("STM ", DEBUG_DEBUG, tqDebugFlag, __VA_ARGS__); }} while(0) -#define stTrace(...) do { if (stDebugFlag & DEBUG_TRACE) { taosPrintLog("STM ", DEBUG_TRACE, tqDebugFlag, __VA_ARGS__); }} while(0) +#define stDebug(...) do { if (stDebugFlag & DEBUG_DEBUG) { taosPrintLog("STM ", DEBUG_DEBUG, stDebugFlag, __VA_ARGS__); }} while(0) +#define stTrace(...) do { if (stDebugFlag & DEBUG_TRACE) { taosPrintLog("STM ", DEBUG_TRACE, stDebugFlag, __VA_ARGS__); }} while(0) // clang-format on typedef struct { diff --git a/source/libs/stream/src/stream.c b/source/libs/stream/src/stream.c index 75af5a9fc5..c3373c1e7f 100644 --- a/source/libs/stream/src/stream.c +++ b/source/libs/stream/src/stream.c @@ -35,6 +35,10 @@ void streamTimerCleanUp() { streamTimer = NULL; } +tmr_h streamTimerGetInstance() { + return streamTimer; +} + char* createStreamTaskIdStr(int64_t streamId, int32_t taskId) { char buf[128] = {0}; sprintf(buf, "0x%" PRIx64 "-0x%x", streamId, taskId); diff --git a/source/libs/stream/src/streamBackendRocksdb.c b/source/libs/stream/src/streamBackendRocksdb.c index c8f944071f..bcf289a019 100644 --- a/source/libs/stream/src/streamBackendRocksdb.c +++ b/source/libs/stream/src/streamBackendRocksdb.c @@ -500,34 +500,27 @@ int32_t backendCopyFiles(char* src, char* dst) { // return 0; } int32_t rebuildFromLocalChkp(char* key, char* chkpPath, int64_t chkpId, char* defaultPath) { - int32_t code = -1; - int32_t len = strlen(defaultPath) + 32; - char* tmp = taosMemoryCalloc(1, len); - sprintf(tmp, "%s%s", defaultPath, "_tmp"); - - if (taosIsDir(tmp)) taosRemoveDir(tmp); - if (taosIsDir(defaultPath)) taosRenameFile(defaultPath, tmp); - - if (taosIsDir(chkpPath) && isValidCheckpoint(chkpPath)) { - if (taosIsDir(tmp)) { - taosRemoveDir(tmp); - } + int32_t code = 0; + if (taosIsDir(defaultPath)) { + taosRemoveDir(defaultPath); taosMkDir(defaultPath); + + stInfo("succ to clear stream backend %s", defaultPath); + } + if (taosIsDir(chkpPath) && isValidCheckpoint(chkpPath)) { code = backendCopyFiles(chkpPath, defaultPath); if (code != 0) { - stError("failed to restart stream backend from %s, reason: %s", chkpPath, tstrerror(TAOS_SYSTEM_ERROR(errno))); + taosRemoveDir(defaultPath); + taosMkDir(defaultPath); + + stError("failed to restart stream backend from %s, reason: %s, start to restart from empty path: %s", chkpPath, + tstrerror(TAOS_SYSTEM_ERROR(errno)), defaultPath); + code = 0; } else { stInfo("start to restart stream backend at checkpoint 
path: %s", chkpPath); } } - if (code != 0) { - if (taosIsDir(defaultPath)) taosRemoveDir(defaultPath); - if (taosIsDir(tmp)) taosRenameFile(tmp, defaultPath); - } else { - taosRemoveDir(tmp); - } - taosMemoryFree(tmp); return code; } diff --git a/source/libs/stream/src/streamCheckpoint.c b/source/libs/stream/src/streamCheckpoint.c index 006202fdd1..eb50efadeb 100644 --- a/source/libs/stream/src/streamCheckpoint.c +++ b/source/libs/stream/src/streamCheckpoint.c @@ -360,6 +360,8 @@ int32_t streamSaveTaskCheckpointInfo(SStreamTask* p, int64_t checkpointId) { void streamTaskSetCheckpointFailedId(SStreamTask* pTask) { pTask->chkInfo.failedId = pTask->chkInfo.checkpointingId; + stDebug("s-task:%s mark the checkpointId:%" PRId64 " (transId:%d) failed", pTask->id.idStr, + pTask->chkInfo.checkpointingId, pTask->chkInfo.transId); } int32_t getChkpMeta(char* id, char* path, SArray* list) { diff --git a/source/libs/stream/src/streamMeta.c b/source/libs/stream/src/streamMeta.c index 6bec79065e..6e35e39a0a 100644 --- a/source/libs/stream/src/streamMeta.c +++ b/source/libs/stream/src/streamMeta.c @@ -960,7 +960,7 @@ int32_t tEncodeStreamHbMsg(SEncoder* pEncoder, const SStreamHbMsg* pReq) { if (tEncodeI64(pEncoder, ps->processedVer) < 0) return -1; if (tEncodeI64(pEncoder, ps->verStart) < 0) return -1; if (tEncodeI64(pEncoder, ps->verEnd) < 0) return -1; - if (tEncodeI64(pEncoder, ps->activeCheckpointId) < 0) return -1; + if (tEncodeI64(pEncoder, ps->checkpointId) < 0) return -1; if (tEncodeI8(pEncoder, ps->checkpointFailed) < 0) return -1; if (tEncodeI32(pEncoder, ps->chkpointTransId) < 0) return -1; } @@ -999,8 +999,8 @@ int32_t tDecodeStreamHbMsg(SDecoder* pDecoder, SStreamHbMsg* pReq) { if (tDecodeI64(pDecoder, &entry.processedVer) < 0) return -1; if (tDecodeI64(pDecoder, &entry.verStart) < 0) return -1; if (tDecodeI64(pDecoder, &entry.verEnd) < 0) return -1; - if (tDecodeI64(pDecoder, &entry.activeCheckpointId) < 0) return -1; - if (tDecodeI8(pDecoder, (int8_t*)&entry.checkpointFailed) < 0) return -1; + if (tDecodeI64(pDecoder, &entry.checkpointId) < 0) return -1; + if (tDecodeI8(pDecoder, &entry.checkpointFailed) < 0) return -1; if (tDecodeI32(pDecoder, &entry.chkpointTransId) < 0) return -1; entry.id.taskId = taskId; @@ -1113,8 +1113,8 @@ static int32_t metaHeartbeatToMnodeImpl(SStreamMeta* pMeta) { } if ((*pTask)->chkInfo.checkpointingId != 0) { - entry.checkpointFailed = ((*pTask)->chkInfo.failedId >= (*pTask)->chkInfo.checkpointingId); - entry.activeCheckpointId = (*pTask)->chkInfo.checkpointingId; + entry.checkpointFailed = ((*pTask)->chkInfo.failedId >= (*pTask)->chkInfo.checkpointingId)? 
1:0; + entry.checkpointId = (*pTask)->chkInfo.checkpointingId; entry.chkpointTransId = (*pTask)->chkInfo.transId; if (entry.checkpointFailed) { @@ -1373,7 +1373,6 @@ SArray* streamMetaSendMsgBeforeCloseTasks(SStreamMeta* pMeta) { SStreamTaskState* pState = streamTaskGetStatus(pTask); if (pState->state == TASK_STATUS__CK) { streamTaskSetCheckpointFailedId(pTask); - stDebug("s-task:%s mark the checkpoint:%"PRId64" failed", pTask->id.idStr, pTask->chkInfo.checkpointingId); } else { stDebug("s-task:%s status:%s not reset the checkpoint", pTask->id.idStr, pState->name); } @@ -1411,31 +1410,43 @@ void streamMetaUpdateStageRole(SStreamMeta* pMeta, int64_t stage, bool isLeader) } } +static SArray* prepareBeforeStartTasks(SStreamMeta* pMeta) { + streamMetaWLock(pMeta); + + SArray* pTaskList = taosArrayDup(pMeta->pTaskList, NULL); + taosHashClear(pMeta->startInfo.pReadyTaskSet); + taosHashClear(pMeta->startInfo.pFailedTaskSet); + pMeta->startInfo.startTs = taosGetTimestampMs(); + + streamMetaResetTaskStatus(pMeta); + streamMetaWUnLock(pMeta); + + return pTaskList; +} + int32_t streamMetaStartAllTasks(SStreamMeta* pMeta) { int32_t code = TSDB_CODE_SUCCESS; int32_t vgId = pMeta->vgId; int32_t numOfTasks = taosArrayGetSize(pMeta->pTaskList); stInfo("vgId:%d start to check all %d stream task(s) downstream status", vgId, numOfTasks); + if (numOfTasks == 0) { stInfo("vgId:%d start tasks completed", pMeta->vgId); return TSDB_CODE_SUCCESS; } - SArray* pTaskList = NULL; - streamMetaWLock(pMeta); - pTaskList = taosArrayDup(pMeta->pTaskList, NULL); - taosHashClear(pMeta->startInfo.pReadyTaskSet); - taosHashClear(pMeta->startInfo.pFailedTaskSet); - pMeta->startInfo.startTs = taosGetTimestampMs(); - streamMetaWUnLock(pMeta); - - // broadcast the check downstream tasks msg int64_t now = taosGetTimestampMs(); + SArray* pTaskList = prepareBeforeStartTasks(pMeta); + numOfTasks = taosArrayGetSize(pTaskList); + + // broadcast the check downstream tasks msg for (int32_t i = 0; i < numOfTasks; ++i) { SStreamTaskId* pTaskId = taosArrayGet(pTaskList, i); + // todo: may be we should find the related fill-history task and set it failed. + // todo: use hashTable instead SStreamTask* pTask = streamMetaAcquireTask(pMeta, pTaskId->streamId, pTaskId->taskId); if (pTask == NULL) { stError("vgId:%d failed to acquire task:0x%x during start tasks", pMeta->vgId, pTaskId->taskId); @@ -1443,8 +1454,6 @@ int32_t streamMetaStartAllTasks(SStreamMeta* pMeta) { continue; } - // todo: may be we should find the related fill-history task and set it failed. - // fill-history task can only be launched by related stream tasks. STaskExecStatisInfo* pInfo = &pTask->execInfo; if (pTask->info.fillHistory == 1) { @@ -1491,10 +1500,13 @@ int32_t streamMetaStopAllTasks(SStreamMeta* pMeta) { int32_t num = taosArrayGetSize(pMeta->pTaskList); stDebug("vgId:%d stop all %d stream task(s)", pMeta->vgId, num); if (num == 0) { + stDebug("vgId:%d stop all %d task(s) completed, elapsed time:0 Sec.", pMeta->vgId, num); streamMetaRUnLock(pMeta); return TSDB_CODE_SUCCESS; } + int64_t st = taosGetTimestampMs(); + // send hb msg to mnode before closing all tasks. 
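streamMetaStartAllTasks is refactored above so that every piece of start-state bookkeeping happens inside one write-lock scope: prepareBeforeStartTasks duplicates the task list, clears the ready/failed sets, stamps startTs, and resets task status before dropping the lock, and the caller then walks its private copy lock-free. A sketch of this lock-scoped-snapshot pattern (pthread primitives and an int array used purely for illustration; the real code uses the meta's latch and taosArrayDup):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
  pthread_rwlock_t lock;   // stands in for the stream meta's latch
  int             *tasks;  // shared task list (ints for illustration)
  size_t           nTasks;
} MetaSketch;

// Snapshot the shared list and do all start-state resets inside one
// write-lock scope; the caller then walks its private copy lock-free.
static int *prepareBeforeStart(MetaSketch *m, size_t *n) {
  pthread_rwlock_wrlock(&m->lock);
  int *copy = malloc(m->nTasks * sizeof(int));
  if (copy != NULL) {
    memcpy(copy, m->tasks, m->nTasks * sizeof(int));
    *n = m->nTasks;
    // ... clear the ready/failed sets, stamp startTs, reset task status ...
  }
  pthread_rwlock_unlock(&m->lock);
  return copy;  // caller frees; NULL signals allocation failure
}
```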
SArray* pTaskList = streamMetaSendMsgBeforeCloseTasks(pMeta); int32_t numOfTasks = taosArrayGetSize(pTaskList); @@ -1512,6 +1524,9 @@ int32_t streamMetaStopAllTasks(SStreamMeta* pMeta) { taosArrayDestroy(pTaskList); + double el = (taosGetTimestampMs() - st) / 1000.0; + stDebug("vgId:%d stop all %d task(s) completed, elapsed time:%.2f Sec.", pMeta->vgId, num, el); + streamMetaRUnLock(pMeta); return 0; } @@ -1638,4 +1653,23 @@ int32_t streamMetaAddTaskLaunchResult(SStreamMeta* pMeta, int64_t streamId, int3 } return TSDB_CODE_SUCCESS; +} + +int32_t streamMetaResetTaskStatus(SStreamMeta* pMeta) { + int32_t numOfTasks = taosArrayGetSize(pMeta->pTaskList); + + stDebug("vgId:%d reset all %d stream task(s) status to be uninit", pMeta->vgId, numOfTasks); + if (numOfTasks == 0) { + return TSDB_CODE_SUCCESS; + } + + for (int32_t i = 0; i < numOfTasks; ++i) { + SStreamTaskId* pTaskId = taosArrayGet(pMeta->pTaskList, i); + + STaskId id = {.streamId = pTaskId->streamId, .taskId = pTaskId->taskId}; + SStreamTask** pTask = taosHashGet(pMeta->pTasksMap, &id, sizeof(id)); + streamTaskResetStatus(*pTask); + } + + return 0; } \ No newline at end of file diff --git a/source/libs/stream/src/streamSessionState.c b/source/libs/stream/src/streamSessionState.c index 2cb77c60dc..1f991d309f 100644 --- a/source/libs/stream/src/streamSessionState.c +++ b/source/libs/stream/src/streamSessionState.c @@ -90,6 +90,7 @@ SRowBuffPos* createSessionWinBuff(SStreamFileState* pFileState, SSessionKey* pKe SRowBuffPos* pNewPos = getNewRowPosForWrite(pFileState); memcpy(pNewPos->pKey, pKey, sizeof(SSessionKey)); pNewPos->needFree = true; + pNewPos->beFlushed = true; memcpy(pNewPos->pRowBuff, p, *pVLen); taosMemoryFree(p); return pNewPos; @@ -217,6 +218,7 @@ int32_t getSessionFlushedBuff(SStreamFileState* pFileState, SSessionKey* pKey, v SRowBuffPos* pNewPos = getNewRowPosForWrite(pFileState); memcpy(pNewPos->pKey, pKey, sizeof(SSessionKey)); pNewPos->needFree = true; + pNewPos->beFlushed = true; void* pBuff = NULL; int32_t code = streamStateSessionGet_rocksdb(getStateFileStore(pFileState), pKey, &pBuff, pVLen); if (code != TSDB_CODE_SUCCESS) { @@ -307,6 +309,7 @@ int32_t allocSessioncWinBuffByNextPosition(SStreamFileState* pFileState, SStream } pNewPos = getNewRowPosForWrite(pFileState); pNewPos->needFree = true; + pNewPos->beFlushed = true; } _end: @@ -482,6 +485,7 @@ int32_t sessionWinStateGetKVByCur(SStreamStateCur* pCur, SSessionKey* pKey, void SRowBuffPos* pNewPos = getNewRowPosForWrite(pCur->pStreamFileState); memcpy(pNewPos->pKey, pKey, sizeof(SSessionKey)); pNewPos->needFree = true; + pNewPos->beFlushed = true; memcpy(pNewPos->pRowBuff, pData, *pVLen); (*pVal) = pNewPos; } diff --git a/source/libs/stream/src/streamState.c b/source/libs/stream/src/streamState.c index 19b7359981..ea20f0e2b1 100644 --- a/source/libs/stream/src/streamState.c +++ b/source/libs/stream/src/streamState.c @@ -698,6 +698,7 @@ int32_t streamStateSessionPut(SStreamState* pState, const SSessionKey* key, void stDebug("===stream===save skey:%" PRId64 ", ekey:%" PRId64 ", groupId:%" PRIu64 ".code:%d", key->win.skey, key->win.ekey, key->groupId, code); } else { + pos->beFlushed = false; code = putSessionWinResultBuff(pState->pFileState, value); } } diff --git a/source/libs/stream/src/streamTask.c b/source/libs/stream/src/streamTask.c index 294c099c20..094068a06e 100644 --- a/source/libs/stream/src/streamTask.c +++ b/source/libs/stream/src/streamTask.c @@ -810,7 +810,7 @@ void streamTaskStatusCopy(STaskStatusEntry* pDst, const STaskStatusEntry* pSrc) 
pDst->verEnd = pSrc->verEnd; pDst->sinkQuota = pSrc->sinkQuota; pDst->sinkDataSize = pSrc->sinkDataSize; - pDst->activeCheckpointId = pSrc->activeCheckpointId; + pDst->checkpointId = pSrc->checkpointId; pDst->checkpointFailed = pSrc->checkpointFailed; pDst->chkpointTransId = pSrc->chkpointTransId; } diff --git a/source/libs/sync/src/syncMain.c b/source/libs/sync/src/syncMain.c index 9352d5662e..ff6401cba8 100644 --- a/source/libs/sync/src/syncMain.c +++ b/source/libs/sync/src/syncMain.c @@ -676,7 +676,7 @@ int32_t syncNodePropose(SSyncNode* pSyncNode, SRpcMsg* pMsg, bool isWeak, int64_ // heartbeat timeout if (syncNodeHeartbeatReplyTimeout(pSyncNode)) { terrno = TSDB_CODE_SYN_PROPOSE_NOT_READY; - sNError(pSyncNode, "failed to sync propose since hearbeat timeout, type:%s, last:%" PRId64 ", cmt:%" PRId64, + sNError(pSyncNode, "failed to sync propose since heartbeat timeout, type:%s, last:%" PRId64 ", cmt:%" PRId64, TMSG_INFO(pMsg->msgType), syncNodeGetLastIndex(pSyncNode), pSyncNode->commitIndex); return -1; } diff --git a/source/libs/sync/src/syncSnapshot.c b/source/libs/sync/src/syncSnapshot.c index 578e6798e0..d9c8bb21ac 100644 --- a/source/libs/sync/src/syncSnapshot.c +++ b/source/libs/sync/src/syncSnapshot.c @@ -345,9 +345,9 @@ int32_t snapshotReSend(SSyncSnapshotSender *pSender) { for (int32_t seq = pSndBuf->cursor + 1; seq < pSndBuf->end; ++seq) { SyncSnapBlock *pBlk = pSndBuf->entries[seq % pSndBuf->size]; - ASSERT(pBlk && !pBlk->acked); + ASSERT(pBlk); int64_t nowMs = taosGetTimestampMs(); - if (nowMs < pBlk->sendTimeMs + SYNC_SNAP_RESEND_MS) { + if (pBlk->acked || nowMs < pBlk->sendTimeMs + SYNC_SNAP_RESEND_MS) { continue; } if (syncSnapSendMsg(pSender, pBlk->seq, pBlk->pBlock, pBlk->blockLen, 0) != 0) { diff --git a/source/os/src/osTime.c b/source/os/src/osTime.c index e506ee603b..9cad1dd6ef 100644 --- a/source/os/src/osTime.c +++ b/source/os/src/osTime.c @@ -37,9 +37,6 @@ // This magic number is the number of 100 nanosecond intervals since January 1, 1601 (UTC) // until 00:00:00 January 1, 1970 static const uint64_t TIMEEPOCH = ((uint64_t)116444736000000000ULL); -// This magic number is the number of 100 nanosecond intervals since January 1, 1601 (UTC) -// until 00:00:00 January 1, 1900 -static const uint64_t TIMEEPOCH1900 = ((uint64_t)116445024000000000ULL); /* * We do not implement alternate representations. 
However, we always @@ -360,6 +357,7 @@ int32_t taosGetTimeOfDay(struct timeval *tv) { t.QuadPart -= TIMEEPOCH; tv->tv_sec = t.QuadPart / 10000000; tv->tv_usec = (t.QuadPart % 10000000) / 10; + return 0; #else return gettimeofday(tv, NULL); #endif @@ -482,33 +480,51 @@ struct tm *taosLocalTime(const time_t *timep, struct tm *result, char *buf) { sprintf(buf, "NaN"); } return NULL; - } + } else if (*timep < 0) { + SYSTEMTIME ss, s; + FILETIME ff, f; - SYSTEMTIME s; - FILETIME f; - LARGE_INTEGER offset; - struct tm tm1; - time_t tt = 0; - if (localtime_s(&tm1, &tt) != 0) { - if (buf != NULL) { - sprintf(buf, "NaN"); + LARGE_INTEGER offset; + struct tm tm1; + time_t tt = 0; + if (localtime_s(&tm1, &tt) != 0) { + if (buf != NULL) { + sprintf(buf, "NaN"); + } + return NULL; + } + ss.wYear = tm1.tm_year + 1900; + ss.wMonth = tm1.tm_mon + 1; + ss.wDay = tm1.tm_mday; + ss.wHour = tm1.tm_hour; + ss.wMinute = tm1.tm_min; + ss.wSecond = tm1.tm_sec; + ss.wMilliseconds = 0; + SystemTimeToFileTime(&ss, &ff); + offset.QuadPart = ff.dwHighDateTime; + offset.QuadPart <<= 32; + offset.QuadPart |= ff.dwLowDateTime; + offset.QuadPart += *timep * 10000000; + f.dwLowDateTime = offset.QuadPart & 0xffffffff; + f.dwHighDateTime = (offset.QuadPart >> 32) & 0xffffffff; + FileTimeToSystemTime(&f, &s); + result->tm_sec = s.wSecond; + result->tm_min = s.wMinute; + result->tm_hour = s.wHour; + result->tm_mday = s.wDay; + result->tm_mon = s.wMonth - 1; + result->tm_year = s.wYear - 1900; + result->tm_wday = s.wDayOfWeek; + result->tm_yday = 0; + result->tm_isdst = 0; + } else { + if (localtime_s(result, timep) != 0) { + if (buf != NULL) { + sprintf(buf, "NaN"); + } + return NULL; } - return NULL; } - offset.QuadPart = TIMEEPOCH1900; - offset.QuadPart += *timep * 10000000; - f.dwLowDateTime = offset.QuadPart & 0xffffffff; - f.dwHighDateTime = (offset.QuadPart >> 32) & 0xffffffff; - FileTimeToSystemTime(&f, &s); - result->tm_sec = s.wSecond; - result->tm_min = s.wMinute; - result->tm_hour = s.wHour; - result->tm_mday = s.wDay; - result->tm_mon = s.wMonth - 1; - result->tm_year = s.wYear - 1900; - result->tm_wday = s.wDayOfWeek; - result->tm_yday = 0; - result->tm_isdst = 0; #else res = localtime_r(timep, result); if (res == NULL && buf != NULL) { diff --git a/source/util/src/terror.c b/source/util/src/terror.c index fe8276604a..23673b1eff 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -101,8 +101,8 @@ TAOS_DEFINE_ERROR(TSDB_CODE_APP_IS_STARTING, "Database is starting TAOS_DEFINE_ERROR(TSDB_CODE_APP_IS_STOPPING, "Database is closing down") TAOS_DEFINE_ERROR(TSDB_CODE_INVALID_DATA_FMT, "Invalid data format") TAOS_DEFINE_ERROR(TSDB_CODE_INVALID_CFG_VALUE, "Invalid configuration value") -TAOS_DEFINE_ERROR(TSDB_CODE_IP_NOT_IN_WHITE_LIST, "Not allowed to connect") -TAOS_DEFINE_ERROR(TSDB_CODE_FAILED_TO_CONNECT_S3, "Failed to connect to s3 server") +TAOS_DEFINE_ERROR(TSDB_CODE_IP_NOT_IN_WHITE_LIST, "Not allowed to connect") +TAOS_DEFINE_ERROR(TSDB_CODE_FAILED_TO_CONNECT_S3, "Failed to connect to s3 server") //client TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_OPERATION, "Invalid operation") diff --git a/tests/army/community/cluster/incSnapshot.py b/tests/army/community/cluster/incSnapshot.py new file mode 100644 index 0000000000..b1b53e65bd --- /dev/null +++ b/tests/army/community/cluster/incSnapshot.py @@ -0,0 +1,105 @@ +import taos +import sys +import os +import subprocess +import glob +import shutil +import time + +from frame.log import * +from frame.cases import * +from frame.sql import * +from 
frame.caseBase import * +from frame import * +from frame.autogen import * +from frame.server.dnodes import * +from frame.server.cluster import * + + +class TDTestCase(TBase): + + def init(self, conn, logSql, replicaVar=3): + super(TDTestCase, self).init(conn, logSql, replicaVar=3, db="snapshot", checkColName="c1") + self.valgrind = 0 + self.childtable_count = 10 + # tdSql.init(conn.cursor()) + tdSql.init(conn.cursor(), logSql) # output sql.txt file + + def run(self): + tdSql.prepare() + autoGen = AutoGen() + autoGen.create_db(self.db, 2, 3) + tdSql.execute(f"use {self.db}") + autoGen.create_stable(self.stb, 5, 10, 8, 8) + autoGen.create_child(self.stb, "d", self.childtable_count) + autoGen.insert_data(1000) + tdSql.execute(f"flush database {self.db}") + clusterDnodes.stoptaosd(3) + # clusterDnodes.stoptaosd(1) + # clusterDnodes.starttaosd(3) + # time.sleep(5) + # clusterDnodes.stoptaosd(2) + # clusterDnodes.starttaosd(1) + # time.sleep(5) + autoGen.insert_data(5000, True) + tdSql.execute(f"flush database {self.db}") + + # sql = 'show vnodes;' + # while True: + # bFinish = True + # param_list = tdSql.query(sql, row_tag=True) + # for param in param_list: + # if param[3] == 'leading' or param[3] == 'following': + # bFinish = False + # break + # if bFinish: + # break + self.snapshotAgg() + time.sleep(10) + clusterDnodes.stopAll() + for i in range(1, 4): + path = clusterDnodes.getDnodeDir(i) + dnodesRootDir = os.path.join(path,"data","vnode", "vnode*") + dirs = glob.glob(dnodesRootDir) + for dir in dirs: + if os.path.isdir(dir): + tdLog.debug("delete dir: %s " % (dnodesRootDir)) + self.remove_directory(os.path.join(dir, "wal")) + + clusterDnodes.starttaosd(1) + clusterDnodes.starttaosd(2) + clusterDnodes.starttaosd(3) + sql = "show vnodes;" + time.sleep(10) + while True: + bFinish = True + param_list = tdSql.query(sql, row_tag=True) + for param in param_list: + if param[3] == 'offline': + tdLog.exit( + "dnode synchronous fail dnode id: %d, vgroup id:%d status offline" % (param[0], param[1])) + if param[3] == 'leading' or param[3] == 'following': + bFinish = False + break + if bFinish: + break + + self.timestamp_step = 1 + self.insert_rows = 6000 + self.checkInsertCorrect() + self.checkAggCorrect() + + def remove_directory(self, directory): + try: + shutil.rmtree(directory) + tdLog.debug("delete dir: %s " % (directory)) + except OSError as e: + tdLog.exit("delete fail dir: %s " % (directory)) + + def stop(self): + tdSql.close() + tdLog.success(f"{__file__} successfully executed") + + +tdCases.addLinux(__file__, TDTestCase()) +tdCases.addWindows(__file__, TDTestCase()) diff --git a/tests/army/community/cmdline/fullopt.py b/tests/army/community/cmdline/fullopt.py index 99c7095a38..39d1d581ed 100644 --- a/tests/army/community/cmdline/fullopt.py +++ b/tests/army/community/cmdline/fullopt.py @@ -119,6 +119,8 @@ class TDTestCase(TBase): # stop taosd test taos as server sc.dnodeStop(idx) etool.exeBinFile("taos", f'-n server', wait=False) + time.sleep(3) + eos.exe("pkill -9 taos") # run def run(self): diff --git a/tests/army/community/query/cquery_basic.json b/tests/army/community/query/cquery_basic.json new file mode 100644 index 0000000000..5a57d59d93 --- /dev/null +++ b/tests/army/community/query/cquery_basic.json @@ -0,0 +1,61 @@ +{ + "filetype": "insert", + "cfgdir": "/etc/taos", + "host": "127.0.0.1", + "port": 6030, + "user": "root", + "password": "taosdata", + "connection_pool_size": 8, + "num_of_records_per_req": 4000, + "prepared_rand": 10000, + "thread_count": 3, + 
"create_table_thread_count": 1, + "confirm_parameter_prompt": "no", + "databases": [ + { + "dbinfo": { + "name": "db", + "drop": "no", + "vgroups": 3, + "replica": 3, + "duration":"3d", + "wal_retention_period": 1, + "wal_retention_size": 1, + "stt_trigger": 1 + }, + "super_tables": [ + { + "name": "stb", + "child_table_exists": "yes", + "childtable_count": 6, + "insert_rows": 50000, + "childtable_prefix": "d", + "insert_mode": "taosc", + "timestamp_step": 60000, + "start_timestamp":1700000000000, + "columns": [ + { "type": "bool", "name": "bc"}, + { "type": "float", "name": "fc" }, + { "type": "double", "name": "dc"}, + { "type": "tinyint", "name": "ti"}, + { "type": "smallint", "name": "si" }, + { "type": "int", "name": "ic" }, + { "type": "bigint", "name": "bi" }, + { "type": "utinyint", "name": "uti"}, + { "type": "usmallint", "name": "usi"}, + { "type": "uint", "name": "ui" }, + { "type": "ubigint", "name": "ubi"}, + { "type": "binary", "name": "bin", "len": 8}, + { "type": "nchar", "name": "nch", "len": 16} + ], + "tags": [ + {"type": "tinyint", "name": "groupid","max": 10,"min": 1}, + {"name": "location","type": "binary", "len": 16, "values": + ["San Francisco", "Los Angles", "San Diego", "San Jose", "Palo Alto", "Campbell", "Mountain View","Sunnyvale", "Santa Clara", "Cupertino"] + } + ] + } + ] + } + ] +} diff --git a/tests/army/community/query/query_basic.json b/tests/army/community/query/query_basic.json index c6a3482cd4..d8d47266f9 100644 --- a/tests/army/community/query/query_basic.json +++ b/tests/army/community/query/query_basic.json @@ -32,7 +32,7 @@ "childtable_prefix": "d", "insert_mode": "taosc", "timestamp_step": 30000, - "start_timestamp":"2023-10-01 10:00:00", + "start_timestamp":1700000000000, "columns": [ { "type": "bool", "name": "bc"}, { "type": "float", "name": "fc" }, diff --git a/tests/army/community/query/query_basic.py b/tests/army/community/query/query_basic.py index e3fdeb0f34..912974d8ab 100644 --- a/tests/army/community/query/query_basic.py +++ b/tests/army/community/query/query_basic.py @@ -34,31 +34,179 @@ class TDTestCase(TBase): "querySmaOptimize": "1" } + def insertData(self): tdLog.info(f"insert data.") # taosBenchmark run jfile = etool.curFile(__file__, "query_basic.json") - etool.benchMark(json=jfile) + etool.benchMark(json = jfile) tdSql.execute(f"use {self.db}") tdSql.execute("select database();") - # set insert data information + # come from query_basic.json self.childtable_count = 6 self.insert_rows = 100000 self.timestamp_step = 30000 + self.start_timestamp = 1700000000000 + + # write again disorder + self.flushDb() + jfile = etool.curFile(__file__, "cquery_basic.json") + etool.benchMark(json = jfile) + + + def genTime(self, preCnt, cnt): + start = self.start_timestamp + preCnt * self.timestamp_step + end = start + self.timestamp_step * cnt + return (start, end) + + + def doWindowQuery(self): + pre = f"select count(ts) from {self.stb} " + # case1 operator "in" "and" is same + cnt = 6000 + s,e = self.genTime(12000, cnt) + sql1 = f"{pre} where ts between {s} and {e} " + sql2 = f"{pre} where ts >= {s} and ts <={e} " + expectCnt = (cnt + 1) * self.childtable_count + tdSql.checkFirstValue(sql1, expectCnt) + tdSql.checkFirstValue(sql2, expectCnt) + + # case2 no overloap "or" left + cnt1 = 120 + s1, e1 = self.genTime(4000, cnt1) + cnt2 = 3000 + s2, e2 = self.genTime(10000, cnt2) + sql = f"{pre} where (ts >= {s1} and ts < {e1}) or (ts >= {s2} and ts < {e2})" + expectCnt = (cnt1 + cnt2) * self.childtable_count + tdSql.checkFirstValue(sql, expectCnt) + 
+ # case3 overlap, "or", right + cnt1 = 300 + s1, e1 = self.genTime(17000, cnt1) + cnt2 = 8000 + s2, e2 = self.genTime(70000, cnt2) + sql = f"{pre} where (ts > {s1} and ts <= {e1}) or (ts > {s2} and ts <= {e2})" + expectCnt = (cnt1 + cnt2) * self.childtable_count + tdSql.checkFirstValue(sql, expectCnt) + + # case4 overlap, "or" + cnt1 = 1000 + s1, e1 = self.genTime(9000, cnt1) + cnt2 = 1000 + s2, e2 = self.genTime(9000 + 500 , cnt2) + sql = f"{pre} where (ts > {s1} and ts <= {e1}) or (ts > {s2} and ts <= {e2})" + expectCnt = (cnt1 + 500) * self.childtable_count # expect=1500 + tdSql.checkFirstValue(sql, expectCnt) + + # case5 overlap, "or", boundary hollow->solid + cnt1 = 3000 + s1, e1 = self.genTime(45000, cnt1) + cnt2 = 2000 + s2, e2 = self.genTime(45000 + cnt1 , cnt2) + sql = f"{pre} where (ts > {s1} and ts <= {e1}) or (ts > {s2} and ts <= {e2})" + expectCnt = (cnt1+cnt2) * self.childtable_count + tdSql.checkFirstValue(sql, expectCnt) + + # case6 overlap, "or", boundary solid->solid + cnt1 = 300 + s1, e1 = self.genTime(55000, cnt1) + cnt2 = 500 + s2, e2 = self.genTime(55000 + cnt1 , cnt2) + sql = f"{pre} where (ts >= {s1} and ts <= {e1}) or (ts >= {s2} and ts <= {e2})" + expectCnt = (cnt1+cnt2+1) * self.childtable_count + tdSql.checkFirstValue(sql, expectCnt) + + # case7 overlap, "and" + cnt1 = 1000 + s1, e1 = self.genTime(40000, cnt1) + cnt2 = 1000 + s2, e2 = self.genTime(40000 + 500 , cnt2) + sql = f"{pre} where (ts > {s1} and ts <= {e1}) and (ts > {s2} and ts <= {e2})" + expectCnt = cnt1/2 * self.childtable_count + tdSql.checkFirstValue(sql, expectCnt) + + # case8 overlap, "and", boundary hollow->solid solid->hollow + cnt1 = 3000 + s1, e1 = self.genTime(45000, cnt1) + cnt2 = 2000 + s2, e2 = self.genTime(45000 + cnt1 , cnt2) + sql = f"{pre} where (ts > {s1} and ts <= {e1}) and (ts >= {s2} and ts < {e2})" + expectCnt = 1 * self.childtable_count + tdSql.checkFirstValue(sql, expectCnt) + + # case9 no overlap, "and" + cnt1 = 6000 + s1, e1 = self.genTime(20000, cnt1) + cnt2 = 300 + s2, e2 = self.genTime(70000, cnt2) + sql = f"{pre} where (ts > {s1} and ts <= {e1}) and (ts >= {s2} and ts <= {e2})" + expectCnt = 0 + tdSql.checkFirstValue(sql, expectCnt) + + # case10 cnt1 contains cnt2, "and" + cnt1 = 5000 + s1, e1 = self.genTime(25000, cnt1) + cnt2 = 400 + s2, e2 = self.genTime(28000, cnt2) + sql = f"{pre} where (ts > {s1} and ts <= {e1}) and (ts >= {s2} and ts < {e2})" + expectCnt = cnt2 * self.childtable_count + tdSql.checkFirstValue(sql, expectCnt) + + + def queryMax(self, colname): + sql = f"select max({colname}) from {self.stb}" + tdSql.query(sql) + return tdSql.getData(0, 0) + + + def checkMax(self): + # max for tsdbRetrieveDatablockSMA2 coverage + colname = "ui" + max = self.queryMax(colname) + + # insert over max + sql = f"insert into d0(ts, {colname}) values" + for i in range(1, 5): + sql += f" (now + {i}s, {max+i})" + tdSql.execute(sql) + self.flushDb() + + expectMax = max + 4 + for i in range(1, 5): + realMax = self.queryMax(colname) + if realMax != expectMax: + tdLog.exit(f"Max value not expected. 
expect:{expectMax} real:{realMax}") + + # query ts list + sql = f"select ts from d0 where ui={expectMax}" + tdSql.query(sql) + tss = tdSql.getColData(0) + for ts in tss: + # delete + sql = f"delete from d0 where ts = '{ts}'" + tdSql.execute(sql) + expectMax -= 1 + + self.checkInsertCorrect() def doQuery(self): tdLog.info(f"do query.") - - # __group_key - sql = f"select count(*),_group_key(uti),uti from {self.stb} partition by uti;" - tdSql.execute(sql) - tdSql.checkRows(251) + self.doWindowQuery() - sql = f"select count(*),_group_key(usi) from {self.stb} group by usi;" - tdSql.execute(sql) - tdSql.checkRows(997) + # max + self.checkMax() + + # __group_key + sql = f"select count(*),_group_key(uti),uti from {self.stb} partition by uti" + tdSql.query(sql) + # values in column index 1 must equal those in column index 2 + tdSql.checkSameColumn(1, 2) + + sql = f"select count(*),_group_key(usi),usi from {self.stb} group by usi limit 100;" + tdSql.query(sql) + tdSql.checkSameColumn(1, 2) # tail sql1 = "select ts,ui from d0 order by ts desc limit 5 offset 2;" @@ -75,6 +223,20 @@ class TDTestCase(TBase): sql2 = "select bi from stb where bi is not null order by bi desc limit 10;" self.checkSameResult(sql1, sql2) + # expected values for "show table distributed" + expects = { + "Block_Rows" : 6*100000, + "Total_Tables" : 6, + "Total_Vgroups" : 3 + } + self.waitTransactionZero() + reals = self.getDistributed(self.stb) + for k in expects.keys(): + v = expects[k] + if int(reals[k]) != v: + tdLog.exit(f"distribute {k} expect: {v} real: {reals[k]}") + + # run def run(self): tdLog.debug(f"start to execute {__file__}") diff --git a/tests/army/frame/autogen.py b/tests/army/frame/autogen.py index 1e041f633d..d1f02e7865 100644 --- a/tests/army/frame/autogen.py +++ b/tests/army/frame/autogen.py @@ -162,14 +162,19 @@ class AutoGen: tdLog.info(f" insert data i={i}") values = "" - tdLog.info(f" insert child data {child_name} finished, insert rows={cnt}") + tdLog.info(f" insert child data {child_name} finished, insert rows={cnt}") + return ts - # insert data - def insert_data(self, cnt): + def insert_data(self, cnt, bContinue=False): + if not bContinue: + self.ts = 1600000000000 + + currTs = 1600000000000 for i in range(self.child_cnt): name = f"{self.child_name}{i}" - self.insert_data_child(name, cnt, self.batch_size, 1) + currTs = self.insert_data_child(name, cnt, self.batch_size, 1) + self.ts = currTs tdLog.info(f" insert data ok, child table={self.child_cnt} insert rows={cnt}") # insert the same timestamp into all child tables diff --git a/tests/army/frame/caseBase.py b/tests/army/frame/caseBase.py index fcfcb7d066..2959cf54a1 100644 --- a/tests/army/frame/caseBase.py +++ b/tests/army/frame/caseBase.py @@ -29,7 +29,7 @@ class TBase: # # init - def init(self, conn, logSql, replicaVar=1): + def init(self, conn, logSql, replicaVar=1, db="db", stb="stb", checkColName="ic"): # save param self.replicaVar = int(replicaVar) tdSql.init(conn.cursor(), True) @@ -41,14 +41,14 @@ class TBase: self.mLevelDisk = 0 # test case information - self.db = "db" - self.stb = "stb" + self.db = db + self.stb = stb # sql - self.sqlSum = f"select sum(ic) from {self.stb}" - self.sqlMax = f"select max(ic) from {self.stb}" - self.sqlMin = f"select min(ic) from {self.stb}" - self.sqlAvg = f"select avg(ic) from {self.stb}" + self.sqlSum = f"select sum({checkColName}) from {self.stb}" + self.sqlMax = f"select max({checkColName}) from {self.stb}" + self.sqlMin = f"select min({checkColName}) from {self.stb}" + self.sqlAvg = f"select avg({checkColName}) from {self.stb}" self.sqlFirst = f"select first(ts) 
from {self.stb}" self.sqlLast = f"select last(ts) from {self.stb}" @@ -136,7 +136,7 @@ class TBase: tdSql.checkAgg(sql, self.childtable_count) # check step - sql = f"select * from (select diff(ts) as dif from {self.stb} partition by tbname) where dif != {self.timestamp_step}" + sql = f"select * from (select diff(ts) as dif from {self.stb} partition by tbname order by ts desc) where dif != {self.timestamp_step}" tdSql.query(sql) tdSql.checkRows(0) @@ -193,10 +193,10 @@ class TBase: # sql rows1 = tdSql.query(sql1,queryTimes=2) - res1 = copy.deepcopy(tdSql.queryResult) + res1 = copy.deepcopy(tdSql.res) tdSql.query(sql2,queryTimes=2) - res2 = tdSql.queryResult + res2 = tdSql.res rowlen1 = len(res1) rowlen2 = len(res2) @@ -229,9 +229,9 @@ class TBase: # # get vgroups - def getVGroup(self, db_name): + def getVGroup(self, dbName): vgidList = [] - sql = f"select vgroup_id from information_schema.ins_vgroups where db_name='{db_name}'" + sql = f"select vgroup_id from information_schema.ins_vgroups where db_name='{dbName}'" res = tdSql.getResult(sql) rows = len(res) for i in range(rows): @@ -239,6 +239,29 @@ class TBase: return vgidList + # get distributed rows + def getDistributed(self, tbName): + sql = f"show table distributed {tbName}" + tdSql.query(sql) + dics = {} + i = 0 + for i in range(tdSql.getRows()): + row = tdSql.getData(i, 0) + #print(row) + row = row.replace('[', '').replace(']', '') + #print(row) + items = row.split(' ') + #print(items) + for item in items: + #print(item) + v = item.split('=') + #print(v) + if len(v) == 2: + dics[v[0]] = v[1] + if i > 5: + break + print(dics) + return dics # @@ -269,3 +292,15 @@ class TBase: if len(lists) == 0: tdLog.exit(f"list is empty {tips}") + +# +# str util +# + # covert list to sql format string + def listSql(self, lists, sepa = ","): + strs = "" + for ls in lists: + if strs != "": + strs += sepa + strs += f"'{ls}'" + return strs \ No newline at end of file diff --git a/tests/army/frame/common.py b/tests/army/frame/common.py index 5cdb3f9f46..d11a0dbb4d 100644 --- a/tests/army/frame/common.py +++ b/tests/army/frame/common.py @@ -795,7 +795,7 @@ class TDCom: def getOneRow(self, location, containElm): res_list = list() if 0 <= location < tdSql.queryRows: - for row in tdSql.queryResult: + for row in tdSql.res: if row[location] == containElm: res_list.append(row) return res_list @@ -943,7 +943,7 @@ class TDCom: """drop all streams """ tdSql.query("show streams") - stream_name_list = list(map(lambda x: x[0], tdSql.queryResult)) + stream_name_list = list(map(lambda x: x[0], tdSql.res)) for stream_name in stream_name_list: tdSql.execute(f'drop stream if exists {stream_name};') @@ -962,7 +962,7 @@ class TDCom: """drop all databases """ tdSql.query("show databases;") - db_list = list(map(lambda x: x[0], tdSql.queryResult)) + db_list = list(map(lambda x: x[0], tdSql.res)) for dbname in db_list: if dbname not in self.white_list and "telegraf" not in dbname: tdSql.execute(f'drop database if exists `{dbname}`') @@ -1412,7 +1412,7 @@ class TDCom: input_function (str): scalar """ tdSql.query(sql) - res = tdSql.queryResult + res = tdSql.res if input_function in ["acos", "asin", "atan", "cos", "log", "pow", "sin", "sqrt", "tan"]: tdSql.checkEqual(res[1][1], "DOUBLE") tdSql.checkEqual(res[2][1], "DOUBLE") @@ -1490,7 +1490,7 @@ class TDCom: bigint: bigint-ts """ tdSql.query(f'select cast({str_ts} as bigint)') - return tdSql.queryResult[0][0] + return tdSql.res[0][0] def cast_query_data(self, query_data): """cast query-result for existed-stb @@ -1514,7 +1514,7 @@ 
class TDCom: tdSql.query(f'select cast("{v}" as binary(6))') else: tdSql.query(f'select cast("{v}" as {" ".join(col_tag_type_list[i].strip().split(" ")[1:])})') - query_data_l[i] = tdSql.queryResult[0][0] + query_data_l[i] = tdSql.res[0][0] else: query_data_l[i] = v nl.append(tuple(query_data_l)) @@ -1566,9 +1566,9 @@ class TDCom: if tag_value_list: dvalue = len(self.tag_type_str.split(',')) - defined_tag_count tdSql.query(sql1) - res1 = tdSql.queryResult + res1 = tdSql.res tdSql.query(sql2) - res2 = self.cast_query_data(tdSql.queryResult) if tag_value_list or use_exist_stb else tdSql.queryResult + res2 = self.cast_query_data(tdSql.res) if tag_value_list or use_exist_stb else tdSql.res tdSql.sql = sql1 new_list = list() if tag_value_list: @@ -1601,10 +1601,10 @@ class TDCom: tdLog.info("query retrying ...") new_list = list() tdSql.query(sql1) - res1 = tdSql.queryResult + res1 = tdSql.res tdSql.query(sql2) - # res2 = tdSql.queryResult - res2 = self.cast_query_data(tdSql.queryResult) if tag_value_list or use_exist_stb else tdSql.queryResult + # res2 = tdSql.res + res2 = self.cast_query_data(tdSql.res) if tag_value_list or use_exist_stb else tdSql.res tdSql.sql = sql1 if tag_value_list: @@ -1643,10 +1643,10 @@ class TDCom: tdLog.info("query retrying ...") new_list = list() tdSql.query(sql1) - res1 = tdSql.queryResult + res1 = tdSql.res tdSql.query(sql2) - # res2 = tdSql.queryResult - res2 = self.cast_query_data(tdSql.queryResult) if tag_value_list or use_exist_stb else tdSql.queryResult + # res2 = tdSql.res + res2 = self.cast_query_data(tdSql.res) if tag_value_list or use_exist_stb else tdSql.res tdSql.sql = sql1 if tag_value_list: diff --git a/tests/army/frame/server/cluster.py b/tests/army/frame/server/cluster.py index ade8ac39a2..309ca3bbd4 100644 --- a/tests/army/frame/server/cluster.py +++ b/tests/army/frame/server/cluster.py @@ -13,23 +13,24 @@ from frame.common import * class ClusterDnodes(TDDnodes): """rewrite TDDnodes and make MyDdnodes as TDDnodes child class""" - def __init__(self ,dnodes_lists): - + def __init__(self): super(ClusterDnodes,self).__init__() - self.dnodes = dnodes_lists # dnode must be TDDnode instance self.simDeployed = False self.testCluster = False self.valgrind = 0 self.killValgrind = 1 + def init(self, dnodes_lists, deployPath, masterIp): + self.dnodes = dnodes_lists # dnode must be TDDnode instance + super(ClusterDnodes, self).init(deployPath, masterIp) +clusterDnodes = ClusterDnodes() class ConfigureyCluster: """This will create defined number of dnodes and create a cluster. 
at the same time, it will return TDDnodes list: dnodes, """ hostname = socket.gethostname() - def __init__(self): - self.dnodes = [] + self.dnodes = [] self.dnodeNums = 5 self.independent = True self.startPort = 6030 @@ -86,10 +87,9 @@ class ConfigureyCluster: count=0 while count < 5: tdSql.query("select * from information_schema.ins_dnodes") - # tdLog.debug(tdSql.queryResult) status=0 for i in range(self.dnodeNums): - if tdSql.queryResult[i][4] == "ready": + if tdSql.res[i][4] == "ready": status+=1 # tdLog.debug(status) diff --git a/tests/army/frame/server/dnodes.py b/tests/army/frame/server/dnodes.py index a22e1cbff7..ad8a336857 100644 --- a/tests/army/frame/server/dnodes.py +++ b/tests/army/frame/server/dnodes.py @@ -251,6 +251,11 @@ class TDDnodes: dnodesRootDir = "%s/sim" % (self.path) return dnodesRootDir + def getDnodeDir(self, index): + self.check(index) + dnodesDir = "%s/sim/dnode%d" % (self.path, index) + return dnodesDir + def getSimCfgPath(self): return self.sim.getCfgDir() diff --git a/tests/army/frame/sql.py b/tests/army/frame/sql.py index 5477f96b3d..83f1243167 100644 --- a/tests/army/frame/sql.py +++ b/tests/army/frame/sql.py @@ -78,6 +78,107 @@ class TDSql: self.cursor.execute(s) time.sleep(2) + + # + # do execute + # + + def query(self, sql, row_tag=None, queryTimes=10, count_expected_res=None): + self.sql = sql + i=1 + while i <= queryTimes: + try: + self.cursor.execute(sql) + self.res = self.cursor.fetchall() + self.queryRows = len(self.res) + self.queryCols = len(self.cursor.description) + + if count_expected_res is not None: + counter = 0 + while count_expected_res != self.res[0][0]: + self.cursor.execute(sql) + self.res = self.cursor.fetchall() + if counter < queryTimes: + counter += 0.5 + time.sleep(0.5) + else: + return False + if row_tag: + return self.res + return self.queryRows + except Exception as e: + tdLog.notice("Try to query again, query times: %d "%i) + if i == queryTimes: + caller = inspect.getframeinfo(inspect.stack()[1][0]) + args = (caller.filename, caller.lineno, sql, repr(e)) + tdLog.notice("%s(%d) failed: sql:%s, %s" % args) + raise Exception(repr(e)) + i+=1 + time.sleep(1) + pass + + def executeTimes(self, sql, times): + for i in range(times): + try: + return self.cursor.execute(sql) + except BaseException: + time.sleep(1) + continue + + def execute(self, sql, queryTimes=30, show=False): + self.sql = sql + if show: + tdLog.info(sql) + i=1 + while i <= queryTimes: + try: + self.affectedRows = self.cursor.execute(sql) + return self.affectedRows + except Exception as e: + tdLog.notice("Try to execute sql again, query times: %d "%i) + if i == queryTimes: + caller = inspect.getframeinfo(inspect.stack()[1][0]) + args = (caller.filename, caller.lineno, sql, repr(e)) + tdLog.notice("%s(%d) failed: sql:%s, %s" % args) + raise Exception(repr(e)) + i+=1 + time.sleep(1) + pass + + # execute many sql + def executes(self, sqls, queryTimes=30, show=False): + for sql in sqls: + self.execute(sql, queryTimes, show) + + def waitedQuery(self, sql, expectRows, timeout): + tdLog.info("sql: %s, try to retrieve %d rows in %d seconds" % (sql, expectRows, timeout)) + self.sql = sql + try: + for i in range(timeout): + self.cursor.execute(sql) + self.res = self.cursor.fetchall() + self.queryRows = len(self.res) + self.queryCols = len(self.cursor.description) + tdLog.info("sql: %s, try to retrieve %d rows,get %d rows" % (sql, expectRows, self.queryRows)) + if self.queryRows >= expectRows: + return (self.queryRows, i) + time.sleep(1) + except Exception as e: + caller = 
inspect.getframeinfo(inspect.stack()[1][0]) + args = (caller.filename, caller.lineno, sql, repr(e)) + tdLog.notice("%s(%d) failed: sql:%s, %s" % args) + raise Exception(repr(e)) + return (self.queryRows, timeout) + + def is_err_sql(self, sql): + err_flag = True + try: + self.cursor.execute(sql) + except BaseException: + err_flag = False + + return False if err_flag else True + def error(self, sql, expectedErrno = None, expectErrInfo = None): caller = inspect.getframeinfo(inspect.stack()[1][0]) expectErrNotOccured = True @@ -95,7 +196,7 @@ class TDSql: else: self.queryRows = 0 self.queryCols = 0 - self.queryResult = None + self.res = None if expectedErrno != None: if expectedErrno == self.errno: @@ -115,49 +216,31 @@ class TDSql: return self.error_info - def query(self, sql, row_tag=None, queryTimes=10, count_expected_res=None): + # + # get session + # + + def getData(self, row, col): + self.checkRowCol(row, col) + return self.res[row][col] + + def getColData(self, col): + colDatas = [] + for i in range(self.queryRows): + colDatas.append(self.res[i][col]) + return colDatas + + def getResult(self, sql): self.sql = sql - i=1 - while i <= queryTimes: - try: - self.cursor.execute(sql) - self.queryResult = self.cursor.fetchall() - self.queryRows = len(self.queryResult) - self.queryCols = len(self.cursor.description) - - if count_expected_res is not None: - counter = 0 - while count_expected_res != self.queryResult[0][0]: - self.cursor.execute(sql) - self.queryResult = self.cursor.fetchall() - if counter < queryTimes: - counter += 0.5 - time.sleep(0.5) - else: - return False - if row_tag: - return self.queryResult - return self.queryRows - except Exception as e: - tdLog.notice("Try to query again, query times: %d "%i) - if i == queryTimes: - caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, sql, repr(e)) - tdLog.notice("%s(%d) failed: sql:%s, %s" % args) - raise Exception(repr(e)) - i+=1 - time.sleep(1) - pass - - - def is_err_sql(self, sql): - err_flag = True try: self.cursor.execute(sql) - except BaseException: - err_flag = False - - return False if err_flag else True + self.res = self.cursor.fetchall() + except Exception as e: + caller = inspect.getframeinfo(inspect.stack()[1][0]) + args = (caller.filename, caller.lineno, sql, repr(e)) + tdLog.notice("%s(%d) failed: sql:%s, %s" % args) + raise Exception(repr(e)) + return self.res def getVariable(self, search_attr): ''' @@ -193,29 +276,19 @@ class TDSql: return col_name_list, col_type_list return col_name_list - def waitedQuery(self, sql, expectRows, timeout): - tdLog.info("sql: %s, try to retrieve %d rows in %d seconds" % (sql, expectRows, timeout)) - self.sql = sql - try: - for i in range(timeout): - self.cursor.execute(sql) - self.queryResult = self.cursor.fetchall() - self.queryRows = len(self.queryResult) - self.queryCols = len(self.cursor.description) - tdLog.info("sql: %s, try to retrieve %d rows,get %d rows" % (sql, expectRows, self.queryRows)) - if self.queryRows >= expectRows: - return (self.queryRows, i) - time.sleep(1) - except Exception as e: - caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, sql, repr(e)) - tdLog.notice("%s(%d) failed: sql:%s, %s" % args) - raise Exception(repr(e)) - return (self.queryRows, timeout) - def getRows(self): return self.queryRows + # get first value + def getFirstValue(self, sql) : + self.query(sql) + return self.getData(0, 0) + + + # + # check session + # + def checkRows(self, expectRows): if self.queryRows == 
expectRows: tdLog.info("sql:%s, queryRows:%d == expect:%d" % (self.sql, self.queryRows, expectRows)) @@ -273,26 +346,26 @@ class TDSql: self.checkRowCol(row, col) - if self.queryResult[row][col] != data: + if self.res[row][col] != data: if self.cursor.istype(col, "TIMESTAMP"): # suppose user want to check nanosecond timestamp if a longer data passed`` if isinstance(data,str) : if (len(data) >= 28): - if self.queryResult[row][col] == _parse_ns_timestamp(data): + if self.res[row][col] == _parse_ns_timestamp(data): if(show): tdLog.info("check successfully") else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) else: - if self.queryResult[row][col].astimezone(datetime.timezone.utc) == _parse_datetime(data).astimezone(datetime.timezone.utc): - # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}") + if self.res[row][col].astimezone(datetime.timezone.utc) == _parse_datetime(data).astimezone(datetime.timezone.utc): + # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.res[row][col]} == expect:{data}") if(show): tdLog.info("check successfully") else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) return elif isinstance(data,int): @@ -304,72 +377,72 @@ class TDSql: precision = 'ns' else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) return success = False if precision == 'ms': - dt_obj = self.queryResult[row][col] + dt_obj = self.res[row][col] ts = int(int((dt_obj-datetime.datetime.fromtimestamp(0,dt_obj.tzinfo)).total_seconds())*1000) + int(dt_obj.microsecond/1000) if ts == data: success = True elif precision == 'us': - dt_obj = self.queryResult[row][col] + dt_obj = self.res[row][col] ts = int(int((dt_obj-datetime.datetime.fromtimestamp(0,dt_obj.tzinfo)).total_seconds())*1e6) + int(dt_obj.microsecond) if ts == data: success = True elif precision == 'ns': - if data == self.queryResult[row][col]: + if data == self.res[row][col]: success = True if success: if(show): tdLog.info("check successfully") else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) return elif isinstance(data,datetime.datetime): - dt_obj = self.queryResult[row][col] + dt_obj = self.res[row][col] delt_data = data-datetime.datetime.fromtimestamp(0,data.tzinfo) - delt_result = self.queryResult[row][col] - datetime.datetime.fromtimestamp(0,self.queryResult[row][col].tzinfo) + delt_result = self.res[row][col] - datetime.datetime.fromtimestamp(0,self.res[row][col].tzinfo) if delt_data == delt_result: if(show): tdLog.info("check 
successfully") else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) return else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) - if str(self.queryResult[row][col]) == str(data): - # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}") + if str(self.res[row][col]) == str(data): + # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.res[row][col]} == expect:{data}") if(show): tdLog.info("check successfully") return elif isinstance(data, float): - if abs(data) >= 1 and abs((self.queryResult[row][col] - data) / data) <= 0.000001: - # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}") + if abs(data) >= 1 and abs((self.res[row][col] - data) / data) <= 0.000001: + # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.res[row][col]} == expect:{data}") if(show): tdLog.info("check successfully") - elif abs(data) < 1 and abs(self.queryResult[row][col] - data) <= 0.000001: - # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}") + elif abs(data) < 1 and abs(self.res[row][col] - data) <= 0.000001: + # tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.res[row][col]} == expect:{data}") if(show): tdLog.info("check successfully") else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) return else: caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data) + args = (caller.filename, caller.lineno, self.sql, row, col, self.res[row][col], data) tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args) if(show): tdLog.info("check successfully") @@ -397,23 +470,23 @@ class TDSql: def checkDataNoExit(self, row, col, data): if self.checkRowColNoExit(row, col) == False: return False - if self.queryResult[row][col] != data: + if self.res[row][col] != data: if self.cursor.istype(col, "TIMESTAMP"): # suppose user want to check nanosecond timestamp if a longer data passed if (len(data) >= 28): - if pd.to_datetime(self.queryResult[row][col]) == pd.to_datetime(data): + if pd.to_datetime(self.res[row][col]) == pd.to_datetime(data): return True else: - if self.queryResult[row][col] == _parse_datetime(data): + if self.res[row][col] == _parse_datetime(data): return True return False - if str(self.queryResult[row][col]) == str(data): + if str(self.res[row][col]) == str(data): return True elif isinstance(data, float): - if abs(data) >= 1 and abs((self.queryResult[row][col] - data) / data) <= 0.000001: + if abs(data) >= 1 and abs((self.res[row][col] - data) / data) <= 0.000001: return True - elif abs(data) < 1 and abs(self.queryResult[row][col] - data) <= 
0.000001: + elif abs(data) < 1 and abs(self.res[row][col] - data) <= 0.000001: return True else: return False @@ -437,56 +510,6 @@ class TDSql: self.query(sql) self.checkData(row, col, data) - - def getData(self, row, col): - self.checkRowCol(row, col) - return self.queryResult[row][col] - - def getResult(self, sql): - self.sql = sql - try: - self.cursor.execute(sql) - self.queryResult = self.cursor.fetchall() - except Exception as e: - caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, sql, repr(e)) - tdLog.notice("%s(%d) failed: sql:%s, %s" % args) - raise Exception(repr(e)) - return self.queryResult - - def executeTimes(self, sql, times): - for i in range(times): - try: - return self.cursor.execute(sql) - except BaseException: - time.sleep(1) - continue - - def execute(self, sql, queryTimes=30, show=False): - self.sql = sql - if show: - tdLog.info(sql) - i=1 - while i <= queryTimes: - try: - self.affectedRows = self.cursor.execute(sql) - return self.affectedRows - except Exception as e: - tdLog.notice("Try to execute sql again, query times: %d "%i) - if i == queryTimes: - caller = inspect.getframeinfo(inspect.stack()[1][0]) - args = (caller.filename, caller.lineno, sql, repr(e)) - tdLog.notice("%s(%d) failed: sql:%s, %s" % args) - raise Exception(repr(e)) - i+=1 - time.sleep(1) - pass - - # execute many sql - def executes(self, sqls, queryTimes=30, show=False): - for sql in sqls: - self.execute(sql, queryTimes, show) - def checkAffectedRows(self, expectAffectedRows): if self.affectedRows != expectAffectedRows: caller = inspect.getframeinfo(inspect.stack()[1][0]) @@ -544,17 +567,22 @@ class TDSql: def checkAgg(self, sql, expectCnt): self.query(sql) self.checkData(0, 0, expectCnt) - - # get first value - def getFirstValue(self, sql) : - self.query(sql) - return self.getData(0, 0) # expect first value def checkFirstValue(self, sql, expect): self.query(sql) self.checkData(0, 0, expect) - + + # colIdx1 value same with colIdx2 + def checkSameColumn(self, c1, c2): + for i in range(self.queryRows): + if self.res[i][c1] != self.res[i][c2]: + tdLog.exit(f"Not same. row={i} col1={c1} col2={c2}. {self.res[i][c1]}!={self.res[i][c2]}") + tdLog.info(f"check {self.queryRows} rows two column value same. 
column index [{c1},{c2}]") + + # + # others session + # def get_times(self, time_str, precision="ms"): caller = inspect.getframeinfo(inspect.stack()[1][0]) diff --git a/tests/army/test.py b/tests/army/test.py index a3e28b772d..5ff1c7bdf5 100644 --- a/tests/army/test.py +++ b/tests/army/test.py @@ -405,15 +405,15 @@ if __name__ == "__main__": else : tdLog.debug("create an cluster with %s nodes and make %s dnode as independent mnode"%(dnodeNums,mnodeNums)) dnodeslist = cluster.configure_cluster(dnodeNums=dnodeNums, mnodeNums=mnodeNums, independentMnode=independentMnode, level=level, disk=disk) - tdDnodes = ClusterDnodes(dnodeslist) - tdDnodes.init(deployPath, masterIp) - tdDnodes.setTestCluster(testCluster) - tdDnodes.setValgrind(valgrind) - tdDnodes.stopAll() - for dnode in tdDnodes.dnodes: - tdDnodes.deploy(dnode.index, updateCfgDict) - for dnode in tdDnodes.dnodes: - tdDnodes.starttaosd(dnode.index) + clusterDnodes.init(dnodeslist, deployPath, masterIp) + clusterDnodes.setTestCluster(testCluster) + clusterDnodes.setValgrind(valgrind) + clusterDnodes.setAsan(asan) + clusterDnodes.stopAll() + for dnode in clusterDnodes.dnodes: + clusterDnodes.deploy(dnode.index, updateCfgDict) + for dnode in clusterDnodes.dnodes: + clusterDnodes.starttaosd(dnode.index) tdCases.logSql(logSql) if restful or websocket: @@ -560,17 +560,6 @@ if __name__ == "__main__": conn = taosws.connect(f"taosws://root:taosdata@{host}:6041") else: conn = taos.connect(host=f"{host}", config=tdDnodes.getSimCfgPath()) - # tdSql.init(conn.cursor()) - # tdSql.execute("create qnode on dnode 1") - # tdSql.execute('alter local "queryPolicy" "%d"'%queryPolicy) - # tdSql.query("show local variables;") - # for i in range(tdSql.queryRows): - # if tdSql.queryResult[i][0] == "queryPolicy" : - # if int(tdSql.queryResult[i][1]) == int(queryPolicy): - # tdLog.info('alter queryPolicy to %d successfully'%queryPolicy) - # else : - # tdLog.debug(tdSql.queryResult) - # tdLog.exit("alter queryPolicy to %d failed"%queryPolicy) cursor = conn.cursor() cursor.execute("create qnode on dnode 1") @@ -591,16 +580,15 @@ if __name__ == "__main__": print(independentMnode,"independentMnode valuse") # create dnode list dnodeslist = cluster.configure_cluster(dnodeNums=dnodeNums, mnodeNums=mnodeNums, independentMnode=independentMnode, level=level, disk=disk) - tdDnodes = ClusterDnodes(dnodeslist) - tdDnodes.init(deployPath, masterIp) - tdDnodes.setTestCluster(testCluster) - tdDnodes.setValgrind(valgrind) - tdDnodes.setAsan(asan) - tdDnodes.stopAll() - for dnode in tdDnodes.dnodes: - tdDnodes.deploy(dnode.index,updateCfgDict) - for dnode in tdDnodes.dnodes: - tdDnodes.starttaosd(dnode.index) + clusterDnodes.init(dnodeslist, deployPath, masterIp) + clusterDnodes.setTestCluster(testCluster) + clusterDnodes.setValgrind(valgrind) + clusterDnodes.setAsan(asan) + clusterDnodes.stopAll() + for dnode in clusterDnodes.dnodes: + clusterDnodes.deploy(dnode.index,updateCfgDict) + for dnode in clusterDnodes.dnodes: + clusterDnodes.starttaosd(dnode.index) tdCases.logSql(logSql) if restful or websocket: diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task index 1c2f5ec23c..891a83568a 100644 --- a/tests/parallel_test/cases.task +++ b/tests/parallel_test/cases.task @@ -3,7 +3,13 @@ #NA,NA,y or n,script,./test.sh -f tsim/user/basic.sim #unit-test + +archOs=$(arch) +if [[ $archOs =~ "aarch64" ]]; then +,,n,unit-test,bash test.sh +else ,,y,unit-test,bash test.sh +fi # # army-test @@ -11,6 +17,8 @@ ,,y,army,./pytest.sh python3 ./test.py -f 
enterprise/multi-level/mlevel_basic.py -N 3 -L 3 -D 2 ,,y,army,./pytest.sh python3 ./test.py -f enterprise/s3/s3_basic.py -L 3 -D 1 ,,y,army,./pytest.sh python3 ./test.py -f community/cluster/snapshot.py -N 3 -L 3 -D 2 +,,y,army,./pytest.sh python3 ./test.py -f community/cluster/incSnapshot.py -N 3 -L 3 -D 2 +,,y,army,./pytest.sh python3 ./test.py -f community/query/query_basic.py -N 3 ,,n,army,python3 ./test.py -f community/cmdline/fullopt.py @@ -36,6 +44,7 @@ #,,n,system-test,python3 ./test.py -f 8-stream/snode_restart.py -N 4 ,,n,system-test,python3 ./test.py -f 8-stream/snode_restart_with_checkpoint.py -N 4 +,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/project_group.py ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbname_vgroup.py ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/count_interval.py ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/compact-col.py @@ -1244,7 +1253,9 @@ e ,,y,script,./test.sh -f tsim/sma/tsmaCreateInsertQuery.sim ,,y,script,./test.sh -f tsim/sma/rsmaCreateInsertQuery.sim ,,y,script,./test.sh -f tsim/sma/rsmaCreateInsertQueryDelete.sim -,,y,script,./test.sh -f tsim/sma/rsmaPersistenceRecovery.sim + +### refactor stream backend, open case after rsma refactored +#,,y,script,./test.sh -f tsim/sma/rsmaPersistenceRecovery.sim ,,y,script,./test.sh -f tsim/sync/vnodesnapshot-rsma-test.sim ,,n,script,./test.sh -f tsim/valgrind/checkError1.sim ,,n,script,./test.sh -f tsim/valgrind/checkError2.sim diff --git a/tests/pytest/util/sql.py b/tests/pytest/util/sql.py index 62af9a1af9..92074161b6 100644 --- a/tests/pytest/util/sql.py +++ b/tests/pytest/util/sql.py @@ -97,7 +97,24 @@ class TDSql: i+=1 time.sleep(1) pass - + + def no_error(self, sql): + caller = inspect.getframeinfo(inspect.stack()[1][0]) + expectErrOccurred = False + + try: + self.cursor.execute(sql) + except BaseException as e: + expectErrOccurred = True + self.errno = e.errno + error_info = repr(e) + self.error_info = ','.join(error_info[error_info.index('(') + 1:-1].split(",")[:-1]).replace("'", "") + + if expectErrOccurred: + tdLog.exit("%s(%d) failed: sql:%s, unexpected error '%s' occurred" % (caller.filename, caller.lineno, sql, self.error_info)) + else: + tdLog.info("sql:%s, check passed, no ErrInfo occurred" % (sql)) + def error(self, sql, expectedErrno = None, expectErrInfo = None, fullMatched = True): caller = inspect.getframeinfo(inspect.stack()[1][0]) expectErrNotOccured = True @@ -126,9 +143,9 @@ class TDSql: if expectErrInfo != None: if expectErrInfo == self.error_info: - tdLog.info("sql:%s, expected expectErrInfo '%s' occured" % (sql, expectErrInfo)) + tdLog.info("sql:%s, expected ErrInfo '%s' occurred" % (sql, expectErrInfo)) else: - tdLog.exit("%s(%d) failed: sql:%s, expectErrInfo '%s' occured, but not expected expectErrInfo '%s'" % (caller.filename, caller.lineno, sql, self.error_info, expectErrInfo)) + tdLog.exit("%s(%d) failed: sql:%s, ErrInfo '%s' occurred, but not expected ErrInfo '%s'" % (caller.filename, caller.lineno, sql, self.error_info, expectErrInfo)) else: if expectedErrno != None: if expectedErrno in self.errno: @@ -138,9 +155,9 @@ class TDSql: if expectErrInfo != None: if expectErrInfo in self.error_info: - tdLog.info("sql:%s, expected expectErrInfo '%s' occured" % (sql, expectErrInfo)) + tdLog.info("sql:%s, expected ErrInfo '%s' occurred" % (sql, expectErrInfo)) else: - tdLog.exit("%s(%d) failed: sql:%s, expectErrInfo %s occured, but not expected expectErrInfo '%s'" % (caller.filename, caller.lineno, sql, self.error_info, expectErrInfo)) + 
tdLog.exit("%s(%d) failed: sql:%s, ErrInfo %s occurred, but not expected ErrInfo '%s'" % (caller.filename, caller.lineno, sql, self.error_info, expectErrInfo)) return self.error_info diff --git a/tests/script/tsim/parser/condition_query.sim b/tests/script/tsim/parser/condition_query.sim index d9bfcb8074..660e91dbfd 100644 --- a/tests/script/tsim/parser/condition_query.sim +++ b/tests/script/tsim/parser/condition_query.sim @@ -1505,7 +1505,7 @@ if $data10 != @21-05-05 18:19:21.000@ then return -1 endi -sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and (a.c1 < 10 or a.c1 > 30) and (b.u1 < 5 or b.u1 > 5) order by ts; +sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and (a.c1 < 10 or a.c1 > 30) and (b.u1 < 5 or b.u1 > 5) order by a.ts; if $rows != 4 then return -1 endi @@ -1521,8 +1521,9 @@ endi if $data30 != @21-05-05 18:19:14.000@ then return -1 endi +sql_error select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and (a.c1 < 10 or a.c1 > 30) and (b.u1 < 5 or b.u1 > 5) order by ts; -sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and a.c1 < 30 and b.u1 > 1 and a.c1 > 10 and b.u1 < 8 and b.u1<>5 order by ts; +sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and a.c1 < 30 and b.u1 > 1 and a.c1 > 10 and b.u1 < 8 and b.u1<>5 order by a.ts; if $rows != 3 then return -1 endi @@ -1535,6 +1536,8 @@ endi if $data20 != @21-05-05 18:19:10.000@ then return -1 endi +sql_error select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and a.c1 < 30 and b.u1 > 1 and a.c1 > 10 and b.u1 < 8 and b.u1<>5 order by ts; +sql select a.ts,a.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and a.c1 < 30 and b.u1 > 1 and a.c1 > 10 and b.u1 < 8 and b.u1<>5 order by ts; sql select * from stb1 where c1 is null and c1 is not null; if $rows != 0 then @@ -2469,7 +2472,7 @@ endi if $data10 != @21-05-05 18:19:05.000@ then return -1 endi -sql select tb1.ts,tb1.*,tb2_1.* from tb1, tb2_1 where tb1.ts=tb2_1.ts and tb1.ts > '2021-05-05 18:19:03.000' and tb2_1.u1 < 5 order by ts; +sql select tb1.ts,tb1.*,tb2_1.* from tb1, tb2_1 where tb1.ts=tb2_1.ts and tb1.ts > '2021-05-05 18:19:03.000' and tb2_1.u1 < 5 order by tb1.ts; if $rows != 2 then return -1 endi @@ -2480,7 +2483,7 @@ if $data10 != @21-05-05 18:19:06.000@ then return -1 endi -sql select tb1.ts,tb1.*,tb2_1.* from tb1, tb2_1 where tb1.ts=tb2_1.ts and tb1.ts >= '2021-05-05 18:19:03.000' and tb1.c7=false and tb2_1.u3>4 order by ts; +sql select tb1.ts,tb1.*,tb2_1.* from tb1, tb2_1 where tb1.ts=tb2_1.ts and tb1.ts >= '2021-05-05 18:19:03.000' and tb1.c7=false and tb2_1.u3>4 order by tb1.ts; if $rows != 2 then return -1 endi @@ -2491,7 +2494,7 @@ if $data10 != @21-05-05 18:19:07.000@ then return -1 endi -sql select stb1.ts,stb1.c1,stb1.t1,stb2.ts,stb2.u1,stb2.t4 from stb1, stb2 where stb1.ts=stb2.ts and stb1.t1 = stb2.t4 order by ts; +sql select stb1.ts,stb1.c1,stb1.t1,stb2.ts,stb2.u1,stb2.t4 from stb1, stb2 where stb1.ts=stb2.ts and stb1.t1 = stb2.t4 order by stb1.ts; if $rows != 9 then return -1 endi @@ -2523,7 +2526,7 @@ if $data80 != @21-05-05 18:19:11.000@ then return -1 endi -sql select stb1.ts,stb1.c1,stb1.t1,stb2.ts,stb2.u1,stb2.t4 from stb1, stb2 where stb1.ts=stb2.ts and stb1.t1 = stb2.t4 and stb1.c1 > 2 and stb2.u1 <=4 order by 
ts; +sql select stb1.ts,stb1.c1,stb1.t1,stb2.ts,stb2.u1,stb2.t4 from stb1, stb2 where stb1.ts=stb2.ts and stb1.t1 = stb2.t4 and stb1.c1 > 2 and stb2.u1 <=4 order by stb1.ts; if $rows != 3 then return -1 endi diff --git a/tests/script/tsim/stream/partitionby1.sim b/tests/script/tsim/stream/partitionby1.sim index d92aecb3a6..24c588d410 100644 --- a/tests/script/tsim/stream/partitionby1.sim +++ b/tests/script/tsim/stream/partitionby1.sim @@ -13,6 +13,8 @@ sql create table ts3 using st tags(3,2,2); sql create table ts4 using st tags(4,2,2); sql create stream stream_t1 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamtST1 as select _wstart, count(*) c1, count(d) c2 , sum(a) c3 , max(b) c4, min(c) c5 from st partition by tbname interval(10s); +sleep 1000 + sql insert into ts1 values(1648791213001,1,12,3,1.0); sql insert into ts2 values(1648791213001,1,12,3,1.0); diff --git a/tests/script/tsim/valgrind/checkError8.sim b/tests/script/tsim/valgrind/checkError8.sim index 2f204768eb..3942765eb7 100644 --- a/tests/script/tsim/valgrind/checkError8.sim +++ b/tests/script/tsim/valgrind/checkError8.sim @@ -129,10 +129,10 @@ sql select a.ts,a.c1,a.c8 from (select * from stb1 where c7=true) a, (select * f sql select * from stb1 where (c6 > 3.0 or c6 < 60) and c6 > 50 and (c6 != 53 or c6 != 63);; sql select ts,c1 from stb1 where (c1 > 60 or c1 < 10 or (c1 > 20 and c1 < 30)) and ts > '2021-05-05 18:19:00.000' and ts < '2021-05-05 18:19:25.000' and c1 != 21 and c1 != 22 order by ts; sql select a.* from (select * from stb1 where c7=true) a, (select * from stb1 where c1 > 30) b where a.ts=b.ts and a.c1 > 50 order by ts;; -sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and (a.c1 < 10 or a.c1 > 30) and (b.u1 < 5 or b.u1 > 5) order by ts;; -sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and a.c1 < 30 and b.u1 > 1 and a.c1 > 10 and b.u1 < 8 and b.u1<>5 order by ts;; -sql select tb1.ts,tb1.*,tb2_1.* from tb1, tb2_1 where tb1.ts=tb2_1.ts and tb1.ts >= '2021-05-05 18:19:03.000' and tb1.c7=false and tb2_1.u3>4 order by ts;; -sql select stb1.ts,stb1.c1,stb1.t1,stb2.ts,stb2.u1,stb2.t4 from stb1, stb2 where stb1.ts=stb2.ts and stb1.t1 = stb2.t4 order by ts;; +sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and (a.c1 < 10 or a.c1 > 30) and (b.u1 < 5 or b.u1 > 5) order by a.ts;; +sql select a.ts,b.ts,a.c1,b.u1,b.u2 from (select * from stb1) a, (select * from stb2) b where a.ts=b.ts and a.c1 < 30 and b.u1 > 1 and a.c1 > 10 and b.u1 < 8 and b.u1<>5 order by a.ts;; +sql select tb1.ts,tb1.*,tb2_1.* from tb1, tb2_1 where tb1.ts=tb2_1.ts and tb1.ts >= '2021-05-05 18:19:03.000' and tb1.c7=false and tb2_1.u3>4 order by tb1.ts;; +sql select stb1.ts,stb1.c1,stb1.t1,stb2.ts,stb2.u1,stb2.t4 from stb1, stb2 where stb1.ts=stb2.ts and stb1.t1 = stb2.t4 order by stb1.ts;; sql select count(*) from stb1 where tbname like 'tb%' or c1 > 0;; sql select * from stb1 where tbname like 'tb%' and (t1=1 or t2=2 or t3=3) and t1 > 2 order by ts;; diff --git a/tests/script/win-test-file b/tests/script/win-test-file index 744415c89f..b9f250927f 100644 --- a/tests/script/win-test-file +++ b/tests/script/win-test-file @@ -280,7 +280,9 @@ ./test.sh -f tsim/sma/tsmaCreateInsertQuery.sim ./test.sh -f tsim/sma/rsmaCreateInsertQuery.sim ./test.sh -f tsim/sma/rsmaCreateInsertQueryDelete.sim -./test.sh -f tsim/sma/rsmaPersistenceRecovery.sim + +### refactor stream backend, open case 
after rsma refactored +#./test.sh -f tsim/sma/rsmaPersistenceRecovery.sim ./test.sh -f tsim/sync/vnodesnapshot-rsma-test.sim ./test.sh -f tsim/valgrind/checkError1.sim ./test.sh -f tsim/valgrind/checkError2.sim diff --git a/tests/system-test/0-others/com_alltypedata.json b/tests/system-test/0-others/com_alltypedata.json new file mode 100644 index 0000000000..0e6d8e3a07 --- /dev/null +++ b/tests/system-test/0-others/com_alltypedata.json @@ -0,0 +1,81 @@ +{ + "filetype": "insert", + "cfgdir": "/etc/taos", + "host": "127.0.0.1", + "port": 6030, + "user": "root", + "password": "taosdata", + "connection_pool_size": 8, + "num_of_records_per_req": 2000, + "thread_count": 2, + "create_table_thread_count": 10, + "result_file": "./insert_res_mix.txt", + "confirm_parameter_prompt": "no", + "insert_interval": 0, + "check_sql": "yes", + "continue_if_fail": "yes", + "databases": [ + { + "dbinfo": { + "name": "curdb", + "drop": "yes", + "vgroups": 2, + "replica": 1, + "precision": "ms", + "stt_trigger": 8, + "minRows": 100, + "maxRows": 4096 + }, + "super_tables": [ + { + "name": "meters", + "child_table_exists": "no", + "childtable_count": 5, + "insert_rows": 100000, + "childtable_prefix": "d", + "insert_mode": "taosc", + "insert_interval": 0, + "timestamp_step": 1000, + "start_timestamp":"2022-09-01 10:00:00", + "disorder_ratio": 60, + "update_ratio": 70, + "delete_ratio": 30, + "disorder_fill_interval": 300, + "update_fill_interval": 25, + "generate_row_rule": 2, + "columns": [ + { "type": "bool", "name": "bc"}, + { "type": "float", "name": "fc", "max": 1, "min": 0 }, + { "type": "double", "name": "dc", "max": 1, "min": 0 }, + { "type": "tinyint", "name": "ti", "max": 100, "min": 0 }, + { "type": "smallint", "name": "si", "max": 100, "min": 0 }, + { "type": "int", "name": "ic", "max": 100, "min": 0 }, + { "type": "bigint", "name": "bi", "max": 100, "min": 0 }, + { "type": "utinyint", "name": "uti", "max": 100, "min": 0 }, + { "type": "usmallint", "name": "usi", "max": 100, "min": 0 }, + { "type": "uint", "name": "ui", "max": 100, "min": 0 }, + { "type": "ubigint", "name": "ubi", "max": 100, "min": 0 }, + { "type": "binary", "name": "bin", "len": 32}, + { "type": "nchar", "name": "nch", "len": 64} + ], + "tags": [ + { + "type": "tinyint", + "name": "groupid", + "max": 10, + "min": 1 + }, + { + "name": "location", + "type": "binary", + "len": 16, + "values": ["San Francisco", "Los Angles", "San Diego", + "San Jose", "Palo Alto", "Campbell", "Mountain View", + "Sunnyvale", "Santa Clara", "Cupertino"] + } + ] + } + ] + } + ] +} \ No newline at end of file diff --git a/tests/system-test/0-others/compatibility.py b/tests/system-test/0-others/compatibility.py index d54c676c0d..c936cf1ae4 100644 --- a/tests/system-test/0-others/compatibility.py +++ b/tests/system-test/0-others/compatibility.py @@ -152,6 +152,13 @@ class TDTestCase: tdLog.info(f" LD_LIBRARY_PATH=/usr/lib taosBenchmark -t {tableNumbers} -n {recordNumbers1} -y ") os.system(f"LD_LIBRARY_PATH=/usr/lib taosBenchmark -t {tableNumbers} -n {recordNumbers1} -y ") os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'flush database test '") + os.system("LD_LIBRARY_PATH=/usr/lib taosBenchmark -f 0-others/com_alltypedata.json -y") + os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'flush database curdb '") + os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'select count(*) from curdb.meters '") + os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'select sum(fc) from curdb.meters '") + os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'select avg(ic) from curdb.meters '") + 
os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'select min(ui) from curdb.meters '") + os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'select max(bi) from curdb.meters '") # os.system(f"LD_LIBRARY_PATH=/usr/lib taos -s 'use test;create stream current_stream into current_stream_output_stb as select _wstart as `start`, _wend as wend, max(current) as max_current from meters where voltage <= 220 interval (5s);' ") # os.system('LD_LIBRARY_PATH=/usr/lib taos -s "use test;create stream power_stream into power_stream_output_stb as select ts, concat_ws(\\".\\", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;" ') diff --git a/tests/system-test/0-others/compatibility_coverage.py b/tests/system-test/0-others/compatibility_coverage.py new file mode 100644 index 0000000000..7a123739f7 --- /dev/null +++ b/tests/system-test/0-others/compatibility_coverage.py @@ -0,0 +1,275 @@ +from urllib.parse import uses_relative +import taos +import sys +import os +import time +import platform +import inspect +from taos.tmq import Consumer + +from pathlib import Path +from util.log import * +from util.sql import * +from util.cases import * +from util.dnodes import * +from util.dnodes import TDDnodes +from util.dnodes import TDDnode +from util.cluster import * +import subprocess + +BASEVERSION = "3.0.2.3" +class TDTestCase: + def caseDescription(self): + f''' + 3.0 data compatibility test + case1: basedata version is {BASEVERSION} + ''' + return + + def init(self, conn, logSql, replicaVar=1): + self.replicaVar = int(replicaVar) + tdLog.debug(f"start to excute {__file__}") + tdSql.init(conn.cursor()) + self.deletedDataSql= '''drop database if exists deldata;create database deldata duration 300 stt_trigger 1; ;use deldata; + create table deldata.stb1 (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp) tags (t1 int); + create table deldata.ct1 using deldata.stb1 tags ( 1 ); + insert into deldata.ct1 values ( now()-0s, 0, 0, 0, 0, 0.0, 0.0, 0, 'binary0', 'nchar0', now()+0a ) ( now()-10s, 1, 11111, 111, 11, 1.11, 11.11, 1, 'binary1', 'nchar1', now()+1a ) ( now()-20s, 2, 22222, 222, 22, 2.22, 22.22, 0, 'binary2', 'nchar2', now()+2a ) ( now()-30s, 3, 33333, 333, 33, 3.33, 33.33, 1, 'binary3', 'nchar3', now()+3a ); + select avg(c1) from deldata.ct1; + delete from deldata.stb1; + flush database deldata; + insert into deldata.ct1 values ( now()-0s, 0, 0, 0, 0, 0.0, 0.0, 0, 'binary0', 'nchar0', now()+0a ) ( now()-10s, 1, 11111, 111, 11, 1.11, 11.11, 1, 'binary1', 'nchar1', now()+1a ) ( now()-20s, 2, 22222, 222, 22, 2.22, 22.22, 0, 'binary2', 'nchar2', now()+2a ) ( now()-30s, 3, 33333, 333, 33, 3.33, 33.33, 1, 'binary3', 'nchar3', now()+3a ); + delete from deldata.ct1; + insert into deldata.ct1 values ( now()-0s, 0, 0, 0, 0, 0.0, 0.0, 0, 'binary0', 'nchar0', now()+0a ); + flush database deldata;''' + def checkProcessPid(self,processName): + i=0 + while i<60: + print(f"wait stop {processName}") + processPid = subprocess.getstatusoutput(f'ps aux|grep {processName} |grep -v "grep"|awk \'{{print $2}}\'')[1] + print(f"times:{i},{processName}-pid:{processPid}") + if(processPid == ""): + break + i += 1 + sleep(1) + else: + print(f'this processName is not stoped in 60s') + + + def getBuildPath(self): + selfPath = os.path.dirname(os.path.realpath(__file__)) + + if ("community" in selfPath): + projPath = selfPath[:selfPath.find("community")] + else: + projPath 
= selfPath[:selfPath.find("tests")] + + self.projPath = projPath + for root, dirs, files in os.walk(projPath): + if ("taosd" in files or "taosd.exe" in files): + rootRealPath = os.path.dirname(os.path.realpath(root)) + if ("packaging" not in rootRealPath): + buildPath = root[:len(root)-len("/build/bin")] + break + return buildPath + + def getCfgPath(self): + buildPath = self.getBuildPath() + selfPath = os.path.dirname(os.path.realpath(__file__)) + + if ("community" in selfPath): + cfgPath = buildPath + "/../sim/dnode1/cfg/" + else: + cfgPath = buildPath + "/../sim/dnode1/cfg/" + + return cfgPath + + def installTaosd(self,bPath,cPath): + # os.system(f"rmtaos && mkdir -p {self.getBuildPath()}/build/lib/temp && mv {self.getBuildPath()}/build/lib/libtaos.so* {self.getBuildPath()}/build/lib/temp/ ") + # os.system(f" mv {bPath}/build {bPath}/build_bak ") + # os.system(f"mv {self.getBuildPath()}/build/lib/libtaos.so {self.getBuildPath()}/build/lib/libtaos.so_bak ") + # os.system(f"mv {self.getBuildPath()}/build/lib/libtaos.so.1 {self.getBuildPath()}/build/lib/libtaos.so.1_bak ") + + packagePath = "/usr/local/src/" + dataPath = cPath + "/../data/" + if platform.system() == "Linux" and platform.machine() == "aarch64": + packageName = "TDengine-server-"+ BASEVERSION + "-Linux-arm64.tar.gz" + else: + packageName = "TDengine-server-"+ BASEVERSION + "-Linux-x64.tar.gz" + packageTPath = packageName.split("-Linux-")[0] + my_file = Path(f"{packagePath}/{packageName}") + if not my_file.exists(): + print(f"{packageName} is not exists") + tdLog.info(f"cd {packagePath} && wget https://www.tdengine.com/assets-download/3.0/{packageName}") + os.system(f"cd {packagePath} && wget https://www.tdengine.com/assets-download/3.0/{packageName}") + else: + print(f"{packageName} has been exists") + os.system(f" cd {packagePath} && tar xvf {packageName} && cd {packageTPath} && ./install.sh -e no " ) + tdDnodes.stop(1) + print(f"start taosd: rm -rf {dataPath}/* && nohup taosd -c {cPath} & ") + os.system(f"rm -rf {dataPath}/* && nohup taosd -c {cPath} & " ) + sleep(5) + + + def buildTaosd(self,bPath): + # os.system(f"mv {bPath}/build_bak {bPath}/build ") + os.system(f" cd {bPath} ") + + def is_list_same_as_ordered_list(self,unordered_list, ordered_list): + sorted_list = sorted(unordered_list) + return sorted_list == ordered_list + + def run(self): + scriptsPath = os.path.dirname(os.path.realpath(__file__)) + distro_id = distro.id() + if distro_id == "alpine": + tdLog.info(f"alpine skip compatibility test") + return True + if platform.system().lower() == 'windows': + tdLog.info(f"Windows skip compatibility test") + return True + bPath = self.getBuildPath() + cPath = self.getCfgPath() + dbname = "test" + stb = f"{dbname}.meters" + os.system("echo 'debugFlag 143' > /etc/taos/taos.cfg ") + tableNumbers=100 + recordNumbers1=100 + recordNumbers2=1000 + + # tdsqlF=tdCom.newTdSql() + # print(tdsqlF) + # tdsqlF.query(f"SELECT SERVER_VERSION();") + # print(tdsqlF.query(f"SELECT SERVER_VERSION();")) + # oldServerVersion=tdsqlF.queryResult[0][0] + # tdLog.info(f"Base server version is {oldServerVersion}") + # tdsqlF.query(f"SELECT CLIENT_VERSION();") + # # the oldClientVersion can't be updated in the same python process,so the version is new compiled verison + # oldClientVersion=tdsqlF.queryResult[0][0] + # tdLog.info(f"Base client version is {oldClientVersion}") + # baseVersion = "3.0.1.8" + + tdLog.printNoPrefix(f"==========step1:prepare and check data in old version-{BASEVERSION}") + os.system(f"rm -rf {cPath}/../data") + 
print(self.projPath) + # this data file is special for coverage test in 192.168.1.96 + os.system("cp -r f{self.projPath}/../comp_testdata/data/ {self.projPath}/sim/dnode1") + tdDnodes.stop(1) + tdDnodes.start(1) + + + tdsql=tdCom.newTdSql() + tdsql.query(f"SELECT SERVER_VERSION();") + nowServerVersion=tdsql.queryResult[0][0] + tdLog.printNoPrefix(f"==========step3:prepare and check data in new version-{nowServerVersion}") + tdsql.query(f"select count(*) from {stb}") + tdsql.checkData(0,0,tableNumbers*recordNumbers1) + # tdsql.query("show streams;") + # os.system(f"taosBenchmark -t {tableNumbers} -n {recordNumbers2} -y ") + # tdsql.query("show streams;") + # tdsql.query(f"select count(*) from {stb}") + # tdsql.checkData(0,0,tableNumbers*recordNumbers2) + + # checkout db4096 + tdsql.query("select count(*) from db4096.stb0") + tdsql.checkData(0,0,50000) + + # checkout deleted data + tdsql.execute("insert into deldata.ct1 values ( now()-0s, 0, 0, 0, 0, 0.0, 0.0, 0, 'binary0', 'nchar0', now()+0a ) ( now()-10s, 1, 11111, 111, 11, 1.11, 11.11, 1, 'binary1', 'nchar1', now()+1a ) ( now()-20s, 2, 22222, 222, 22, 2.22, 22.22, 0, 'binary2', 'nchar2', now()+2a ) ( now()-30s, 3, 33333, 333, 33, 3.33, 33.33, 1, 'binary3', 'nchar3', now()+3a );") + tdsql.execute("flush database deldata;") + tdsql.query("select avg(c1) from deldata.ct1;") + + + tdsql=tdCom.newTdSql() + tdLog.printNoPrefix("==========step4:verify backticks in taos Sql-TD18542") + tdsql.execute("drop database if exists db") + tdsql.execute("create database db") + tdsql.execute("use db") + tdsql.execute("create stable db.stb1 (ts timestamp, c1 int) tags (t1 int);") + tdsql.execute("insert into db.ct1 using db.stb1 TAGS(1) values(now(),11);") + tdsql.error(" insert into `db.ct2` using db.stb1 TAGS(9) values(now(),11);") + tdsql.error(" insert into db.`db.ct2` using db.stb1 TAGS(9) values(now(),11);") + tdsql.execute("insert into `db`.ct3 using db.stb1 TAGS(3) values(now(),13);") + tdsql.query("select * from db.ct3") + tdsql.checkData(0,1,13) + tdsql.execute("insert into db.`ct4` using db.stb1 TAGS(4) values(now(),14);") + tdsql.query("select * from db.ct4") + tdsql.checkData(0,1,14) + + #check retentions + tdsql=tdCom.newTdSql() + tdsql.query("describe information_schema.ins_databases;") + qRows=tdsql.queryRows + comFlag=True + j=0 + while comFlag: + for i in range(qRows) : + if tdsql.queryResult[i][0] == "retentions" : + print("parameters include retentions") + comFlag=False + break + else : + comFlag=True + j=j+1 + if j == qRows: + print("parameters don't include retentions") + caller = inspect.getframeinfo(inspect.stack()[0][0]) + args = (caller.filename, caller.lineno) + tdLog.exit("%s(%d) failed" % args) + + # check stream + tdsql.query("show streams;") + tdsql.checkRows(0) + + #check TS-3131 + tdsql.query("select *,tbname from d0.almlog where mcid='m0103';") + tdsql.checkRows(6) + expectList = [0,3003,20031,20032,20033,30031] + resultList = [] + for i in range(6): + resultList.append(tdsql.queryResult[i][3]) + print(resultList) + if self.is_list_same_as_ordered_list(resultList,expectList): + print("The unordered list is the same as the ordered list.") + else: + tdLog.exit("The unordered list is not the same as the ordered list.") + tdsql.execute("insert into test.d80 values (now+1s, 11, 103, 0.21);") + tdsql.execute("insert into test.d9 values (now+5s, 4.3, 104, 0.4);") + + + # check tmq + conn = taos.connect() + + consumer = Consumer( + { + "group.id": "tg75", + "client.id": "124", + "td.connect.user": "root", + "td.connect.pass": 
"taosdata", + "enable.auto.commit": "true", + "experimental.snapshot.enable": "true", + } + ) + consumer.subscribe(["tmq_test_topic"]) + + while True: + res = consumer.poll(10) + if not res: + break + err = res.error() + if err is not None: + raise err + val = res.value() + + for block in val: + print(block.fetchall()) + tdsql.query("show topics;") + tdsql.checkRows(1) + + + def stop(self): + tdSql.close() + tdLog.success(f"{__file__} successfully executed") + + +tdCases.addLinux(__file__, TDTestCase()) +tdCases.addWindows(__file__, TDTestCase()) diff --git a/tests/system-test/2-query/last_row.py b/tests/system-test/2-query/last_row.py index 6469b3ea54..0744b3bae5 100644 --- a/tests/system-test/2-query/last_row.py +++ b/tests/system-test/2-query/last_row.py @@ -861,11 +861,56 @@ class TDTestCase: self.support_super_table_test() + def initLastRowDelayTest(self, dbname="db"): + tdSql.execute(f"drop database if exists {dbname} ") + create_db_sql = f"create database if not exists {dbname} keep 3650 duration 1000 cachemodel 'NONE' REPLICA 1" + tdSql.execute(create_db_sql) + + time.sleep(3) + tdSql.execute(f"use {dbname}") + tdSql.execute(f'create stable {dbname}.st(ts timestamp, v_int int, v_float float) TAGS (ctname varchar(32))') + + tdSql.execute(f"create table {dbname}.ct1 using {dbname}.st tags('ct1')") + tdSql.execute(f"create table {dbname}.ct2 using {dbname}.st tags('ct2')") + + tdSql.execute(f"insert into {dbname}.st(tbname,ts,v_float, v_int) values('ct1',1630000000000,86,86)") + tdSql.execute(f"insert into {dbname}.st(tbname,ts,v_float, v_int) values('ct1',1630000021255,59,59)") + tdSql.execute(f'flush database {dbname}') + tdSql.execute(f'select last(*) from {dbname}.st') + tdSql.execute(f'select last_row(*) from {dbname}.st') + tdSql.execute(f"insert into {dbname}.st(tbname,ts) values('ct1',1630000091255)") + tdSql.execute(f'flush database {dbname}') + tdSql.execute(f'select last(*) from {dbname}.st') + tdSql.execute(f'select last_row(*) from {dbname}.st') + tdSql.execute(f'alter database {dbname} cachemodel "both"') + tdSql.query(f'select last(*) from {dbname}.st') + tdSql.checkData(0 , 1 , 59) + + tdSql.query(f'select last_row(*) from {dbname}.st') + tdSql.checkData(0 , 1 , None) + tdSql.checkData(0 , 2 , None) + + tdLog.printNoPrefix("========== delay test init success ==============") + + def lastRowDelayTest(self, dbname="db"): + tdLog.printNoPrefix("========== delay test start ==============") + + tdSql.execute(f"use {dbname}") + + tdSql.query(f'select last(*) from {dbname}.st') + tdSql.checkData(0 , 1 , 59) + + tdSql.query(f'select last_row(*) from {dbname}.st') + tdSql.checkData(0 , 1 , None) + tdSql.checkData(0 , 2 , None) + def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring # tdSql.prepare() tdLog.printNoPrefix("==========step1:create table ==============") + self.initLastRowDelayTest("DELAYTEST") + # cache_last 0 self.prepare_datas("'NONE' ") self.prepare_tag_datas("'NONE'") @@ -890,6 +935,8 @@ class TDTestCase: self.insert_datas_and_check_abs(self.tb_nums,self.row_nums,self.time_step,"'BOTH'") self.basic_query() + self.lastRowDelayTest("DELAYTEST") + def stop(self): tdSql.close() diff --git a/tests/system-test/2-query/orderBy.py b/tests/system-test/2-query/orderBy.py index dedc57eab3..1cf1478439 100644 --- a/tests/system-test/2-query/orderBy.py +++ b/tests/system-test/2-query/orderBy.py @@ -278,19 +278,19 @@ class TDTestCase: def queryOrderByAgg(self): - tdSql.query("SELECT COUNT(*) FROM t1 order by COUNT(*)") + tdSql.no_error("SELECT 
COUNT(*) FROM t1 order by COUNT(*)") - tdSql.query("SELECT COUNT(*) FROM t1 order by last(c2)") + tdSql.no_error("SELECT COUNT(*) FROM t1 order by last(c2)") - tdSql.query("SELECT c1 FROM t1 order by last(ts)") + tdSql.no_error("SELECT c1 FROM t1 order by last(ts)") - tdSql.query("SELECT ts FROM t1 order by last(ts)") + tdSql.no_error("SELECT ts FROM t1 order by last(ts)") - tdSql.query("SELECT last(ts), ts, c1 FROM t1 order by 2") + tdSql.no_error("SELECT last(ts), ts, c1 FROM t1 order by 2") - tdSql.query("SELECT ts, last(ts) FROM t1 order by last(ts)") + tdSql.no_error("SELECT ts, last(ts) FROM t1 order by last(ts)") - tdSql.query(f"SELECT * FROM t1 order by last(ts)") + tdSql.no_error(f"SELECT * FROM t1 order by last(ts)") tdSql.query(f"SELECT last(ts) as t2, ts FROM t1 order by 1") tdSql.checkRows(1) @@ -302,6 +302,18 @@ class TDTestCase: tdSql.error(f"SELECT last(ts) as t2, ts FROM t1 order by last(t2)") + def queryOrderByAmbiguousName(self): + tdSql.error(sql="select c1 as name, c2 as name, c3 from t1 order by name", expectErrInfo='ambiguous', + fullMatched=False) + + tdSql.error(sql="select c1, c2 as c1, c3 from t1 order by c1", expectErrInfo='ambiguous', fullMatched=False) + + tdSql.error(sql='select last(ts), last(c1) as name ,last(c2) as name,last(c3) from t1 order by name', + expectErrInfo='ambiguous', fullMatched=False) + + tdSql.no_error("select c1 as name, c2 as c1, c3 from t1 order by c1") + + tdSql.no_error('select c1 as name from (select c1, c2 as name from st) order by name') # run def run(self): @@ -317,6 +329,8 @@ class TDTestCase: # agg self.queryOrderByAgg() + # td-28332 + self.queryOrderByAmbiguousName() # stop def stop(self): diff --git a/tests/system-test/2-query/project_group.py b/tests/system-test/2-query/project_group.py new file mode 100644 index 0000000000..44943e5088 --- /dev/null +++ b/tests/system-test/2-query/project_group.py @@ -0,0 +1,65 @@ +from wsgiref.headers import tspecials +from util.log import * +from util.cases import * +from util.sql import * +import numpy as np + + +class TDTestCase: + def init(self, conn, logSql, replicaVar=1): + self.replicaVar = int(replicaVar) + tdLog.debug("start to execute %s" % __file__) + tdSql.init(conn.cursor()) + + self.rowNum = 10 + self.batchNum = 5 + self.ts = 1537146000000 + + def run(self): + dbname = "db" + tdSql.prepare() + + tdSql.execute(f'''create table sta(ts timestamp, col1 int, col2 bigint) tags(tg1 int, tg2 binary(20))''') + tdSql.execute(f"create table sta1 using sta tags(1, 'a')") + tdSql.execute(f"create table sta2 using sta tags(2, 'b')") + tdSql.execute(f"create table sta3 using sta tags(3, 'c')") + tdSql.execute(f"create table sta4 using sta tags(4, 'a')") + tdSql.execute(f"insert into sta1 values(1537146000001, 11, 110)") + tdSql.execute(f"insert into sta1 values(1537146000002, 12, 120)") + tdSql.execute(f"insert into sta1 values(1537146000003, 13, 130)") + tdSql.execute(f"insert into sta2 values(1537146000001, 21, 210)") + tdSql.execute(f"insert into sta2 values(1537146000002, 22, 220)") + tdSql.execute(f"insert into sta2 values(1537146000003, 23, 230)") + tdSql.execute(f"insert into sta3 values(1537146000001, 31, 310)") + tdSql.execute(f"insert into sta3 values(1537146000002, 32, 320)") + tdSql.execute(f"insert into sta3 values(1537146000003, 33, 330)") + tdSql.execute(f"insert into sta4 values(1537146000001, 41, 410)") + tdSql.execute(f"insert into sta4 values(1537146000002, 42, 420)") + tdSql.execute(f"insert into sta4 values(1537146000003, 43, 430)") + + tdSql.execute(f'''create table 
stb(ts timestamp, col1 int, col2 bigint) tags(tg1 int, tg2 binary(20))''') + tdSql.execute(f"create table stb1 using stb tags(1, 'a')") + tdSql.execute(f"create table stb2 using stb tags(2, 'b')") + tdSql.execute(f"create table stb3 using stb tags(3, 'c')") + tdSql.execute(f"create table stb4 using stb tags(4, 'a')") + tdSql.execute(f"insert into stb1 values(1537146000001, 911, 9110)") + tdSql.execute(f"insert into stb1 values(1537146000002, 912, 9120)") + tdSql.execute(f"insert into stb1 values(1537146000003, 913, 9130)") + tdSql.execute(f"insert into stb2 values(1537146000001, 921, 9210)") + tdSql.execute(f"insert into stb2 values(1537146000002, 922, 9220)") + tdSql.execute(f"insert into stb2 values(1537146000003, 923, 9230)") + tdSql.execute(f"insert into stb3 values(1537146000001, 931, 9310)") + tdSql.execute(f"insert into stb3 values(1537146000002, 932, 9320)") + tdSql.execute(f"insert into stb3 values(1537146000003, 933, 9330)") + tdSql.execute(f"insert into stb4 values(1537146000001, 941, 9410)") + tdSql.execute(f"insert into stb4 values(1537146000002, 942, 9420)") + tdSql.execute(f"insert into stb4 values(1537146000003, 943, 9430)") + + tdSql.query("select * from (select ts, col1 from sta partition by tbname) limit 2"); + tdSql.checkRows(2) + def stop(self): + tdSql.close() + tdLog.success("%s successfully executed" % __file__) + +tdCases.addWindows(__file__, TDTestCase()) +tdCases.addLinux(__file__, TDTestCase()) diff --git a/tests/system-test/7-tmq/tmqCheckData1.py b/tests/system-test/7-tmq/tmqCheckData1.py index 1209c2812c..f443ce5b1e 100644 --- a/tests/system-test/7-tmq/tmqCheckData1.py +++ b/tests/system-test/7-tmq/tmqCheckData1.py @@ -51,7 +51,8 @@ class TDTestCase: tdCom.create_ctable(tdSql, dbname=paraDict["dbName"],stbname=paraDict["stbName"],tag_elm_list=paraDict['tagSchema'],count=paraDict["ctbNum"], default_ctbname_prefix=paraDict['ctbPrefix']) tdLog.info("insert data") tmqCom.insert_data(tdSql,paraDict["dbName"],paraDict["ctbPrefix"],paraDict["ctbNum"],paraDict["rowsPerTbl"],paraDict["batchNum"],paraDict["startTs"]) - + tdSql.execute("insert into ctb0 values(now,1,'');") + tdLog.info("create topics from stb with filter") queryString = "select ts,c1,c2 from %s.%s" %(paraDict['dbName'], paraDict['stbName']) sqlString = "create topic %s as stable %s.%s" %(topicNameList[0], paraDict["dbName"],paraDict["stbName"])
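Editor's note: the orderBy.py hunk above pairs the new `no_error` helper with `error(..., expectErrInfo='ambiguous', fullMatched=False)` to pin down the TD-28332 fix. For writers of new cases, the following is a minimal sketch of that assertion pattern in isolation; it assumes the system-test framework has already initialized `tdSql` (exported by util/sql.py) and that a table `t1` with columns `c1`, `c2`, `c3` exists, as orderBy.py creates earlier in the case.

```python
# Minimal sketch of the assertion pattern from queryOrderByAmbiguousName
# (assumes the framework has initialized tdSql and created table t1).
from util.sql import tdSql

def check_ambiguous_aliases():
    # Two projections share the alias "name", so ORDER BY name must fail;
    # fullMatched=False makes error() accept "ambiguous" as a substring match.
    tdSql.error("select c1 as name, c2 as name, c3 from t1 order by name",
                expectErrInfo="ambiguous", fullMatched=False)

    # An alias that merely shadows another column name is still resolvable,
    # so no_error() asserts the statement executes without raising.
    tdSql.no_error("select c1 as name, c2 as c1, c3 from t1 order by c1")
```

The substring match is the design point worth noting: anchoring the check on "ambiguous" rather than the full message keeps the case stable if the server ever rewords the rest of the error text.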