From 6b14ee71b7331b8ad47df3a30281332b9701077e Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Mon, 10 Apr 2023 19:15:51 +0800 Subject: [PATCH 01/24] docs: TD-23520 update english documents for schemaless --- .../13-schemaless/13-schemaless.md | 36 +++++++++++-------- 1 file changed, 21 insertions(+), 15 deletions(-) diff --git a/docs/en/14-reference/13-schemaless/13-schemaless.md b/docs/en/14-reference/13-schemaless/13-schemaless.md index 3f75364081..888e2fe5d2 100644 --- a/docs/en/14-reference/13-schemaless/13-schemaless.md +++ b/docs/en/14-reference/13-schemaless/13-schemaless.md @@ -3,13 +3,11 @@ title: Schemaless Writing description: This document describes how to use the schemaless write component of TDengine. --- -In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing -will automatically add the required columns to ensure that the data written by the user is stored correctly. +In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly. The schemaless writing method creates super tables and their corresponding subtables. These are completely indistinguishable from the super tables and subtables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and they lack readability. -Tips: -The schemaless write will automatically create a table. You do not need to create a table manually, or an unknown error may occur. +Note: Schemaless writing creates tables automatically. Creating tables manually is not supported with schemaless writing. ## Schemaless Writing Line Protocol @@ -50,8 +48,7 @@ In the schemaless writing data line protocol, each data item in the field_set ne - `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types. -For example, the following data rows write c1 column as 3 (BIGINT), c2 column as false (BOOL), c3 column -as "passit" (BINARY), c4 column as 4 (DOUBLE), and the primary key timestamp as 1626006833639000000 to child table with the t1 label as "3" (NCHAR), the t2 label as "4" (NCHAR), and the t3 label as "t3" (NCHAR) and the super table named `st`. 
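(Editorial illustration, not part of the patch: the sketch below shows how a line such as the one in this example might be submitted through the C client's schemaless API. The server address, credentials, and the pre-existing `test` database are assumptions, not part of this documentation change.)

```c
#include <stdio.h>
#include "taos.h"  // TDengine C client header

int main() {
  // Assumed connection parameters; adjust for your deployment.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 6030);
  if (taos == NULL) return 1;

  char *lines[] = {
      "st,t1=3,t2=4,t3=t3 c1=3i64,c3=\"passit\",c2=false,c4=4f64 1626006833639000000"};
  // InfluxDB line protocol; the example timestamp is in nanoseconds.
  TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_LINE_PROTOCOL,
                                         TSDB_SML_TIMESTAMP_NANO_SECONDS);
  if (taos_errno(res) != 0) printf("insert failed: %s\n", taos_errstr(res));
  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```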
+For example, the following string indicates that the one row of data is written to the st supertable with the t1 tag as "3" (NCHAR), the t2 tag as "4" (NCHAR), and the t3 tag as "t3" (NCHAR); the c1 column is 3 (BIGINT), the c2 column is false (BOOL), the c3 column is "passit" (BINARY), the c4 column is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000. ```json st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000 @@ -69,22 +66,32 @@ Schemaless writes process row data according to the following principles. "measurement,tag_key1=tag_value1,tag_key2=tag_value2" ``` +:::tip Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol. -The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t_" is a fixed prefix that every table generated by this mapping relationship has. +The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t\_" is a fixed prefix that every table generated by this mapping relationship has. +::: You can configure smlChildTableName in taos.cfg to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpul,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored. 2. If the super table obtained by parsing the line protocol does not exist, this super table is created. + + **Important:** Manually creating supertables for schemaless writing is not supported. Schemaless writing creates appropriate supertables automatically. + 3. If the subtable obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the subtable name determined in steps 1 or 2. + 4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental). -5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to - NULL. + +5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL. + 6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data. + 7. Errors encountered throughout the processing will interrupt the writing process and return an error code. -8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur.(smlDataFormat in taos.cfg default to false after version of 3.0.1.3, discarded since 3.0.3.0) + +8. 
It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat to false. Otherwise, data will be written out of order and a database error will occur. + + Note: TDengine 3.0.3.0 and later automatically detect whether order is consistent. This parameter is no longer used. :::tip -All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed -48KB, and the total length of tag value cannot exceed 16KB. See [TDengine SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area. +All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed 48 KB and the total length of a tag value cannot exceed 16 KB. See [TDengine SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area. ::: @@ -114,8 +121,7 @@ In OpenTSDB file and JSON protocol modes, the precision of the timestamp is dete ## Data Model Mapping -This section describes how data in line protocol is mapped to a schema. The data measurement in each line is mapped to a -supertable name. The tag name in tag_set is the tag name in the schema, and the name in field_set is the column name in the schema. The following example shows how data is mapped: +This section describes how data in InfluxDB line protocol is mapped to a schema. The data measurement in each line is mapped to a supertable name. The tag name in tag_set is the tag name in the schema, and the name in field_set is the column name in the schema. The following example shows how data is mapped: ```json st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000 @@ -160,7 +166,7 @@ The preceding data includes a new entry, c6, with type binary(6). When this occu TDengine guarantees the idempotency of data writes. This means that you can repeatedly call the API to perform write operations with bad data. However, TDengine does not guarantee the atomicity of multi-row writes. In a multi-row write, some data may be written successfully and other data unsuccessfully. -##: Error Codes +## Error Codes The TSDB_CODE_TSC_LINE_SYNTAX_ERROR indicates an error in the schemaless writing component. This error occurs when writing text. For other errors, schemaless writing uses the standard TDengine error codes From 5b914088a8fb56c678675107a5575aa03835b5bb Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Mon, 10 Apr 2023 22:39:51 +0800 Subject: [PATCH 02/24] docs: fix incorrect deletion and new lines --- docs/en/14-reference/13-schemaless/13-schemaless.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/docs/en/14-reference/13-schemaless/13-schemaless.md b/docs/en/14-reference/13-schemaless/13-schemaless.md index 888e2fe5d2..aad0e63a42 100644 --- a/docs/en/14-reference/13-schemaless/13-schemaless.md +++ b/docs/en/14-reference/13-schemaless/13-schemaless.md @@ -70,10 +70,10 @@ Schemaless writes process row data according to the following principles. Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol. 
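(Editorial illustration, not part of the patch: because tag keys are sorted into ascending order before the table name is derived, the two lines below, which differ only in tag order, normalize to the same key string "st,t1=3,t2=4" and therefore map to the same subtable.)

```json
st,t1=3,t2=4 c1=3i64 1626006833639000000
st,t2=4,t1=3 c1=3i64 1626006833639000000
```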
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t\_" is a fixed prefix that every table generated by this mapping relationship has. ::: + You can configure smlChildTableName in taos.cfg to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpul,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored. 2. If the super table obtained by parsing the line protocol does not exist, this super table is created. - **Important:** Manually creating supertables for schemaless writing is not supported. Schemaless writing creates appropriate supertables automatically. 3. If the subtable obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the subtable name determined in steps 1 or 2. @@ -86,13 +86,11 @@ You can configure smlChildTableName in taos.cfg to specify table names, for exam 7. Errors encountered throughout the processing will interrupt the writing process and return an error code. -8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat to false. Otherwise, data will be written out of order and a database error will occur. - +8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur. Note: TDengine 3.0.3.0 and later automatically detect whether order is consistent. This parameter is no longer used. :::tip All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed 48 KB and the total length of a tag value cannot exceed 16 KB. See [TDengine SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area. - ::: ## Time resolution recognition From e2b5d27c2a7ec71109c2f90b66d58afc613f3df2 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Tue, 11 Apr 2023 10:22:47 +0800 Subject: [PATCH 03/24] enh: stmt ignore get fields errors --- source/client/inc/clientStmt.h | 9 +++++++++ source/client/src/clientStmt.c | 30 ++++++++++++++++++------------ 2 files changed, 27 insertions(+), 12 deletions(-) diff --git a/source/client/inc/clientStmt.h b/source/client/inc/clientStmt.h index 0c9696f1c8..cbef80b6da 100644 --- a/source/client/inc/clientStmt.h +++ b/source/client/inc/clientStmt.h @@ -144,6 +144,15 @@ extern char *gStmtStatusStr[]; goto _return; \ } \ } while (0) +#define STMT_ERRI_JRET(c) \ + do { \ + code = c; \ + if (code != TSDB_CODE_SUCCESS) { \ + terrno = code; \ + goto _return; \ + } \ + } while (0) + #define STMT_ELOG(param, ...) qError("stmt:%p " param, pStmt, __VA_ARGS__) #define STMT_DLOG(param, ...) 
qDebug("stmt:%p " param, pStmt, __VA_ARGS__) diff --git a/source/client/src/clientStmt.c b/source/client/src/clientStmt.c index 71a41c68e7..ac60a069eb 100644 --- a/source/client/src/clientStmt.c +++ b/source/client/src/clientStmt.c @@ -975,15 +975,16 @@ int stmtIsInsert(TAOS_STMT* stmt, int* insert) { } int stmtGetTagFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { + int32_t code = 0; STscStmt* pStmt = (STscStmt*)stmt; STMT_DLOG_E("start to get tag fields"); if (STMT_TYPE_QUERY == pStmt->sql.type) { - STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR); + STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR); } - STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { @@ -995,27 +996,30 @@ int stmtGetTagFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { pStmt->exec.pRequest = NULL; } - STMT_ERR_RET(stmtCreateRequest(pStmt)); + STMT_ERRI_JRET(stmtCreateRequest(pStmt)); if (pStmt->bInfo.needParse) { - STMT_ERR_RET(stmtParseSql(pStmt)); + STMT_ERRI_JRET(stmtParseSql(pStmt)); } - STMT_ERR_RET(stmtFetchTagFields(stmt, nums, fields)); + STMT_ERRI_JRET(stmtFetchTagFields(stmt, nums, fields)); - return TSDB_CODE_SUCCESS; +_return: + + return code; } int stmtGetColFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { + int32_t code = 0; STscStmt* pStmt = (STscStmt*)stmt; STMT_DLOG_E("start to get col fields"); if (STMT_TYPE_QUERY == pStmt->sql.type) { - STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR); + STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR); } - STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { @@ -1027,15 +1031,17 @@ int stmtGetColFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { pStmt->exec.pRequest = NULL; } - STMT_ERR_RET(stmtCreateRequest(pStmt)); + STMT_ERRI_JRET(stmtCreateRequest(pStmt)); if (pStmt->bInfo.needParse) { - STMT_ERR_RET(stmtParseSql(pStmt)); + STMT_ERRI_JRET(stmtParseSql(pStmt)); } - STMT_ERR_RET(stmtFetchColFields(stmt, nums, fields)); + STMT_ERRI_JRET(stmtFetchColFields(stmt, nums, fields)); - return TSDB_CODE_SUCCESS; +_return: + + return code; } int stmtGetParamNum(TAOS_STMT* stmt, int* nums) { From 8060e77be35e4b9e281e057a9d32a94aecff6078 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 11 Apr 2023 10:41:35 +0800 Subject: [PATCH 04/24] fix(udf): make strncat working for release building --- source/libs/function/src/udfd.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/source/libs/function/src/udfd.c b/source/libs/function/src/udfd.c index c368788243..e444a974f9 100644 --- a/source/libs/function/src/udfd.c +++ b/source/libs/function/src/udfd.c @@ -571,13 +571,13 @@ int32_t udfdLoadUdf(char *udfName, SUdf *udf) { char initFuncName[TSDB_FUNC_NAME_LEN + 5] = {0}; char *initSuffix = "_init"; strcpy(initFuncName, udfName); - strncat(initFuncName, initSuffix, strlen(initSuffix)); + strncat(initFuncName, initSuffix, TSDB_FUNC_NAME_LEN + 5 - strlen(initFuncName) - 1); uv_dlsym(&udf->lib, initFuncName, (void **)(&udf->initFunc)); char destroyFuncName[TSDB_FUNC_NAME_LEN + 5] = {0}; char *destroySuffix = "_destroy"; strcpy(destroyFuncName, udfName); - strncat(destroyFuncName, destroySuffix, strlen(destroySuffix)); + strncat(destroyFuncName, destroySuffix, 
TSDB_FUNC_NAME_LEN + 5 - strlen(destroyFuncName) - 1); uv_dlsym(&udf->lib, destroyFuncName, (void **)(&udf->destroyFunc)); if (udf->funcType == TSDB_FUNC_TYPE_SCALAR) { @@ -591,17 +591,17 @@ int32_t udfdLoadUdf(char *udfName, SUdf *udf) { char startFuncName[TSDB_FUNC_NAME_LEN + 6] = {0}; char *startSuffix = "_start"; strncpy(startFuncName, processFuncName, sizeof(startFuncName)); - strncat(startFuncName, startSuffix, strlen(startSuffix)); + strncat(startFuncName, startSuffix, TSDB_FUNC_NAME_LEN + 6 - strlen(startSuffix) - 1); uv_dlsym(&udf->lib, startFuncName, (void **)(&udf->aggStartFunc)); char finishFuncName[TSDB_FUNC_NAME_LEN + 7] = {0}; char *finishSuffix = "_finish"; strncpy(finishFuncName, processFuncName, sizeof(finishFuncName)); - strncat(finishFuncName, finishSuffix, strlen(finishSuffix)); + strncat(finishFuncName, finishSuffix, TSDB_FUNC_NAME_LEN + 7 - strlen(finishFuncName) - 1); uv_dlsym(&udf->lib, finishFuncName, (void **)(&udf->aggFinishFunc)); char mergeFuncName[TSDB_FUNC_NAME_LEN + 6] = {0}; char *mergeSuffix = "_merge"; strncpy(mergeFuncName, processFuncName, sizeof(mergeFuncName)); - strncat(mergeFuncName, mergeSuffix, strlen(mergeSuffix)); + strncat(mergeFuncName, mergeSuffix, TSDB_FUNC_NAME_LEN + 6 - strlen(mergeFuncName) - 1); uv_dlsym(&udf->lib, mergeFuncName, (void **)(&udf->aggMergeFunc)); } return 0; From d99a849956536825b71e0530564d3f075b12815e Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Tue, 11 Apr 2023 10:47:15 +0800 Subject: [PATCH 05/24] fix: ignore stmt get fields error --- source/client/src/clientStmt.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/source/client/src/clientStmt.c b/source/client/src/clientStmt.c index ac60a069eb..6e529f1a0b 100644 --- a/source/client/src/clientStmt.c +++ b/source/client/src/clientStmt.c @@ -977,6 +977,7 @@ int stmtIsInsert(TAOS_STMT* stmt, int* insert) { int stmtGetTagFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { int32_t code = 0; STscStmt* pStmt = (STscStmt*)stmt; + int32_t preCode = pStmt->errCode; STMT_DLOG_E("start to get tag fields"); @@ -1006,12 +1007,15 @@ int stmtGetTagFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { _return: + pStmt->errCode = preCode; + return code; } int stmtGetColFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { int32_t code = 0; STscStmt* pStmt = (STscStmt*)stmt; + int32_t preCode = pStmt->errCode; STMT_DLOG_E("start to get col fields"); @@ -1041,6 +1045,8 @@ int stmtGetColFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { _return: + pStmt->errCode = preCode; + return code; } From c40c695ea74b9af6427431481c705fd3631544a4 Mon Sep 17 00:00:00 2001 From: wangmm0220 Date: Tue, 11 Apr 2023 13:49:18 +0800 Subject: [PATCH 06/24] fix:optimize log & change the length of tag if tag is null in schemaless --- source/client/src/clientSml.c | 2 +- source/dnode/mgmt/mgmt_vnode/src/vmWorker.c | 4 ++-- source/dnode/vnode/src/tq/tqRead.c | 3 +-- source/libs/executor/src/executor.c | 7 ++++++- source/libs/executor/src/executorimpl.c | 2 ++ source/libs/wal/src/walRead.c | 10 +--------- 6 files changed, 13 insertions(+), 15 deletions(-) diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 4c28ee76c3..2e040052c3 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -699,7 +699,7 @@ static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SArray *pColumns, pReq.numOfTags = 1; SField field = {0}; field.type = TSDB_DATA_TYPE_NCHAR; - field.bytes = 1; + field.bytes = TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE; 
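/* Editorial note, not part of the patch: TDengine stores variable-length
 * values as a small VARSTR length header followed by the payload, and NCHAR
 * text as 4-byte UCS-4 code points, so the old 1-byte width removed above
 * cannot hold even an empty NCHAR tag value. Reserving
 * TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE is the minimum for one such column. */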
strcpy(field.name, tsSmlTagName); taosArrayPush(pReq.pTags, &field); } diff --git a/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c b/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c index e4e0d608de..756ca008b0 100644 --- a/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c +++ b/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c @@ -114,11 +114,11 @@ static void vmProcessFetchQueue(SQueueInfo *pInfo, STaosQall *qall, int32_t numO int32_t code = vnodeProcessFetchMsg(pVnode->pImpl, pMsg, pInfo); if (code != 0) { if (terrno != 0) code = terrno; - dGError("vgId:%d, msg:%p failed to fetch since %s", pVnode->vgId, pMsg, terrstr(code)); + dGError("vnodeProcessFetchMsg vgId:%d, msg:%p failed to fetch since %s", pVnode->vgId, pMsg, terrstr()); vmSendRsp(pMsg, code); } - dGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->vgId, pMsg, code); + dGTrace("vnodeProcessFetchMsg vgId:%d, msg:%p is freed, code:0x%x", pVnode->vgId, pMsg, code); rpcFreeCont(pMsg->pCont); taosFreeQitem(pMsg); } diff --git a/source/dnode/vnode/src/tq/tqRead.c b/source/dnode/vnode/src/tq/tqRead.c index 54e4e393ec..72b478e6bf 100644 --- a/source/dnode/vnode/src/tq/tqRead.c +++ b/source/dnode/vnode/src/tq/tqRead.c @@ -296,10 +296,9 @@ void tqCloseReader(STqReader* pReader) { int32_t tqSeekVer(STqReader* pReader, int64_t ver, const char* id) { if (walReadSeekVer(pReader->pWalReader, ver) < 0) { - tqDebug("tmq poll: wal reader failed to seek to ver:%"PRId64" code:%s, %s", ver, tstrerror(terrno), id); return -1; } - tqDebug("tmq poll: wal reader seek to ver:%"PRId64" %s", ver, id); + tqDebug("tmq poll: wal reader seek to ver success ver:%"PRId64" %s", ver, id); return 0; } diff --git a/source/libs/executor/src/executor.c b/source/libs/executor/src/executor.c index 1670eb3c59..cd7a76f7ad 100644 --- a/source/libs/executor/src/executor.c +++ b/source/libs/executor/src/executor.c @@ -1062,6 +1062,7 @@ int32_t qStreamSetScanMemData(qTaskInfo_t tinfo, SPackedData submit) { SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo; if((pTaskInfo->execModel != OPTR_EXEC_MODEL_QUEUE) || (pTaskInfo->streamInfo.submit.msgStr != NULL)){ qError("qStreamSetScanMemData err:%d,%p", pTaskInfo->execModel, pTaskInfo->streamInfo.submit.msgStr); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return -1; } qDebug("set the submit block for future scan"); @@ -1102,7 +1103,6 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT // let's seek to the next version in wal file if (tqSeekVer(pInfo->tqReader, pOffset->version + 1, id) < 0) { - qError("tqSeekVer failed ver:%"PRId64", %s", pOffset->version + 1, id); return -1; } } else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) { @@ -1125,6 +1125,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT } else { taosRUnLockLatch(&pTaskInfo->lock); qError("no table in table list, %s", id); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return -1; } } @@ -1143,6 +1144,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT } else { qError("vgId:%d uid:%" PRIu64 " not found in table list, total:%d, index:%d %s", pTaskInfo->id.vgId, uid, numOfTables, pScanInfo->currentTable, id); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return -1; } @@ -1175,6 +1177,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT pScanBaseInfo->cond.twindows.skey = oldSkey; } else { qError("invalid pOffset->type:%d, %s", pOffset->type, id); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return -1; } @@ -1189,6 +1192,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, 
STqOffsetVal* pOffset, int8_t subT if (setForSnapShot(sContext, pOffset->uid) != 0) { qError("setDataForSnapShot error. uid:%" PRId64" , %s", pOffset->uid, id); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return -1; } @@ -1224,6 +1228,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT SSnapContext* sContext = pInfo->sContext; if (setForSnapShot(sContext, pOffset->uid) != 0) { qError("setForSnapShot error. uid:%" PRIu64 " ,version:%" PRId64, pOffset->uid, pOffset->version); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return -1; } qDebug("tmqsnap qStreamPrepareScan snapshot meta uid:%" PRId64 " ts %" PRId64 " %s", pOffset->uid, pOffset->ts, id); diff --git a/source/libs/executor/src/executorimpl.c b/source/libs/executor/src/executorimpl.c index 11753c181c..4e44e0e223 100644 --- a/source/libs/executor/src/executorimpl.c +++ b/source/libs/executor/src/executorimpl.c @@ -999,6 +999,7 @@ int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t* order, int32_t* scan SOperatorInfo* extractOperatorInTree(SOperatorInfo* pOperator, int32_t type, const char* id) { if (pOperator == NULL) { qError("invalid operator, failed to find tableScanOperator %s", id); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return NULL; } @@ -1007,6 +1008,7 @@ SOperatorInfo* extractOperatorInTree(SOperatorInfo* pOperator, int32_t type, con } else { if (pOperator->pDownstream == NULL || pOperator->pDownstream[0] == NULL) { qError("invalid operator, failed to find tableScanOperator %s", id); + terrno = TSDB_CODE_PAR_INTERNAL_ERROR; return NULL; } diff --git a/source/libs/wal/src/walRead.c b/source/libs/wal/src/walRead.c index ad6127ead2..db4e3a4759 100644 --- a/source/libs/wal/src/walRead.c +++ b/source/libs/wal/src/walRead.c @@ -207,17 +207,12 @@ int32_t walReadSeekVer(SWalReader *pReader, int64_t ver) { return 0; } -// pReader->curInvalid = 1; -// pReader->curVersion = ver; - if (ver > pWal->vers.lastVer || ver < pWal->vers.firstVer) { - wDebug("vgId:%d, invalid index:%" PRId64 ", first index:%" PRId64 ", last index:%" PRId64, pReader->pWal->cfg.vgId, + wInfo("vgId:%d, invalid index:%" PRId64 ", first index:%" PRId64 ", last index:%" PRId64, pReader->pWal->cfg.vgId, ver, pWal->vers.firstVer, pWal->vers.lastVer); terrno = TSDB_CODE_WAL_LOG_NOT_EXIST; return -1; } -// if (ver < pWal->vers.snapshotVer) { -// } if (walReadSeekVerImpl(pReader, ver) < 0) { return -1; @@ -236,8 +231,6 @@ static int32_t walFetchHeadNew(SWalReader *pRead, int64_t fetchVer) { if (pRead->curVersion != fetchVer) { if (walReadSeekVer(pRead, fetchVer) < 0) { -// pRead->curVersion = fetchVer; -// pRead->curInvalid = 1; return -1; } seeked = true; @@ -256,7 +249,6 @@ static int32_t walFetchHeadNew(SWalReader *pRead, int64_t fetchVer) { } else { terrno = TSDB_CODE_WAL_FILE_CORRUPTED; } -// pRead->curInvalid = 1; return -1; } } From 42a5a87e964b4d54b0cbfaa7b1bc14f7de4f7ea1 Mon Sep 17 00:00:00 2001 From: Xiaoyu Wang Date: Tue, 11 Apr 2023 15:08:42 +0800 Subject: [PATCH 07/24] fix: table level privilege --- source/dnode/mnode/impl/src/mndUser.c | 31 ++++++++++++++++++++++++--- source/libs/catalog/src/ctgUtil.c | 3 ++- source/libs/parser/src/parInsertSql.c | 2 +- 3 files changed, 31 insertions(+), 5 deletions(-) diff --git a/source/dnode/mnode/impl/src/mndUser.c b/source/dnode/mnode/impl/src/mndUser.c index 34914d80f0..3a1c4ce58f 100644 --- a/source/dnode/mnode/impl/src/mndUser.c +++ b/source/dnode/mnode/impl/src/mndUser.c @@ -156,7 +156,7 @@ SSdbRaw *mndUserActionEncode(SUserObj *pUser) { size_t valueLen = 0; valueLen = strlen(stb); 
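/* Editorial note, not part of the patch: each table-privilege entry is
 * serialized as a 4-byte length prefix plus the value bytes, so the lines
 * below must reserve sizeof(int32_t) + valueLen. The old code added keyLen,
 * under-reserving whenever a stored value is longer than its hash key. */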
size += sizeof(int32_t); - size += keyLen; + size += valueLen; stb = taosHashIterate(pUser->writeTbs, stb); } @@ -369,7 +369,7 @@ static SSdbRow *mndUserActionDecode(SSdbRaw *pRaw) { int32_t valuelen = 0; SDB_GET_INT32(pRaw, dataPos, &valuelen, _OVER); char *value = taosMemoryCalloc(valuelen, sizeof(char)); - memset(value, 0, keyLen); + memset(value, 0, valuelen); SDB_GET_BINARY(pRaw, dataPos, value, valuelen, _OVER) taosHashPut(pUser->writeTbs, key, keyLen, value, valuelen); @@ -458,6 +458,31 @@ SHashObj *mndDupTableHash(SHashObj *pOld) { return pNew; } +SHashObj *mndDupUseDbHash(SHashObj *pOld) { + SHashObj *pNew = + taosHashInit(taosHashGetSize(pOld), taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK); + if (pNew == NULL) { + terrno = TSDB_CODE_OUT_OF_MEMORY; + return NULL; + } + + int32_t *db = taosHashIterate(pOld, NULL); + while (db != NULL) { + size_t keyLen = 0; + char *key = taosHashGetKey(db, &keyLen); + + if (taosHashPut(pNew, key, keyLen, db, sizeof(*db)) != 0) { + taosHashCancelIterate(pOld, db); + taosHashCleanup(pNew); + terrno = TSDB_CODE_OUT_OF_MEMORY; + return NULL; + } + db = taosHashIterate(pOld, db); + } + + return pNew; +} + static int32_t mndUserDupObj(SUserObj *pUser, SUserObj *pNew) { memcpy(pNew, pUser, sizeof(SUserObj)); pNew->authVersion++; @@ -469,7 +494,7 @@ static int32_t mndUserDupObj(SUserObj *pUser, SUserObj *pNew) { pNew->readTbs = mndDupTableHash(pUser->readTbs); pNew->writeTbs = mndDupTableHash(pUser->writeTbs); pNew->topics = mndDupTopicHash(pUser->topics); - pNew->useDbs = mndDupDbHash(pUser->useDbs); + pNew->useDbs = mndDupUseDbHash(pUser->useDbs); taosRUnLockLatch(&pUser->lock); if (pNew->readDbs == NULL || pNew->writeDbs == NULL || pNew->topics == NULL) { diff --git a/source/libs/catalog/src/ctgUtil.c b/source/libs/catalog/src/ctgUtil.c index d02f94c939..f7c7522a88 100644 --- a/source/libs/catalog/src/ctgUtil.c +++ b/source/libs/catalog/src/ctgUtil.c @@ -1457,7 +1457,8 @@ int32_t ctgChkSetAuthRes(SCatalog* pCtg, SCtgAuthReq* req, SCtgAuthRsp* res) { } case AUTH_TYPE_READ_OR_WRITE: { if ((pInfo->readDbs && taosHashGet(pInfo->readDbs, dbFName, strlen(dbFName))) || - (pInfo->writeDbs && taosHashGet(pInfo->writeDbs, dbFName, strlen(dbFName)))) { + (pInfo->writeDbs && taosHashGet(pInfo->writeDbs, dbFName, strlen(dbFName))) || + (pInfo->useDbs && taosHashGet(pInfo->useDbs, dbFName, strlen(dbFName)))) { pRes->pass = true; return TSDB_CODE_SUCCESS; } diff --git a/source/libs/parser/src/parInsertSql.c b/source/libs/parser/src/parInsertSql.c index d734479b98..83528288fc 100644 --- a/source/libs/parser/src/parInsertSql.c +++ b/source/libs/parser/src/parInsertSql.c @@ -2022,7 +2022,7 @@ static int32_t buildInsertUserAuthReq(const char* pUser, SName* pName, SArray** SUserAuthInfo userAuth = {.type = AUTH_TYPE_WRITE}; snprintf(userAuth.user, sizeof(userAuth.user), "%s", pUser); - // tNameGetFullDbName(pName, userAuth.dbFName); + memcpy(&userAuth.tbName, pName, sizeof(SName)); taosArrayPush(*pUserAuth, &userAuth); return TSDB_CODE_SUCCESS; From 171647bdbd5730b73097e45ba4ed5a0876f22884 Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Tue, 11 Apr 2023 15:46:36 +0800 Subject: [PATCH 08/24] fix(shell): update the double display. 
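(Editorial note, not part of the patch: this change replaces fixed-point formatting of FLOAT/DOUBLE values with scientific notation. A minimal hedged sketch with arbitrary sample values shows why: fixed-point width grows with the magnitude of the value, while %e output keeps a predictable width.)

```c
#include <stdio.h>

int main() {
  double big = 1.0e300, tiny = 1.0e-300;
  printf("%.9f\n", big);    // fixed-point: ~300 integer digits, overflows any column
  printf("%.15e\n", big);   // scientific: predictable width, 1.000000000000000e+300
  printf("%.15e\n", tiny);  // 1.000000000000000e-300
  return 0;
}
```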
--- tools/shell/src/shellEngine.c | 22 ++++++---------------- 1 file changed, 6 insertions(+), 16 deletions(-) diff --git a/tools/shell/src/shellEngine.c b/tools/shell/src/shellEngine.c index f9602aaa4d..4798151d38 100644 --- a/tools/shell/src/shellEngine.c +++ b/tools/shell/src/shellEngine.c @@ -315,7 +315,6 @@ void shellDumpFieldToFile(TdFilePtr pFile, const char *val, TAOS_FIELD *field, i quotationStr[0] = '\"'; quotationStr[1] = 0; - int n; char buf[TSDB_MAX_BYTES_PER_ROW]; switch (field->type) { case TSDB_DATA_TYPE_BOOL: @@ -346,15 +345,11 @@ void shellDumpFieldToFile(TdFilePtr pFile, const char *val, TAOS_FIELD *field, i taosFprintfFile(pFile, "%" PRIu64, *((uint64_t *)val)); break; case TSDB_DATA_TYPE_FLOAT: - taosFprintfFile(pFile, "%.5f", GET_FLOAT_VAL(val)); + taosFprintfFile(pFile, "%e", GET_FLOAT_VAL(val)); break; case TSDB_DATA_TYPE_DOUBLE: - n = snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%*.9f", length, GET_DOUBLE_VAL(val)); - if (n > TMAX(25, length)) { - taosFprintfFile(pFile, "%*.15e", length, GET_DOUBLE_VAL(val)); - } else { - taosFprintfFile(pFile, "%s", buf); - } + snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%*.15e", 23, GET_DOUBLE_VAL(val)); + taosFprintfFile(pFile, "%s", buf); break; case TSDB_DATA_TYPE_BINARY: case TSDB_DATA_TYPE_NCHAR: @@ -510,7 +505,6 @@ void shellPrintField(const char *val, TAOS_FIELD *field, int32_t width, int32_t return; } - int n; char buf[TSDB_MAX_BYTES_PER_ROW]; switch (field->type) { case TSDB_DATA_TYPE_BOOL: @@ -541,15 +535,11 @@ void shellPrintField(const char *val, TAOS_FIELD *field, int32_t width, int32_t printf("%*" PRIu64, width, *((uint64_t *)val)); break; case TSDB_DATA_TYPE_FLOAT: - printf("%*ef", width, GET_FLOAT_VAL(val)); + printf("%*e", width, GET_FLOAT_VAL(val)); break; case TSDB_DATA_TYPE_DOUBLE: - n = snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%*.9f", width, GET_DOUBLE_VAL(val)); - if (n > TMAX(25, width)) { - printf("%*.15e", width, GET_DOUBLE_VAL(val)); - } else { - printf("%s", buf); - } + snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%.15e", GET_DOUBLE_VAL(val)); + printf("%*s", width, buf); break; case TSDB_DATA_TYPE_BINARY: case TSDB_DATA_TYPE_NCHAR: From 20013b3f77cb4af8da5c7703d0ac3409a2a6b09b Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Tue, 11 Apr 2023 17:26:05 +0800 Subject: [PATCH 09/24] fix(shell): update the display of double value. 
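(Editorial note, not part of the patch: with this change the value is formatted into a buffer first and then right-aligned, rather than embedding the column width in the conversion itself. A hedged sketch of the same snprintf + "%*s" pattern:)

```c
#include <stdio.h>

int main() {
  char buf[64];
  double v = 3.141592653589793;
  snprintf(buf, sizeof(buf), "%.9e", v);  // buf = "3.141592654e+00"
  printf("[%*s]\n", 20, buf);             // right-aligned in a 20-character column
  return 0;
}
```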
--- tools/shell/src/shellEngine.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/shell/src/shellEngine.c b/tools/shell/src/shellEngine.c index 4798151d38..11758c76dc 100644 --- a/tools/shell/src/shellEngine.c +++ b/tools/shell/src/shellEngine.c @@ -538,7 +538,7 @@ void shellPrintField(const char *val, TAOS_FIELD *field, int32_t width, int32_t printf("%*e", width, GET_FLOAT_VAL(val)); break; case TSDB_DATA_TYPE_DOUBLE: - snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%.15e", GET_DOUBLE_VAL(val)); + snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%.9e", GET_DOUBLE_VAL(val)); printf("%*s", width, buf); break; case TSDB_DATA_TYPE_BINARY: From bf45ff56cafb2d21cfdf7cbbb91af9e15fa48023 Mon Sep 17 00:00:00 2001 From: wangmm0220 Date: Tue, 11 Apr 2023 18:27:05 +0800 Subject: [PATCH 10/24] fix:modify fileContent for data compare --- tests/system-test/7-tmq/tmq3mnodeSwitch.py | 30 +------------------- tests/system-test/7-tmq/tmqCheckData.py | 33 ++-------------------- tests/system-test/7-tmq/tmqCommon.py | 22 +++++++++------ 3 files changed, 17 insertions(+), 68 deletions(-) diff --git a/tests/system-test/7-tmq/tmq3mnodeSwitch.py b/tests/system-test/7-tmq/tmq3mnodeSwitch.py index 7c95f7a3db..54ccc88226 100644 --- a/tests/system-test/7-tmq/tmq3mnodeSwitch.py +++ b/tests/system-test/7-tmq/tmq3mnodeSwitch.py @@ -138,34 +138,6 @@ class TDTestCase: else: tdLog.exit("three mnodes is not ready in 10s ") - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def tmqCase1(self): tdLog.printNoPrefix("======== test case 1: ") paraDict = {'dbName': 'db1', @@ -256,7 +228,7 @@ class TDTestCase: tdLog.exit("0 tmq consume rows error!") if expectRowsList[0] == resultList[0]: - self.checkFileContent(consumerId, queryString) + tqCom.checkFileContent(consumerId, queryString) time.sleep(10) for i in range(len(topicNameList)): diff --git a/tests/system-test/7-tmq/tmqCheckData.py b/tests/system-test/7-tmq/tmqCheckData.py index 04d0744ab5..1b45646c79 100644 --- a/tests/system-test/7-tmq/tmqCheckData.py +++ b/tests/system-test/7-tmq/tmqCheckData.py @@ -5,6 +5,7 @@ import time import socket import os import threading +import math from util.log import * from util.sql import * @@ -21,34 +22,6 @@ class TDTestCase: tdSql.init(conn.cursor()) #tdSql.init(conn.cursor(), logSql) # output sql.txt file - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - 
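# Editorial note, not part of the patch: these per-test copies of
# checkFileContent compared rows as raw strings, so numerically equal floats
# printed in different forms (e.g. "0.000000001" vs "1e-09") failed the check.
# The shared tmqCom.checkFileContent updated elsewhere in this series splits
# each row on ',' and compares numeric fields with
# math.isclose(srcFloat, dstFloat, abs_tol=1e-9) instead.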
consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def tmqCase1(self): tdLog.printNoPrefix("======== test case 1: ") paraDict = {'dbName': 'db1', @@ -109,7 +82,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("0 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) # reinit consume info, and start tmq_sim, then check consume result tmqCom.initConsumerTable() @@ -135,7 +108,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("1 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) # reinit consume info, and start tmq_sim, then check consume result tmqCom.initConsumerTable() diff --git a/tests/system-test/7-tmq/tmqCommon.py b/tests/system-test/7-tmq/tmqCommon.py index 895da95e5d..d17fea7b5a 100644 --- a/tests/system-test/7-tmq/tmqCommon.py +++ b/tests/system-test/7-tmq/tmqCommon.py @@ -10,7 +10,7 @@ ################################################################### # -*- coding: utf-8 -*- - +import math from asyncore import loop from collections import defaultdict import subprocess @@ -462,18 +462,22 @@ class TMQCom: for i in range(0,skipRowsOfCons): consumeFile.readline() - lines = 0 while True: dst = queryFile.readline() src = consumeFile.readline() - lines += 1 - if dst: - if dst != src: - tdLog.info("src row: %s"%src) - tdLog.info("dst row: %s"%dst) - tdLog.exit("consumerId %d consume rows[%d] is not match the rows by direct query"%(consumerId, lines)) - else: + dstSplit = dst.split(',') + srcSplit = src.split(',') + + if len(dstSplit) != len(srcSplit): + tdLog.exit("consumerId %d consume rows len is not match the rows by direct query"%consumerId) + if not dst or not src: break + for i in range(len(dstSplit)): + if srcSplit[i] != dstSplit[i]: + srcFloat = float(srcSplit[i]) + dstFloat = float(dstSplit[i]) + if not math.isclose(srcFloat, dstFloat, abs_tol=1e-9): + tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) return def getResultFileByTaosShell(self, consumerId, queryString): From 54a414a93418f20e90b521dd5be0bdc43b3f8fa6 Mon Sep 17 00:00:00 2001 From: wangmm0220 Date: Tue, 11 Apr 2023 18:35:05 +0800 Subject: [PATCH 11/24] fix:rm useless fileContent function --- tests/system-test/2-query/sml.py | 4 +-- .../7-tmq/dropDbR3ConflictTransaction.py | 28 --------------- tests/system-test/7-tmq/tmq3mnodeSwitch.py | 2 +- tests/system-test/7-tmq/tmqCheckData.py | 2 +- tests/system-test/7-tmq/tmqCheckData1.py | 34 ++---------------- tests/system-test/7-tmq/tmqConsumerGroup.py | 28 --------------- .../7-tmq/tmqUdf-multCtb-snapshot0.py | 36 +++---------------- .../7-tmq/tmqUdf-multCtb-snapshot1.py | 36 +++---------------- tests/system-test/7-tmq/tmqUdf.py | 36 +++---------------- tests/system-test/99-TDcase/TD-16821.py | 34 ++---------------- 10 files changed, 22 insertions(+), 218 deletions(-) diff --git a/tests/system-test/2-query/sml.py b/tests/system-test/2-query/sml.py index ec6309c71a..f96ed8a3ff 100644 --- a/tests/system-test/2-query/sml.py 
--- a/tests/system-test/2-query/sml.py
+++ b/tests/system-test/2-query/sml.py @@ -24,7 +24,7 @@ class TDTestCase: tdSql.init(conn.cursor(), True) #tdSql.init(conn.cursor(), logSql) # output sql.txt file - def checkFileContent(self, dbname="sml_db"): + def checkContent(self, dbname="sml_db"): simClientCfg="%s/taos.cfg"%tdDnodes.getSimCfgPath() buildPath = tdCom.getBuildPath() cmdStr = '%s/build/bin/sml_test %s'%(buildPath, simClientCfg) @@ -102,7 +102,7 @@ class TDTestCase: def run(self): tdSql.prepare() - self.checkFileContent() + self.checkContent() def stop(self): tdSql.close() diff --git a/tests/system-test/7-tmq/dropDbR3ConflictTransaction.py b/tests/system-test/7-tmq/dropDbR3ConflictTransaction.py index 4371a909c2..e25fb412af 100644 --- a/tests/system-test/7-tmq/dropDbR3ConflictTransaction.py +++ b/tests/system-test/7-tmq/dropDbR3ConflictTransaction.py @@ -32,34 +32,6 @@ class TDTestCase: tdSql.init(conn.cursor()) #tdSql.init(conn.cursor(), logSql) # output sql.txt file - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def prepareTestEnv(self): tdLog.printNoPrefix("======== prepare test env include database, stable, ctables, and insert data: ") paraDict = {'dbName': 'dbt', diff --git a/tests/system-test/7-tmq/tmq3mnodeSwitch.py b/tests/system-test/7-tmq/tmq3mnodeSwitch.py index 54ccc88226..8c5dc5e693 100644 --- a/tests/system-test/7-tmq/tmq3mnodeSwitch.py +++ b/tests/system-test/7-tmq/tmq3mnodeSwitch.py @@ -228,7 +228,7 @@ class TDTestCase: tdLog.exit("0 tmq consume rows error!") if expectRowsList[0] == resultList[0]: - tqCom.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) time.sleep(10) for i in range(len(topicNameList)): diff --git a/tests/system-test/7-tmq/tmqCheckData.py b/tests/system-test/7-tmq/tmqCheckData.py index 1b45646c79..4d5edf87f1 100644 --- a/tests/system-test/7-tmq/tmqCheckData.py +++ b/tests/system-test/7-tmq/tmqCheckData.py @@ -134,7 +134,7 @@ class TDTestCase: # tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[2], resultList[0])) # tdLog.exit("2 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) time.sleep(10) for i in range(len(topicNameList)): diff --git a/tests/system-test/7-tmq/tmqCheckData1.py b/tests/system-test/7-tmq/tmqCheckData1.py index b9dac62833..1209c2812c 100644 --- a/tests/system-test/7-tmq/tmqCheckData1.py +++ b/tests/system-test/7-tmq/tmqCheckData1.py @@ -21,34 +21,6 @@ class TDTestCase: tdSql.init(conn.cursor()) #tdSql.init(conn.cursor(), logSql) # output sql.txt file - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = 
'%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def tmqCase1(self): tdLog.printNoPrefix("======== test case 1: ") paraDict = {'dbName': 'db1', @@ -109,7 +81,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("0 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) # reinit consume info, and start tmq_sim, then check consume result tmqCom.initConsumerTable() @@ -134,7 +106,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("1 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) # reinit consume info, and start tmq_sim, then check consume result tmqCom.initConsumerTable() @@ -159,7 +131,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[2], resultList[0])) tdLog.exit("2 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) time.sleep(10) for i in range(len(topicNameList)): diff --git a/tests/system-test/7-tmq/tmqConsumerGroup.py b/tests/system-test/7-tmq/tmqConsumerGroup.py index 02093a2d88..ed70866c5d 100644 --- a/tests/system-test/7-tmq/tmqConsumerGroup.py +++ b/tests/system-test/7-tmq/tmqConsumerGroup.py @@ -21,34 +21,6 @@ class TDTestCase: tdSql.init(conn.cursor()) #tdSql.init(conn.cursor(), logSql) # output sql.txt file - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def tmqCase1(self): tdLog.printNoPrefix("======== test case 1: ") paraDict = {'dbName': 'db1', diff --git a/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot0.py b/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot0.py index 297429b495..6ce798ab87 100644 --- a/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot0.py +++ b/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot0.py @@ -60,34 +60,6 @@ class TDTestCase: tdLog.exit("create udf functions fail") return - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() 
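# Editorial note, not part of the patch: the helper removed here dumps the
# reference rows by shelling out to the taos CLI, roughly
#   <build>/build/bin/taos -c <cfg-dir> -s 'select ... >> dstrows_N.txt'
# where -c selects the client configuration directory, -s runs one statement
# non-interactively, and the '>>' redirection into a file is interpreted by
# the taos shell itself.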
- dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def prepareTestEnv(self): tdLog.printNoPrefix("======== prepare test env include database, stable, ctables, and insert data: ") paraDict = {'dbName': 'dbt', @@ -199,7 +171,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("0 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) # reinit consume info, and start tmq_sim, then check consume result @@ -226,7 +198,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("1 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) time.sleep(10) @@ -309,7 +281,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("2 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) # reinit consume info, and start tmq_sim, then check consume result @@ -336,7 +308,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("3 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) time.sleep(10) diff --git a/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot1.py b/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot1.py index 9c139b50de..143c8762b8 100644 --- a/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot1.py +++ b/tests/system-test/7-tmq/tmqUdf-multCtb-snapshot1.py @@ -60,34 +60,6 @@ class TDTestCase: tdLog.exit("create udf functions fail") return - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct 
query"%consumerId) - else: - break - return - def prepareTestEnv(self): tdLog.printNoPrefix("======== prepare test env include database, stable, ctables, and insert data: ") paraDict = {'dbName': 'dbt', @@ -199,7 +171,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("0 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) # reinit consume info, and start tmq_sim, then check consume result @@ -226,7 +198,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("1 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) time.sleep(10) @@ -309,7 +281,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("2 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) # reinit consume info, and start tmq_sim, then check consume result @@ -336,7 +308,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("3 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) # tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) time.sleep(10) diff --git a/tests/system-test/7-tmq/tmqUdf.py b/tests/system-test/7-tmq/tmqUdf.py index 8593fd4f1e..04a3a2301d 100644 --- a/tests/system-test/7-tmq/tmqUdf.py +++ b/tests/system-test/7-tmq/tmqUdf.py @@ -60,34 +60,6 @@ class TDTestCase: tdLog.exit("create udf functions fail") return - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def prepareTestEnv(self): tdLog.printNoPrefix("======== prepare test env include database, stable, ctables, and insert data: ") paraDict = {'dbName': 'dbt', @@ -199,7 +171,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("0 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) @@ -227,7 +199,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("1 tmq consume rows error!") - 
self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) time.sleep(10) @@ -310,7 +282,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("2 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) # reinit consume info, and start tmq_sim, then check consume result @@ -337,7 +309,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("3 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) tdLog.printNoPrefix("consumerId %d check data ok!"%(consumerId)) time.sleep(10) diff --git a/tests/system-test/99-TDcase/TD-16821.py b/tests/system-test/99-TDcase/TD-16821.py index f57fae752f..26b41e6afc 100644 --- a/tests/system-test/99-TDcase/TD-16821.py +++ b/tests/system-test/99-TDcase/TD-16821.py @@ -21,34 +21,6 @@ class TDTestCase: tdSql.init(conn.cursor()) #tdSql.init(conn.cursor(), logSql) # output sql.txt file - def checkFileContent(self, consumerId, queryString): - buildPath = tdCom.getBuildPath() - cfgPath = tdCom.getClientCfgPath() - dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId) - cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile) - tdLog.info(cmdStr) - os.system(cmdStr) - - consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId) - tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile)) - - consumeFile = open(consumeRowsFile, mode='r') - queryFile = open(dstFile, mode='r') - - # skip first line for it is schema - queryFile.readline() - - while True: - dst = queryFile.readline() - src = consumeFile.readline() - - if dst: - if dst != src: - tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId) - else: - break - return - def tmqCase1(self): tdLog.printNoPrefix("======== test case 1: ") paraDict = {'dbName': 'db1', @@ -113,7 +85,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0])) tdLog.exit("0 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) # reinit consume info, and start tmq_sim, then check consume result tmqCom.initConsumerTable() @@ -139,7 +111,7 @@ class TDTestCase: tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[1], resultList[0])) tdLog.exit("1 tmq consume rows error!") - self.checkFileContent(consumerId, queryString) + tmqCom.checkFileContent(consumerId, queryString) # reinit consume info, and start tmq_sim, then check consume result tmqCom.initConsumerTable() @@ -165,7 +137,7 @@ class TDTestCase: # tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[2], resultList[0])) # tdLog.exit("2 tmq consume rows error!") - # self.checkFileContent(consumerId, queryString) + # tmqCom.checkFileContent(consumerId, queryString) time.sleep(10) for i in range(len(topicNameList)): From 985a2382425fb17495846b5ad5911ab71782886c Mon Sep 17 00:00:00 2001 From: wangmm0220 Date: Tue, 11 Apr 2023 19:38:10 +0800 Subject: [PATCH 12/24] fix:modify checkFileContent if one is empty --- tests/system-test/7-tmq/tmqCommon.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 
From 985a2382425fb17495846b5ad5911ab71782886c Mon Sep 17 00:00:00 2001
From: wangmm0220
Date: Tue, 11 Apr 2023 19:38:10 +0800
Subject: [PATCH 12/24] fix:modify checkFileContent if one is empty

---
 tests/system-test/7-tmq/tmqCommon.py | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tests/system-test/7-tmq/tmqCommon.py b/tests/system-test/7-tmq/tmqCommon.py
index d17fea7b5a..b47601620a 100644
--- a/tests/system-test/7-tmq/tmqCommon.py
+++ b/tests/system-test/7-tmq/tmqCommon.py
@@ -468,10 +468,12 @@ class TMQCom:
             dstSplit = dst.split(',')
             srcSplit = src.split(',')
 
-            if len(dstSplit) != len(srcSplit):
-                tdLog.exit("consumerId %d consume rows len is not match the rows by direct query"%consumerId)
             if not dst or not src:
                 break
+            if len(dstSplit) != len(srcSplit):
+                tdLog.exit("consumerId %d consume rows len is not match the rows by direct query,len(dstSplit):%d != len(srcSplit):%d, dst:%s, src:%s"
+                           %(consumerId, len(dstSplit), len(srcSplit), dst, src))
+
             for i in range(len(dstSplit)):
                 if srcSplit[i] != dstSplit[i]:
                     srcFloat = float(srcSplit[i])
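The reordering above matters because an exhausted file now ends the loop before the length check can trip on an empty string. A simplified, self-contained sketch of that comparison — the function name, file paths, and epsilon are illustrative assumptions, not the actual `tmqCommon` helper:

```python
# Minimal sketch of the fixed comparison: stop as soon as either file is
# exhausted, and only then compare field counts and (tolerantly) values.
def compare_result_files(consumer_path: str, query_path: str, eps: float = 1e-6) -> bool:
    with open(consumer_path) as consume_file, open(query_path) as query_file:
        while True:
            src = consume_file.readline()
            dst = query_file.readline()
            if not dst or not src:          # one side is empty: done (the bug fix)
                return True
            dst_split = dst.split(',')
            src_split = src.split(',')
            if len(dst_split) != len(src_split):
                raise ValueError(f"row length mismatch: {dst!r} vs {src!r}")
            for a, b in zip(src_split, dst_split):
                # unequal strings may still be the same number (e.g. 1.0 vs 1.00),
                # so fall back to a float comparison with tolerance
                if a != b and abs(float(a) - float(b)) > eps:
                    raise ValueError(f"value mismatch: {a!r} vs {b!r}")
```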
From 71283d6c4198d8305309743c090758638bcb819d Mon Sep 17 00:00:00 2001
From: Shuduo Sang
Date: Wed, 12 Apr 2023 09:35:47 +0800
Subject: [PATCH 13/24] fix: script if share not exist (#20875)

---
 packaging/tools/install.sh | 10 ++++++----
 packaging/tools/makepkg.sh |  2 +-
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/packaging/tools/install.sh b/packaging/tools/install.sh
index 849ab5a92a..a1a95d9f30 100755
--- a/packaging/tools/install.sh
+++ b/packaging/tools/install.sh
@@ -573,16 +573,18 @@ function install_config() {
 }
 
 function install_share_etc() {
+  [ ! -d ${script_dir}/share/etc ] && return
   for c in `ls ${script_dir}/share/etc/`; do
    if [ -e /etc/$c ]; then
      out=/etc/$c.new.`date +%F`
-     ${csudo}cp -f ${script_dir}/share/etc/$c $out
+     ${csudo}cp -f ${script_dir}/share/etc/$c $out ||:
    else
-     ${csudo}cp -f ${script_dir}/share/etc/$c /etc/$c
+     ${csudo}cp -f ${script_dir}/share/etc/$c /etc/$c ||:
    fi
   done
 
-  ${csudo} cp ${script_dir}/share/srv/* ${service_config_dir}
+  [ ! -d ${script_dir}/share/srv ] && return
+  ${csudo} cp ${script_dir}/share/srv/* ${service_config_dir} ||:
 }
 
 function install_log() {
@@ -612,7 +614,7 @@ function install_examples() {
 
 function install_web() {
   if [ -d "${script_dir}/share" ]; then
-    ${csudo}cp -rf ${script_dir}/share/* ${install_main_dir}/share
+    ${csudo}cp -rf ${script_dir}/share/* ${install_main_dir}/share > /dev/null 2>&1 ||:
   fi
 }
 
diff --git a/packaging/tools/makepkg.sh b/packaging/tools/makepkg.sh
index 12e215c62b..0dce526db6 100755
--- a/packaging/tools/makepkg.sh
+++ b/packaging/tools/makepkg.sh
@@ -150,7 +150,7 @@ fi
 mkdir -p ${install_dir}/bin && cp ${bin_files} ${install_dir}/bin && chmod a+x ${install_dir}/bin/* || :
 mkdir -p ${install_dir}/init.d && cp ${init_file_deb} ${install_dir}/init.d/${serverName}.deb
 mkdir -p ${install_dir}/init.d && cp ${init_file_rpm} ${install_dir}/init.d/${serverName}.rpm
-mkdir -p ${install_dir}/share && cp -rf ${build_dir}/share/{etc,srv} ${install_dir}/share
+mkdir -p ${install_dir}/share && cp -rf ${build_dir}/share/{etc,srv} ${install_dir}/share ||:
 
 if [ $adapterName != "taosadapter" ]; then
   mv ${install_dir}/cfg/${clientName2}adapter.toml ${install_dir}/cfg/$adapterName.toml

From a31536446c38ba30c608614db46a3c1a4e594708 Mon Sep 17 00:00:00 2001
From: liuyao <38781207+54liuyao@users.noreply.github.com>
Date: Wed, 12 Apr 2023 09:48:14 +0800
Subject: [PATCH 14/24] Update 14-stream.md

---
 docs/zh/12-taos-sql/14-stream.md | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md
index f10077345f..12e466f349 100644
--- a/docs/zh/12-taos-sql/14-stream.md
+++ b/docs/zh/12-taos-sql/14-stream.md
@@ -10,8 +10,11 @@ description: 流式计算的相关 SQL 的详细语法
 ```sql
 CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
 stream_options: {
- TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
- WATERMARK      time
+ TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
+ WATERMARK      time
+ IGNORE EXPIRED [0|1]
+ DELETE_MARK    time
+ FILL_HISTORY   [0|1]
 }
 ```
 
@@ -202,3 +205,11 @@ PARTITION 子句中,为 concat("tag-", tbname)定义了一个别名cc, 对应
 会对TAG信息进行如下检查
 1.检查tag的schema信息是否匹配,对于不匹配的,则自动进行数据类型转换,当前只有数据长度大于4096byte时才报错,其余场景都能进行类型转换。
 2.检查tag的个数是否相同,如果不同,需要显示的指定超级表与subquery的tag的对应关系,否则报错;如果相同,可以指定对应关系,也可以不指定,不指定则按位置顺序对应。
+
+## 清理中间状态
+
+```
+DELETE_MARK    time
+```
+DELETE_MARK用于删除缓存的窗口状态,也就是删除流计算的中间结果。如果不设置,默认值是10年
+T = 最新事件时间 - DELETE_MARK
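To make the DELETE_MARK option documented above concrete, here is a minimal sketch that creates a stream with an explicit state-cleanup mark through the Python connector. The stream name, target table, and the `meters`/`voltage` schema are illustrative assumptions (the standard TDengine demo schema), not part of the patch:

```python
# Create a stream whose cached window state is dropped one day after the
# latest event time (DELETE_MARK 1d). Assumes a reachable TDengine 3.x server.
import taos

conn = taos.connect()   # default host/credentials; adjust as needed
cur = conn.cursor()
cur.execute(
    "CREATE STREAM IF NOT EXISTS avg_vol_s "
    "TRIGGER WINDOW_CLOSE WATERMARK 5s DELETE_MARK 1d "
    "INTO avg_vol AS "
    "SELECT _wstart, avg(voltage) FROM meters "
    "PARTITION BY tbname INTERVAL(1m)"
)
conn.close()
```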
From 5d9f3c4133f5ccd0d2734db8166ce32404dc994b Mon Sep 17 00:00:00 2001
From: Xiaoyu Wang
Date: Wed, 12 Apr 2023 10:44:04 +0800
Subject: [PATCH 15/24] merge main

---
 source/libs/catalog/src/ctgUtil.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/source/libs/catalog/src/ctgUtil.c b/source/libs/catalog/src/ctgUtil.c
index 6beadf09da..b2b2b5a87e 100644
--- a/source/libs/catalog/src/ctgUtil.c
+++ b/source/libs/catalog/src/ctgUtil.c
@@ -1400,10 +1400,10 @@ int32_t ctgChkSetAuthRes(SCatalog* pCtg, SCtgAuthReq* req, SCtgAuthRsp* res) {
   pRes->pass = false;
   pRes->pCond = NULL;
 
-  // if (!pInfo->enable) {
-  //   pRes->pass = false;
-  //   return TSDB_CODE_SUCCESS;
-  // }
+  if (!pInfo->enable) {
+    pRes->pass = false;
+    return TSDB_CODE_SUCCESS;
+  }
 
   if (pInfo->superAuth) {
     pRes->pass = true;
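The re-enabled check above short-circuits authorization for disabled users before any finer-grained rules run. As a rough paraphrase of that control flow — illustrative only, the real logic lives in `ctgChkSetAuthRes` in C:

```python
# Pure-Python paraphrase of the permission check re-enabled above.
def check_auth(enable: bool, super_auth: bool) -> bool:
    if not enable:       # disabled users always fail, regardless of other grants
        return False
    if super_auth:       # super users pass unconditionally
        return True
    return False         # otherwise fall through to finer-grained checks
```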
From 2690fd4e9ab82ad9fa9816e677061e247a2472cf Mon Sep 17 00:00:00 2001
From: huolibo
Date: Wed, 12 Apr 2023 15:14:07 +0800
Subject: [PATCH 16/24] fix(driver): return error code to java (#20869)

---
 source/client/src/clientJniConnector.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/source/client/src/clientJniConnector.c b/source/client/src/clientJniConnector.c
index 2f4bafe26a..d2a9665eee 100644
--- a/source/client/src/clientJniConnector.c
+++ b/source/client/src/clientJniConnector.c
@@ -816,7 +816,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setBindTableNameI
     (*env)->ReleaseStringUTFChars(env, jname, name);
 
     jniError("bindTableName jobj:%p, conn:%p, code: 0x%x", jobj, tsconn, code);
-    return JNI_TDENGINE_ERROR;
+    return code;
   }
 
   jniDebug("jobj:%p, conn:%p, set stmt bind table name:%s", jobj, tsconn, name);
@@ -891,7 +891,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setTableNameTagsI
 
   if (code != TSDB_CODE_SUCCESS) {
     jniError("tableNameTags jobj:%p, conn:%p, code: 0x%x", jobj, tsconn, code);
-    return JNI_TDENGINE_ERROR;
+    return code;
   }
   return JNI_SUCCESS;
 }
@@ -957,7 +957,7 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_bindColDataImp(
 
   if (code != TSDB_CODE_SUCCESS) {
     jniError("bindColData jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
-    return JNI_TDENGINE_ERROR;
+    return code;
  }
 
   return JNI_SUCCESS;
@@ -980,7 +980,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_addBatchImp(JNIEn
   int32_t code = taos_stmt_add_batch(pStmt);
   if (code != TSDB_CODE_SUCCESS) {
     jniError("add batch jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
-    return JNI_TDENGINE_ERROR;
+    return code;
   }
 
   jniDebug("jobj:%p, conn:%p, stmt closed", jobj, tscon);
@@ -1004,7 +1004,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_executeBatchImp(J
   int32_t code = taos_stmt_execute(pStmt);
   if (code != TSDB_CODE_SUCCESS) {
     jniError("excute batch jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
-    return JNI_TDENGINE_ERROR;
+    return code;
   }
 
   jniDebug("jobj:%p, conn:%p, batch execute", jobj, tscon);
@@ -1028,7 +1028,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_closeStmt(JNIEnv
   int32_t code = taos_stmt_close(pStmt);
   if (code != TSDB_CODE_SUCCESS) {
     jniError("close stmt jobj:%p, conn:%p, code: 0x%x", jobj, tscon, code);
-    return JNI_TDENGINE_ERROR;
+    return code;
  }
 
   jniDebug("jobj:%p, conn:%p, stmt closed", jobj, tscon);

From 2559dad482369aab10ca0d82663e314457eec4d6 Mon Sep 17 00:00:00 2001
From: Shuduo Sang
Date: Wed, 12 Apr 2023 16:22:17 +0800
Subject: [PATCH 17/24] fix: taosdump continue if fail (#20886)

---
 cmake/taostools_CMakeLists.txt.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cmake/taostools_CMakeLists.txt.in b/cmake/taostools_CMakeLists.txt.in
index 0110b27b32..d8bf3a09b4 100644
--- a/cmake/taostools_CMakeLists.txt.in
+++ b/cmake/taostools_CMakeLists.txt.in
@@ -2,7 +2,7 @@
 # taos-tools
 ExternalProject_Add(taos-tools
         GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-        GIT_TAG 149ac34
+        GIT_TAG 0681d8b
         SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
         BINARY_DIR ""
         #BUILD_IN_SOURCE TRUE

From b8ca369513ae4076910a9f0095f495aa271bb83e Mon Sep 17 00:00:00 2001
From: Shuduo Sang
Date: Wed, 12 Apr 2023 19:20:16 +0800
Subject: [PATCH 18/24] docs: add rest api diff (#20892)

---
 .../14-reference/02-rest-api/02-rest-api.mdx | 94 ++++++++++++++++++
 docs/zh/08-connector/02-rest-api.mdx         | 97 +++++++++++++++++++
 2 files changed, 191 insertions(+)

diff --git a/docs/en/14-reference/02-rest-api/02-rest-api.mdx b/docs/en/14-reference/02-rest-api/02-rest-api.mdx
index 1691b8be8b..115391d021 100644
--- a/docs/en/14-reference/02-rest-api/02-rest-api.mdx
+++ b/docs/en/14-reference/02-rest-api/02-rest-api.mdx
@@ -382,6 +382,100 @@ Response body:
 }
 ```
 
+## REST API between TDengine 2.x and 3.0
+
+### URI
+
+| URI | TDengine 2.x | TDengine 3.0 |
+| :--------------------| :------------------: | :--------------------------------------------------: |
+| /rest/sql | Supported | Supported (with different response code and body) |
+| /rest/sqlt | Supported | No more supported |
+| /rest/sqlutc | Supported | No more supported |
+
+### HTTP code
+
+| HTTP code | TDengine 2.x | TDengine 3.0 | note |
+| :--------------------| :------------------: | :----------: | :-----------------------------------: |
+| 200 | Supported | Supported | Success or taosc return error |
+| 400 | Not supported | Supported | Parameter error |
+| 401 | Not supported | Supported | Authentication failure |
+| 404 | Supported | Supported | URI not exist |
+| 500 | Not supported | Supported | Internal error |
+| 503 | Supported | Supported | Insufficient system resources |
+
+### Response body
+
+| Response body | TDengine 2.x | TDengine 3.0 | note |
+| :--------------------| :------------------: | :----------: | :-----------------------------------: |
+| Success | "status": "succ", | "code": 0, | |
+| Column meta | "head": [
+ "name",
+ "created_time",
+ "ntables",
+ "vgroups",
+ "replica",
+ "quorum",
+ "days",
+ "keep1,keep2,keep(D)",
+ "cache(MB)",
+ "blocks",
+ "minrows",
+ "maxrows",
+ "wallevel",
+ "fsync",
+ "comp",
+ "precision",
+ "status"
+ ], | "column_meta": [
+ [
+ "name",
+ "VARCHAR",
+ 64
+ ],
+ [
+ "ntables",
+ "BIGINT",
+ 8
+ ],
+ [
+ "status",
+ "VARCHAR",
+ 10
+ ]
+ ], | |
+| Data | "data": [
+ [
+ "log",
+ "2020-09-02 17:23:00.039",
+ 4,
+ 1,
+ 1,
+ 1,
+ 10,
+ "30,30,30",
+ 1,
+ 3,
+ 100,
+ 4096,
+ 1,
+ 3000,
+ 2,
+ "us",
+ "ready"
+ ]
+ ], | "data": [
+ [
+ "information_schema",
+ 16,
+ "ready"
+ ],
+ [
+ "performance_schema",
+ 9,
+ "ready"
+ ]
+ ], | |
+
 ## Reference
 
 [taosAdapter](/reference/taosadapter/)

diff --git a/docs/zh/08-connector/02-rest-api.mdx b/docs/zh/08-connector/02-rest-api.mdx
index a081595bca..ece8eb0acd 100644
--- a/docs/zh/08-connector/02-rest-api.mdx
+++ b/docs/zh/08-connector/02-rest-api.mdx
@@ -383,6 +383,103 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
 }
 ```
 
+## TDengine 2.x 和 3.0 之间 REST API 的差异
+
+### URI
+
+| URI | TDengine 2.x | TDengine 3.0 |
+| :--------------------| :------------------: | :--------------------------------------------------: |
+| /rest/sql | 支持 | 支持 (响应代码和消息体不同) |
+| /rest/sqlt | 支持 | 不再支持 |
+| /rest/sqlutc | 支持 | 不再支持 |
+
+
+### HTTP code
+
+| HTTP code | TDengine 2.x | TDengine 3.0 | 备注 |
+| :--------------------| :------------------: | :----------: | :-----------------------------------: |
+| 200 | 支持 | 支持 | 正确返回和 taosc 接口错误返回 |
+| 400 | 不支持 | 支持 | 参数错误返回 |
+| 401 | 不支持 | 支持 | 鉴权失败 |
+| 404 | 支持 | 支持 | 接口不存在 |
+| 500 | 不支持 | 支持 | 内部错误 |
+| 503 | 支持 | 支持 | 系统资源不足 |
+
+
+### 响应代码和消息体
+
+| 响应代码和消息体 | TDengine 2.x | TDengine 3.0 | note |
+| :--------------------| :------------------: | :----------: | :-----------------------------------: |
+| Success | "status": "succ", | "code": 0, | |
+| Column meta | "head": [
+ "name",
+ "created_time",
+ "ntables",
+ "vgroups",
+ "replica",
+ "quorum",
+ "days",
+ "keep1,keep2,keep(D)",
+ "cache(MB)",
+ "blocks",
+ "minrows",
+ "maxrows",
+ "wallevel",
+ "fsync",
+ "comp",
+ "precision",
+ "status"
+ ], | "column_meta": [
+ [
+ "name",
+ "VARCHAR",
+ 64
+ ],
+ [
+ "ntables",
+ "BIGINT",
+ 8
+ ],
+ [
+ "status",
+ "VARCHAR",
+ 10
+ ]
+ ], | |
+| Data | "data": [
+ [
+ "log",
+ "2020-09-02 17:23:00.039",
+ 4,
+ 1,
+ 1,
+ 1,
+ 10,
+ "30,30,30",
+ 1,
+ 3,
+ 100,
+ 4096,
+ 1,
+ 3000,
+ 2,
+ "us",
+ "ready"
+ ]
+ ], | "data": [
+ [
+ "information_schema",
+ 16,
+ "ready"
+ ],
+ [
+ "performance_schema",
+ 9,
+ "ready"
+ ]
+ ], | |
+
+
 ## 参考
 
 [taosAdapter](/reference/taosadapter/)
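As a companion to the response-body comparison added above, the sketch below issues one REST query against a TDengine 3.0 endpoint and reads the 3.0-style `code`/`column_meta`/`data` fields. The host, credentials, and SQL statement are placeholders (the defaults shown in the docs themselves):

```python
# Minimal TDengine 3.0 REST call; requires the `requests` package and a
# reachable taosAdapter on port 6041. Host and credentials are placeholders.
import requests

resp = requests.post(
    "http://localhost:6041/rest/sql",
    auth=("root", "taosdata"),   # sent as HTTP Basic, per the REST docs
    data="show databases",
)
body = resp.json()
if body["code"] == 0:            # 3.0 returns "code"; 2.x returned "status"
    for name, col_type, col_len in body["column_meta"]:
        print(name, col_type, col_len)
    for row in body["data"]:
        print(row)
```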
+ "VARCHAR", + 10 + ] + ], | | +| Data | "data": [ + [ + "log", + "2020-09-02 17:23:00.039", + 4, + 1, + 1, + 1, + 10, + "30,30,30", + 1, + 3, + 100, + 4096, + 1, + 3000, + 2, + "us", + "ready" + ] + ], | "data": [ + [ + "information_schema", + 16, + "ready" + ], + [ + "performance_schema", + 9, + "ready" + ] + ], | | + + ## 参考 [taosAdapter](/reference/taosadapter/) From c3abf5ede772d01d295c75d7932a1b255235bb21 Mon Sep 17 00:00:00 2001 From: Shuduo Sang Date: Wed, 12 Apr 2023 19:51:37 +0800 Subject: [PATCH 19/24] docs: use html table in rest api (#20893) * docs: add rest api diff * docs: use html table --- .../14-reference/02-rest-api/02-rest-api.mdx | 49 ++++++++++++++---- docs/zh/08-connector/02-rest-api.mdx | 50 +++++++++++++++---- 2 files changed, 80 insertions(+), 19 deletions(-) diff --git a/docs/en/14-reference/02-rest-api/02-rest-api.mdx b/docs/en/14-reference/02-rest-api/02-rest-api.mdx index 115391d021..01d1a9b12b 100644 --- a/docs/en/14-reference/02-rest-api/02-rest-api.mdx +++ b/docs/en/14-reference/02-rest-api/02-rest-api.mdx @@ -405,10 +405,18 @@ Response body: ### Response body -| Response body | TDengine 2.x | TDengine 3.0 | note | -| :--------------------| :------------------: | :----------: | :-----------------------------------: | -| Success | "status": "succ", | "code": 0, | | -| Column meta | "head": [ + + + + + + +/ + + + + + + + + + + +
+<tr>
+<td>Success</td>
+<td>"status": "succ",</td>
+<td>"code": 0,</td>
+</tr>
+<tr>
+<td>Column meta</td>
+<td>
+
+```
+"head": [
  "name",
  "created_time",
  "ntables",
  "vgroups",
  "replica",
  "quorum",
  "days",
  "keep1,keep2,keep(D)",
  "cache(MB)",
  "blocks",
  "minrows",
  "maxrows",
  "wallevel",
  "fsync",
  "comp",
  "precision",
  "status"
- ], | "column_meta": [
+ ],
+```
+
+</td>
+<td>
+
+```
+"column_meta": [
  [
  "name",
  "VARCHAR",
  64
  ],
  [
  "ntables",
  "BIGINT",
  8
  ],
  [
  "status",
  "VARCHAR",
  10
  ]
- ], | |
-| Data | "data": [
+ ],
+```
+
+</td>
+</tr>
+<tr>
+<td>
+Data
+</td>
+<td>
+
+```
+"data": [
  [
  "log",
  "2020-09-02 17:23:00.039",
  4,
  1,
  1,
  1,
  10,
  "30,30,30",
  1,
  3,
  100,
  4096,
  1,
  3000,
  2,
  "us",
  "ready"
  ]
- ], | "data": [
+ ],
+```
+
+</td>
+<td>
+
+```
+ "data": [
  [
  "information_schema",
  16,
  "ready"
  ],
  [
  "performance_schema",
  9,
  "ready"
  ]
- ], | |
+ ],
+```
+
+</td>
+</tr>
+</table>
 
 ## Reference
 
 [taosAdapter](/reference/taosadapter/)

diff --git a/docs/zh/08-connector/02-rest-api.mdx b/docs/zh/08-connector/02-rest-api.mdx
index ece8eb0acd..ae0949c4fd 100644
--- a/docs/zh/08-connector/02-rest-api.mdx
+++ b/docs/zh/08-connector/02-rest-api.mdx
@@ -408,10 +408,18 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
 
 ### 响应代码和消息体
 
-| 响应代码和消息体 | TDengine 2.x | TDengine 3.0 | note |
-| :--------------------| :------------------: | :----------: | :-----------------------------------: |
-| Success | "status": "succ", | "code": 0, | |
-| Column meta | "head": [
+<table>
+<tr>
+<th>响应代码和消息体</th>
+<th>TDengine 2.x</th>
+<th>TDengine 3.0</th>
+</tr>
+<tr>
+<td>Success</td>
+<td>"status": "succ",</td>
+<td>"code": 0,</td>
+</tr>
+<tr>
+<td>Column meta</td>
+<td>
+
+```
+"head": [
  "name",
  "created_time",
  "ntables",
  "vgroups",
  "replica",
  "quorum",
  "days",
  "keep1,keep2,keep(D)",
  "cache(MB)",
  "blocks",
  "minrows",
  "maxrows",
  "wallevel",
  "fsync",
  "comp",
  "precision",
  "status"
- ], | "column_meta": [
+ ],
+```
+
+</td>
+<td>
+
+```
+"column_meta": [
  [
  "name",
  "VARCHAR",
  64
  ],
  [
  "ntables",
  "BIGINT",
  8
  ],
  [
  "status",
  "VARCHAR",
  10
  ]
- ], | |
-| Data | "data": [
+ ],
+```
+
+</td>
+</tr>
+<tr>
+<td>
+Data
+</td>
+<td>
+
+```
+"data": [
  [
  "log",
  "2020-09-02 17:23:00.039",
  4,
  1,
  1,
  1,
  10,
  "30,30,30",
  1,
  3,
  100,
  4096,
  1,
  3000,
  2,
  "us",
  "ready"
  ]
- ], | "data": [
+ ],
+```
+
+</td>
+<td>
+
+```
+ "data": [
  [
  "information_schema",
  16,
  "ready"
  ],
  [
  "performance_schema",
  9,
  "ready"
  ]
- ], | |
+ ],
+```
+
+</td>
+</tr>
+</table>
## 参考 From c1412648d05f09fe4b600e9064312ab9390d781a Mon Sep 17 00:00:00 2001 From: Shuduo Sang Date: Wed, 12 Apr 2023 21:46:19 +0800 Subject: [PATCH 20/24] docs: use json instead of table in rest api (#20895) * docs: add rest api diff * docs: use html table * docs: use json instead of table in rest api doc --- .../14-reference/02-rest-api/02-rest-api.mdx | 97 +++++++++--------- docs/zh/08-connector/02-rest-api.mdx | 98 +++++++++---------- 2 files changed, 97 insertions(+), 98 deletions(-) diff --git a/docs/en/14-reference/02-rest-api/02-rest-api.mdx b/docs/en/14-reference/02-rest-api/02-rest-api.mdx index 01d1a9b12b..409e079b92 100644 --- a/docs/en/14-reference/02-rest-api/02-rest-api.mdx +++ b/docs/en/14-reference/02-rest-api/02-rest-api.mdx @@ -405,18 +405,12 @@ Response body: ### Response body - - - - - - -/ - - - - - - - - - - -
-<tr>
-<td>Success</td>
-<td>"status": "succ",</td>
-<td>"code": 0,</td>
-</tr>
-<tr>
-<td>Column meta</td>
-<td>
-
-```
-"head": [
- "name",
- "created_time",
- "ntables",
- "vgroups",
- "replica",
- "quorum",
- "days",
- "keep1,keep2,keep(D)",
- "cache(MB)",
- "blocks",
- "minrows",
- "maxrows",
- "wallevel",
- "fsync",
- "comp",
- "precision",
- "status"
- ],
-```
-
-</td>
-<td>
-
-```
-"column_meta": [
- [
- "name",
- "VARCHAR",
- 64
- ],
- [
- "ntables",
- "BIGINT",
- 8
- ],
- [
- "status",
- "VARCHAR",
- 10
- ]
- ],
-```
-
-</td>
-</tr>
-<tr>
-<td>
-Data
-</td>
-<td>
-
-```
-"data": [
- [
- "log",
- "2020-09-02 17:23:00.039",
- 4,
- 1,
- 1,
- 1,
- 10,
- "30,30,30",
- 1,
- 3,
- 100,
- 4096,
- 1,
- 3000,
- 2,
- "us",
- "ready"
- ]
- ],
-```
-
-</td>
-<td>
-
-```
- "data": [
- [
- "information_schema",
- 16,
- "ready"
- ],
- [
- "performance_schema",
- 9,
- "ready"
- ]
- ],
-```
-
-</td>
-</tr>
-</table>
+#### REST response body return from TDengine 2.x
+
+```JSON
+{
+  "status": "succ",
+  "head": [
+    "name",
+    "created_time",
+    "ntables",
+    "vgroups",
+    "replica",
+    "quorum",
+    "days",
+    "keep1,keep2,keep(D)",
+    "cache(MB)",
+    "blocks",
+    "minrows",
+    "maxrows",
+    "wallevel",
+    "fsync",
+    "comp",
+    "precision",
+    "status"
+  ],
+  "data": [
+    [
+      "log",
+      "2020-09-02 17:23:00.039",
+      4,
+      1,
+      1,
+      1,
+      10,
+      "30,30,30",
+      1,
+      3,
+      100,
+      4096,
+      1,
+      3000,
+      2,
+      "us",
+      "ready"
+    ]
+  ],
+  "rows": 1
+}
+```
+
+#### REST response body return from TDengine 3.0
+
+```JSON
+{
+  "code": 0,
+  "column_meta": [
+    [
+      "name",
+      "VARCHAR",
+      64
+    ],
+    [
+      "ntables",
+      "BIGINT",
+      8
+    ],
+    [
+      "status",
+      "VARCHAR",
+      10
+    ]
+  ],
+  "data": [
+    [
+      "information_schema",
+      16,
+      "ready"
+    ],
+    [
+      "performance_schema",
+      9,
+      "ready"
+    ]
+  ],
+  "rows": 2
+}
+```
 
 ## Reference
 
 [taosAdapter](/reference/taosadapter/)

diff --git a/docs/zh/08-connector/02-rest-api.mdx b/docs/zh/08-connector/02-rest-api.mdx
index ae0949c4fd..f3f1e087d8 100644
--- a/docs/zh/08-connector/02-rest-api.mdx
+++ b/docs/zh/08-connector/02-rest-api.mdx
@@ -408,18 +408,12 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
 
 ### 响应代码和消息体
 
-<table>
-<tr>
-<th>响应代码和消息体</th>
-<th>TDengine 2.x</th>
-<th>TDengine 3.0</th>
-</tr>
-<tr>
-<td>Success</td>
-<td>"status": "succ",</td>
-<td>"code": 0,</td>
-</tr>
-<tr>
-<td>Column meta</td>
-<td>
-
-```
-"head": [
- "name",
- "created_time",
- "ntables",
- "vgroups",
- "replica",
- "quorum",
- "days",
- "keep1,keep2,keep(D)",
- "cache(MB)",
- "blocks",
- "minrows",
- "maxrows",
- "wallevel",
- "fsync",
- "comp",
- "precision",
- "status"
- ],
-```
-
-</td>
-<td>
-
-```
-"column_meta": [
- [
- "name",
- "VARCHAR",
- 64
- ],
- [
- "ntables",
- "BIGINT",
- 8
- ],
- [
- "status",
- "VARCHAR",
- 10
- ]
- ],
-```
-
-</td>
-</tr>
-<tr>
-<td>
-Data
-</td>
-<td>
-
-```
-"data": [
- [
- "log",
- "2020-09-02 17:23:00.039",
- 4,
- 1,
- 1,
- 1,
- 10,
- "30,30,30",
- 1,
- 3,
- 100,
- 4096,
- 1,
- 3000,
- 2,
- "us",
- "ready"
- ]
- ],
-```
-
-</td>
-<td>
-
-```
- "data": [
- [
- "information_schema",
- 16,
- "ready"
- ],
- [
- "performance_schema",
- 9,
- "ready"
- ]
- ],
-```
-
-</td>
-</tr>
-</table>
+#### TDengine 2.x 响应代码和消息体
+
+```JSON
+{
+  "status": "succ",
+  "head": [
+    "name",
+    "created_time",
+    "ntables",
+    "vgroups",
+    "replica",
+    "quorum",
+    "days",
+    "keep1,keep2,keep(D)",
+    "cache(MB)",
+    "blocks",
+    "minrows",
+    "maxrows",
+    "wallevel",
+    "fsync",
+    "comp",
+    "precision",
+    "status"
+  ],
+  "data": [
+    [
+      "log",
+      "2020-09-02 17:23:00.039",
+      4,
+      1,
+      1,
+      1,
+      10,
+      "30,30,30",
+      1,
+      3,
+      100,
+      4096,
+      1,
+      3000,
+      2,
+      "us",
+      "ready"
+    ]
+  ],
+  "rows": 1
+}
+```
+
+#### TDengine 3.0 响应代码和消息体
+
+```JSON
+{
+  "code": 0,
+  "column_meta": [
+    [
+      "name",
+      "VARCHAR",
+      64
+    ],
+    [
+      "ntables",
+      "BIGINT",
+      8
+    ],
+    [
+      "status",
+      "VARCHAR",
+      10
+    ]
+  ],
+  "data": [
+    [
+      "information_schema",
+      16,
+      "ready"
+    ],
+    [
+      "performance_schema",
+      9,
+      "ready"
+    ]
+  ],
+  "rows": 2
+}
+```
 
 ## 参考
 
 [taosAdapter](/reference/taosadapter/)

From 226c68a91f3fcf93101f60782f236d2dbe8fee49 Mon Sep 17 00:00:00 2001
From: Shuduo Sang
Date: Thu, 13 Apr 2023 10:22:51 +0800
Subject: [PATCH 21/24] docs: fix connector case (#20900)

* docs: add rest api diff
* docs: use html table
* docs: use json instead of table in rest api doc
* docs: fix connector upcase

---
 docs/en/14-reference/03-connector/_category_.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/14-reference/03-connector/_category_.yml b/docs/en/14-reference/03-connector/_category_.yml
index e470f64aa0..6a766e9657 100644
--- a/docs/en/14-reference/03-connector/_category_.yml
+++ b/docs/en/14-reference/03-connector/_category_.yml
@@ -1 +1 @@
-label: "connector"
\ No newline at end of file
+label: "Connector"

From 1cc53d00d247dd153f665570a3de9061815563f9 Mon Sep 17 00:00:00 2001
From: wade zhang <95411902+gccgdb1234@users.noreply.github.com>
Date: Thu, 13 Apr 2023 11:09:14 +0800
Subject: [PATCH 22/24] Update 29-changes.md

---
 docs/zh/12-taos-sql/29-changes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/zh/12-taos-sql/29-changes.md b/docs/zh/12-taos-sql/29-changes.md
index 73fa15313b..a797966f57 100644
--- a/docs/zh/12-taos-sql/29-changes.md
+++ b/docs/zh/12-taos-sql/29-changes.md
@@ -27,7 +27,7 @@ description: "TDengine 3.0 版本的语法变更说明"
 | - | :------- | :-------- | :------- |
 | 1 | ALTER ACCOUNT | 废除 | 2.x中为企业版功能,3.0不再支持。语法暂时保留了,执行报"This statement is no longer supported"错误。
 | 2 | ALTER ALL DNODES | 新增 | 修改所有DNODE的参数。
-| 3 | ALTER DATABASE | 调整 | 废除<br/>  • QUORUM:写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。3.0.0版本STRICT暂不支持修改。<br/>  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。<br/>  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。<br/>  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。<br/>  • COMP:3.0版本暂不支持修改。<br/> <br/>    新增<br/>  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。<br/>  • CACHESIZE:表示缓存子表最近数据的内存大小。<br/>  • WAL_FSYNC_PERIOD:代替原FSYNC参数。<br/>  • WAL_LEVEL:代替原WAL参数。<br/>  • WAL_RETENTION_PERIOD:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。<br/>  • WAL_RETENTION_SIZE:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。<br/>    调整<br/>  • REPLICA:3.0.0版本暂不支持修改。<br/>  • KEEP:3.0版本新增支持带单位的设置方式。
+| 3 | ALTER DATABASE | 调整 | 废除<br/>  • QUORUM:写入需要的副本确认数。3.0 版本默认行为是强一致性,且不支持修改为弱一致性。<br/>  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。<br/>  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。<br/>  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。<br/>  • COMP:3.0版本暂不支持修改。<br/> <br/>    新增<br/>  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。<br/>  • CACHESIZE:表示缓存子表最近数据的内存大小。<br/>  • WAL_FSYNC_PERIOD:代替原FSYNC参数。<br/>  • WAL_LEVEL:代替原WAL参数。<br/>  • WAL_RETENTION_PERIOD:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。<br/>  • WAL_RETENTION_SIZE:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。<br/>    调整<br/>  • REPLICA:3.0.0版本暂不支持修改。<br/>  • KEEP:3.0版本新增支持带单位的设置方式。
 | 4 | ALTER STABLE | 调整 | 废除<br/>  • CHANGE TAG:修改标签列的名称。3.0版本使用RENAME TAG代替。<br/>    新增<br/>  • RENAME TAG:代替原CHANGE TAG子句。<br/>  • COMMENT:修改超级表的注释。
 | 5 | ALTER TABLE | 调整 | 废除<br/>  • CHANGE TAG:修改标签列的名称。3.0版本使用RENAME TAG代替。<br/>    新增<br/>  • RENAME TAG:代替原CHANGE TAG子句。<br/>  • COMMENT:修改表的注释。<br/>  • TTL:修改表的生命周期。
 | 6 | ALTER USER | 调整 | 废除<br/>  • PRIVILEGE:修改用户权限。3.0版本使用GRANT和REVOKE来授予和回收权限。<br/>    新增<br/>  • ENABLE:启用或停用此用户。<br/>  • SYSINFO:修改用户是否可查看系统信息。
From dd066e6b0bc51b22f701bac496b66f9e82d7aec2 Mon Sep 17 00:00:00 2001
From: wade zhang <95411902+gccgdb1234@users.noreply.github.com>
Date: Thu, 13 Apr 2023 11:10:19 +0800
Subject: [PATCH 23/24] Update 29-changes.md

---
 docs/en/12-taos-sql/29-changes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/12-taos-sql/29-changes.md b/docs/en/12-taos-sql/29-changes.md
index a695a2cae1..f4606f263f 100644
--- a/docs/en/12-taos-sql/29-changes.md
+++ b/docs/en/12-taos-sql/29-changes.md
@@ -27,7 +27,7 @@ The following data types can be used in the schema for standard tables.
 | - | :------- | :-------- | :------- |
 | 1 | ALTER ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
 | 2 | ALTER ALL DNODES | Added | Modifies the configuration of all dnodes.
-| 3 | ALTER DATABASE | Modified | Deprecated<br/>  • QUORUM: Specified the required number of confirmations. STRICT is now used to specify strong or weak consistency. The STRICT parameter cannot be modified.<br/>  • BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode.<br/>  • UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns.<br/>  • CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST.<br/>  • COMP: Cannot be modified.<br/>    Added<br/>  • CACHEMODEL: Specifies whether to cache the latest subtable data.<br/>  • CACHESIZE: Specifies the size of the cache for the newest subtable data.<br/>  • WAL_FSYNC_PERIOD: Replaces the FSYNC parameter.<br/>  • WAL_LEVEL: Replaces the WAL parameter.<br/>  • WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription.<br/>  • WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription.<br/>    Modified<br/>  • REPLICA: Cannot be modified.<br/>  • KEEP: Now supports units.
+| 3 | ALTER DATABASE | Modified | Deprecated<br/>  • QUORUM: Specified the required number of confirmations. TDengine 3.0 provides strict consistency by default and doesn't allow changing to weak consistency.<br/>  • BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode.<br/>  • UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns.<br/>  • CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST.<br/>  • COMP: Cannot be modified.<br/>    Added<br/>  • CACHEMODEL: Specifies whether to cache the latest subtable data.<br/>  • CACHESIZE: Specifies the size of the cache for the newest subtable data.<br/>  • WAL_FSYNC_PERIOD: Replaces the FSYNC parameter.<br/>  • WAL_LEVEL: Replaces the WAL parameter.<br/>  • WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription.<br/>  • WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription.<br/>    Modified<br/>  • REPLICA: Cannot be modified.<br/>  • KEEP: Now supports units.
 | 4 | ALTER STABLE | Modified | Deprecated<br/>  • CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG.<br/>    Added<br/>  • RENAME TAG: Replaces CHANGE TAG.<br/>  • COMMENT: Specifies comments for a supertable.
 | 5 | ALTER TABLE | Modified | Deprecated<br/>  • CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG.<br/>    Added<br/>  • RENAME TAG: Replaces CHANGE TAG.<br/>  • COMMENT: Specifies comments for a standard table.<br/>  • TTL: Specifies the time-to-live for a standard table.
 | 6 | ALTER USER | Modified | Deprecated<br/>  • PRIVILEGE: Specified user permissions. Replaced by GRANT and REVOKE.<br/>    Added<br/>  • ENABLE: Enables or disables a user.<br/>  • SYSINFO: Specifies whether a user can query system information.
From e214f93d961108abef7648aab93802979b6e9d07 Mon Sep 17 00:00:00 2001
From: Xuefeng Tan <1172915550@qq.com>
Date: Thu, 13 Apr 2023 13:24:53 +0800
Subject: [PATCH 24/24] enh(taosAdapter): make the schemaless automatic database creation configurable (#20903)

---
 cmake/taosadapter_CMakeLists.txt.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cmake/taosadapter_CMakeLists.txt.in b/cmake/taosadapter_CMakeLists.txt.in
index b2f335e1f7..ba937b40c1 100644
--- a/cmake/taosadapter_CMakeLists.txt.in
+++ b/cmake/taosadapter_CMakeLists.txt.in
@@ -2,7 +2,7 @@
 # taosadapter
 ExternalProject_Add(taosadapter
         GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
-        GIT_TAG cb1e89c
+        GIT_TAG e02ddb2
         SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
         BINARY_DIR ""
         #BUILD_IN_SOURCE TRUE
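The commit above only moves the taosadapter submodule pointer; the configuration option it references is not visible in this series. Purely to illustrate the schemaless write path involved, here is a minimal sketch using the Python connector. The connection parameters, measurement, and the assumption that the target database already exists are all illustrative; whether a missing database is created automatically depends on the taosAdapter setting this commit makes configurable, whose exact name is not shown here:

```python
# Illustrative schemaless (line protocol) write; hypothetical names throughout.
import taos

conn = taos.connect(database="test")   # assumes the database already exists
conn.schemaless_insert(
    ["meters,location=beijing voltage=220i,current=11.5 1626006833639000000"],
    taos.SmlProtocol.LINE_PROTOCOL,    # connector enum; check your taospy version
    taos.SmlPrecision.NANO_SECONDS,
)
conn.close()
```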