diff --git a/docs/zh/04-get-started/05-cloud.md b/docs/zh/04-get-started/05-cloud.md index bd76add527..1bca09ee91 100644 --- a/docs/zh/04-get-started/05-cloud.md +++ b/docs/zh/04-get-started/05-cloud.md @@ -15,7 +15,7 @@ TDengine Cloud 大幅减轻了用户在部署、运维等方面的人力负担 要在 TDengine Cloud 注册新用户,请遵循以下简易步骤完成注册流程: -1. 打开浏览器,访问 TDengine Cloud 的首页:https://cloud.taosdata.com,在右边的“注册”部分,填入自己的姓名以及企业邮箱地址,点击“获取验证码”按钮。 +1. 打开浏览器,访问 [TDengine Cloud](https://cloud.taosdata.com),在右边的“注册”部分,填入自己的姓名以及企业邮箱地址,点击“获取验证码”按钮。 2. 检查企业邮箱,找到主题为“你的 TDengine Cloud 注册账户验证码”的邮件。从邮件内容中复制 6 位验证码,并将其粘贴到注册页面上的“验证码”输入框中。接着,点击“注册 TDengine Cloud”按钮,进入客户信息补全页面。 @@ -32,4 +32,4 @@ TDengine Cloud 大幅减轻了用户在部署、运维等方面的人力负担 3. 第 3 步,创建实例。在此步骤中,你需要填写实例的区域、名称、是否选择高可用选项以及计费方案等必填信息。确认无误后,点击“创建”按钮。大约等待 1min,新的TDengine 实例便会创建完成。随后,你可以在控制台中对该实例进行各种操作,如查询数据、创建订阅、创建流等。 -TDengine Cloud 提供多种级别的计费方案,包括入门版、基础版、标准版、专业版和旗舰版,以满足不同客户的需求。如果你觉得现有计费方案无法满足自己的特定需求,请联系 TDengine Cloud 的客户支持团队,他们将为你量身定制计费方案。注册后,你将获得一定的免费额度,以便体验服务 \ No newline at end of file +TDengine Cloud 提供多种级别的计费方案,包括入门版、基础版、标准版、专业版和旗舰版,以满足不同客户的需求。如果你觉得现有计费方案无法满足自己的特定需求,请联系 TDengine Cloud 的客户支持团队,他们将为你量身定制计费方案。注册后,你将获得一定的免费额度,以便体验服务 diff --git a/docs/zh/08-operation/03-deployment.md b/docs/zh/08-operation/03-deployment.md index 83b2c91843..2e0c2a7989 100644 --- a/docs/zh/08-operation/03-deployment.md +++ b/docs/zh/08-operation/03-deployment.md @@ -206,11 +206,11 @@ http { ### 部署 taosX -如果想使用 TDengine 的数据接入能力,需要部署 taosX 服务,关于它的详细说明和部署请参考[taosX 参考手册](../../reference/components/taosx)。 +如果想使用 TDengine 的数据接入能力,需要部署 taosX 服务,关于它的详细说明和部署请参考企业版参考手册。 ### 部署 taosX-Agent -有些数据源如 Pi, OPC 等,因为网络条件和数据源访问的限制,taosX 无法直接访问数据源,这种情况下需要部署一个代理服务 taosX-Agent,关于它的详细说明和部署请参考[taosX-Agent 参考手册](../../reference/components/taosx-agent)。 +有些数据源如 Pi, OPC 等,因为网络条件和数据源访问的限制,taosX 无法直接访问数据源,这种情况下需要部署一个代理服务 taosX-Agent,关于它的详细说明和部署请参考企业版参考手册。 ### 部署 taos-Explorer diff --git a/docs/zh/08-operation/12-multi.md b/docs/zh/08-operation/12-multi.md index 8f11ee4326..a5608ad5fa 100644 --- a/docs/zh/08-operation/12-multi.md +++ b/docs/zh/08-operation/12-multi.md @@ -70,7 +70,7 @@ dataDir /mnt/data6 2 0 |参数名称 | 参数含义 | |:-------------|:-----------------------------------------------| -|s3EndPoint | 用户所在地域的 COS 服务域名,支持 http 和 https,bucket 的区域需要与 endpoint 的保持一致,否则无法访问。例如:http://cos.ap-beijing.myqcloud.com | +|s3EndPoint | 用户所在地域的 COS 服务域名,支持 http 和 https,bucket 的区域需要与 endpoint 的保持一致,否则无法访问。 | |s3AccessKey |冒号分隔的用户 SecretId:SecretKey。例如:AKIDsQmwsfKxTo2A6nGVXZN0UlofKn6JRRSJ:lIdoy99ygEacU7iHfogaN2Xq0yumSm1E | |s3BucketName | 存储桶名称,减号后面是用户注册 COS 服务的 AppId。其中 AppId 是 COS 特有,AWS 和阿里云都没有,配置时需要作为 bucket name 的一部分,使用减号分隔。参数值均为字符串类型,但不需要引号。例如:test0711-1309024725 | |s3UploadDelaySec | data 文件持续多长时间不再变动后上传至 s3,单位:秒。最小值:1;最大值:2592000 (30天),默认值 60 秒 | diff --git a/docs/zh/08-operation/18-dual.md b/docs/zh/08-operation/18-dual.md index 354e715602..c7871a8e1e 100644 --- a/docs/zh/08-operation/18-dual.md +++ b/docs/zh/08-operation/18-dual.md @@ -83,7 +83,7 @@ taosx replica start ```shell taosx replica start -f td1:6030 -t td2:6030 ``` -该示例命令会自动创建除 information_schema、performance_schema、log、audit 库之外的同步任务。可以使用 http://td2:6041 指定该 endpoint 使用 websocket 接口(默认是原生接口)。也可以指定数据库同步:taosx replica start -f td1:6030 -t td2:6030 db1 仅创建指定的数据库同步任务。 +该示例命令会自动创建除 information_schema、performance_schema、log、audit 库之外的同步任务。可以使用 `http://td2:6041` 指定该 endpoint 使用 websocket 接口(默认是原生接口)。也可以指定数据库同步:taosx replica start -f td1:6030 -t td2:6030 db1 仅创建指定的数据库同步任务。 2. 
方法二 diff --git a/docs/zh/14-reference/03-taos-sql/14-stream.md b/docs/zh/14-reference/03-taos-sql/14-stream.md index c0d14f0455..d995c2a09b 100644 --- a/docs/zh/14-reference/03-taos-sql/14-stream.md +++ b/docs/zh/14-reference/03-taos-sql/14-stream.md @@ -99,7 +99,7 @@ PARTITION 子句中,为 tbname 定义了一个别名 tname, 在PARTITION 子 ## 流式计算读取历史数据 -正常情况下,流式计算不会处理创建前已经写入源表中的数据,若要处理已经写入的数据,可以在创建流时设置 fill_history 1 选项,这样创建的流式计算会自动处理创建前、创建中、创建后写入的数据。例如: +正常情况下,流式计算不会处理创建前已经写入源表中的数据,若要处理已经写入的数据,可以在创建流时设置 fill_history 1 选项,这样创建的流式计算会自动处理创建前、创建中、创建后写入的数据。流计算处理历史数据的最大窗口数是2000万,超过限制会报错。例如: ```sql create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 interval(10s) diff --git a/packaging/tools/com.taosdata.taos-explorer.plist b/packaging/tools/com.taosdata.taos-explorer.plist new file mode 100644 index 0000000000..2edb5552ad --- /dev/null +++ b/packaging/tools/com.taosdata.taos-explorer.plist @@ -0,0 +1,33 @@ + + + + + Label + com.tdengine.taos-explorer + ProgramArguments + + /usr/local/bin/taos-explorer + + ProcessType + Interactive + Disabled + + RunAtLoad + + LaunchOnlyOnce + + SessionCreate + + ExitTimeOut + 600 + KeepAlive + + SuccessfulExit + + AfterInitialDemand + + + Program + /usr/local/bin/taos-explorer + + \ No newline at end of file diff --git a/packaging/tools/remove.sh b/packaging/tools/remove.sh index 58a17e2a50..c3f459ca9c 100755 --- a/packaging/tools/remove.sh +++ b/packaging/tools/remove.sh @@ -206,10 +206,17 @@ function clean_log() { } function clean_service_on_launchctl() { - ${csudouser}launchctl unload -w /Library/LaunchDaemons/com.taosdata.taosd.plist > /dev/null 2>&1 || : - ${csudo}rm /Library/LaunchDaemons/com.taosdata.taosd.plist > /dev/null 2>&1 || : - ${csudouser}launchctl unload -w /Library/LaunchDaemons/com.taosdata.${clientName2}adapter.plist > /dev/null 2>&1 || : - ${csudo}rm /Library/LaunchDaemons/com.taosdata.${clientName2}adapter.plist > /dev/null 2>&1 || : + ${csudo}launchctl unload -w /Library/LaunchDaemons/com.taosdata.taosd.plist || : + ${csudo}launchctl unload -w /Library/LaunchDaemons/com.taosdata.${PREFIX}adapter.plist || : + ${csudo}launchctl unload -w /Library/LaunchDaemons/com.taosdata.${PREFIX}keeper.plist || : + ${csudo}launchctl unload -w /Library/LaunchDaemons/com.taosdata.${PREFIX}-explorer.plist || : + + ${csudo}launchctl remove com.tdengine.taosd || : + ${csudo}launchctl remove com.tdengine.${PREFIX}adapter || : + ${csudo}launchctl remove com.tdengine.${PREFIX}keeper || : + ${csudo}launchctl remove com.tdengine.${PREFIX}-explorer || : + + ${csudo}rm /Library/LaunchDaemons/com.taosdata.* > /dev/null 2>&1 || : } function remove_data_and_config() { @@ -250,6 +257,12 @@ if [ -e ${install_main_dir}/uninstall_${PREFIX}x.sh ]; then fi fi + +if [ "$osType" = "Darwin" ]; then + clean_service_on_launchctl + ${csudo}rm -rf /Applications/TDengine.app +fi + remove_bin clean_header # Remove lib file @@ -282,10 +295,7 @@ elif echo $osinfo | grep -qwi "centos"; then # echo "this is centos system" ${csudo}rpm -e --noscripts tdengine >/dev/null 2>&1 || : fi -if [ "$osType" = "Darwin" ]; then - clean_service_on_launchctl - ${csudo}rm -rf /Applications/TDengine.app -fi + command -v systemctl >/dev/null 2>&1 && ${csudo}systemctl daemon-reload >/dev/null 2>&1 || true echo diff --git a/packaging/tools/tdengine.iss b/packaging/tools/tdengine.iss index 8085c55e3e..c3eb6f9f68 100644 --- a/packaging/tools/tdengine.iss +++ b/packaging/tools/tdengine.iss @@ -71,8 +71,8 @@ Source: {#MyAppSourceDir}\taosdump.exe; DestDir: "{app}"; DestName: "{#CusPrompt Filename: 
{sys}\sc.exe; Parameters: "create taosd start= DEMAND binPath= ""C:\\TDengine\\taosd.exe --win_service""" ; Flags: runhidden Filename: {sys}\sc.exe; Parameters: "create taosadapter start= DEMAND binPath= ""C:\\TDengine\\taosadapter.exe""" ; Flags: runhidden -Filename: "C:\Windows\System32\odbcconf.exe"; Parameters: "/S /F win_odbcinst.ini"; WorkingDir: "{app}\taos_odbc\x64"; Flags: runhidden; StatusMsg: "Configuring ODBC x64" -Filename: "C:\Windows\SysWOW64\odbcconf.exe"; Parameters: "/S /F win_odbcinst.ini"; WorkingDir: "{app}\taos_odbc\x86"; Flags: runhidden; StatusMsg: "Configuring ODBC x86" +Filename: "C:\Windows\System32\odbcconf.exe"; Parameters: "/S /F win_odbc_install.ini"; WorkingDir: "{app}\taos_odbc\x64"; Flags: runhidden; StatusMsg: "Configuring ODBC x64" +Filename: "C:\Windows\SysWOW64\odbcconf.exe"; Parameters: "/S /F win_odbc_install.ini"; WorkingDir: "{app}\taos_odbc\x86"; Flags: runhidden; StatusMsg: "Configuring ODBC x86" [UninstallRun] RunOnceId: "stoptaosd"; Filename: {sys}\sc.exe; Parameters: "stop taosd" ; Flags: runhidden diff --git a/source/client/src/clientHb.c b/source/client/src/clientHb.c index 6ee6d753e4..62d8d470ba 100644 --- a/source/client/src/clientHb.c +++ b/source/client/src/clientHb.c @@ -55,7 +55,7 @@ static int32_t hbProcessUserAuthInfoRsp(void *value, int32_t valueLen, struct SC for (int32_t i = 0; i < numOfBatchs; ++i) { SGetUserAuthRsp *rsp = taosArrayGet(batchRsp.pArray, i); if (NULL == rsp) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _return; } tscDebug("hb to update user auth, user:%s, version:%d", rsp->user, rsp->version); @@ -217,7 +217,7 @@ static int32_t hbProcessDBInfoRsp(void *value, int32_t valueLen, struct SCatalog for (int32_t i = 0; i < numOfBatchs; ++i) { SDbHbRsp *rsp = taosArrayGet(batchRsp.pArray, i); if (NULL == rsp) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _return; } if (rsp->useDbRsp) { @@ -291,7 +291,7 @@ static int32_t hbProcessStbInfoRsp(void *value, int32_t valueLen, struct SCatalo for (int32_t i = 0; i < numOfMeta; ++i) { STableMetaRsp *rsp = taosArrayGet(hbRsp.pMetaRsp, i); if (NULL == rsp) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _return; } if (rsp->numOfColumns < 0) { @@ -313,7 +313,7 @@ static int32_t hbProcessStbInfoRsp(void *value, int32_t valueLen, struct SCatalo for (int32_t i = 0; i < numOfIndex; ++i) { STableIndexRsp *rsp = taosArrayGet(hbRsp.pIndexRsp, i); if (NULL == rsp) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _return; } TSC_ERR_JRET(catalogUpdateTableIndex(pCatalog, rsp)); @@ -354,7 +354,7 @@ static int32_t hbProcessViewInfoRsp(void *value, int32_t valueLen, struct SCatal for (int32_t i = 0; i < numOfMeta; ++i) { SViewMetaRsp *rsp = taosArrayGetP(hbRsp.pViewRsp, i); if (NULL == rsp) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _return; } if (rsp->numOfCols < 0) { diff --git a/source/client/src/clientImpl.c b/source/client/src/clientImpl.c index 15bd5795e2..774cac750b 100644 --- a/source/client/src/clientImpl.c +++ b/source/client/src/clientImpl.c @@ -949,7 +949,7 @@ int32_t handleQueryExecRes(SRequestObj* pRequest, void* res, SCatalog* pCatalog, for (int32_t i = 0; i < tbNum; ++i) { STbVerInfo* tbInfo = taosArrayGet(pTbArray, i); if (NULL == tbInfo) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _return; } STbSVersion tbSver = {.tbFName = tbInfo->tbFName, .sver = tbInfo->sversion, .tver = tbInfo->tversion}; diff --git a/source/client/src/clientRawBlockWrite.c b/source/client/src/clientRawBlockWrite.c index 378d143047..b55cc75340 
100644 --- a/source/client/src/clientRawBlockWrite.c +++ b/source/client/src/clientRawBlockWrite.c @@ -1922,7 +1922,7 @@ static int32_t tmqWriteRawMetaDataImpl(TAOS* taos, void* data, int32_t dataLen) const char* tbName = (const char*)taosArrayGetP(rspObj.dataRsp.blockTbName, rspObj.resIter); if (!tbName) { SET_ERROR_MSG("block tbname is null"); - code = TSDB_CODE_TMQ_INVALID_MSG; + code = terrno; goto end; } diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index f3a22bff75..e4e5a54a0b 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -393,7 +393,7 @@ int32_t smlProcessChildTable(SSmlHandle *info, SSmlLineInfo *elements) { tinfo->tags = taosArrayDup(info->preLineTagKV, NULL); if (tinfo->tags == NULL) { smlDestroyTableInfo(&tinfo); - return TSDB_CODE_OUT_OF_MEMORY; + return terrno; } for (size_t i = 0; i < taosArrayGetSize(info->preLineTagKV); i++) { SSmlKv *kv = (SSmlKv *)taosArrayGet(info->preLineTagKV, i); @@ -561,7 +561,7 @@ int32_t smlSetCTableName(SSmlTableInfo *oneTable, char *tbnameKey) { if (strlen(oneTable->childTableName) == 0) { SArray *dst = taosArrayDup(oneTable->tags, NULL); if (dst == NULL) { - return TSDB_CODE_OUT_OF_MEMORY; + return terrno; } if (oneTable->sTableNameLen >= TSDB_TABLE_NAME_LEN) { uError("SML:smlSetCTableName super table name is too long"); @@ -957,7 +957,7 @@ static int32_t smlCheckMeta(SSchema *schema, int32_t length, SArray *cols, bool for (; i < taosArrayGetSize(cols); i++) { SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, i); if (kv == NULL) { - code = TSDB_CODE_SML_INVALID_DATA; + code = terrno; goto END; } if (taosHashGet(hashTmp, kv->key, kv->keyLen) == NULL) { @@ -1053,7 +1053,7 @@ static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SArray *pColumns, for (int32_t i = 0; i < pReq.numOfColumns; ++i) { SField *pField = taosArrayGet(pColumns, i); if (pField == NULL) { - code = TSDB_CODE_SML_INVALID_DATA; + code = terrno; goto end; } SFieldWithOptions fieldWithOption = {0}; diff --git a/source/client/src/clientStmt.c b/source/client/src/clientStmt.c index 866d0cc272..f3d765af2f 100644 --- a/source/client/src/clientStmt.c +++ b/source/client/src/clientStmt.c @@ -983,7 +983,7 @@ int stmtSetDbName(TAOS_STMT* stmt, const char* dbName) { taosMemoryFreeClear(pStmt->exec.pRequest->pDb); pStmt->exec.pRequest->pDb = taosStrdup(dbName); if (pStmt->exec.pRequest->pDb == NULL) { - return TSDB_CODE_OUT_OF_MEMORY; + return terrno; } return TSDB_CODE_SUCCESS; } diff --git a/source/client/src/clientStmt2.c b/source/client/src/clientStmt2.c index a0fd49ac86..841171bacf 100644 --- a/source/client/src/clientStmt2.c +++ b/source/client/src/clientStmt2.c @@ -850,7 +850,7 @@ static int stmtSetDbName2(TAOS_STMT2* stmt, const char* dbName) { taosMemoryFreeClear(pStmt->exec.pRequest->pDb); pStmt->exec.pRequest->pDb = taosStrdup(dbName); if (pStmt->exec.pRequest->pDb == NULL) { - return TSDB_CODE_OUT_OF_MEMORY; + return terrno; } return TSDB_CODE_SUCCESS; } diff --git a/source/client/src/clientTmq.c b/source/client/src/clientTmq.c index 975d14f3ee..fcdc5bc236 100644 --- a/source/client/src/clientTmq.c +++ b/source/client/src/clientTmq.c @@ -134,6 +134,7 @@ struct tmq_t { // poll info int64_t pollCnt; int64_t totalRows; + int8_t pollFlag; // timer tmr_h hbLiveTimer; @@ -287,7 +288,6 @@ typedef struct { static TdThreadOnce tmqInit = PTHREAD_ONCE_INIT; // initialize only once volatile int32_t tmqInitRes = 0; // initialize rsp code static SMqMgmt tmqMgmt = {0}; -static int8_t pollFlag = 0; tmq_conf_t* tmq_conf_new() { 
tmq_conf_t* conf = taosMemoryCalloc(1, sizeof(tmq_conf_t)); @@ -826,7 +826,7 @@ static int32_t innerCommitAll(tmq_t* tmq, SMqCommitCbParamSet* pParamSet){ for (int32_t j = 0; j < numOfVgroups; j++) { SMqClientVg* pVg = taosArrayGet(pTopic->vgs, j); if (pVg == NULL) { - code = TSDB_CODE_INVALID_PARA; + code = terrno; goto END; } @@ -977,7 +977,7 @@ void tmqSendHbReq(void* param, void* tmrId) { SMqHbReq req = {0}; req.consumerId = tmq->consumerId; req.epoch = tmq->epoch; - req.pollFlag = atomic_load_8(&pollFlag); + req.pollFlag = atomic_load_8(&tmq->pollFlag); req.topics = taosArrayInit(taosArrayGetSize(tmq->clientTopics), sizeof(TopicOffsetRows)); if (req.topics == NULL) { goto END; @@ -1057,7 +1057,7 @@ void tmqSendHbReq(void* param, void* tmrId) { if (code != 0) { tqErrorC("tmqSendHbReq asyncSendMsgToServer failed"); } - (void)atomic_val_compare_exchange_8(&pollFlag, 1, 0); + (void)atomic_val_compare_exchange_8(&tmq->pollFlag, 1, 0); END: tDestroySMqHbReq(&req); @@ -1640,6 +1640,7 @@ tmq_t* tmq_consumer_new(tmq_conf_t* conf, char* errstr, int32_t errstrLen) { pTmq->status = TMQ_CONSUMER_STATUS__INIT; pTmq->pollCnt = 0; pTmq->epoch = 0; + pTmq->pollFlag = 0; // set conf tstrncpy(pTmq->clientId, conf->clientId, TSDB_CLIENT_ID_LEN); @@ -2441,7 +2442,7 @@ TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t timeout) { return NULL; } - (void)atomic_val_compare_exchange_8(&pollFlag, 0, 1); + (void)atomic_val_compare_exchange_8(&tmq->pollFlag, 0, 1); while (1) { tmqHandleAllDelayedTask(tmq); diff --git a/source/common/src/rsync.c b/source/common/src/rsync.c index 47a452eab7..eef889429b 100644 --- a/source/common/src/rsync.c +++ b/source/common/src/rsync.c @@ -94,7 +94,7 @@ static int32_t generateConfigFile(char* confDir) { #endif ); uDebug("[rsync] conf:%s", confContent); - if (taosWriteFile(pFile, confContent, strlen(confContent)) != TSDB_CODE_SUCCESS) { + if (taosWriteFile(pFile, confContent, strlen(confContent)) <= 0) { uError("[rsync] write conf file error," ERRNO_ERR_FORMAT, ERRNO_ERR_DATA); (void)taosCloseFile(&pFile); code = terrno; diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index af1a8ccfbe..ce152c8e10 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -362,7 +362,7 @@ static int32_t taosSplitS3Cfg(SConfig *pCfg, const char *name, char gVarible[TSD char *strDup = NULL; if ((strDup = taosStrdup(pItem->str))== NULL){ - code = TSDB_CODE_OUT_OF_MEMORY; + code = terrno; goto _exit; } diff --git a/source/common/src/tmisce.c b/source/common/src/tmisce.c index 8de557a881..10375ba857 100644 --- a/source/common/src/tmisce.c +++ b/source/common/src/tmisce.c @@ -284,7 +284,7 @@ int32_t dumpConfToDataBlock(SSDataBlock* pBlock, int32_t startCol) { SColumnInfoData* pColInfo = taosArrayGet(pBlock->pDataBlock, col++); if (pColInfo == NULL) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; TAOS_CHECK_GOTO(code, NULL, _exit); } @@ -297,7 +297,7 @@ int32_t dumpConfToDataBlock(SSDataBlock* pBlock, int32_t startCol) { pColInfo = taosArrayGet(pBlock->pDataBlock, col++); if (pColInfo == NULL) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; TAOS_CHECK_GOTO(code, NULL, _exit); } @@ -309,7 +309,7 @@ int32_t dumpConfToDataBlock(SSDataBlock* pBlock, int32_t startCol) { pColInfo = taosArrayGet(pBlock->pDataBlock, col++); if (pColInfo == NULL) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; TAOS_CHECK_GOTO(code, NULL, _exit); } TAOS_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, scope, false), NULL, _exit); diff --git a/source/dnode/mgmt/node_mgmt/src/dmMgmt.c 
b/source/dnode/mgmt/node_mgmt/src/dmMgmt.c index f77571c665..277dd2e02a 100644 --- a/source/dnode/mgmt/node_mgmt/src/dmMgmt.c +++ b/source/dnode/mgmt/node_mgmt/src/dmMgmt.c @@ -65,7 +65,7 @@ int32_t dmInitDnode(SDnode *pDnode) { snprintf(path, sizeof(path), "%s%s%s", tsDataDir, TD_DIRSEP, pWrapper->name); pWrapper->path = taosStrdup(path); if (pWrapper->path == NULL) { - code = TSDB_CODE_OUT_OF_MEMORY; + code = terrno; goto _OVER; } diff --git a/source/dnode/mnode/impl/src/mndIndex.c b/source/dnode/mnode/impl/src/mndIndex.c index 0b3a0998f0..718c34e85a 100644 --- a/source/dnode/mnode/impl/src/mndIndex.c +++ b/source/dnode/mnode/impl/src/mndIndex.c @@ -157,7 +157,7 @@ static void *mndBuildDropIdxReq(SMnode *pMnode, SVgObj *pVgroup, SStbObj *pStbOb pHead->contLen = htonl(len); pHead->vgId = htonl(pVgroup->vgId); - void *pBuf = POINTER_SHIFT(pHead, sizeof(SMsgHead)); + void *pBuf = POINTER_SHIFT(pHead, sizeof(SMsgHead)); int32_t ret = 0; if ((ret = tSerializeSDropIdxReq(pBuf, len - sizeof(SMsgHead), &req)) < 0) { terrno = ret; @@ -662,6 +662,8 @@ static int32_t mndSetUpdateIdxStbCommitLogs(SMnode *pMnode, STrans *pTrans, SStb pNew->pTags = NULL; pNew->pColumns = NULL; + pNew->pCmpr = NULL; + pNew->pTags = NULL; pNew->updateTime = taosGetTimestampMs(); pNew->lock = 0; diff --git a/source/dnode/mnode/impl/src/mndMain.c b/source/dnode/mnode/impl/src/mndMain.c index 451a83a02b..617fae4d3c 100644 --- a/source/dnode/mnode/impl/src/mndMain.c +++ b/source/dnode/mnode/impl/src/mndMain.c @@ -496,7 +496,7 @@ static int32_t mndCreateDir(SMnode *pMnode, const char *path) { int32_t code = 0; pMnode->path = taosStrdup(path); if (pMnode->path == NULL) { - code = TSDB_CODE_OUT_OF_MEMORY; + code = terrno; TAOS_RETURN(code); } diff --git a/source/dnode/mnode/impl/src/mndSma.c b/source/dnode/mnode/impl/src/mndSma.c index a258155223..a3b3ec01fb 100644 --- a/source/dnode/mnode/impl/src/mndSma.c +++ b/source/dnode/mnode/impl/src/mndSma.c @@ -2350,7 +2350,7 @@ int32_t dumpTSMAInfoFromSmaObj(const SSmaObj* pSma, const SStbObj* pDestStb, STa nodesDestroyNode(pNode); } pInfo->ast = taosStrdup(pSma->ast); - if (!pInfo->ast) code = TSDB_CODE_OUT_OF_MEMORY; + if (!pInfo->ast) code = terrno; if (code == TSDB_CODE_SUCCESS && pDestStb->numOfTags > 0) { pInfo->pTags = taosArrayInit(pDestStb->numOfTags, sizeof(SSchema)); diff --git a/source/dnode/mnode/impl/src/mndStreamTransAct.c b/source/dnode/mnode/impl/src/mndStreamTransAct.c index 3ecd192222..4e0bf97587 100644 --- a/source/dnode/mnode/impl/src/mndStreamTransAct.c +++ b/source/dnode/mnode/impl/src/mndStreamTransAct.c @@ -234,6 +234,7 @@ static int32_t doSetUpdateTaskAction(SMnode *pMnode, STrans *pTrans, SStreamTask code = extractNodeEpset(pMnode, &epset, &hasEpset, pTask->id.taskId, pTask->info.nodeId); if (code != TSDB_CODE_SUCCESS || !hasEpset) { mError("failed to extract epset during create update epset, code:%s", tstrerror(code)); + taosMemoryFree(pBuf); return code; } diff --git a/source/dnode/mnode/impl/src/mndTrans.c b/source/dnode/mnode/impl/src/mndTrans.c index 40bb99d6b5..e16c6efa47 100644 --- a/source/dnode/mnode/impl/src/mndTrans.c +++ b/source/dnode/mnode/impl/src/mndTrans.c @@ -1495,7 +1495,7 @@ static int32_t mndTransExecuteActionsSerial(SMnode *pMnode, STrans *pTrans, SArr return code; } - mInfo("trans:%d, execute %d actions serial, current redoAction:%d", pTrans->id, numOfActions, pTrans->actionPos); + mInfo("trans:%d, execute %d actions serial, current action:%d", pTrans->id, numOfActions, pTrans->actionPos); for (int32_t action = pTrans->actionPos; 
action < numOfActions; ++action) { STransAction *pAction = taosArrayGet(pActions, action); @@ -1768,7 +1768,8 @@ static bool mndTransPerformRollbackStage(SMnode *pMnode, STrans *pTrans, bool to if (code == 0) { pTrans->stage = TRN_STAGE_UNDO_ACTION; - mInfo("trans:%d, stage from rollback to undoAction", pTrans->id); + pTrans->actionPos = 0; + mInfo("trans:%d, stage from rollback to undoAction, actionPos:%d", pTrans->id, pTrans->actionPos); continueExec = true; } else { pTrans->failedTimes++; diff --git a/source/dnode/mnode/impl/src/mndUser.c b/source/dnode/mnode/impl/src/mndUser.c index 99472ca457..63390d4772 100644 --- a/source/dnode/mnode/impl/src/mndUser.c +++ b/source/dnode/mnode/impl/src/mndUser.c @@ -594,7 +594,7 @@ int32_t mndFetchAllIpWhite(SMnode *pMnode, SHashObj **ppIpWhiteTab) { if (name == NULL) { sdbRelease(pSdb, pUser); sdbCancelFetch(pSdb, pIter); - TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, &lino, _OVER); + TAOS_CHECK_GOTO(terrno, &lino, _OVER); } if (taosArrayPush(pUserNames, &name) == NULL) { taosMemoryFree(name); @@ -617,7 +617,7 @@ int32_t mndFetchAllIpWhite(SMnode *pMnode, SHashObj **ppIpWhiteTab) { if (found == false) { char *name = taosStrdup(TSDB_DEFAULT_USER); if (name == NULL) { - TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, &lino, _OVER); + TAOS_CHECK_GOTO(terrno, &lino, _OVER); } if (taosArrayPush(pUserNames, &name) == NULL) { taosMemoryFree(name); diff --git a/source/dnode/vnode/src/tsdb/tsdbRead2.c b/source/dnode/vnode/src/tsdb/tsdbRead2.c index 36bfb56120..d4b906fe2a 100644 --- a/source/dnode/vnode/src/tsdb/tsdbRead2.c +++ b/source/dnode/vnode/src/tsdb/tsdbRead2.c @@ -596,7 +596,7 @@ static int32_t tsdbReaderCreate(SVnode* pVnode, SQueryTableDataCond* pCond, void pReader->status.pPrimaryTsCol = taosArrayGet(pReader->resBlockInfo.pResBlock->pDataBlock, pSup->slotId[0]); if (pReader->status.pPrimaryTsCol == NULL) { - code = TSDB_CODE_INVALID_PARA; + code = terrno; goto _end; } diff --git a/source/dnode/vnode/src/tsdb/tsdbSnapshot.c b/source/dnode/vnode/src/tsdb/tsdbSnapshot.c index d0ea58c28a..e8740a0650 100644 --- a/source/dnode/vnode/src/tsdb/tsdbSnapshot.c +++ b/source/dnode/vnode/src/tsdb/tsdbSnapshot.c @@ -623,6 +623,7 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) { int32_t lino = 0; if (writer->ctx->fset) { +#if 0 // open data reader SDataFileReaderConfig dataFileReaderConfig = { .tsdb = writer->tsdb, @@ -650,6 +651,7 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) { code = tsdbDataFileReaderOpen(NULL, &dataFileReaderConfig, &writer->ctx->dataReader); TSDB_CHECK_CODE(code, lino, _exit); +#endif // open stt reader array SSttLvl* lvl; @@ -791,6 +793,15 @@ static int32_t tsdbSnapWriteFileSetOpenWriter(STsdbSnapWriter* writer) { .did = writer->ctx->did, .level = 0, }; + // merge stt files to either data or a new stt file + if (writer->ctx->fset) { + for (int32_t ftype = 0; ftype < TSDB_FTYPE_MAX; ++ftype) { + if (writer->ctx->fset->farr[ftype] != NULL) { + config.files[ftype].exist = true; + config.files[ftype].file = writer->ctx->fset->farr[ftype]->f[0]; + } + } + } code = tsdbFSetWriterOpen(&config, &writer->ctx->fsetWriter); TSDB_CHECK_CODE(code, lino, _exit); @@ -842,6 +853,8 @@ static int32_t tsdbSnapWriteFileSetBegin(STsdbSnapWriter* writer, int32_t fid) { _exit: if (code) { TSDB_ERROR_LOG(TD_VID(writer->tsdb->pVnode), lino, code); + } else { + tsdbInfo("vgId:%d %s succeeded, fid:%d", TD_VID(writer->tsdb->pVnode), __func__, fid); } return code; } @@ -922,6 +935,8 @@ static int32_t 
tsdbSnapWriteFileSetEnd(STsdbSnapWriter* writer) { _exit: if (code) { TSDB_ERROR_LOG(TD_VID(writer->tsdb->pVnode), lino, code); + } else { + tsdbInfo("vgId:%d %s succeeded, fid:%d", TD_VID(writer->tsdb->pVnode), __func__, writer->ctx->fid); } return code; } @@ -1175,7 +1190,7 @@ _exit: if (code) { TSDB_ERROR_LOG(TD_VID(tsdb->pVnode), lino, code); } else { - tsdbInfo("vgId:%d %s done", TD_VID(tsdb->pVnode), __func__); + tsdbInfo("vgId:%d %s done, rollback:%d", TD_VID(tsdb->pVnode), __func__, rollback); } return code; } diff --git a/source/dnode/vnode/src/vnd/vnodeCommit.c b/source/dnode/vnode/src/vnd/vnodeCommit.c index 438083f9b9..4a4d305f25 100644 --- a/source/dnode/vnode/src/vnd/vnodeCommit.c +++ b/source/dnode/vnode/src/vnd/vnodeCommit.c @@ -102,9 +102,8 @@ static int32_t vnodeGetBufPoolToUse(SVnode *pVnode) { ts.tv_sec = tv.tv_sec; } - int32_t rc = taosThreadCondTimedWait(&pVnode->poolNotEmpty, &pVnode->mutex, &ts); - if (rc && rc != ETIMEDOUT) { - code = TAOS_SYSTEM_ERROR(rc); + code = taosThreadCondTimedWait(&pVnode->poolNotEmpty, &pVnode->mutex, &ts); + if (code && code != TSDB_CODE_TIMEOUT_ERROR) { TSDB_CHECK_CODE(code, lino, _exit); } } diff --git a/source/dnode/vnode/src/vnd/vnodeQuery.c b/source/dnode/vnode/src/vnd/vnodeQuery.c index d616bfd4ce..7c6a2e7313 100644 --- a/source/dnode/vnode/src/vnd/vnodeQuery.c +++ b/source/dnode/vnode/src/vnd/vnodeQuery.c @@ -254,7 +254,7 @@ int32_t vnodeGetTableCfg(SVnode *pVnode, SRpcMsg *pMsg, bool direct) { if (mer1.me.ctbEntry.commentLen > 0) { cfgRsp.pComment = taosStrdup(mer1.me.ctbEntry.comment); if (NULL == cfgRsp.pComment) { - code = TSDB_CODE_OUT_OF_MEMORY; + code = terrno; goto _exit; } } @@ -273,7 +273,7 @@ int32_t vnodeGetTableCfg(SVnode *pVnode, SRpcMsg *pMsg, bool direct) { if (mer1.me.ntbEntry.commentLen > 0) { cfgRsp.pComment = taosStrdup(mer1.me.ntbEntry.comment); if (NULL == cfgRsp.pComment) { - code = TSDB_CODE_OUT_OF_MEMORY; + code = terrno; goto _exit; } } @@ -399,7 +399,7 @@ int32_t vnodeGetBatchMeta(SVnode *pVnode, SRpcMsg *pMsg) { for (int32_t i = 0; i < msgNum; ++i) { req = taosArrayGet(batchReq.pMsgs, i); if (req == NULL) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } diff --git a/source/libs/catalog/src/ctgUtil.c b/source/libs/catalog/src/ctgUtil.c index 86a38017bd..e7759bcc7d 100644 --- a/source/libs/catalog/src/ctgUtil.c +++ b/source/libs/catalog/src/ctgUtil.c @@ -1108,7 +1108,7 @@ int32_t ctgUpdateMsgCtx(SCtgMsgCtx* pCtx, int32_t reqType, void* out, char* targ if (target) { pCtx->target = taosStrdup(target); if (NULL == pCtx->target) { - CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY); + CTG_ERR_RET(terrno); } } else { pCtx->target = NULL; @@ -1125,7 +1125,7 @@ int32_t ctgAddMsgCtx(SArray* pCtxs, int32_t reqType, void* out, char* target) { if (target) { ctx.target = taosStrdup(target); if (NULL == ctx.target) { - CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY); + CTG_ERR_RET(terrno); } } @@ -1631,7 +1631,7 @@ int32_t ctgCloneVgInfo(SDBVgInfo* src, SDBVgInfo** dst) { if (NULL == (*dst)->vgArray) { taosHashCleanup((*dst)->vgHash); taosMemoryFreeClear(*dst); - CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY); + CTG_ERR_RET(terrno); } } @@ -1698,7 +1698,7 @@ int32_t ctgCloneTableIndex(SArray* pIndex, SArray** pRes) { } pInfo->expr = taosStrdup(pInfo->expr); if (NULL == pInfo->expr) { - CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY); + CTG_ERR_RET(terrno); } } @@ -1712,7 +1712,7 @@ int32_t ctgUpdateSendTargetInfo(SMsgSendInfo* pMsgSendInfo, int32_t msgType, cha pMsgSendInfo->target.vgId = vgId; pMsgSendInfo->target.dbFName = taosStrdup(dbFName); 
if (NULL == pMsgSendInfo->target.dbFName) { - CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY); + CTG_ERR_RET(terrno); } } else { pMsgSendInfo->target.type = TARGET_TYPE_MNODE; diff --git a/source/libs/executor/src/groupcacheoperator.c b/source/libs/executor/src/groupcacheoperator.c index d785a1e619..13aff27d68 100644 --- a/source/libs/executor/src/groupcacheoperator.c +++ b/source/libs/executor/src/groupcacheoperator.c @@ -522,7 +522,7 @@ static int32_t buildGroupCacheBaseBlock(SSDataBlock** ppDst, SSDataBlock* pSrc) (*ppDst)->pDataBlock = taosArrayDup(pSrc->pDataBlock, NULL); if (NULL == (*ppDst)->pDataBlock) { taosMemoryFree(*ppDst); - return TSDB_CODE_OUT_OF_MEMORY; + return terrno; } TAOS_MEMCPY(&(*ppDst)->info, &pSrc->info, sizeof(pSrc->info)); blockDataDeepClear(*ppDst); diff --git a/source/libs/executor/src/streamtimewindowoperator.c b/source/libs/executor/src/streamtimewindowoperator.c index e8322d6911..6fc50bb860 100644 --- a/source/libs/executor/src/streamtimewindowoperator.c +++ b/source/libs/executor/src/streamtimewindowoperator.c @@ -49,7 +49,7 @@ #define STREAM_SESSION_OP_CHECKPOINT_NAME "StreamSessionOperator_Checkpoint" #define STREAM_STATE_OP_CHECKPOINT_NAME "StreamStateOperator_Checkpoint" -#define MAX_STREAM_HISTORY_RESULT 100000000 +#define MAX_STREAM_HISTORY_RESULT 20000000 typedef struct SStateWindowInfo { SResultWindowInfo winInfo; diff --git a/source/libs/function/src/builtins.c b/source/libs/function/src/builtins.c index be65f4cf06..30f51ab0df 100644 --- a/source/libs/function/src/builtins.c +++ b/source/libs/function/src/builtins.c @@ -238,7 +238,7 @@ static int32_t addTimezoneParam(SNodeList* pList) { return terrno; } varDataSetLen(pVal->datum.p, len); - (void)strncpy(varDataVal(pVal->datum.p), pVal->literal, len); + tstrncpy(varDataVal(pVal->datum.p), pVal->literal, len + 1); code = nodesListAppend(pList, (SNode*)pVal); if (TSDB_CODE_SUCCESS != code) { @@ -4881,6 +4881,48 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .sprocessFunc = randFunction, .finalizeFunc = NULL }, + { + .name = "forecast", + .type = FUNCTION_TYPE_FORECAST, + .classification = FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC | + FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_SYSTABLE_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_PRIMARY_KEY_FUNC, + .translateFunc = translateForecast, + .getEnvFunc = getSelectivityFuncEnv, + .initFunc = functionSetup, + .processFunc = NULL, + .finalizeFunc = NULL, + .estimateReturnRowsFunc = forecastEstReturnRows, + }, + { + .name = "_frowts", + .type = FUNCTION_TYPE_FORECAST_ROWTS, + .classification = FUNC_MGT_PSEUDO_COLUMN_FUNC | FUNC_MGT_FORECAST_PC_FUNC | FUNC_MGT_KEEP_ORDER_FUNC, + .translateFunc = translateTimePseudoColumn, + .getEnvFunc = getTimePseudoFuncEnv, + .initFunc = NULL, + .sprocessFunc = NULL, + .finalizeFunc = NULL + }, + { + .name = "_flow", + .type = FUNCTION_TYPE_FORECAST_LOW, + .classification = FUNC_MGT_PSEUDO_COLUMN_FUNC | FUNC_MGT_FORECAST_PC_FUNC | FUNC_MGT_KEEP_ORDER_FUNC, + .translateFunc = translateForecastConf, + .getEnvFunc = getForecastConfEnv, + .initFunc = NULL, + .sprocessFunc = NULL, + .finalizeFunc = NULL + }, + { + .name = "_fhigh", + .type = FUNCTION_TYPE_FORECAST_HIGH, + .classification = FUNC_MGT_PSEUDO_COLUMN_FUNC | FUNC_MGT_FORECAST_PC_FUNC | FUNC_MGT_KEEP_ORDER_FUNC, + .translateFunc = translateForecastConf, + .getEnvFunc = getForecastConfEnv, + .initFunc = NULL, + .sprocessFunc = NULL, + .finalizeFunc = NULL + }, }; // clang-format on diff --git a/source/libs/function/src/builtinsimpl.c 
b/source/libs/function/src/builtinsimpl.c index 4a8992e1c0..f13685239a 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -2154,7 +2154,7 @@ int32_t percentileFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { int32_t slotId = pCtx->pExpr->base.resSchema.slotId; SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId); if (NULL == pCol) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _fin_error; } @@ -3682,7 +3682,7 @@ int32_t diffFunctionByRow(SArray* pCtxArray) { for (int i = 0; i < diffColNum; ++i) { SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i); if (NULL == pCtx) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } funcInputUpdate(pCtx); @@ -3696,7 +3696,7 @@ int32_t diffFunctionByRow(SArray* pCtxArray) { SqlFunctionCtx* pCtx0 = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, 0); SFuncInputRow* pRow0 = (SFuncInputRow*)taosArrayGet(pRows, 0); if (NULL == pCtx0 || NULL == pRow0) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } int32_t startOffset = pCtx0->offset; @@ -3714,7 +3714,7 @@ int32_t diffFunctionByRow(SArray* pCtxArray) { SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i); SFuncInputRow* pRow = (SFuncInputRow*)taosArrayGet(pRows, i); if (NULL == pCtx || NULL == pRow) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } code = funcInputGetNextRow(pCtx, pRow, &result); @@ -3737,7 +3737,7 @@ int32_t diffFunctionByRow(SArray* pCtxArray) { SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i); SFuncInputRow* pRow = (SFuncInputRow*)taosArrayGet(pRows, i); if (NULL == pCtx || NULL == pRow) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } if ((keepNull || hasNotNullValue) && !isFirstRow(pCtx, pRow)){ @@ -3759,7 +3759,7 @@ int32_t diffFunctionByRow(SArray* pCtxArray) { for (int i = 0; i < diffColNum; ++i) { SqlFunctionCtx* pCtx = *(SqlFunctionCtx**)taosArrayGet(pCtxArray, i); if (NULL == pCtx) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx); @@ -4436,7 +4436,7 @@ int32_t spreadPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { int32_t code = TSDB_CODE_SUCCESS; SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId); if (NULL == pCol) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } @@ -4626,7 +4626,7 @@ int32_t elapsedPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { int32_t code = TSDB_CODE_SUCCESS; SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId); if (NULL == pCol) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } @@ -4976,10 +4976,10 @@ int32_t histogramFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { int32_t len; char buf[512] = {0}; if (!pInfo->normalized) { - len = sprintf(varDataVal(buf), "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%" PRId64 "}", + len = snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%" PRId64 "}", pInfo->bins[i].lower, pInfo->bins[i].upper, pInfo->bins[i].count); } else { - len = sprintf(varDataVal(buf), "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%lf}", pInfo->bins[i].lower, + len = snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%lf}", pInfo->bins[i].lower, pInfo->bins[i].upper, pInfo->bins[i].percentage); } varDataSetLen(buf, len); @@ -5009,7 +5009,7 @@ int32_t histogramPartialFinalize(SqlFunctionCtx* 
pCtx, SSDataBlock* pBlock) { int32_t code = TSDB_CODE_SUCCESS; SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId); if (NULL == pCol) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } code = colDataSetVal(pCol, pBlock->info.rows, res, false); @@ -5242,7 +5242,7 @@ int32_t hllPartialFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { int32_t code = TSDB_CODE_SUCCESS; SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId); if (NULL == pCol) { - code = TSDB_CODE_OUT_OF_RANGE; + code = terrno; goto _exit; } @@ -6607,7 +6607,7 @@ int32_t blockDistFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { compRatio = pData->totalSize * 100 / (double)totalRawSize; } - int32_t len = sprintf(st + VARSTR_HEADER_SIZE, + int32_t len = snprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "Total_Blocks=[%d] Total_Size=[%.2f KiB] Average_size=[%.2f KiB] Compression_Ratio=[%.2f %c]", pData->numOfBlocks, pData->totalSize / 1024.0, averageSize / 1024.0, compRatio, '%'); @@ -6622,7 +6622,7 @@ int32_t blockDistFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { avgRows = pData->totalRows / pData->numOfBlocks; } - len = sprintf(st + VARSTR_HEADER_SIZE, "Block_Rows=[%" PRId64 "] MinRows=[%d] MaxRows=[%d] AvgRows=[%" PRId64 "]", + len = snprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "Block_Rows=[%" PRId64 "] MinRows=[%d] MaxRows=[%d] AvgRows=[%" PRId64 "]", pData->totalRows, pData->minRows, pData->maxRows, avgRows); varDataSetLen(st, len); code = colDataSetVal(pColInfo, row++, st, false); @@ -6630,14 +6630,14 @@ int32_t blockDistFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return code; } - len = sprintf(st + VARSTR_HEADER_SIZE, "Inmem_Rows=[%d] Stt_Rows=[%d] ", pData->numOfInmemRows, pData->numOfSttRows); + len = snprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "Inmem_Rows=[%d] Stt_Rows=[%d] ", pData->numOfInmemRows, pData->numOfSttRows); varDataSetLen(st, len); code = colDataSetVal(pColInfo, row++, st, false); if (TSDB_CODE_SUCCESS != code) { return code; } - len = sprintf(st + VARSTR_HEADER_SIZE, "Total_Tables=[%d] Total_Filesets=[%d] Total_Vgroups=[%d]", pData->numOfTables, + len = snprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "Total_Tables=[%d] Total_Filesets=[%d] Total_Vgroups=[%d]", pData->numOfTables, pData->numOfFiles, pData->numOfVgroups); varDataSetLen(st, len); @@ -6646,7 +6646,7 @@ int32_t blockDistFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return code; } - len = sprintf(st + VARSTR_HEADER_SIZE, + len = snprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "--------------------------------------------------------------------------------"); varDataSetLen(st, len); code = colDataSetVal(pColInfo, row++, st, false); @@ -6673,7 +6673,7 @@ int32_t blockDistFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { int32_t bucketRange = ceil(((double) (pData->defMaxRows - pData->defMinRows)) / numOfBuckets); for (int32_t i = 0; i < tListLen(pData->blockRowsHisto); ++i) { - len = sprintf(st + VARSTR_HEADER_SIZE, "%04d |", pData->defMinRows + bucketRange * (i + 1)); + len = snprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "%04d |", pData->defMinRows + bucketRange * (i + 1)); int32_t num = 0; if (pData->blockRowsHisto[i] > 0) { @@ -6681,13 +6681,13 @@ int32_t blockDistFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { } for (int32_t j = 0; j < num; ++j) { - int32_t x = sprintf(st + VARSTR_HEADER_SIZE + len, "%c", '|'); + int32_t x = snprintf(varDataVal(st) + len, sizeof(st) - VARSTR_HEADER_SIZE - len, 
"%c", '|'); len += x; } if (pData->blockRowsHisto[i] > 0) { double v = pData->blockRowsHisto[i] * 100.0 / pData->numOfBlocks; - len += sprintf(st + VARSTR_HEADER_SIZE + len, " %d (%.2f%c)", pData->blockRowsHisto[i], v, '%'); + len += snprintf(varDataVal(st) + len, sizeof(st) - VARSTR_HEADER_SIZE - len, " %d (%.2f%c)", pData->blockRowsHisto[i], v, '%'); } varDataSetLen(st, len); diff --git a/source/libs/function/src/functionMgt.c b/source/libs/function/src/functionMgt.c index 12dbd88788..1717702df7 100644 --- a/source/libs/function/src/functionMgt.c +++ b/source/libs/function/src/functionMgt.c @@ -417,7 +417,7 @@ static int32_t createColumnByFunc(const SFunctionNode* pFunc, SColumnNode** ppCo if (NULL == *ppCol) { return code; } - (void)strcpy((*ppCol)->colName, pFunc->node.aliasName); + tstrncpy((*ppCol)->colName, pFunc->node.aliasName, TSDB_COL_NAME_LEN); (*ppCol)->node.resType = pFunc->node.resType; return TSDB_CODE_SUCCESS; } @@ -446,11 +446,11 @@ static int32_t createPartialFunction(const SFunctionNode* pSrcFunc, SFunctionNod (*pPartialFunc)->hasOriginalFunc = true; (*pPartialFunc)->originalFuncId = pSrcFunc->hasOriginalFunc ? pSrcFunc->originalFuncId : pSrcFunc->funcId; char name[TSDB_FUNC_NAME_LEN + TSDB_NAME_DELIMITER_LEN + TSDB_POINTER_PRINT_BYTES + 1] = {0}; - int32_t len = snprintf(name, sizeof(name) - 1, "%s.%p", (*pPartialFunc)->functionName, pSrcFunc); + int32_t len = snprintf(name, sizeof(name), "%s.%p", (*pPartialFunc)->functionName, pSrcFunc); if (taosHashBinary(name, len) < 0) { return TSDB_CODE_FAILED; } - (void)strncpy((*pPartialFunc)->node.aliasName, name, TSDB_COL_NAME_LEN - 1); + tstrncpy((*pPartialFunc)->node.aliasName, name, TSDB_COL_NAME_LEN); (*pPartialFunc)->hasPk = pSrcFunc->hasPk; (*pPartialFunc)->pkBytes = pSrcFunc->pkBytes; return TSDB_CODE_SUCCESS; @@ -484,7 +484,7 @@ static int32_t createMidFunction(const SFunctionNode* pSrcFunc, const SFunctionN } } if (TSDB_CODE_SUCCESS == code) { - (void)strcpy(pFunc->node.aliasName, pPartialFunc->node.aliasName); + tstrncpy(pFunc->node.aliasName, pPartialFunc->node.aliasName, TSDB_COL_NAME_LEN); } if (TSDB_CODE_SUCCESS == code) { @@ -513,7 +513,7 @@ static int32_t createMergeFunction(const SFunctionNode* pSrcFunc, const SFunctio if (fmIsSameInOutType(pSrcFunc->funcId)) { pFunc->node.resType = pSrcFunc->node.resType; } - (void)strcpy(pFunc->node.aliasName, pSrcFunc->node.aliasName); + tstrncpy(pFunc->node.aliasName, pSrcFunc->node.aliasName, TSDB_COL_NAME_LEN); } if (TSDB_CODE_SUCCESS == code) { @@ -567,8 +567,8 @@ static int32_t fmCreateStateFunc(const SFunctionNode* pFunc, SFunctionNode** pSt nodesDestroyList(pParams); return code; } - (void)strcpy((*pStateFunc)->node.aliasName, pFunc->node.aliasName); - (void)strcpy((*pStateFunc)->node.userAlias, pFunc->node.userAlias); + tstrncpy((*pStateFunc)->node.aliasName, pFunc->node.aliasName, TSDB_COL_NAME_LEN); + tstrncpy((*pStateFunc)->node.userAlias, pFunc->node.userAlias, TSDB_COL_NAME_LEN); } return TSDB_CODE_SUCCESS; } @@ -614,8 +614,8 @@ static int32_t fmCreateStateMergeFunc(SFunctionNode* pFunc, SFunctionNode** pSta nodesDestroyList(pParams); return code; } - (void)strcpy((*pStateMergeFunc)->node.aliasName, pFunc->node.aliasName); - (void)strcpy((*pStateMergeFunc)->node.userAlias, pFunc->node.userAlias); + tstrncpy((*pStateMergeFunc)->node.aliasName, pFunc->node.aliasName, TSDB_COL_NAME_LEN); + tstrncpy((*pStateMergeFunc)->node.userAlias, pFunc->node.userAlias, TSDB_COL_NAME_LEN); } return TSDB_CODE_SUCCESS; } diff --git a/source/libs/function/src/tscript.c 
b/source/libs/function/src/tscript.c index 768581285b..eecc66d6d6 100644 --- a/source/libs/function/src/tscript.c +++ b/source/libs/function/src/tscript.c @@ -92,7 +92,7 @@ void taosValueToLuaType(lua_State *lua, int32_t type, char *val) { int taosLoadScriptInit(void* pInit) { ScriptCtx *pCtx = pInit; char funcName[MAX_FUNC_NAME] = {0}; - sprintf(funcName, "%s_init", pCtx->funcName); + snprintf(funcName, MAX_FUNC_NAME, "%s_init", pCtx->funcName); lua_State* lua = pCtx->pEnv->lua_state; lua_getglobal(lua, funcName); @@ -106,7 +106,7 @@ void taosLoadScriptNormal(void *pInit, char *pInput, int16_t iType, int16_t iByt int64_t *ptsList, int64_t key, char* pOutput, char *ptsOutput, int32_t *numOfOutput, int16_t oType, int16_t oBytes) { ScriptCtx* pCtx = pInit; char funcName[MAX_FUNC_NAME] = {0}; - sprintf(funcName, "%s_add", pCtx->funcName); + snprintf(funcName, MAX_FUNC_NAME, "%s_add", pCtx->funcName); lua_State* lua = pCtx->pEnv->lua_state; lua_getglobal(lua, funcName); @@ -143,7 +143,7 @@ void taosLoadScriptNormal(void *pInit, char *pInput, int16_t iType, int16_t iByt void taosLoadScriptMerge(void *pInit, char* data, int32_t numOfRows, char* pOutput, int32_t* numOfOutput) { ScriptCtx *pCtx = pInit; char funcName[MAX_FUNC_NAME] = {0}; - sprintf(funcName, "%s_merge", pCtx->funcName); + snprintf(funcName, MAX_FUNC_NAME, "%s_merge", pCtx->funcName); lua_State* lua = pCtx->pEnv->lua_state; lua_getglobal(lua, funcName); @@ -167,7 +167,7 @@ void taosLoadScriptMerge(void *pInit, char* data, int32_t numOfRows, char* pOutp void taosLoadScriptFinalize(void *pInit,int64_t key, char *pOutput, int32_t* numOfOutput) { ScriptCtx *pCtx = pInit; char funcName[MAX_FUNC_NAME] = {0}; - sprintf(funcName, "%s_finalize", pCtx->funcName); + snprintf(funcName, MAX_FUNC_NAME, "%s_finalize", pCtx->funcName); lua_State* lua = pCtx->pEnv->lua_state; lua_getglobal(lua, funcName); diff --git a/source/libs/function/src/tudf.c b/source/libs/function/src/tudf.c index 3a0cff7cf3..4a1c07ef48 100644 --- a/source/libs/function/src/tudf.c +++ b/source/libs/function/src/tudf.c @@ -158,11 +158,12 @@ static int32_t udfSpawnUdfd(SUdfdData *pData) { char *taosFqdnEnvItem = NULL; char *taosFqdn = getenv("TAOS_FQDN"); if (taosFqdn != NULL) { - int len = strlen("TAOS_FQDN=") + strlen(taosFqdn) + 1; + int subLen = strlen(taosFqdn); + int len = strlen("TAOS_FQDN=") + subLen + 1; taosFqdnEnvItem = taosMemoryMalloc(len); if (taosFqdnEnvItem != NULL) { tstrncpy(taosFqdnEnvItem, "TAOS_FQDN=", len); - TAOS_STRNCAT(taosFqdnEnvItem, taosFqdn, strlen(taosFqdn)); + TAOS_STRNCAT(taosFqdnEnvItem, taosFqdn, subLen); fnInfo("[UDFD]Succsess to set TAOS_FQDN:%s", taosFqdn); } else { fnError("[UDFD]Failed to allocate memory for TAOS_FQDN"); diff --git a/source/libs/parser/src/parInsertSml.c b/source/libs/parser/src/parInsertSml.c index da9c9d5b8d..cca35d9c9a 100644 --- a/source/libs/parser/src/parInsertSml.c +++ b/source/libs/parser/src/parInsertSml.c @@ -113,7 +113,7 @@ static int32_t smlBuildTagRow(SArray* cols, SBoundColInfo* tags, SSchema* pSchem SSchema* pTagSchema = &pSchema[tags->pColIndex[i]]; SSmlKv* kv = taosArrayGet(cols, i); if (kv == NULL){ - code = TSDB_CODE_SML_INVALID_DATA; + code = terrno; uError("SML smlBuildTagRow error kv is null"); goto end; } @@ -381,7 +381,7 @@ int32_t smlBindData(SQuery* query, bool dataFormat, SArray* tags, SArray* colsSc for (int32_t r = 0; r < rowNum; ++r) { void* rowData = taosArrayGetP(cols, r); if (rowData == NULL) { - ret = TSDB_CODE_SML_INVALID_DATA; + ret = terrno; goto end; } // 1. 
set the parsed value from sql string @@ -389,7 +389,7 @@ int32_t smlBindData(SQuery* query, bool dataFormat, SArray* tags, SArray* colsSc SSchema* pColSchema = &pSchema[pTableCxt->boundColsInfo.pColIndex[c]]; SColVal* pVal = taosArrayGet(pTableCxt->pValues, pTableCxt->boundColsInfo.pColIndex[c]); if (pVal == NULL) { - ret = TSDB_CODE_SML_INVALID_DATA; + ret = terrno; goto end; } void** p = taosHashGet(rowData, pColSchema->name, strlen(pColSchema->name)); diff --git a/source/libs/scalar/src/filter.c b/source/libs/scalar/src/filter.c index a3608cc1dc..e07ef69990 100644 --- a/source/libs/scalar/src/filter.c +++ b/source/libs/scalar/src/filter.c @@ -1764,41 +1764,41 @@ _return: return DEAL_RES_ERROR; } -int32_t fltConverToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *len) { +int32_t fltConverToStr(char *str, int32_t strMaxLen, int type, void *buf, int32_t bufSize, int32_t *len) { int32_t n = 0; switch (type) { case TSDB_DATA_TYPE_NULL: - n = sprintf(str, "null"); + n = snprintf(str, strMaxLen, "null"); break; case TSDB_DATA_TYPE_BOOL: - n = sprintf(str, (*(int8_t *)buf) ? "true" : "false"); + n = snprintf(str, strMaxLen, (*(int8_t *)buf) ? "true" : "false"); break; case TSDB_DATA_TYPE_TINYINT: - n = sprintf(str, "%d", *(int8_t *)buf); + n = snprintf(str, strMaxLen, "%d", *(int8_t *)buf); break; case TSDB_DATA_TYPE_SMALLINT: - n = sprintf(str, "%d", *(int16_t *)buf); + n = snprintf(str, strMaxLen, "%d", *(int16_t *)buf); break; case TSDB_DATA_TYPE_INT: - n = sprintf(str, "%d", *(int32_t *)buf); + n = snprintf(str, strMaxLen, "%d", *(int32_t *)buf); break; case TSDB_DATA_TYPE_BIGINT: case TSDB_DATA_TYPE_TIMESTAMP: - n = sprintf(str, "%" PRId64, *(int64_t *)buf); + n = snprintf(str, strMaxLen, "%" PRId64, *(int64_t *)buf); break; case TSDB_DATA_TYPE_FLOAT: - n = sprintf(str, "%e", GET_FLOAT_VAL(buf)); + n = snprintf(str, strMaxLen, "%e", GET_FLOAT_VAL(buf)); break; case TSDB_DATA_TYPE_DOUBLE: - n = sprintf(str, "%e", GET_DOUBLE_VAL(buf)); + n = snprintf(str, strMaxLen, "%e", GET_DOUBLE_VAL(buf)); break; case TSDB_DATA_TYPE_BINARY: @@ -1817,19 +1817,19 @@ int32_t fltConverToStr(char *str, int type, void *buf, int32_t bufSize, int32_t break; case TSDB_DATA_TYPE_UTINYINT: - n = sprintf(str, "%d", *(uint8_t *)buf); + n = snprintf(str, strMaxLen, "%d", *(uint8_t *)buf); break; case TSDB_DATA_TYPE_USMALLINT: - n = sprintf(str, "%d", *(uint16_t *)buf); + n = snprintf(str, strMaxLen, "%d", *(uint16_t *)buf); break; case TSDB_DATA_TYPE_UINT: - n = sprintf(str, "%u", *(uint32_t *)buf); + n = snprintf(str, strMaxLen, "%u", *(uint32_t *)buf); break; case TSDB_DATA_TYPE_UBIGINT: - n = sprintf(str, "%" PRIu64, *(uint64_t *)buf); + n = snprintf(str, strMaxLen, "%" PRIu64, *(uint64_t *)buf); break; default: @@ -1886,8 +1886,8 @@ int32_t filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t optio SFilterField *left = FILTER_UNIT_LEFT_FIELD(info, unit); SColumnNode *refNode = (SColumnNode *)left->desc; if (unit->compare.optr <= OP_TYPE_JSON_CONTAINS) { - len = sprintf(str, "UNIT[%d] => [%d][%d] %s [", i, refNode->dataBlockId, refNode->slotId, - operatorTypeStr(unit->compare.optr)); + len += snprintf(str, sizeof(str), "UNIT[%d] => [%d][%d] %s [", i, refNode->dataBlockId, refNode->slotId, + operatorTypeStr(unit->compare.optr)); } if (unit->right.type == FLD_TYPE_VALUE && FILTER_UNIT_OPTR(unit) != OP_TYPE_IN) { @@ -1898,18 +1898,22 @@ int32_t filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t optio data += VARSTR_HEADER_SIZE; } if (data) { - FLT_ERR_RET(fltConverToStr(str 
+ len, type, data, tlen > 32 ? 32 : tlen, &tlen)); + FLT_ERR_RET(fltConverToStr(str + len, sizeof(str) - len, type, data, tlen > 32 ? 32 : tlen, &tlen)); + len += tlen; } } else { - (void)strcat(str, "NULL"); + (void)strncat(str, "NULL", sizeof(str) - len - 1); + len += 4; } - (void)strcat(str, "]"); + (void)strncat(str, "]", sizeof(str) - len - 1); + len += 1; if (unit->compare.optr2) { - (void)strcat(str, " && "); + (void)strncat(str, " && ", sizeof(str) - len - 1); + len += 4; if (unit->compare.optr2 <= OP_TYPE_JSON_CONTAINS) { - (void)sprintf(str + strlen(str), "[%d][%d] %s [", refNode->dataBlockId, refNode->slotId, - operatorTypeStr(unit->compare.optr2)); + len += snprintf(str + len, sizeof(str) - len, "[%d][%d] %s [", refNode->dataBlockId, + refNode->slotId, operatorTypeStr(unit->compare.optr2)); } if (unit->right2.type == FLD_TYPE_VALUE && FILTER_UNIT_OPTR(unit) != OP_TYPE_IN) { @@ -1919,11 +1923,14 @@ int32_t filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t optio tlen = varDataLen(data); data += VARSTR_HEADER_SIZE; } - FLT_ERR_RET(fltConverToStr(str + strlen(str), type, data, tlen > 32 ? 32 : tlen, &tlen)); + FLT_ERR_RET(fltConverToStr(str + len, sizeof(str) - len, type, data, tlen > 32 ? 32 : tlen, &tlen)); + len += tlen; } else { - (void)strcat(str, "NULL"); + (void)strncat(str, "NULL", sizeof(str) - len - 1); + len += 4; } - (void)strcat(str, "]"); + (void)strncat(str, "]", sizeof(str) - len - 1); + len += 1; } qDebug("%s", str); // TODO @@ -1955,21 +1962,39 @@ int32_t filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t optio SFilterRangeNode *r = ctx->rs; int32_t tlen = 0; while (r) { - char str[256] = {0}; + char str[256] = {0}; + int32_t len = 0; if (FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_NULL)) { - (void)strcat(str, "(NULL)"); + (void)strncat(str, "(NULL)", sizeof(str) - len - 1); + len += 6; } else { - FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? strcat(str, "(") : strcat(str, "["); - FLT_ERR_RET(fltConverToStr(str + strlen(str), ctx->type, &r->ra.s, tlen > 32 ? 32 : tlen, &tlen)); - FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? strcat(str, ")") : strcat(str, "]"); + FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? + (void)strncat(str, "(", sizeof(str) - len - 1) : + (void)strncat(str, "[", sizeof(str) - len - 1); + len += 1; + FLT_ERR_RET(fltConverToStr(str + len, sizeof(str) - len, ctx->type, &r->ra.s, tlen > 32 ? 32 : tlen, &tlen)); + len += tlen; + FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? + (void)strncat(str, ")", sizeof(str) - len - 1) : + (void)strncat(str, "]", sizeof(str) - len - 1); + len += 1; } - (void)strcat(str, " - "); + (void)strncat(str, " - ", sizeof(str) - len - 1); + len += 3; if (FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_NULL)) { - (void)strcat(str, "(NULL)"); + (void)strncat(str, "(NULL)", sizeof(str) - len - 1); + len += 6; } else { - FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? strcat(str, "(") : strcat(str, "["); - FLT_ERR_RET(fltConverToStr(str + strlen(str), ctx->type, &r->ra.e, tlen > 32 ? 32 : tlen, &tlen)); - FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? strcat(str, ")") : strcat(str, "]"); + FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? + (void)strncat(str, "(", sizeof(str) - len - 1) : + (void)strncat(str, "[", sizeof(str) - len - 1); + len += 1; + FLT_ERR_RET(fltConverToStr(str + len, sizeof(str) - len, ctx->type, &r->ra.e, tlen > 32 ? 32 : tlen, &tlen)); + len += tlen; + FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? 
+ (void)strncat(str, ")", sizeof(str) - len - 1) : + (void)strncat(str, "]", sizeof(str) - len - 1); + len += 1; } qDebug("range: %s", str); diff --git a/source/libs/scalar/src/scalar.c b/source/libs/scalar/src/scalar.c index 2a4951d237..209110b014 100644 --- a/source/libs/scalar/src/scalar.c +++ b/source/libs/scalar/src/scalar.c @@ -1211,7 +1211,7 @@ EDealRes sclRewriteFunction(SNode **pNode, SScalarCtx *ctx) { res->translate = true; - (void)strcpy(res->node.aliasName, node->node.aliasName); + tstrncpy(res->node.aliasName, node->node.aliasName, TSDB_COL_NAME_LEN); res->node.resType.type = output.columnData->info.type; res->node.resType.bytes = output.columnData->info.bytes; res->node.resType.scale = output.columnData->info.scale; @@ -1286,7 +1286,7 @@ EDealRes sclRewriteLogic(SNode **pNode, SScalarCtx *ctx) { res->node.resType = node->node.resType; res->translate = true; - (void)strcpy(res->node.aliasName, node->node.aliasName); + tstrncpy(res->node.aliasName, node->node.aliasName, TSDB_COL_NAME_LEN); int32_t type = output.columnData->info.type; if (IS_VAR_DATA_TYPE(type)) { res->datum.p = output.columnData->pData; @@ -1356,7 +1356,7 @@ EDealRes sclRewriteOperator(SNode **pNode, SScalarCtx *ctx) { res->translate = true; - (void)strcpy(res->node.aliasName, node->node.aliasName); + tstrncpy(res->node.aliasName, node->node.aliasName, TSDB_COL_NAME_LEN); res->node.resType = node->node.resType; if (colDataIsNull_s(output.columnData, 0)) { res->isNull = true; @@ -1419,7 +1419,7 @@ EDealRes sclRewriteCaseWhen(SNode **pNode, SScalarCtx *ctx) { res->translate = true; - (void)strcpy(res->node.aliasName, node->node.aliasName); + tstrncpy(res->node.aliasName, node->node.aliasName, TSDB_COL_NAME_LEN); res->node.resType = node->node.resType; if (colDataIsNull_s(output.columnData, 0)) { res->isNull = true; diff --git a/source/libs/scalar/src/sclfunc.c b/source/libs/scalar/src/sclfunc.c index ff47c091b7..f408314fad 100644 --- a/source/libs/scalar/src/sclfunc.c +++ b/source/libs/scalar/src/sclfunc.c @@ -2067,9 +2067,9 @@ int32_t castFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutp case TSDB_DATA_TYPE_BINARY: case TSDB_DATA_TYPE_GEOMETRY: { if (inputType == TSDB_DATA_TYPE_BOOL) { - // NOTE: sprintf will append '\0' at the end of string - int32_t len = sprintf(varDataVal(output), "%.*s", (int32_t)(outputLen - VARSTR_HEADER_SIZE), - *(int8_t *)input ? "true" : "false"); + // NOTE: snprintf will append '\0' at the end of string + int32_t len = snprintf(varDataVal(output), outputLen + TSDB_NCHAR_SIZE - VARSTR_HEADER_SIZE, "%.*s", + (int32_t)(outputLen - VARSTR_HEADER_SIZE), *(int8_t *)input ? "true" : "false"); varDataSetLen(output, len); } else if (inputType == TSDB_DATA_TYPE_BINARY) { int32_t len = TMIN(varDataLen(input), outputLen - VARSTR_HEADER_SIZE); @@ -2109,7 +2109,7 @@ int32_t castFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutp int32_t len; if (inputType == TSDB_DATA_TYPE_BOOL) { char tmp[8] = {0}; - len = sprintf(tmp, "%.*s", outputCharLen, *(int8_t *)input ? "true" : "false"); + len = snprintf(tmp, sizeof(tmp), "%.*s", outputCharLen, *(int8_t *)input ? 
"true" : "false"); bool ret = taosMbsToUcs4(tmp, len, (TdUcs4 *)varDataVal(output), outputLen - VARSTR_HEADER_SIZE, &len); if (!ret) { code = TSDB_CODE_SCALAR_CONVERT_ERROR; @@ -4411,11 +4411,11 @@ int32_t histogramScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarP int32_t len; char buf[512] = {0}; if (!normalized) { - len = sprintf(varDataVal(buf), "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%" PRId64 "}", bins[k].lower, - bins[k].upper, bins[k].count); + len = snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%" PRId64 "}", + bins[k].lower, bins[k].upper, bins[k].count); } else { - len = sprintf(varDataVal(buf), "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%lf}", bins[k].lower, - bins[k].upper, bins[k].percentage); + len = snprintf(varDataVal(buf), sizeof(buf) - VARSTR_HEADER_SIZE, "{\"lower_bin\":%g, \"upper_bin\":%g, \"count\":%lf}", + bins[k].lower, bins[k].upper, bins[k].percentage); } varDataSetLen(buf, len); SCL_ERR_JRET(colDataSetVal(pOutputData, k, buf, false)); diff --git a/source/libs/scalar/src/sclvector.c b/source/libs/scalar/src/sclvector.c index 230454483d..a7c842172a 100644 --- a/source/libs/scalar/src/sclvector.c +++ b/source/libs/scalar/src/sclvector.c @@ -734,7 +734,7 @@ int32_t vectorConvertToVarData(SSclVectorConvCtx *pCtx) { int64_t value = 0; GET_TYPED_DATA(value, int64_t, pCtx->inType, colDataGetData(pInputCol, i)); - int32_t len = sprintf(varDataVal(tmp), "%" PRId64, value); + int32_t len = snprintf(varDataVal(tmp), sizeof(tmp) - VARSTR_HEADER_SIZE, "%" PRId64, value); varDataLen(tmp) = len; if (pCtx->outType == TSDB_DATA_TYPE_NCHAR) { SCL_ERR_RET(varToNchar(tmp, pCtx->pOut, i, NULL)); @@ -751,7 +751,7 @@ int32_t vectorConvertToVarData(SSclVectorConvCtx *pCtx) { uint64_t value = 0; GET_TYPED_DATA(value, uint64_t, pCtx->inType, colDataGetData(pInputCol, i)); - int32_t len = sprintf(varDataVal(tmp), "%" PRIu64, value); + int32_t len = snprintf(varDataVal(tmp), sizeof(tmp) - VARSTR_HEADER_SIZE, "%" PRIu64, value); varDataLen(tmp) = len; if (pCtx->outType == TSDB_DATA_TYPE_NCHAR) { SCL_ERR_RET(varToNchar(tmp, pCtx->pOut, i, NULL)); @@ -768,7 +768,7 @@ int32_t vectorConvertToVarData(SSclVectorConvCtx *pCtx) { double value = 0; GET_TYPED_DATA(value, double, pCtx->inType, colDataGetData(pInputCol, i)); - int32_t len = sprintf(varDataVal(tmp), "%lf", value); + int32_t len = snprintf(varDataVal(tmp), sizeof(tmp) - VARSTR_HEADER_SIZE, "%lf", value); varDataLen(tmp) = len; if (pCtx->outType == TSDB_DATA_TYPE_NCHAR) { SCL_ERR_RET(varToNchar(tmp, pCtx->pOut, i, NULL)); diff --git a/source/libs/scalar/test/filter/filterTests.cpp b/source/libs/scalar/test/filter/filterTests.cpp index 70d6f7d0ae..8bbadd0e22 100644 --- a/source/libs/scalar/test/filter/filterTests.cpp +++ b/source/libs/scalar/test/filter/filterTests.cpp @@ -55,7 +55,7 @@ void flttInitLogFile() { tsAsyncLog = 0; qDebugFlag = 159; - (void)strcpy(tsLogDir, TD_LOG_DIR_PATH); + tstrncpy(tsLogDir, TD_LOG_DIR_PATH, PATH_MAX); if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum, false) < 0) { printf("failed to open log file in directory:%s\n", tsLogDir); @@ -101,7 +101,7 @@ int32_t flttMakeColumnNode(SNode **pNode, SSDataBlock **block, int32_t dataType, rnode->node.resType.bytes = dataBytes; rnode->dataBlockId = 0; - sprintf(rnode->dbName, "%" PRIu64, dbidx++); + snprintf(rnode->dbName, TSDB_DB_NAME_LEN, "%" PRIu64, dbidx++); if (NULL == block) { rnode->slotId = 2; @@ -666,7 +666,7 @@ TEST(columnTest, binary_column_like_binary) { int32_t 
rowNum = sizeof(leftv) / sizeof(leftv[0]); flttMakeColumnNode(&pLeft, &src, TSDB_DATA_TYPE_BINARY, 3, rowNum, leftv); - sprintf(&rightv[2], "%s", "__0"); + snprintf(&rightv[2], sizeof(rightv) - 2, "%s", "__0"); varDataSetLen(rightv, strlen(&rightv[2])); flttMakeValueNode(&pRight, TSDB_DATA_TYPE_BINARY, rightv); flttMakeOpNode(&opNode, OP_TYPE_LIKE, TSDB_DATA_TYPE_BOOL, pLeft, pRight); diff --git a/source/libs/scalar/test/scalar/scalarTests.cpp b/source/libs/scalar/test/scalar/scalarTests.cpp index e14b772ea8..4cab644582 100644 --- a/source/libs/scalar/test/scalar/scalarTests.cpp +++ b/source/libs/scalar/test/scalar/scalarTests.cpp @@ -81,7 +81,7 @@ void scltInitLogFile() { tsAsyncLog = 0; qDebugFlag = 159; - (void)strcpy(tsLogDir, TD_LOG_DIR_PATH); + tstrncpy(tsLogDir, TD_LOG_DIR_PATH, PATH_MAX); if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum, false) < 0) { (void)printf("failed to open log file in directory:%s\n", tsLogDir); diff --git a/source/libs/tdb/src/db/tdbPage.c b/source/libs/tdb/src/db/tdbPage.c index be391a75f1..49a15070a6 100644 --- a/source/libs/tdb/src/db/tdbPage.c +++ b/source/libs/tdb/src/db/tdbPage.c @@ -102,6 +102,10 @@ void tdbPageDestroy(SPage *pPage, void (*xFree)(void *arg, void *ptr), void *arg tdbOsFree(pPage->apOvfl[iOvfl]); } + if (TDB_DESTROY_PAGE_LOCK(pPage) != 0) { + tdbError("tdb/page-destroy: destroy page lock failed."); + } + ptr = pPage->pData; xFree(arg, ptr); diff --git a/source/os/src/osSemaphore.c b/source/os/src/osSemaphore.c index ea9e824947..cc3a13a818 100644 --- a/source/os/src/osSemaphore.c +++ b/source/os/src/osSemaphore.c @@ -377,20 +377,20 @@ int32_t tsem2_wait(tsem2_t* sem) { } int32_t tsem2_timewait(tsem2_t* sem, int64_t ms) { - int ret = 0; + int32_t code = 0; - ret = taosThreadMutexLock(&sem->mutex); - if (ret) { - return ret; + code = taosThreadMutexLock(&sem->mutex); + if (code) { + return code; } if (sem->count <= 0) { struct timespec ts = {0}; if (clock_gettime(CLOCK_MONOTONIC, &ts) == -1) { - ret = TAOS_SYSTEM_ERROR(errno); + code = TAOS_SYSTEM_ERROR(errno); (void)taosThreadMutexUnlock(&sem->mutex); - terrno = ret; - return ret; + terrno = code; + return code; } ts.tv_sec += ms / 1000; @@ -399,22 +399,18 @@ int32_t tsem2_timewait(tsem2_t* sem, int64_t ms) { ts.tv_nsec %= 1000000000; while (sem->count <= 0) { - ret = taosThreadCondTimedWait(&sem->cond, &sem->mutex, &ts); - if (ret != 0) { + code = taosThreadCondTimedWait(&sem->cond, &sem->mutex, &ts); + if (code != 0) { (void)taosThreadMutexUnlock(&sem->mutex); - if (errno == ETIMEDOUT) { - return TSDB_CODE_TIMEOUT_ERROR; - } else { - return TAOS_SYSTEM_ERROR(errno); - } + return code; } } } sem->count--; - ret = taosThreadMutexUnlock(&sem->mutex); - return ret; + code = taosThreadMutexUnlock(&sem->mutex); + return code; } #endif diff --git a/source/os/src/osThread.c b/source/os/src/osThread.c index d102a2a332..6a8c705cde 100644 --- a/source/os/src/osThread.c +++ b/source/os/src/osThread.c @@ -235,19 +235,22 @@ int32_t taosThreadCondWait(TdThreadCond *cond, TdThreadMutex *mutex) { int32_t taosThreadCondTimedWait(TdThreadCond *cond, TdThreadMutex *mutex, const struct timespec *abstime) { #ifdef __USE_WIN_THREAD - if (!abstime) return EINVAL; + if (!abstime) return 0; if (SleepConditionVariableCS(cond, mutex, (DWORD)(abstime->tv_sec * 1e3 + abstime->tv_nsec / 1e6))) return 0; - if (GetLastError() == ERROR_TIMEOUT) { - return ETIMEDOUT; + DWORD error = GetLastError(); + if (error == ERROR_TIMEOUT) { + return TSDB_CODE_TIMEOUT_ERROR; } - return EINVAL; + return 
TAOS_SYSTEM_WINAPI_ERROR(error); #else int32_t code = pthread_cond_timedwait(cond, mutex, abstime); - if (code && code != ETIMEDOUT) { - terrno = TAOS_SYSTEM_ERROR(code); - return terrno; + if(code == ETIMEDOUT) { + return TSDB_CODE_TIMEOUT_ERROR; + } else if (code) { + return TAOS_SYSTEM_ERROR(code); + } else { + return 0; } - return code; #endif } diff --git a/tests/develop-test/2-query/table_count_scan.py b/tests/develop-test/2-query/table_count_scan.py index 38e35a175e..b2b48c1f0b 100644 --- a/tests/develop-test/2-query/table_count_scan.py +++ b/tests/develop-test/2-query/table_count_scan.py @@ -65,7 +65,7 @@ class TDTestCase: tdSql.query('select count(*),db_name, stable_name from information_schema.ins_tables group by db_name, stable_name;') tdSql.checkRows(3) - tdSql.checkData(0, 0, 32) + tdSql.checkData(0, 0, 34) tdSql.checkData(0, 1, 'information_schema') tdSql.checkData(0, 2, None) tdSql.checkData(1, 0, 3) @@ -77,7 +77,7 @@ class TDTestCase: tdSql.query('select count(1) v,db_name, stable_name from information_schema.ins_tables group by db_name, stable_name order by v desc;') tdSql.checkRows(3) - tdSql.checkData(0, 0, 32) + tdSql.checkData(0, 0, 34) tdSql.checkData(0, 1, 'information_schema') tdSql.checkData(0, 2, None) tdSql.checkData(1, 0, 5) @@ -93,7 +93,7 @@ class TDTestCase: tdSql.checkData(1, 1, 'performance_schema') tdSql.checkData(0, 0, 3) tdSql.checkData(0, 1, 'tbl_count') - tdSql.checkData(2, 0, 32) + tdSql.checkData(2, 0, 34) tdSql.checkData(2, 1, 'information_schema') tdSql.query("select count(*) from information_schema.ins_tables where db_name='tbl_count'") @@ -106,7 +106,7 @@ class TDTestCase: tdSql.query('select count(*) from information_schema.ins_tables') tdSql.checkRows(1) - tdSql.checkData(0, 0, 40) + tdSql.checkData(0, 0, 42) tdSql.execute('create table stba (ts timestamp, c1 bool, c2 tinyint, c3 smallint, c4 int, c5 bigint, c6 float, c7 double, c8 binary(10), c9 nchar(10), c10 tinyint unsigned, c11 smallint unsigned, c12 int unsigned, c13 bigint unsigned) TAGS(t1 int, t2 binary(10), t3 double);') @@ -189,7 +189,7 @@ class TDTestCase: tdSql.checkData(2, 0, 5) tdSql.checkData(2, 1, 'performance_schema') tdSql.checkData(2, 2, None) - tdSql.checkData(3, 0, 32) + tdSql.checkData(3, 0, 34) tdSql.checkData(3, 1, 'information_schema') tdSql.checkData(3, 2, None) @@ -204,7 +204,7 @@ class TDTestCase: tdSql.checkData(2, 0, 5) tdSql.checkData(2, 1, 'performance_schema') tdSql.checkData(2, 2, None) - tdSql.checkData(3, 0, 32) + tdSql.checkData(3, 0, 34) tdSql.checkData(3, 1, 'information_schema') tdSql.checkData(3, 2, None) @@ -215,7 +215,7 @@ class TDTestCase: tdSql.checkData(0, 1, 'tbl_count') tdSql.checkData(1, 0, 5) tdSql.checkData(1, 1, 'performance_schema') - tdSql.checkData(2, 0, 32) + tdSql.checkData(2, 0, 34) tdSql.checkData(2, 1, 'information_schema') tdSql.query("select count(*) from information_schema.ins_tables where db_name='tbl_count'") @@ -228,7 +228,7 @@ class TDTestCase: tdSql.query('select count(*) from information_schema.ins_tables') tdSql.checkRows(1) - tdSql.checkData(0, 0, 41) + tdSql.checkData(0, 0, 43) tdSql.execute('drop database tbl_count') diff --git a/tests/develop-test/5-taos-tools/taosbenchmark/json/stmt_auto_create_table.json b/tests/develop-test/5-taos-tools/taosbenchmark/json/stmt_auto_create_table.json index 0054d985ee..5ce0d4d87d 100644 --- a/tests/develop-test/5-taos-tools/taosbenchmark/json/stmt_auto_create_table.json +++ b/tests/develop-test/5-taos-tools/taosbenchmark/json/stmt_auto_create_table.json @@ -63,7 +63,7 @@ 
"childtable_offset": 0, "insert_rows": 20, "insert_interval": 0, - "interlace_rows": 5, + "interlace_rows": 0, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 1, diff --git a/tests/develop-test/5-taos-tools/taosbenchmark/json/taosc_auto_create_table.json b/tests/develop-test/5-taos-tools/taosbenchmark/json/taosc_auto_create_table.json index bed3598bfe..1d0ec1765d 100644 --- a/tests/develop-test/5-taos-tools/taosbenchmark/json/taosc_auto_create_table.json +++ b/tests/develop-test/5-taos-tools/taosbenchmark/json/taosc_auto_create_table.json @@ -64,7 +64,7 @@ "childtable_offset": 0, "insert_rows": 20, "insert_interval": 0, - "interlace_rows": 5, + "interlace_rows": 0, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 1, diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task index aca7ef6495..d5e7de3014 100644 --- a/tests/parallel_test/cases.task +++ b/tests/parallel_test/cases.task @@ -6,6 +6,7 @@ ,,n,unit-test,bash test.sh +,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/auto_create_table_json.py # # army-test @@ -1542,7 +1543,6 @@ ,,n,develop-test,python3 ./test.py -f 2-query/ts-range.py ,,n,develop-test,python3 ./test.py -f 2-query/tag_scan.py ,,n,develop-test,python3 ./test.py -f 2-query/show_create_db.py -,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/auto_create_table_json.py ,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/custom_col_tag.py ,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/default_json.py ,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/demo.py diff --git a/tests/script/tsim/query/sys_tbname.sim b/tests/script/tsim/query/sys_tbname.sim index dabe4fcdde..9736893428 100644 --- a/tests/script/tsim/query/sys_tbname.sim +++ b/tests/script/tsim/query/sys_tbname.sim @@ -58,7 +58,7 @@ endi sql select tbname from information_schema.ins_tables; print $rows $data00 -if $rows != 41 then +if $rows != 43 then return -1 endi if $data00 != @ins_tables@ then diff --git a/tests/script/tsim/query/tableCount.sim b/tests/script/tsim/query/tableCount.sim index 5a3dd0714f..87f72eb3b6 100644 --- a/tests/script/tsim/query/tableCount.sim +++ b/tests/script/tsim/query/tableCount.sim @@ -53,7 +53,7 @@ sql select stable_name,count(table_name) from information_schema.ins_tables grou if $rows != 3 then return -1 endi -if $data01 != 38 then +if $data01 != 40 then return -1 endi if $data11 != 10 then @@ -72,7 +72,7 @@ endi if $data11 != 5 then return -1 endi -if $data21 != 32 then +if $data21 != 34 then return -1 endi if $data31 != 5 then @@ -97,7 +97,7 @@ endi if $data42 != 3 then return -1 endi -if $data52 != 32 then +if $data52 != 34 then return -1 endi if $data62 != 5 then diff --git a/tests/system-test/0-others/information_schema.py b/tests/system-test/0-others/information_schema.py index f59410b552..01e416bb26 100644 --- a/tests/system-test/0-others/information_schema.py +++ b/tests/system-test/0-others/information_schema.py @@ -61,7 +61,7 @@ class TDTestCase: self.ins_list = ['ins_dnodes','ins_mnodes','ins_qnodes','ins_snodes','ins_cluster','ins_databases','ins_functions',\ 'ins_indexes','ins_stables','ins_tables','ins_tags','ins_columns','ins_users','ins_grants','ins_vgroups','ins_configs','ins_dnode_variables',\ 'ins_topics','ins_subscriptions','ins_streams','ins_stream_tasks','ins_vnodes','ins_user_privileges','ins_views', - 'ins_compacts', 'ins_compact_details', 'ins_grants_full','ins_grants_logs', 'ins_machines', 'ins_arbgroups', 'ins_tsmas', 
"ins_encryptions"] + 'ins_compacts', 'ins_compact_details', 'ins_grants_full','ins_grants_logs', 'ins_machines', 'ins_arbgroups', 'ins_tsmas', "ins_encryptions", "ins_anodes", "ins_anodes_full"] self.perf_list = ['perf_connections','perf_queries','perf_consumers','perf_trans','perf_apps'] def insert_data(self,column_dict,tbname,row_num): insert_sql = self.setsql.set_insertsql(column_dict,tbname,self.binary_str,self.nchar_str) @@ -222,7 +222,7 @@ class TDTestCase: tdSql.query("select * from information_schema.ins_columns where db_name ='information_schema'") tdLog.info(len(tdSql.queryResult)) - tdSql.checkEqual(True, len(tdSql.queryResult) in range(272, 273)) + tdSql.checkEqual(True, len(tdSql.queryResult) in range(280, 281)) tdSql.query("select * from information_schema.ins_columns where db_name ='performance_schema'") tdSql.checkEqual(56, len(tdSql.queryResult)) diff --git a/tools/auto/stmt2Performance/json/query.json b/tools/auto/stmt2Performance/json/query.json new file mode 100644 index 0000000000..70f1d90edc --- /dev/null +++ b/tools/auto/stmt2Performance/json/query.json @@ -0,0 +1,23 @@ +{ + "filetype": "query", + "cfgdir": "/etc/taos", + "host": "127.0.0.1", + "port": 6030, + "user": "root", + "password": "taosdata", + "confirm_parameter_prompt": "no", + "continue_if_fail": "yes", + "databases": "dbrate", + "query_times": 20, + "query_mode": "taosc", + "specified_table_query": { + "query_interval": 0, + "concurrent": 10, + "sqls": [ + { + "sql": "select count(*) from meters", + "result": "./query_result.txt" + } + ] + } +} diff --git a/tools/auto/stmt2Performance/json/template.json b/tools/auto/stmt2Performance/json/template.json new file mode 100644 index 0000000000..659c5966a4 --- /dev/null +++ b/tools/auto/stmt2Performance/json/template.json @@ -0,0 +1,56 @@ +{ + "filetype": "insert", + "cfgdir": "/etc/taos", + "host": "127.0.0.1", + "port": 6030, + "user": "root", + "password": "taosdata", + "thread_count": 5, + "create_table_thread_count": 1, + "thread_bind_vgroup": "yes", + "confirm_parameter_prompt": "no", + "num_of_records_per_req": 2000, + "prepared_rand": 100000, + "escape_character": "yes", + "databases": [ + { + "dbinfo": { + "name": "dbrate", + "drop": "yes", + "vgroups": 2 + }, + "super_tables": [ + { + "name": "meters", + "child_table_exists": "no", + "childtable_count": 10, + "childtable_prefix": "d", + "insert_mode": "@STMT_MODE", + "interlace_rows": @INTERLACE_MODE, + "insert_rows": 100000, + "timestamp_step": 1, + "start_timestamp": "2020-10-01 00:00:00.000", + "auto_create_table": "no", + "columns": [ + { "type": "bool", "name": "bc"}, + { "type": "float", "name": "fc"}, + { "type": "double", "name": "dc"}, + { "type": "tinyint", "name": "ti"}, + { "type": "smallint", "name": "si"}, + { "type": "int", "name": "ic"}, + { "type": "bigint", "name": "bi"}, + { "type": "utinyint", "name": "uti"}, + { "type": "usmallint", "name": "usi"}, + { "type": "uint", "name": "ui"}, + { "type": "ubigint", "name": "ubi"}, + { "type": "binary", "name": "bin", "len": 32}, + { "type": "nchar", "name": "nch", "len": 64} + ], + "tags": [ + {"type": "TINYINT", "name": "groupid", "max": 10, "min": 1} + ] + } + ] + } + ] +} diff --git a/tools/auto/stmt2Performance/stmt2Perf.py b/tools/auto/stmt2Performance/stmt2Perf.py new file mode 100644 index 0000000000..e7a4d5ecbe --- /dev/null +++ b/tools/auto/stmt2Performance/stmt2Perf.py @@ -0,0 +1,356 @@ +import taos +import sys +import os +import subprocess +import time +import random +import json +from datetime import datetime + + 
+################################################################### +# Copyright (c) 2016 by TAOS Technologies, Inc. +# All rights reserved. +# +# This file is proprietary and confidential to TAOS Technologies. +# No part of this file may be reproduced, stored, transmitted, +# disclosed or used in any form or by any means other than as +# expressly provided by the written permission from Jianhui Tao +# +################################################################### + +# -*- coding: utf-8 -*- +dataDir = "/var/lib/taos/" +templateFile = "json/template.json" +Number = 0 +resultContext = "" + + +def showLog(str): + print(str) + +def exec(command, show=True): + if(show): + print(f"exec {command}\n") + return os.system(command) + +# run return output and error +def run(command, timeout = 60, show=True): + if(show): + print(f"run {command} timeout={timeout}s\n") + + process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + process.wait(timeout) + + output = process.stdout.read().decode(encoding="gbk") + error = process.stderr.read().decode(encoding="gbk") + + return output, error + +# return list after run +def runRetList(command, timeout=10, first=True): + output,error = run(command, timeout) + if first: + return output.splitlines() + else: + return error.splitlines() + + +def readFileContext(filename): + file = open(filename) + context = file.read() + file.close() + return context + + +def writeFileContext(filename, context): + file = open(filename, "w") + file.write(context) + file.close() + +def appendFileContext(filename, context): + global resultContext + resultContext += context + try: + file = open(filename, "a") + wsize = file.write(context) + file.close() + except: + print(f"appand file error context={context} .") + +def getFolderSize(folder): + total_size = 0 + for dirpath, dirnames, filenames in os.walk(folder): + for filename in filenames: + filepath = os.path.join(dirpath, filename) + total_size += os.path.getsize(filepath) + return total_size + +def waitClusterAlive(loop): + for i in range(loop): + command = 'taos -s "show cluster alive\G;" ' + out,err = run(command) + print(out) + if out.find("status: 1") >= 0: + showLog(f" i={i} wait cluster alive ok.\n") + return True + + showLog(f" i={i} wait cluster alive ...\n") + time.sleep(1) + + showLog(f" i={i} wait cluster alive failed.\n") + return False + +def waitCompactFinish(loop): + for i in range(loop): + command = 'taos -s "show compacts;" ' + out,err = run(command) + if out.find("Query OK, 0 row(s) in set") >= 0: + showLog(f" i={i} wait compact finish ok\n") + return True + + showLog(f" i={i} wait compact ...\n") + time.sleep(1) + + showLog(f" i={i} wait compact failed.\n") + return False + + +def getTypeName(datatype): + str1 = datatype.split(",")[0] + str2 = str1.split(":")[1] + str3 = str2.replace('"','').replace(' ','') + return str3 + + +def getMatch(datatype, algo): + if algo == "tsz": + if datatype == "float" or datatype == "double": + return True + else: + return False + else: + return True + + +def generateJsonFile(stmt, interlace): + print(f"doTest stmt: {stmt} interlace_rows={interlace}\n") + + # replace datatype + context = readFileContext(templateFile) + # replace compress + context = context.replace("@STMT_MODE", stmt) + context = context.replace("@INTERLACE_MODE", interlace) + + # write to file + fileName = f"json/test_{stmt}_{interlace}.json" + if os.path.exists(fileName): + os.remove(fileName) + writeFileContext(fileName, context) + + return fileName + +def 
taosdStart(): + cmd = "nohup /usr/bin/taosd 2>&1 & " + ret = exec(cmd) + print(f"exec taosd ret = {ret}\n") + time.sleep(3) + waitClusterAlive(10) + +def taosdStop(): + i = 1 + toBeKilled = "taosd" + killCmd = "ps -ef|grep -w %s| grep -v grep | awk '{print $2}' | xargs kill -TERM > /dev/null 2>&1" % toBeKilled + psCmd = "ps -ef|grep -w %s| grep -v grep | awk '{print $2}'" % toBeKilled + processID = subprocess.check_output(psCmd, shell=True) + while(processID): + os.system(killCmd) + time.sleep(1) + processID = subprocess.check_output(psCmd, shell=True) + print(f"i={i} kill taosd pid={processID}") + i += 1 + +def cleanAndStartTaosd(): + + # stop + taosdStop() + # clean + exec(f"rm -rf {dataDir}") + # start + taosdStart() + +def findContextValue(context, label): + start = context.find(label) + if start == -1 : + return "" + start += len(label) + 2 + # skip blank + while context[start] == ' ': + start += 1 + + # find end ',' + end = start + ends = [',','}',']', 0] + while context[end] not in ends: + end += 1 + return context[start:end] + + +def writeTemplateInfo(resultFile): + # create info + context = readFileContext(templateFile) + vgroups = findContextValue(context, "vgroups") + childCount = findContextValue(context, "childtable_count") + insertRows = findContextValue(context, "insert_rows") + line = f"vgroups = {vgroups}\nchildtable_count = {childCount}\ninsert_rows = {insertRows}\n\n" + print(line) + appendFileContext(resultFile, line) + + +def totalCompressRate(stmt, interlace, resultFile, writeSpeed, querySpeed): + global Number + # flush + command = 'taos -s "flush database dbrate;"' + rets = exec(command) + + command = 'taos -s "compact database dbrate;"' + rets = exec(command) + waitCompactFinish(60) + + # read compress rate + command = 'taos -s "show table distributed dbrate.meters\G;"' + rets = runRetList(command) + print(rets) + + str1 = rets[5] + arr = str1.split(" ") + + # Total_Size KB + str2 = arr[2] + pos = str2.find("=[") + totalSize = int(float(str2[pos+2:])/1024) + + # Compression_Ratio + str2 = arr[6] + pos = str2.find("=[") + rate = str2[pos+2:] + print("rate =" + rate) + + # total data file size + #dataSize = getFolderSize(f"{dataDir}/vnode/") + #dataSizeMB = int(dataSize/1024/1024) + + # appand to file + + Number += 1 + context = "%10s %10s %15s %10s %10s %30s %15s\n"%( Number, stmt, interlace, str(totalSize)+" MB", rate+"%", writeSpeed + " Records/second", querySpeed) + showLog(context) + appendFileContext(resultFile, context) + +def testWrite(jsonFile): + command = f"taosBenchmark -f {jsonFile}" + output, context = run(command, 60000) + # SUCC: Spent 0.960248 (real 0.947154) seconds to insert rows: 100000 with 1 thread(s) into dbrate 104139.76 (real 105579.45) records/second + + # find second real + pos = context.find("(real ") + if pos == -1: + print(f"error, run command={command} output not found first \"(real\" keyword. error={context}") + exit(1) + pos = context.find("(real ", pos + 5) + if pos == -1: + print(f"error, run command={command} output not found second \"(real\" keyword. error={context}") + exit(1) + + pos += 5 + length = len(context) + while pos < length and context[pos] == ' ': + pos += 1 + end = context.find(".", pos) + if end == -1: + print(f"error, run command={command} output not found second \".\" keyword. 
error={context}") + exit(1) + + speed = context[pos: end] + #print(f"write pos ={pos} end={end} speed={speed}\n output={context} \n") + return speed + +def testQuery(): + command = f"taosBenchmark -f json/query.json" + lines = runRetList(command, 60000) + # INFO: Spend 6.7350 second completed total queries: 10, the QPS of all threads: 1.485 + speed = None + + for i in range(0, len(lines)): + # find second real + context = lines[i] + pos = context.find("the QPS of all threads:") + if pos == -1 : + continue + pos += 24 + speed = context[pos:] + break + #print(f"query pos ={pos} speed={speed}\n output={context} \n") + + if speed is None: + print(f"error, run command={command} output not found second \"the QPS of all threads:\" keyword. error={lines}") + exit(1) + else: + return speed + +def doTest(stmt, interlace, resultFile): + print(f"doTest stmtMode: {stmt} interlaceRows={interlace}\n") + #cleanAndStartTaosd() + + + # json + jsonFile = generateJsonFile(stmt, interlace) + + # run taosBenchmark + t1 = time.time() + writeSpeed = testWrite(jsonFile) + t2 = time.time() + # total write speed + querySpeed = testQuery() + + # total compress rate + totalCompressRate(stmt, interlace, resultFile, writeSpeed, querySpeed) + + +def main(): + + # test compress method + stmtModes = ["stmt", "stmt2", "taosc"] + interlaceModes = ["0", "1"] + + # record result + resultFile = "./result.txt" + timestamp = time.time() + now = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(timestamp)) + context = f"\n---------------------- test rate ({now}) ---------------------------------\n" + + appendFileContext(resultFile, context) + # json info + writeTemplateInfo(resultFile) + # head + context = "\n%10s %10s %15s %10s %10s %30s %15s\n"%("No", "stmtMode", "interlaceRows", "dataSize", "rate", "writeSpeed", "query-QPS") + appendFileContext(resultFile, context) + + + # loop for all compression + for stmt in stmtModes: + # do test + for interlace in interlaceModes: + doTest(stmt, interlace, resultFile) + appendFileContext(resultFile, " \n") + + timestamp = time.time() + now = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(timestamp)) + appendFileContext(resultFile, f"\n{now} finished test!\n") + + +if __name__ == "__main__": + print("welcome use TDengine compress rate test tools.\n") + main() + # show result + print(resultContext) \ No newline at end of file diff --git a/tools/auto/testCompression/json/query.json b/tools/auto/testCompression/json/query.json new file mode 100644 index 0000000000..e810c1009f --- /dev/null +++ b/tools/auto/testCompression/json/query.json @@ -0,0 +1,23 @@ +{ + "filetype": "query", + "cfgdir": "/etc/taos", + "host": "127.0.0.1", + "port": 6030, + "user": "root", + "password": "taosdata", + "confirm_parameter_prompt": "no", + "continue_if_fail": "yes", + "databases": "dbrate", + "query_times": 20, + "query_mode": "taosc", + "specified_table_query": { + "query_interval": 0, + "concurrent": 10, + "sqls": [ + { + "sql": "select * from meters", + "result": "./query_res0.txt" + } + ] + } +} diff --git a/tools/auto/testCompression/testCompression.py b/tools/auto/testCompression/testCompression.py index 768fa87f71..281a097f8a 100644 --- a/tools/auto/testCompression/testCompression.py +++ b/tools/auto/testCompression/testCompression.py @@ -48,9 +48,13 @@ def run(command, timeout = 60, show=True): return output, error # return list after run -def runRetList(command, timeout=10): +def runRetList(command, timeout=10, first=True): output,error = run(command, timeout) - return output.splitlines() + if 
first:
+        return output.splitlines()
+    else:
+        return error.splitlines()
+
 
 def readFileContext(filename):
     file = open(filename)
@@ -204,7 +208,7 @@ def writeTemplateInfo(resultFile):
     appendFileContext(resultFile, line)
 
 
-def totalCompressRate(algo, resultFile, writeSecond):
+def totalCompressRate(algo, resultFile, writeSpeed, querySpeed):
     global Number
     # flush
     command = 'taos -s "flush database dbrate;"'
@@ -239,10 +243,60 @@ def totalCompressRate(algo, resultFile, writeSecond):
 
     # appand to file
     Number += 1
-    context = "%10s %10s %10s %10s %10s\n"%( Number, algo, str(totalSize)+" MB", rate+"%", writeSecond + " s")
+    context = "%10s %10s %10s %10s %30s %15s\n"%( Number, algo, str(totalSize)+" MB", rate+"%", writeSpeed + " Records/second", querySpeed)
     showLog(context)
     appendFileContext(resultFile, context)
 
+def testWrite(jsonFile):
+    command = f"taosBenchmark -f {jsonFile}"
+    output, context = run(command, 60000)
+    # SUCC: Spent 0.960248 (real 0.947154) seconds to insert rows: 100000 with 1 thread(s) into dbrate 104139.76 (real 105579.45) records/second
+
+    # find second real
+    pos = context.find("(real ")
+    if pos == -1:
+        print(f"error, run command={command} output not found first \"(real\" keyword. error={context}")
+        exit(1)
+    pos = context.find("(real ", pos + 5)
+    if pos == -1:
+        print(f"error, run command={command} output not found second \"(real\" keyword. error={context}")
+        exit(1)
+
+    pos += 5
+    length = len(context)
+    while pos < length and context[pos] == ' ':
+        pos += 1
+    end = context.find(".", pos)
+    if end == -1:
+        print(f"error, run command={command} output not found second \".\" keyword. error={context}")
+        exit(1)
+
+    speed = context[pos: end]
+    #print(f"write pos ={pos} end={end} speed={speed}\n output={context} \n")
+    return speed
+
+def testQuery():
+    command = f"taosBenchmark -f json/query.json"
+    lines = runRetList(command, 60000)
+    # INFO: Spend 6.7350 second completed total queries: 10, the QPS of all threads: 1.485
+    speed = None
+
+    for i in range(0, len(lines)):
+        # find second real
+        context = lines[i]
+        pos = context.find("the QPS of all threads:")
+        if pos == -1 :
+            continue
+        pos += 24
+        speed = context[pos:]
+        break
+    #print(f"query pos ={pos} speed={speed}\n output={context} \n")
+
+    if speed is None:
+        print(f"error, run command={command} output not found second \"the QPS of all threads:\" keyword. error={lines}")
+        exit(1)
+    else:
+        return speed
 
 def doTest(algo, resultFile):
     print(f"doTest algo: {algo} \n")
@@ -254,16 +308,20 @@ def doTest(algo, resultFile):
 
     # run taosBenchmark
     t1 = time.time()
-    exec(f"taosBenchmark -f {jsonFile}")
+    writeSpeed = testWrite(jsonFile)
     t2 = time.time()
+    # total write speed
+    querySpeed = testQuery()
 
     # total compress rate
-    totalCompressRate(algo, resultFile, str(int(t2-t1)))
+    totalCompressRate(algo, resultFile, writeSpeed, querySpeed)
+
 
 def main():
 
     # test compress method
     algos = ["lz4", "zlib", "zstd", "xz", "disabled"]
+    #algos = ["lz4"]
 
     # record result
     resultFile = "./result.txt"
@@ -275,7 +333,7 @@ def main():
     # json info
     writeTemplateInfo(resultFile)
     # head
-    context = "\n%10s %10s %10s %10s %10s\n"%("No", "compress", "dataSize", "rate", "insertSeconds")
+    context = "\n%10s %10s %10s %10s %30s %15s\n"%("No", "compress", "dataSize", "rate", "writeSpeed", "query-QPS")
     appendFileContext(resultFile, context)
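# ---------------------------------------------------------------------------
# Editor's sketch (not part of the patch above): both stmt2Perf.py and
# testCompression.py extract throughput figures from taosBenchmark's textual
# output using find() and fixed character offsets. A regex-based helper is a
# less brittle alternative. This is only an illustration; the output formats
# are assumed to match the sample lines quoted in the scripts' own comments,
# and the function names here are hypothetical.
# ---------------------------------------------------------------------------
import re

def parse_write_speed(output: str):
    # e.g. "... 104139.76 (real 105579.45) records/second"
    # returns the integer part of the "real" records/second figure, or None
    m = re.search(r"\(real\s+(\d+)(?:\.\d+)?\)\s+records/second", output)
    return m.group(1) if m else None

def parse_query_qps(output: str):
    # e.g. "INFO: Spend 6.7350 second completed total queries: 10, the QPS of all threads: 1.485"
    # returns the QPS value as a string, or None if the line is absent
    m = re.search(r"the QPS of all threads:\s*([\d.]+)", output)
    return m.group(1) if m else None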