diff --git a/README-CN.md b/README-CN.md
index e30e38ae78..0b7e42d4fa 100644
--- a/README-CN.md
+++ b/README-CN.md
@@ -303,14 +303,14 @@ Query OK, 2 row(s) in set (0.001700s)

 TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用:

-- [Java](https://docs.taosdata.com/reference/connector/java/)
-- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
+- [Java](https://docs.taosdata.com/connector/java/)
+- [C/C++](https://docs.taosdata.com/connector/cpp/)
+- [Python](https://docs.taosdata.com/connector/python/)
+- [Go](https://docs.taosdata.com/connector/go/)
+- [Node.js](https://docs.taosdata.com/connector/node/)
+- [Rust](https://docs.taosdata.com/connector/rust/)
+- [C#](https://docs.taosdata.com/connector/csharp/)
+- [RESTful API](https://docs.taosdata.com/connector/rest-api/)

 # 成为社区贡献者

diff --git a/README.md b/README.md
index 02dd9984e8..611d97aac9 100644
--- a/README.md
+++ b/README.md
@@ -19,29 +19,29 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde

 # What is TDengine?

-TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-seires databases with the following advantages:
+TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/what-is-a-time-series-database/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:

-- **High-Performance**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression.
+- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.

-- **Simplified Solution**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
+- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.

-- **Cloud Native**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
+- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native time-series database and can be deployed on public, private or hybrid clouds.

-- **Ease of Use**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
+- **[Ease of Use](https://docs.tdengine.com/get-started/docker/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, a simplified solution, and seamless integration with third-party tools. For data users, it gives easy data access.

-- **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
+- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and access data in a highly efficient way.

-- **Open Source**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
+- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.

 # Documentation

-For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.taosdata.com) ([TDengine 文档](https://docs.taosdata.com))
+For the user manual, system design, and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))

 # Building

-At the moment, TDengine server supports running on Linux, Windows systems.Any OS application can also choose the RESTful interface of taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU , and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
+At the moment, TDengine server supports running on Linux and Windows systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
-You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source. +You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source. TDengine provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine. @@ -256,6 +256,7 @@ After building successfully, TDengine can be installed by: nmake install ``` + ## Quick Run @@ -304,14 +306,14 @@ Query OK, 2 row(s) in set (0.001700s) TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation. -- [Java](https://docs.taosdata.com/reference/connector/java/) -- [C/C++](https://docs.taosdata.com/reference/connector/cpp/) -- [Python](https://docs.taosdata.com/reference/connector/python/) -- [Go](https://docs.taosdata.com/reference/connector/go/) -- [Node.js](https://docs.taosdata.com/reference/connector/node/) -- [Rust](https://docs.taosdata.com/reference/connector/rust/) -- [C#](https://docs.taosdata.com/reference/connector/csharp/) -- [RESTful API](https://docs.taosdata.com/reference/rest-api/) +- [Java](https://docs.tdengine.com/reference/connector/java/) +- [C/C++](https://docs.tdengine.com/reference/connector/cpp/) +- [Python](https://docs.tdengine.com/reference/connector/python/) +- [Go](https://docs.tdengine.com/reference/connector/go/) +- [Node.js](https://docs.tdengine.com/reference/connector/node/) +- [Rust](https://docs.tdengine.com/reference/connector/rust/) +- [C#](https://docs.tdengine.com/reference/connector/csharp/) +- [RESTful API](https://docs.tdengine.com/reference/rest-api/) # Contribute to TDengine diff --git a/docs/en/10-deployment/05-helm.md b/docs/en/10-deployment/05-helm.md index 48cd9df32c..302730f1b5 100644 --- a/docs/en/10-deployment/05-helm.md +++ b/docs/en/10-deployment/05-helm.md @@ -170,71 +170,21 @@ taoscfg: # number of replications, for cluster only TAOS_REPLICA: "1" - - # number of days per DB file - # TAOS_DAYS: "10" - - # number of days to keep DB file, default is 10 years. - #TAOS_KEEP: "3650" - - # cache block size (Mbyte) - #TAOS_CACHE: "16" - - # number of cache blocks per vnode - #TAOS_BLOCKS: "6" - - # minimum rows of records in file block - #TAOS_MIN_ROWS: "100" - - # maximum rows of records in file block - #TAOS_MAX_ROWS: "4096" - # - # TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core - #TAOS_NUM_OF_THREADS_PER_CORE: "1.0" + # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC + #TAOS_NUM_OF_RPC_THREADS: "2" + # # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data #TAOS_NUM_OF_COMMIT_THREADS: "4" - # - # TAOS_RATIO_OF_QUERY_CORES: - # the proportion of total CPU cores available for query processing - # 2.0: the query threads will be set to double of the CPU cores. - # 1.0: all CPU cores are available for query processing [default]. - # 0.5: only half of the CPU cores are available for query. 
- # 0.0: only one core available. - #TAOS_RATIO_OF_QUERY_CORES: "1.0" - - # - # TAOS_KEEP_COLUMN_NAME: - # the last_row/first/last aggregator will not change the original column name in the result fields - #TAOS_KEEP_COLUMN_NAME: "0" - - # enable/disable backuping vnode directory when removing vnode - #TAOS_VNODE_BAK: "1" - # enable/disable installation / usage report #TAOS_TELEMETRY_REPORTING: "1" - # enable/disable load balancing - #TAOS_BALANCE: "1" - - # max timer control blocks - #TAOS_MAX_TMR_CTRL: "512" - # time interval of system monitor, seconds #TAOS_MONITOR_INTERVAL: "30" - # number of seconds allowed for a dnode to be offline, for cluster only - #TAOS_OFFLINE_THRESHOLD: "8640000" - - # RPC re-try timer, millisecond - #TAOS_RPC_TIMER: "1000" - - # RPC maximum time for ack, seconds. - #TAOS_RPC_MAX_TIME: "600" - # time interval of dnode status reporting to mnode, seconds, for cluster only #TAOS_STATUS_INTERVAL: "1" @@ -245,37 +195,7 @@ taoscfg: #TAOS_MIN_SLIDING_TIME: "10" # minimum time window, milli-second - #TAOS_MIN_INTERVAL_TIME: "10" - - # maximum delay before launching a stream computation, milli-second - #TAOS_MAX_STREAM_COMP_DELAY: "20000" - - # maximum delay before launching a stream computation for the first time, milli-second - #TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000" - - # retry delay when a stream computation fails, milli-second - #TAOS_RETRY_STREAM_COMP_DELAY: "10" - - # the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 - #TAOS_STREAM_COMP_DELAY_RATIO: "0.1" - - # max number of vgroups per db, 0 means configured automatically - #TAOS_MAX_VGROUPS_PER_DB: "0" - - # max number of tables per vnode - #TAOS_MAX_TABLES_PER_VNODE: "1000000" - - # the number of acknowledgments required for successful data writing - #TAOS_QUORUM: "1" - - # enable/disable compression - #TAOS_COMP: "2" - - # write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync - #TAOS_WAL_LEVEL: "1" - - # if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away - #TAOS_FSYNC: "3000" + #TAOS_MIN_INTERVAL_TIME: "1" # the compressed rpc message, option: # -1 (no compression) @@ -283,17 +203,8 @@ taoscfg: # > 0 (rpc message body which larger than this value will be compressed) #TAOS_COMPRESS_MSG_SIZE: "-1" - # max length of an SQL - #TAOS_MAX_SQL_LENGTH: "1048576" - - # the maximum number of records allowed for super table time sorting - #TAOS_MAX_NUM_OF_ORDERED_RES: "100000" - # max number of connections allowed in dnode - #TAOS_MAX_SHELL_CONNS: "5000" - - # max number of connections allowed in client - #TAOS_MAX_CONNECTIONS: "5000" + #TAOS_MAX_SHELL_CONNS: "50000" # stop writing logs when the disk size of the log folder is less than this value #TAOS_MINIMAL_LOG_DIR_G_B: "0.1" @@ -313,21 +224,8 @@ taoscfg: # enable/disable system monitor #TAOS_MONITOR: "1" - # enable/disable recording the SQL statements via restful interface - #TAOS_HTTP_ENABLE_RECORD_SQL: "0" - - # number of threads used to process http requests - #TAOS_HTTP_MAX_THREADS: "2" - - # maximum number of rows returned by the restful interface - #TAOS_RESTFUL_ROW_LIMIT: "10240" - - # The following parameter is used to limit the maximum number of lines in log files. 
- # max number of lines per log filters - # numOfLogLines 10000000 - # enable/disable async log - #TAOS_ASYNC_LOG: "0" + #TAOS_ASYNC_LOG: "1" # # time of keeping log files, days @@ -344,25 +242,8 @@ taoscfg: # debug flag for all log type, take effect when non-zero value\ #TAOS_DEBUG_FLAG: "143" - # enable/disable recording the SQL in taos client - #TAOS_ENABLE_RECORD_SQL: "0" - # generate core file when service crash #TAOS_ENABLE_CORE_FILE: "1" - - # maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden - #TAOS_MAX_BINARY_DISPLAY_WIDTH: "30" - - # enable/disable stream (continuous query) - #TAOS_STREAM: "1" - - # in retrieve blocking model, only in 50% query threads will be used in query processing in dnode - #TAOS_RETRIEVE_BLOCKING_MODEL: "0" - - # the maximum allowed query buffer size in MB during query processing for each data node - # -1 no limit (default) - # 0 no query allowed, queries are disabled - #TAOS_QUERY_BUFFER_SIZE: "-1" ``` ## Scaling Out diff --git a/docs/zh/10-deployment/05-helm.md b/docs/zh/10-deployment/05-helm.md index 9a723ff62f..34f3a7c5d9 100644 --- a/docs/zh/10-deployment/05-helm.md +++ b/docs/zh/10-deployment/05-helm.md @@ -171,70 +171,19 @@ taoscfg: TAOS_REPLICA: "1" - # number of days per DB file - # TAOS_DAYS: "10" - - # number of days to keep DB file, default is 10 years. - #TAOS_KEEP: "3650" - - # cache block size (Mbyte) - #TAOS_CACHE: "16" - - # number of cache blocks per vnode - #TAOS_BLOCKS: "6" - - # minimum rows of records in file block - #TAOS_MIN_ROWS: "100" - - # maximum rows of records in file block - #TAOS_MAX_ROWS: "4096" - - # - # TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core - #TAOS_NUM_OF_THREADS_PER_CORE: "1.0" + # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC + #TAOS_NUM_OF_RPC_THREADS: "2" # # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data #TAOS_NUM_OF_COMMIT_THREADS: "4" - # - # TAOS_RATIO_OF_QUERY_CORES: - # the proportion of total CPU cores available for query processing - # 2.0: the query threads will be set to double of the CPU cores. - # 1.0: all CPU cores are available for query processing [default]. - # 0.5: only half of the CPU cores are available for query. - # 0.0: only one core available. - #TAOS_RATIO_OF_QUERY_CORES: "1.0" - - # - # TAOS_KEEP_COLUMN_NAME: - # the last_row/first/last aggregator will not change the original column name in the result fields - #TAOS_KEEP_COLUMN_NAME: "0" - - # enable/disable backuping vnode directory when removing vnode - #TAOS_VNODE_BAK: "1" - # enable/disable installation / usage report #TAOS_TELEMETRY_REPORTING: "1" - # enable/disable load balancing - #TAOS_BALANCE: "1" - - # max timer control blocks - #TAOS_MAX_TMR_CTRL: "512" - # time interval of system monitor, seconds #TAOS_MONITOR_INTERVAL: "30" - # number of seconds allowed for a dnode to be offline, for cluster only - #TAOS_OFFLINE_THRESHOLD: "8640000" - - # RPC re-try timer, millisecond - #TAOS_RPC_TIMER: "1000" - - # RPC maximum time for ack, seconds. 
- #TAOS_RPC_MAX_TIME: "600" - # time interval of dnode status reporting to mnode, seconds, for cluster only #TAOS_STATUS_INTERVAL: "1" @@ -245,37 +194,7 @@ taoscfg: #TAOS_MIN_SLIDING_TIME: "10" # minimum time window, milli-second - #TAOS_MIN_INTERVAL_TIME: "10" - - # maximum delay before launching a stream computation, milli-second - #TAOS_MAX_STREAM_COMP_DELAY: "20000" - - # maximum delay before launching a stream computation for the first time, milli-second - #TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000" - - # retry delay when a stream computation fails, milli-second - #TAOS_RETRY_STREAM_COMP_DELAY: "10" - - # the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 - #TAOS_STREAM_COMP_DELAY_RATIO: "0.1" - - # max number of vgroups per db, 0 means configured automatically - #TAOS_MAX_VGROUPS_PER_DB: "0" - - # max number of tables per vnode - #TAOS_MAX_TABLES_PER_VNODE: "1000000" - - # the number of acknowledgments required for successful data writing - #TAOS_QUORUM: "1" - - # enable/disable compression - #TAOS_COMP: "2" - - # write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync - #TAOS_WAL_LEVEL: "1" - - # if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away - #TAOS_FSYNC: "3000" + #TAOS_MIN_INTERVAL_TIME: "1" # the compressed rpc message, option: # -1 (no compression) @@ -283,17 +202,8 @@ taoscfg: # > 0 (rpc message body which larger than this value will be compressed) #TAOS_COMPRESS_MSG_SIZE: "-1" - # max length of an SQL - #TAOS_MAX_SQL_LENGTH: "1048576" - - # the maximum number of records allowed for super table time sorting - #TAOS_MAX_NUM_OF_ORDERED_RES: "100000" - # max number of connections allowed in dnode - #TAOS_MAX_SHELL_CONNS: "5000" - - # max number of connections allowed in client - #TAOS_MAX_CONNECTIONS: "5000" + #TAOS_MAX_SHELL_CONNS: "50000" # stop writing logs when the disk size of the log folder is less than this value #TAOS_MINIMAL_LOG_DIR_G_B: "0.1" @@ -313,21 +223,8 @@ taoscfg: # enable/disable system monitor #TAOS_MONITOR: "1" - # enable/disable recording the SQL statements via restful interface - #TAOS_HTTP_ENABLE_RECORD_SQL: "0" - - # number of threads used to process http requests - #TAOS_HTTP_MAX_THREADS: "2" - - # maximum number of rows returned by the restful interface - #TAOS_RESTFUL_ROW_LIMIT: "10240" - - # The following parameter is used to limit the maximum number of lines in log files. - # max number of lines per log filters - # numOfLogLines 10000000 - # enable/disable async log - #TAOS_ASYNC_LOG: "0" + #TAOS_ASYNC_LOG: "1" # # time of keeping log files, days @@ -344,25 +241,8 @@ taoscfg: # debug flag for all log type, take effect when non-zero value\ #TAOS_DEBUG_FLAG: "143" - # enable/disable recording the SQL in taos client - #TAOS_ENABLE_RECORD_SQL: "0" - # generate core file when service crash #TAOS_ENABLE_CORE_FILE: "1" - - # maximum display width of binary and nchar fields in the shell. 
The parts exceeding this limit will be hidden - #TAOS_MAX_BINARY_DISPLAY_WIDTH: "30" - - # enable/disable stream (continuous query) - #TAOS_STREAM: "1" - - # in retrieve blocking model, only in 50% query threads will be used in query processing in dnode - #TAOS_RETRIEVE_BLOCKING_MODEL: "0" - - # the maximum allowed query buffer size in MB during query processing for each data node - # -1 no limit (default) - # 0 no query allowed, queries are disabled - #TAOS_QUERY_BUFFER_SIZE: "-1" ``` ## 扩容 diff --git a/source/dnode/vnode/src/tsdb/tsdbCommit.c b/source/dnode/vnode/src/tsdb/tsdbCommit.c index 020f3b0bc6..04a6de8472 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCommit.c +++ b/source/dnode/vnode/src/tsdb/tsdbCommit.c @@ -835,6 +835,9 @@ static int32_t tsdbMergeCommitLast(SCommitter *pCommitter, STbDataIter *pIter) { // set block data schema if need if (pBlockData->suid == 0 && pBlockData->uid == 0) { + code = tsdbCommitterUpdateTableSchema(pCommitter, pTbData->suid, pTbData->uid); + if (code) goto _err; + code = tBlockDataInit(pBlockData, pTbData->suid, pTbData->suid ? 0 : pTbData->uid, pCommitter->skmTable.pTSchema); if (code) goto _err; @@ -963,7 +966,20 @@ static int32_t tsdbCommitTableData(SCommitter *pCommitter, STbData *pTbData) { pRow = NULL; } - if (pRow == NULL) goto _exit; + if (pRow == NULL) { + if (pCommitter->dReader.pBlockIdx && tTABLEIDCmprFn(pCommitter->dReader.pBlockIdx, pTbData) == 0) { + SBlockIdx blockIdx = {.suid = pTbData->suid, .uid = pTbData->uid}; + code = tsdbWriteBlock(pCommitter->dWriter.pWriter, &pCommitter->dReader.mBlock, &blockIdx); + if (code) goto _err; + + if (taosArrayPush(pCommitter->dWriter.aBlockIdx, &blockIdx) == NULL) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + } + + goto _exit; + } int32_t iBlock = 0; SBlock block; diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index d56ede49f7..cadaf4a9d5 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -2146,6 +2146,7 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp // todo set the correct primary timestamp column // output the result + bool hasInterp = true; for (int32_t j = 0; j < pExprSup->numOfExprs; ++j) { SExprInfo* pExprInfo = &pExprSup->pExprInfo[j]; int32_t srcSlot = pExprInfo->base.pParam[0].pCol->slotId; @@ -2157,7 +2158,6 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp switch (pSliceInfo->fillType) { case TSDB_FILL_NULL: { colDataAppendNULL(pDst, rows); - pResBlock->info.rows += 1; break; } @@ -2177,7 +2177,6 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i); colDataAppend(pDst, rows, (char*)&v, false); } - pResBlock->info.rows += 1; break; } @@ -2191,6 +2190,7 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp // before interp range, do not fill if (start.key == INT64_MIN || end.key == INT64_MAX) { + hasInterp = false; break; } @@ -2202,28 +2202,27 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp } taosMemoryFree(current.val); - pResBlock->info.rows += 1; break; } case TSDB_FILL_PREV: { if (!pSliceInfo->isPrevRowSet) { + hasInterp = false; break; } SGroupKeys* pkey = taosArrayGet(pSliceInfo->pPrevRow, srcSlot); colDataAppend(pDst, rows, pkey->pData, false); - pResBlock->info.rows += 1; break; } case TSDB_FILL_NEXT: { if (!pSliceInfo->isNextRowSet) { + 
hasInterp = false; break; } SGroupKeys* pkey = taosArrayGet(pSliceInfo->pNextRow, srcSlot); colDataAppend(pDst, rows, pkey->pData, false); - pResBlock->info.rows += 1; break; } @@ -2232,6 +2231,11 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp break; } } + + if (hasInterp) { + pResBlock->info.rows += 1; + } + } static int32_t initPrevRowsKeeper(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock) { @@ -2412,6 +2416,11 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) { SColumnInfoData* pSrc = taosArrayGet(pBlock->pDataBlock, srcSlot); SColumnInfoData* pDst = taosArrayGet(pResBlock->pDataBlock, dstSlot); + if (colDataIsNull_s(pSrc, i)) { + colDataAppendNULL(pDst, pResBlock->info.rows); + continue; + } + char* v = colDataGetData(pSrc, i); colDataAppend(pDst, pResBlock->info.rows, v, false); } diff --git a/tests/script/tsim/parser/alter1.sim b/tests/script/tsim/parser/alter1.sim index 9d0049e45e..369419dcd9 100644 --- a/tests/script/tsim/parser/alter1.sim +++ b/tests/script/tsim/parser/alter1.sim @@ -130,4 +130,4 @@ endi # return -1 #endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/binary_escapeCharacter.sim b/tests/script/tsim/parser/binary_escapeCharacter.sim index 0b437d8b04..5a9c0e7bb1 100644 --- a/tests/script/tsim/parser/binary_escapeCharacter.sim +++ b/tests/script/tsim/parser/binary_escapeCharacter.sim @@ -101,4 +101,4 @@ sql_error insert into tb values(now, '\'); #sql_error insert into tb values(now, '\\\n'); sql insert into tb values(now, '\n'); -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/col_arithmetic_operation.sim b/tests/script/tsim/parser/col_arithmetic_operation.sim index f22beefdf8..9a2ba34c85 100644 --- a/tests/script/tsim/parser/col_arithmetic_operation.sim +++ b/tests/script/tsim/parser/col_arithmetic_operation.sim @@ -132,4 +132,4 @@ sql_error select max(c1-c2) from $tb print =====================> td-1764 sql select sum(c1)/count(*), sum(c1) as b, count(*) as b from $stb interval(1y) -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/columnValue_unsign.sim b/tests/script/tsim/parser/columnValue_unsign.sim index 85ff490bf4..7ae1b20eca 100644 --- a/tests/script/tsim/parser/columnValue_unsign.sim +++ b/tests/script/tsim/parser/columnValue_unsign.sim @@ -129,4 +129,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/fill_stb.sim b/tests/script/tsim/parser/fill_stb.sim index 656b1ac94e..6c61631aa8 100644 --- a/tests/script/tsim/parser/fill_stb.sim +++ b/tests/script/tsim/parser/fill_stb.sim @@ -279,7 +279,7 @@ endi #endi ## linear fill -sql select _wstart, max(c1), min(c2), avg(c3), sum(c4), first(c7), last(c8), first(c9) from $stb where ts >= $ts0 and ts <= $tsu partition by t1 interval(5m) fill(linear) +sql select _wstart, max(c1), min(c2), avg(c3), sum(c4), first(c7), last(c8), first(c9) from $stb where ts >= $ts0 and ts <= $tsu partition by t1 interval(5m) fill(linear) $val = $rowNum * 2 $val = $val - 1 $val = $val * $tbNum diff --git a/tests/script/tsim/parser/import_file.sim b/tests/script/tsim/parser/import_file.sim index e031e0249d..37dc0c4476 
100644 --- a/tests/script/tsim/parser/import_file.sim +++ b/tests/script/tsim/parser/import_file.sim @@ -69,4 +69,4 @@ endi system rm -f $inFileName -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/repeatAlter.sim b/tests/script/tsim/parser/repeatAlter.sim index d28a03e193..b4012048cc 100644 --- a/tests/script/tsim/parser/repeatAlter.sim +++ b/tests/script/tsim/parser/repeatAlter.sim @@ -6,4 +6,4 @@ while $i <= $loops $i = $i + 1 endw -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/select_from_cache_disk.sim b/tests/script/tsim/parser/select_from_cache_disk.sim index 0983e36a3a..3c0b13c638 100644 --- a/tests/script/tsim/parser/select_from_cache_disk.sim +++ b/tests/script/tsim/parser/select_from_cache_disk.sim @@ -60,4 +60,4 @@ if $data12 != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/single_row_in_tb.sim b/tests/script/tsim/parser/single_row_in_tb.sim index 1bd53ad24e..e7b4c9a871 100644 --- a/tests/script/tsim/parser/single_row_in_tb.sim +++ b/tests/script/tsim/parser/single_row_in_tb.sim @@ -33,4 +33,4 @@ print ================== server restart completed run tsim/parser/single_row_in_tb_query.sim -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/single_row_in_tb_query.sim b/tests/script/tsim/parser/single_row_in_tb_query.sim index 422756b798..37e193f9d2 100644 --- a/tests/script/tsim/parser/single_row_in_tb_query.sim +++ b/tests/script/tsim/parser/single_row_in_tb_query.sim @@ -195,4 +195,4 @@ endi print ===============>safty check TD-4927 sql select first(ts, c1) from sr_stb where ts<1 group by t1; -sql select first(ts, c1) from sr_stb where ts>0 and ts<1; \ No newline at end of file +sql select first(ts, c1) from sr_stb where ts>0 and ts<1; diff --git a/tests/script/tsim/query/complex_group.sim b/tests/script/tsim/query/complex_group.sim index 3dad8059cd..d7d14c0ee8 100644 --- a/tests/script/tsim/query/complex_group.sim +++ b/tests/script/tsim/query/complex_group.sim @@ -454,4 +454,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_having.sim b/tests/script/tsim/query/complex_having.sim index 9e28c3803e..4c0af6d10c 100644 --- a/tests/script/tsim/query/complex_having.sim +++ b/tests/script/tsim/query/complex_having.sim @@ -365,4 +365,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_limit.sim b/tests/script/tsim/query/complex_limit.sim index 2a90e7ff1d..acb133f650 100644 --- a/tests/script/tsim/query/complex_limit.sim +++ b/tests/script/tsim/query/complex_limit.sim @@ -508,4 +508,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_select.sim b/tests/script/tsim/query/complex_select.sim index f4c9877bfd..b7697e5cab 100644 --- a/tests/script/tsim/query/complex_select.sim 
+++ b/tests/script/tsim/query/complex_select.sim @@ -558,4 +558,4 @@ if $data00 != 33 then endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_where.sim b/tests/script/tsim/query/complex_where.sim index bda1c036f0..847f67ed34 100644 --- a/tests/script/tsim/query/complex_where.sim +++ b/tests/script/tsim/query/complex_where.sim @@ -669,4 +669,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/crash_sql.sim b/tests/script/tsim/query/crash_sql.sim index 1d20491869..169f2e7272 100644 --- a/tests/script/tsim/query/crash_sql.sim +++ b/tests/script/tsim/query/crash_sql.sim @@ -79,4 +79,4 @@ print ================ start query ====================== print ================ SQL used to cause taosd or taos shell crash sql_error select sum(c1) ,count(c1) from ct4 group by c1 having sum(c10) between 0 and 1 ; -#system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +#system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/diff.sim b/tests/script/tsim/query/diff.sim index f0d82b01e9..badd139a9f 100644 --- a/tests/script/tsim/query/diff.sim +++ b/tests/script/tsim/query/diff.sim @@ -25,17 +25,17 @@ $i = 0 while $i < $tbNum $tb = $tbPrefix . $i sql create table $tb using $mt tags( $i ) - + $x = 0 while $x < $rowNum $cc = $x * 60000 $ms = 1601481600000 + $cc - sql insert into $tb values ($ms , $x ) + sql insert into $tb values ($ms , $x ) $x = $x + 1 - endw - + endw + $i = $i + 1 -endw +endw sleep 100 @@ -61,7 +61,7 @@ sql select _rowts, diff(tbcol) from $tb where ts > $ms print ===> rows: $rows print ===> $data00 $data01 $data02 $data03 $data04 $data05 print ===> $data10 $data11 $data12 $data13 $data14 $data15 -if $data11 != 1 then +if $data11 != 1 then return -1 endi @@ -72,7 +72,7 @@ sql select _rowts, diff(tbcol) from $tb where ts <= $ms print ===> rows: $rows print ===> $data00 $data01 $data02 $data03 $data04 $data05 print ===> $data10 $data11 $data12 $data13 $data14 $data15 -if $data11 != 1 then +if $data11 != 1 then return -1 endi @@ -82,7 +82,7 @@ sql select _rowts, diff(tbcol) as b from $tb print ===> rows: $rows print ===> $data00 $data01 $data02 $data03 $data04 $data05 print ===> $data10 $data11 $data12 $data13 $data14 $data15 -if $data11 != 1 then +if $data11 != 1 then return -1 endi @@ -107,4 +107,4 @@ if $rows != 2 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/explain.sim b/tests/script/tsim/query/explain.sim index 30a857815c..2871252d91 100644 --- a/tests/script/tsim/query/explain.sim +++ b/tests/script/tsim/query/explain.sim @@ -3,7 +3,7 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/exec.sh -n dnode1 -s start sql connect -print ======== step1 +print ======== step1 sql create database db1 vgroups 3; sql use db1; sql select * from information_schema.ins_databases; @@ -30,7 +30,7 @@ sql insert into tb4 values (now, 4, "Bitmap Heap Scan on tenk1 t1 (cost=5.07..2 #sql insert into tb4 values (now, 4, "Bitmap Heap Scan on tenk1 t1 (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)"); -print ======== step2 +print ======== step2 sql explain select * from st1 where -2; sql explain select ts from tb1; sql explain select * from st1; 
@@ -41,14 +41,14 @@ sql explain select count(*),sum(f1) from st1; sql explain select count(*),sum(f1) from st1 group by f1; #sql explain select count(f1) from tb1 interval(10s, 2s) sliding(3s) fill(prev); -print ======== step3 +print ======== step3 sql explain verbose true select * from st1 where -2; sql explain verbose true select ts from tb1 where f1 > 0; sql explain verbose true select * from st1 where f1 > 0 and ts > '2020-10-31 00:00:00' and ts < '2021-10-31 00:00:00'; sql explain verbose true select count(*) from st1 partition by tbname slimit 1 soffset 2 limit 2 offset 1; sql explain verbose true select * from information_schema.ins_stables where db_name='db2'; -print ======== step4 +print ======== step4 sql explain analyze select ts from st1 where -2; sql explain analyze select ts from tb1; sql explain analyze select ts from st1; @@ -59,7 +59,7 @@ sql explain analyze select count(*),sum(f1) from tb1; sql explain analyze select count(*),sum(f1) from st1; sql explain analyze select count(*),sum(f1) from st1 group by f1; -print ======== step5 +print ======== step5 sql explain analyze verbose true select ts from st1 where -2; sql explain analyze verbose true select ts from tb1; sql explain analyze verbose true select ts from st1; @@ -87,12 +87,12 @@ sql explain analyze verbose true select count(f1) from st1 group by tbname; #sql explain select * from tb1, tb2 where tb1.ts=tb2.ts; #sql explain select * from st1, st2 where tb1.ts=tb2.ts; #sql explain analyze verbose true select sum(a+b) from (select _rowts, min(f1) b,count(*) a from st1 where f1 > 0 interval(1a)) where a < 0 interval(1s); -#sql explain select min(f1) from st1 interval(1m, 2a) sliding(30s); +#sql explain select min(f1) from st1 interval(1m, 2a) sliding(30s); #sql explain verbose true select count(*),sum(f1) from st1 where f1 > 0 and ts > '2021-10-31 00:00:00' group by f1 having sum(f1) > 0; -#sql explain analyze select min(f1) from st1 interval(3m, 2a) sliding(1m); +#sql explain analyze select min(f1) from st1 interval(3m, 2a) sliding(1m); #sql explain analyze select count(f1) from tb1 interval(10s, 2s) sliding(3s) fill(prev); #sql explain analyze verbose true select count(*),sum(f1) from st1 where f1 > 0 and ts > '2021-10-31 00:00:00' group by f1 having sum(f1) > 0; -#sql explain analyze verbose true select min(f1) from st1 interval(3m, 2a) sliding(1m); +#sql explain analyze verbose true select min(f1) from st1 interval(3m, 2a) sliding(1m); system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/interval.sim b/tests/script/tsim/query/interval.sim index cc8a73daec..833da4a8ba 100644 --- a/tests/script/tsim/query/interval.sim +++ b/tests/script/tsim/query/interval.sim @@ -177,4 +177,4 @@ print =============== clear # return -1 #endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/scalarFunction.sim b/tests/script/tsim/query/scalarFunction.sim index 103e66e54e..1b8115fec6 100644 --- a/tests/script/tsim/query/scalarFunction.sim +++ b/tests/script/tsim/query/scalarFunction.sim @@ -33,7 +33,7 @@ print =============== create normal table sql create table ntb (ts timestamp, c1 int, c2 float, c3 double) sql show tables -if $rows != 101 then +if $rows != 101 then return -1 endi @@ -444,7 +444,7 @@ if $loop_test == 0 then print =============== stop and restart taosd system sh/exec.sh -n dnode1 -s stop -x SIGINT system sh/exec.sh -n dnode1 -s start - + $loop_cnt = 0 check_dnode_ready_0: $loop_cnt 
= $loop_cnt + 1 @@ -462,7 +462,7 @@ if $loop_test == 0 then goto check_dnode_ready_0 endi - $loop_test = 1 + $loop_test = 1 goto loop_test_pos endi diff --git a/tests/script/tsim/query/scalarNull.sim b/tests/script/tsim/query/scalarNull.sim index ec95c94f23..6abe3d62d9 100644 --- a/tests/script/tsim/query/scalarNull.sim +++ b/tests/script/tsim/query/scalarNull.sim @@ -3,7 +3,7 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/exec.sh -n dnode1 -s start sql connect -print ======== step1 +print ======== step1 sql create database db1 vgroups 3; sql use db1; sql select * from information_schema.ins_databases; diff --git a/tests/script/tsim/query/session.sim b/tests/script/tsim/query/session.sim index 158448d765..b6eb4ed3aa 100644 --- a/tests/script/tsim/query/session.sim +++ b/tests/script/tsim/query/session.sim @@ -35,8 +35,8 @@ sql INSERT INTO dev_001 VALUES('2020-05-13 13:00:00.001', 12) sql INSERT INTO dev_001 VALUES('2020-05-14 13:00:00.001', 13) sql INSERT INTO dev_001 VALUES('2020-05-15 14:00:00.000', 14) sql INSERT INTO dev_001 VALUES('2020-05-20 10:00:00.000', 15) -sql INSERT INTO dev_001 VALUES('2020-05-27 10:00:00.001', 16) - +sql INSERT INTO dev_001 VALUES('2020-05-27 10:00:00.001', 16) + sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.000', 1) sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.005', 2) sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.009', 3) @@ -46,7 +46,7 @@ sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.036', 6) sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.51', 7) # vnode does not return the precision of the table -print ====> create database d1 precision 'us' +print ====> create database d1 precision 'us' sql create database d1 precision 'us' sql use d1 sql create table dev_001 (ts timestamp ,i timestamp ,j int) @@ -54,7 +54,7 @@ sql insert into dev_001 values(1623046993681000,now,1)(1623046993681001,now+1s,2 sql create table secondts(ts timestamp,t2 timestamp,i int) sql insert into secondts values(1623046993681000,now,1)(1623046993681001,now+1s,2)(1623046993681002,now+2s,3)(1623046993681004,now+5s,4) -$loop_test = 0 +$loop_test = 0 loop_test_pos: sql use $dbNamme @@ -299,7 +299,7 @@ if $loop_test == 0 then print =============== stop and restart taosd system sh/exec.sh -n dnode1 -s stop -x SIGINT system sh/exec.sh -n dnode1 -s start - + $loop_cnt = 0 check_dnode_ready_0: $loop_cnt = $loop_cnt + 1 @@ -317,7 +317,7 @@ if $loop_test == 0 then goto check_dnode_ready_0 endi - $loop_test = 1 + $loop_test = 1 goto loop_test_pos endi diff --git a/tests/script/tsim/query/stddev.sim b/tests/script/tsim/query/stddev.sim index d61c7273e1..b45c7d80a3 100644 --- a/tests/script/tsim/query/stddev.sim +++ b/tests/script/tsim/query/stddev.sim @@ -409,4 +409,4 @@ if $rows != 2 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/time_process.sim b/tests/script/tsim/query/time_process.sim index b3c0e9561f..83a6445846 100644 --- a/tests/script/tsim/query/time_process.sim +++ b/tests/script/tsim/query/time_process.sim @@ -111,4 +111,4 @@ if $rows != 2 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/udf.sim b/tests/script/tsim/query/udf.sim index 7cc1403bcb..7f8b1044ef 100644 --- a/tests/script/tsim/query/udf.sim +++ b/tests/script/tsim/query/udf.sim @@ -9,7 +9,7 @@ system sh/cfg.sh -n dnode1 -c udf -v 1 system 
sh/exec.sh -n dnode1 -s start sql connect -print ======== step1 udf +print ======== step1 udf system sh/compile_udf.sh sql create database udf vgroups 3; sql use udf; diff --git a/tests/system-test/2-query/interp.py b/tests/system-test/2-query/interp.py index 934ba9e161..5550519e05 100644 --- a/tests/system-test/2-query/interp.py +++ b/tests/system-test/2-query/interp.py @@ -551,7 +551,57 @@ class TDTestCase: tdSql.checkData(0, 0, 15) tdSql.checkData(1, 0, 15) - tdLog.printNoPrefix("==========step9:test error cases") + tdLog.printNoPrefix("==========step9:test multi-interp cases") + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(null)") + tdSql.checkRows(5) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, None) + tdSql.checkData(1, i, None) + tdSql.checkData(2, i, 15) + tdSql.checkData(3, i, None) + tdSql.checkData(4, i, None) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(value, 1)") + tdSql.checkRows(5) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, 1) + tdSql.checkData(1, i, 1) + tdSql.checkData(2, i, 15) + tdSql.checkData(3, i, 1) + tdSql.checkData(4, i, 1) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(prev)") + tdSql.checkRows(5) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, 5) + tdSql.checkData(1, i, 5) + tdSql.checkData(2, i, 15) + tdSql.checkData(3, i, 15) + tdSql.checkData(4, i, 15) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(next)") + tdSql.checkRows(3) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, 15) + tdSql.checkData(1, i, 15) + tdSql.checkData(2, i, 15) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(linear)") + tdSql.checkRows(1) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, 15) + + tdLog.printNoPrefix("==========step10:test error cases") tdSql.error(f"select interp(c0) from {dbname}.{tbname}") tdSql.error(f"select interp(c0) from {dbname}.{tbname} range('2020-02-10 00:00:05', '2020-02-15 00:00:05')")
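
Note on the executor change above: with the new `hasInterp` flag in `genInterpolationResult()`, `pResBlock->info.rows` is only advanced when a value was actually produced, so `fill(prev)`, `fill(next)` and `fill(linear)` no longer emit rows for timestamps that have no source row on the required side, while `fill(null)` and `fill(value, ...)` still emit one row per step. The added cases in tests/system-test/2-query/interp.py assert exactly that. The snippet below is only an illustrative sketch, not part of the patch: it replays the added multi-column query through the taospy connector, assuming a locally running server and a table seeded the same way as the test (the connection parameters and the `db.tb` names are placeholders).

```python
# Minimal sketch (not part of the patch): replay the multi-column INTERP query
# added in tests/system-test/2-query/interp.py and compare fill modes.
# Assumes a local TDengine server, the taospy connector, and a table `db.tb`
# seeded the same way as the test; names and credentials are placeholders.
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")

query = (
    "select interp(c0), interp(c1), interp(c2), interp(c3) from db.tb "
    "range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill({})"
)

for mode in ("null", "value, 1", "prev", "next", "linear"):
    # With fill(null)/fill(value) every step in the range yields a row; with
    # fill(prev)/fill(next)/fill(linear) the steps that have no source row on
    # the required side are now skipped entirely (the hasInterp change above).
    rows = conn.query(query.format(mode)).fetch_all()
    print(mode, len(rows), rows)

conn.close()
```

Run against the test's seed data, the per-mode row counts should line up with the assertions above: 5, 5, 5, 3, and 1 rows respectively.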