Merge branch '3.0' of github.com:taosdata/TDengine into 3.0

gccgdb1234 2022-08-25 14:10:49 +08:00
commit 4dc88f3dd6
32 changed files with 168 additions and 330 deletions


@@ -303,14 +303,14 @@ Query OK, 2 row(s) in set (0.001700s)
 TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, so that users can quickly develop applications:
-- [Java](https://docs.taosdata.com/reference/connector/java/)
-- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
+- [Java](https://docs.taosdata.com/connector/java/)
+- [C/C++](https://docs.taosdata.com/connector/cpp/)
+- [Python](https://docs.taosdata.com/connector/python/)
+- [Go](https://docs.taosdata.com/connector/go/)
+- [Node.js](https://docs.taosdata.com/connector/node/)
+- [Rust](https://docs.taosdata.com/connector/rust/)
+- [C#](https://docs.taosdata.com/connector/csharp/)
+- [RESTful API](https://docs.taosdata.com/connector/rest-api/)
 # Become a community contributor


@@ -19,29 +19,29 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde
 # What is TDengine
-TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
+TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/what-is-a-time-series-database/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
-- **High-Performance**: TDengine is the only time-series database to solve the high-cardinality issue, supporting billions of data collection points while outperforming other time-series databases in data ingestion, querying and data compression.
+- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high-cardinality issue, supporting billions of data collection points while outperforming other time-series databases in data ingestion, querying and data compression.
-- **Simplified Solution**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
+- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
-- **Cloud Native**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native time-series database and can be deployed on public, private or hybrid clouds.
+- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native time-series database and can be deployed on public, private or hybrid clouds.
-- **Ease of Use**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
+- **[Ease of Use](https://docs.tdengine.com/get-started/docker/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
-- **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
+- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
-- **Open Source**: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
+- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
 # Documentation
-For the user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.taosdata.com) ([TDengine 文档](https://docs.taosdata.com))
+For the user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
 # Building
-At the moment, TDengine server supports running on Linux and Windows systems. Any OS application can also choose the RESTful interface of taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
+At the moment, TDengine server supports running on Linux and Windows systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
-You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.
+You can choose to install from source code, a [container](https://docs.tdengine.com/get-started/docker/), an [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
 TDengine provides a few useful tools, such as taosBenchmark (formerly named taosdemo) and taosdump, which were part of TDengine. By default, building TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled together with TDengine.
@@ -256,6 +256,7 @@ After building successfully, TDengine can be installed by:
 nmake install
 ```
+<!--
 ## On macOS platform
 After building successfully, TDengine can be installed by:
@@ -263,6 +264,7 @@ After building successfully, TDengine can be installed by:
 ```bash
 sudo make install
 ```
+-->
 ## Quick Run
@@ -304,14 +306,14 @@ Query OK, 2 row(s) in set (0.001700s)
 TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
-- [Java](https://docs.taosdata.com/reference/connector/java/)
-- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
+- [Java](https://docs.tdengine.com/reference/connector/java/)
+- [C/C++](https://docs.tdengine.com/reference/connector/cpp/)
+- [Python](https://docs.tdengine.com/reference/connector/python/)
+- [Go](https://docs.tdengine.com/reference/connector/go/)
+- [Node.js](https://docs.tdengine.com/reference/connector/node/)
+- [Rust](https://docs.tdengine.com/reference/connector/rust/)
+- [C#](https://docs.tdengine.com/reference/connector/csharp/)
+- [RESTful API](https://docs.tdengine.com/reference/rest-api/)
 # Contribute to TDengine


@@ -170,71 +170,21 @@ taoscfg:
   # number of replications, for cluster only
   TAOS_REPLICA: "1"
-  # number of days per DB file
-  # TAOS_DAYS: "10"
-  # number of days to keep DB file, default is 10 years.
-  #TAOS_KEEP: "3650"
-  # cache block size (Mbyte)
-  #TAOS_CACHE: "16"
-  # number of cache blocks per vnode
-  #TAOS_BLOCKS: "6"
-  # minimum rows of records in file block
-  #TAOS_MIN_ROWS: "100"
-  # maximum rows of records in file block
-  #TAOS_MAX_ROWS: "4096"
   #
-  # TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
-  #TAOS_NUM_OF_THREADS_PER_CORE: "1.0"
+  # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
+  #TAOS_NUM_OF_RPC_THREADS: "2"
   #
   # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
   #TAOS_NUM_OF_COMMIT_THREADS: "4"
-  #
-  # TAOS_RATIO_OF_QUERY_CORES:
-  # the proportion of total CPU cores available for query processing
-  # 2.0: the query threads will be set to double of the CPU cores.
-  # 1.0: all CPU cores are available for query processing [default].
-  # 0.5: only half of the CPU cores are available for query.
-  # 0.0: only one core available.
-  #TAOS_RATIO_OF_QUERY_CORES: "1.0"
-  #
-  # TAOS_KEEP_COLUMN_NAME:
-  # the last_row/first/last aggregator will not change the original column name in the result fields
-  #TAOS_KEEP_COLUMN_NAME: "0"
-  # enable/disable backuping vnode directory when removing vnode
-  #TAOS_VNODE_BAK: "1"
   # enable/disable installation / usage report
   #TAOS_TELEMETRY_REPORTING: "1"
-  # enable/disable load balancing
-  #TAOS_BALANCE: "1"
-  # max timer control blocks
-  #TAOS_MAX_TMR_CTRL: "512"
   # time interval of system monitor, seconds
   #TAOS_MONITOR_INTERVAL: "30"
-  # number of seconds allowed for a dnode to be offline, for cluster only
-  #TAOS_OFFLINE_THRESHOLD: "8640000"
-  # RPC re-try timer, millisecond
-  #TAOS_RPC_TIMER: "1000"
-  # RPC maximum time for ack, seconds.
-  #TAOS_RPC_MAX_TIME: "600"
   # time interval of dnode status reporting to mnode, seconds, for cluster only
   #TAOS_STATUS_INTERVAL: "1"
@@ -245,37 +195,7 @@ taoscfg:
   #TAOS_MIN_SLIDING_TIME: "10"
   # minimum time window, milli-second
-  #TAOS_MIN_INTERVAL_TIME: "10"
+  #TAOS_MIN_INTERVAL_TIME: "1"
-  # maximum delay before launching a stream computation, milli-second
-  #TAOS_MAX_STREAM_COMP_DELAY: "20000"
-  # maximum delay before launching a stream computation for the first time, milli-second
-  #TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"
-  # retry delay when a stream computation fails, milli-second
-  #TAOS_RETRY_STREAM_COMP_DELAY: "10"
-  # the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
-  #TAOS_STREAM_COMP_DELAY_RATIO: "0.1"
-  # max number of vgroups per db, 0 means configured automatically
-  #TAOS_MAX_VGROUPS_PER_DB: "0"
-  # max number of tables per vnode
-  #TAOS_MAX_TABLES_PER_VNODE: "1000000"
-  # the number of acknowledgments required for successful data writing
-  #TAOS_QUORUM: "1"
-  # enable/disable compression
-  #TAOS_COMP: "2"
-  # write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
-  #TAOS_WAL_LEVEL: "1"
-  # if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
-  #TAOS_FSYNC: "3000"
   # the compressed rpc message, option:
   # -1 (no compression)
@@ -283,17 +203,8 @@ taoscfg:
   # > 0 (rpc message body which larger than this value will be compressed)
   #TAOS_COMPRESS_MSG_SIZE: "-1"
-  # max length of an SQL
-  #TAOS_MAX_SQL_LENGTH: "1048576"
-  # the maximum number of records allowed for super table time sorting
-  #TAOS_MAX_NUM_OF_ORDERED_RES: "100000"
   # max number of connections allowed in dnode
-  #TAOS_MAX_SHELL_CONNS: "5000"
+  #TAOS_MAX_SHELL_CONNS: "50000"
-  # max number of connections allowed in client
-  #TAOS_MAX_CONNECTIONS: "5000"
   # stop writing logs when the disk size of the log folder is less than this value
   #TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
@@ -313,21 +224,8 @@ taoscfg:
   # enable/disable system monitor
   #TAOS_MONITOR: "1"
-  # enable/disable recording the SQL statements via restful interface
-  #TAOS_HTTP_ENABLE_RECORD_SQL: "0"
-  # number of threads used to process http requests
-  #TAOS_HTTP_MAX_THREADS: "2"
-  # maximum number of rows returned by the restful interface
-  #TAOS_RESTFUL_ROW_LIMIT: "10240"
-  # The following parameter is used to limit the maximum number of lines in log files.
-  # max number of lines per log filters
-  # numOfLogLines 10000000
   # enable/disable async log
-  #TAOS_ASYNC_LOG: "0"
+  #TAOS_ASYNC_LOG: "1"
   #
   # time of keeping log files, days
@@ -344,25 +242,8 @@ taoscfg:
   # debug flag for all log type, take effect when non-zero value\
   #TAOS_DEBUG_FLAG: "143"
-  # enable/disable recording the SQL in taos client
-  #TAOS_ENABLE_RECORD_SQL: "0"
   # generate core file when service crash
   #TAOS_ENABLE_CORE_FILE: "1"
-  # maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
-  #TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"
-  # enable/disable stream (continuous query)
-  #TAOS_STREAM: "1"
-  # in retrieve blocking model, only in 50% query threads will be used in query processing in dnode
-  #TAOS_RETRIEVE_BLOCKING_MODEL: "0"
-  # the maximum allowed query buffer size in MB during query processing for each data node
-  # -1 no limit (default)
-  # 0 no query allowed, queries are disabled
-  #TAOS_QUERY_BUFFER_SIZE: "-1"
 ```
 ## Scaling Out


@@ -171,70 +171,19 @@ taoscfg:
   TAOS_REPLICA: "1"
-  # number of days per DB file
-  # TAOS_DAYS: "10"
-  # number of days to keep DB file, default is 10 years.
-  #TAOS_KEEP: "3650"
-  # cache block size (Mbyte)
-  #TAOS_CACHE: "16"
-  # number of cache blocks per vnode
-  #TAOS_BLOCKS: "6"
-  # minimum rows of records in file block
-  #TAOS_MIN_ROWS: "100"
-  # maximum rows of records in file block
-  #TAOS_MAX_ROWS: "4096"
-  #
-  # TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
-  #TAOS_NUM_OF_THREADS_PER_CORE: "1.0"
+  # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
+  #TAOS_NUM_OF_RPC_THREADS: "2"
   #
   # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
   #TAOS_NUM_OF_COMMIT_THREADS: "4"
-  #
-  # TAOS_RATIO_OF_QUERY_CORES:
-  # the proportion of total CPU cores available for query processing
-  # 2.0: the query threads will be set to double of the CPU cores.
-  # 1.0: all CPU cores are available for query processing [default].
-  # 0.5: only half of the CPU cores are available for query.
-  # 0.0: only one core available.
-  #TAOS_RATIO_OF_QUERY_CORES: "1.0"
-  #
-  # TAOS_KEEP_COLUMN_NAME:
-  # the last_row/first/last aggregator will not change the original column name in the result fields
-  #TAOS_KEEP_COLUMN_NAME: "0"
-  # enable/disable backuping vnode directory when removing vnode
-  #TAOS_VNODE_BAK: "1"
   # enable/disable installation / usage report
   #TAOS_TELEMETRY_REPORTING: "1"
-  # enable/disable load balancing
-  #TAOS_BALANCE: "1"
-  # max timer control blocks
-  #TAOS_MAX_TMR_CTRL: "512"
   # time interval of system monitor, seconds
   #TAOS_MONITOR_INTERVAL: "30"
-  # number of seconds allowed for a dnode to be offline, for cluster only
-  #TAOS_OFFLINE_THRESHOLD: "8640000"
-  # RPC re-try timer, millisecond
-  #TAOS_RPC_TIMER: "1000"
-  # RPC maximum time for ack, seconds.
-  #TAOS_RPC_MAX_TIME: "600"
   # time interval of dnode status reporting to mnode, seconds, for cluster only
   #TAOS_STATUS_INTERVAL: "1"
@@ -245,37 +194,7 @@ taoscfg:
   #TAOS_MIN_SLIDING_TIME: "10"
   # minimum time window, milli-second
-  #TAOS_MIN_INTERVAL_TIME: "10"
+  #TAOS_MIN_INTERVAL_TIME: "1"
-  # maximum delay before launching a stream computation, milli-second
-  #TAOS_MAX_STREAM_COMP_DELAY: "20000"
-  # maximum delay before launching a stream computation for the first time, milli-second
-  #TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"
-  # retry delay when a stream computation fails, milli-second
-  #TAOS_RETRY_STREAM_COMP_DELAY: "10"
-  # the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
-  #TAOS_STREAM_COMP_DELAY_RATIO: "0.1"
-  # max number of vgroups per db, 0 means configured automatically
-  #TAOS_MAX_VGROUPS_PER_DB: "0"
-  # max number of tables per vnode
-  #TAOS_MAX_TABLES_PER_VNODE: "1000000"
-  # the number of acknowledgments required for successful data writing
-  #TAOS_QUORUM: "1"
-  # enable/disable compression
-  #TAOS_COMP: "2"
-  # write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
-  #TAOS_WAL_LEVEL: "1"
-  # if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
-  #TAOS_FSYNC: "3000"
   # the compressed rpc message, option:
   # -1 (no compression)
@@ -283,17 +202,8 @@ taoscfg:
   # > 0 (rpc message body which larger than this value will be compressed)
   #TAOS_COMPRESS_MSG_SIZE: "-1"
-  # max length of an SQL
-  #TAOS_MAX_SQL_LENGTH: "1048576"
-  # the maximum number of records allowed for super table time sorting
-  #TAOS_MAX_NUM_OF_ORDERED_RES: "100000"
   # max number of connections allowed in dnode
-  #TAOS_MAX_SHELL_CONNS: "5000"
+  #TAOS_MAX_SHELL_CONNS: "50000"
-  # max number of connections allowed in client
-  #TAOS_MAX_CONNECTIONS: "5000"
   # stop writing logs when the disk size of the log folder is less than this value
   #TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
@@ -313,21 +223,8 @@ taoscfg:
   # enable/disable system monitor
   #TAOS_MONITOR: "1"
-  # enable/disable recording the SQL statements via restful interface
-  #TAOS_HTTP_ENABLE_RECORD_SQL: "0"
-  # number of threads used to process http requests
-  #TAOS_HTTP_MAX_THREADS: "2"
-  # maximum number of rows returned by the restful interface
-  #TAOS_RESTFUL_ROW_LIMIT: "10240"
-  # The following parameter is used to limit the maximum number of lines in log files.
-  # max number of lines per log filters
-  # numOfLogLines 10000000
   # enable/disable async log
-  #TAOS_ASYNC_LOG: "0"
+  #TAOS_ASYNC_LOG: "1"
   #
   # time of keeping log files, days
@@ -344,25 +241,8 @@ taoscfg:
   # debug flag for all log type, take effect when non-zero value\
   #TAOS_DEBUG_FLAG: "143"
-  # enable/disable recording the SQL in taos client
-  #TAOS_ENABLE_RECORD_SQL: "0"
   # generate core file when service crash
   #TAOS_ENABLE_CORE_FILE: "1"
-  # maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
-  #TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"
-  # enable/disable stream (continuous query)
-  #TAOS_STREAM: "1"
-  # in retrieve blocking model, only in 50% query threads will be used in query processing in dnode
-  #TAOS_RETRIEVE_BLOCKING_MODEL: "0"
-  # the maximum allowed query buffer size in MB during query processing for each data node
-  # -1 no limit (default)
-  # 0 no query allowed, queries are disabled
-  #TAOS_QUERY_BUFFER_SIZE: "-1"
 ```
 ## Scaling Out


@@ -835,6 +835,9 @@ static int32_t tsdbMergeCommitLast(SCommitter *pCommitter, STbDataIter *pIter) {
   // set block data schema if need
   if (pBlockData->suid == 0 && pBlockData->uid == 0) {
+    code = tsdbCommitterUpdateTableSchema(pCommitter, pTbData->suid, pTbData->uid);
+    if (code) goto _err;
     code =
         tBlockDataInit(pBlockData, pTbData->suid, pTbData->suid ? 0 : pTbData->uid, pCommitter->skmTable.pTSchema);
     if (code) goto _err;
@@ -963,7 +966,20 @@ static int32_t tsdbCommitTableData(SCommitter *pCommitter, STbData *pTbData) {
     pRow = NULL;
   }
-  if (pRow == NULL) goto _exit;
+  if (pRow == NULL) {
+    if (pCommitter->dReader.pBlockIdx && tTABLEIDCmprFn(pCommitter->dReader.pBlockIdx, pTbData) == 0) {
+      SBlockIdx blockIdx = {.suid = pTbData->suid, .uid = pTbData->uid};
+      code = tsdbWriteBlock(pCommitter->dWriter.pWriter, &pCommitter->dReader.mBlock, &blockIdx);
+      if (code) goto _err;
+      if (taosArrayPush(pCommitter->dWriter.aBlockIdx, &blockIdx) == NULL) {
+        code = TSDB_CODE_OUT_OF_MEMORY;
+        goto _err;
+      }
+    }
+    goto _exit;
+  }
   int32_t iBlock = 0;
   SBlock block;
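The new branch in `tsdbCommitTableData` covers a table whose memory buffer contributed no new rows: if the disk reader is positioned on that table's existing block index, the existing blocks are rewritten as-is and the index carried forward, so already-committed data is not dropped. A loose Python analogy of that control flow (hypothetical structures, not TDengine's real API):

```python
def commit_table(mem_rows, reader_index, table_id, written_indexes):
    """reader_index: the table id the disk reader is positioned on, or None.
    When there are no new rows but an index for this table already exists on
    disk, re-emit it (mirrors tTABLEIDCmprFn(pBlockIdx, pTbData) == 0)."""
    if not mem_rows:
        if reader_index is not None and reader_index == table_id:
            written_indexes.append(table_id)  # carry existing blocks forward
        return written_indexes
    # normal merge path for tables that do have new rows (elided here)
    written_indexes.append(table_id)
    return written_indexes
```

Without the no-new-rows branch, a table that happened to be empty in memory would silently lose its on-disk block index during the commit.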


@@ -2146,6 +2146,7 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp
   // todo set the correct primary timestamp column
   // output the result
+  bool hasInterp = true;
   for (int32_t j = 0; j < pExprSup->numOfExprs; ++j) {
     SExprInfo* pExprInfo = &pExprSup->pExprInfo[j];
     int32_t    srcSlot = pExprInfo->base.pParam[0].pCol->slotId;
@@ -2157,7 +2158,6 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp
     switch (pSliceInfo->fillType) {
       case TSDB_FILL_NULL: {
         colDataAppendNULL(pDst, rows);
-        pResBlock->info.rows += 1;
         break;
       }
@@ -2177,7 +2177,6 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp
           GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i);
           colDataAppend(pDst, rows, (char*)&v, false);
         }
-        pResBlock->info.rows += 1;
         break;
       }
@@ -2191,6 +2190,7 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp
         // before interp range, do not fill
         if (start.key == INT64_MIN || end.key == INT64_MAX) {
+          hasInterp = false;
          break;
        }
@@ -2202,28 +2202,27 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp
        }
        taosMemoryFree(current.val);
-        pResBlock->info.rows += 1;
        break;
      }
      case TSDB_FILL_PREV: {
        if (!pSliceInfo->isPrevRowSet) {
+          hasInterp = false;
          break;
        }
        SGroupKeys* pkey = taosArrayGet(pSliceInfo->pPrevRow, srcSlot);
        colDataAppend(pDst, rows, pkey->pData, false);
-        pResBlock->info.rows += 1;
        break;
      }
      case TSDB_FILL_NEXT: {
        if (!pSliceInfo->isNextRowSet) {
+          hasInterp = false;
          break;
        }
        SGroupKeys* pkey = taosArrayGet(pSliceInfo->pNextRow, srcSlot);
        colDataAppend(pDst, rows, pkey->pData, false);
-        pResBlock->info.rows += 1;
        break;
      }
@@ -2232,6 +2231,11 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp
        break;
      }
    }
+  if (hasInterp) {
+    pResBlock->info.rows += 1;
+  }
 }
 static int32_t initPrevRowsKeeper(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock) {
@@ -2412,6 +2416,11 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
     SColumnInfoData* pSrc = taosArrayGet(pBlock->pDataBlock, srcSlot);
     SColumnInfoData* pDst = taosArrayGet(pResBlock->pDataBlock, dstSlot);
+    if (colDataIsNull_s(pSrc, i)) {
+      colDataAppendNULL(pDst, pResBlock->info.rows);
+      continue;
+    }
     char* v = colDataGetData(pSrc, i);
     colDataAppend(pDst, pResBlock->info.rows, v, false);
   }
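The refactor in `genInterpolationResult` replaces a per-branch `pResBlock->info.rows += 1` with one `hasInterp` flag and a single guarded increment after the switch, so a multi-column query cannot over-count rows when some fill branch produces nothing. A minimal Python sketch of that pattern (names are stand-ins, not the actual operator API):

```python
# Fill modes, loosely mirroring TSDB_FILL_NULL / TSDB_FILL_PREV / TSDB_FILL_NEXT.
FILL_NULL, FILL_PREV, FILL_NEXT = range(3)

def gen_interp_row(fill_type, prev_val, next_val):
    """Return a one-tuple result row, or None when no row can be produced."""
    has_interp = True
    value = None
    if fill_type == FILL_NULL:
        value = None                 # a NULL row is still a produced row
    elif fill_type == FILL_PREV:
        if prev_val is None:
            has_interp = False       # no previous row seen yet: emit nothing
        else:
            value = prev_val
    elif fill_type == FILL_NEXT:
        if next_val is None:
            has_interp = False       # no next row available: emit nothing
        else:
            value = next_val
    # the single guarded increment: count a row only when one was filled
    return (value,) if has_interp else None
```

The key property is that the decision to emit is made once, after all columns are processed, instead of once per column branch.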
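The added guard in `doTimeslice` appends an explicit NULL instead of dereferencing cell data when the source cell is null. In Python terms (simplified stand-ins for the column API):

```python
def append_column_rows(src, dst):
    """For each source cell, append NULL (None) when the cell is null,
    otherwise copy the value -- the behavior the added if-block restores."""
    for cell in src:
        if cell is None:        # colDataIsNull_s(pSrc, i)
            dst.append(None)    # colDataAppendNULL(pDst, rows)
            continue
        dst.append(cell)        # colDataAppend(pDst, rows, v, false)
    return dst
```

Without the null check, the C code would call `colDataGetData` on a null cell and copy garbage into the result block.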


@@ -551,7 +551,57 @@ class TDTestCase:
         tdSql.checkData(0, 0, 15)
         tdSql.checkData(1, 0, 15)
-        tdLog.printNoPrefix("==========step9:test error cases")
+        tdLog.printNoPrefix("==========step9:test multi-interp cases")
+        tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(null)")
+        tdSql.checkRows(5)
+        tdSql.checkCols(4)
+        for i in range(tdSql.queryCols):
+            tdSql.checkData(0, i, None)
+            tdSql.checkData(1, i, None)
+            tdSql.checkData(2, i, 15)
+            tdSql.checkData(3, i, None)
+            tdSql.checkData(4, i, None)
+        tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(value, 1)")
+        tdSql.checkRows(5)
+        tdSql.checkCols(4)
+        for i in range(tdSql.queryCols):
+            tdSql.checkData(0, i, 1)
+            tdSql.checkData(1, i, 1)
+            tdSql.checkData(2, i, 15)
+            tdSql.checkData(3, i, 1)
+            tdSql.checkData(4, i, 1)
+        tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(prev)")
+        tdSql.checkRows(5)
+        tdSql.checkCols(4)
+        for i in range(tdSql.queryCols):
+            tdSql.checkData(0, i, 5)
+            tdSql.checkData(1, i, 5)
+            tdSql.checkData(2, i, 15)
+            tdSql.checkData(3, i, 15)
+            tdSql.checkData(4, i, 15)
+        tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(next)")
+        tdSql.checkRows(3)
+        tdSql.checkCols(4)
+        for i in range(tdSql.queryCols):
+            tdSql.checkData(0, i, 15)
+            tdSql.checkData(1, i, 15)
+            tdSql.checkData(2, i, 15)
+        tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(linear)")
+        tdSql.checkRows(1)
+        tdSql.checkCols(4)
+        for i in range(tdSql.queryCols):
+            tdSql.checkData(0, i, 15)
+        tdLog.printNoPrefix("==========step10:test error cases")
         tdSql.error(f"select interp(c0) from {dbname}.{tbname}")
         tdSql.error(f"select interp(c0) from {dbname}.{tbname} range('2020-02-10 00:00:05', '2020-02-15 00:00:05')")
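The step9 cases pin down how each fill mode pads timestamps that have no exact data point, which is why `checkRows` differs per mode. A simplified single-column model of those semantics (hypothetical helper; the data points are chosen only to echo the 5/15 values in the tests):

```python
def fill_series(points, timestamps, mode):
    """points: {ts: value}. Return one filled value per fillable timestamp;
    timestamps a mode cannot fill are skipped (hence differing row counts)."""
    known = sorted(points)
    out = []
    for ts in timestamps:
        if ts in points:
            out.append(points[ts])      # exact hit: no filling needed
            continue
        prev = max((k for k in known if k < ts), default=None)
        nxt = min((k for k in known if k > ts), default=None)
        if mode == "prev" and prev is not None:
            out.append(points[prev])    # carry the last value forward
        elif mode == "next" and nxt is not None:
            out.append(points[nxt])     # borrow the next value backward
        elif mode == "null":
            out.append(None)            # every timestamp yields a NULL row
    return out
```

With one point before the query window (value 5) and one in the middle (value 15), `prev` fills all five timestamps while `next` can only fill the first three, matching the 5-row and 3-row expectations above.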