diff --git a/README-CN.md b/README-CN.md index e30e38ae78..0b7e42d4fa 100644 --- a/README-CN.md +++ b/README-CN.md @@ -303,14 +303,14 @@ Query OK, 2 row(s) in set (0.001700s) TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用: -- [Java](https://docs.taosdata.com/reference/connector/java/) -- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp) -- [Python](https://docs.taosdata.com/reference/connector/python/) -- [Go](https://docs.taosdata.com/reference/connector/go/) -- [Node.js](https://docs.taosdata.com/reference/connector/node/) -- [Rust](https://docs.taosdata.com/reference/connector/rust/) -- [C#](https://docs.taosdata.com/reference/connector/csharp/) -- [RESTful API](https://docs.taosdata.com/reference/rest-api/) +- [Java](https://docs.taosdata.com/connector/java/) +- [C/C++](https://docs.taosdata.com/connector/cpp/) +- [Python](https://docs.taosdata.com/connector/python/) +- [Go](https://docs.taosdata.com/connector/go/) +- [Node.js](https://docs.taosdata.com/connector/node/) +- [Rust](https://docs.taosdata.com/connector/rust/) +- [C#](https://docs.taosdata.com/connector/csharp/) +- [RESTful API](https://docs.taosdata.com/connector/rest-api/) # 成为社区贡献者 diff --git a/README.md b/README.md index 02dd9984e8..611d97aac9 100644 --- a/README.md +++ b/README.md @@ -19,29 +19,29 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde # What is TDengine? -TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-seires databases with the following advantages: +TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/what-is-a-time-series-database/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages: -- **High-Performance**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression. +- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression. -- **Simplified Solution**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly. +- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
-- **Cloud Native**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds. +- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds. -- **Ease of Use**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access. +- **[Ease of Use](https://docs.tdengine.com/get-started/docker/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access. -- **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way. +- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way. -- **Open Source**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide. +- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including its cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide. # Documentation -For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.taosdata.com) ([TDengine 文档](https://docs.taosdata.com)) +For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com)) # Building -At the moment, TDengine server supports running on Linux, Windows systems.Any OS application can also choose the RESTful interface of taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU , and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. +At the moment, TDengine server supports running on Linux and Windows systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
-You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source. +You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source. TDengine provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine. @@ -256,6 +256,7 @@ After building successfully, TDengine can be installed by: nmake install ``` + ## Quick Run @@ -304,14 +306,14 @@ Query OK, 2 row(s) in set (0.001700s) TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation. -- [Java](https://docs.taosdata.com/reference/connector/java/) -- [C/C++](https://docs.taosdata.com/reference/connector/cpp/) -- [Python](https://docs.taosdata.com/reference/connector/python/) -- [Go](https://docs.taosdata.com/reference/connector/go/) -- [Node.js](https://docs.taosdata.com/reference/connector/node/) -- [Rust](https://docs.taosdata.com/reference/connector/rust/) -- [C#](https://docs.taosdata.com/reference/connector/csharp/) -- [RESTful API](https://docs.taosdata.com/reference/rest-api/) +- [Java](https://docs.tdengine.com/reference/connector/java/) +- [C/C++](https://docs.tdengine.com/reference/connector/cpp/) +- [Python](https://docs.tdengine.com/reference/connector/python/) +- [Go](https://docs.tdengine.com/reference/connector/go/) +- [Node.js](https://docs.tdengine.com/reference/connector/node/) +- [Rust](https://docs.tdengine.com/reference/connector/rust/) +- [C#](https://docs.tdengine.com/reference/connector/csharp/) +- [RESTful API](https://docs.tdengine.com/reference/rest-api/) # Contribute to TDengine diff --git a/cmake/cmake.define b/cmake/cmake.define index 989b69a89b..376a55d396 100644 --- a/cmake/cmake.define +++ b/cmake/cmake.define @@ -103,6 +103,9 @@ IF (TD_WINDOWS) SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMMON_FLAGS}") ELSE () + IF (${TD_DARWIN}) + set(CMAKE_MACOSX_RPATH 0) + ENDIF () IF (${COVER} MATCHES "true") MESSAGE(STATUS "Test coverage mode, add extra flags") SET(GCC_COVERAGE_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage") diff --git a/cmake/taostools_CMakeLists.txt.in b/cmake/taostools_CMakeLists.txt.in index e593e6d62b..2d9b00eee7 100644 --- a/cmake/taostools_CMakeLists.txt.in +++ b/cmake/taostools_CMakeLists.txt.in @@ -2,7 +2,7 @@ # taos-tools ExternalProject_Add(taos-tools GIT_REPOSITORY https://github.com/taosdata/taos-tools.git - GIT_TAG 2af2222 + GIT_TAG 833b721 SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools" BINARY_DIR "" #BUILD_IN_SOURCE TRUE diff --git a/docs/en/04-concept/index.md b/docs/en/04-concept/index.md index 44dcad82fc..5a9c55fdd6 100644 --- a/docs/en/04-concept/index.md +++ b/docs/en/04-concept/index.md @@ -104,15 +104,15 @@ Each row contains the device ID, time stamp, collected metrics (current, voltage ## Metric -Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such 
as current, voltage, temperature, pressure, GPS position, etc., which change with time, and the data type can be integer, float, Boolean, or strings. As time goes by, the amount of collected metric data stored increases. +Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which change with time, and the data type can be integer, float, Boolean, or strings. As time goes by, the amount of collected metric data stored increases. In the smart meters example, current, voltage and phase are the metrics. ## Label/Tag -Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. +Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. In the meters example, `location` and `groupid` are the tags. ## Data Collection Point -Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. +Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points. ## Table @@ -137,7 +137,7 @@ The design of one table for one data collection point will require a huge number STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). 
To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established. -In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. +In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. In the smart meters example, we can create a super table named `meters`. ## Subtable @@ -156,7 +156,9 @@ The relationship between a STable and the subtables created based on this STable Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which reduces the number of data sets to be scanned which in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type. -In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. +In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under super table meters. + +To better understand the data model using metrics, tags, super tables and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example. ![Meters Data Model Diagram](./supertable.webp) ## Database diff --git a/docs/en/04-concept/supertable.webp b/docs/en/04-concept/supertable.webp new file mode 100644 index 0000000000..764b8f3de7 Binary files /dev/null and b/docs/en/04-concept/supertable.webp differ diff --git a/docs/en/07-develop/07-tmq.mdx b/docs/en/07-develop/07-tmq.mdx index ceeea64fca..17b3f5caa0 100644 --- a/docs/en/07-develop/07-tmq.mdx +++ b/docs/en/07-develop/07-tmq.mdx @@ -16,7 +16,7 @@ import CDemo from "./_sub_c.mdx"; TDengine provides data subscription and consumption interfaces similar to message queue products. These interfaces make it easier for applications to obtain data written to TDengine either in real time and to process data in the order that events occurred. This simplifies your time-series data processing systems and reduces your costs because it is no longer necessary to deploy a message queue product such as Kafka. -To use TDengine data subscription, you define topics like in Kafka. However, a topic in TDengine is based on query conditions for an existing supertable, standard table, or subtable - in other words, a SELECT statement. You can use SQL to filter data by tag, table name, column, or expression and then perform a scalar function or user-defined function on the data. Aggregate functions are not supported. This gives TDengine data subscription more flexibility than similar products. The granularity of data can be controlled on demand by applications, while filtering and preprocessing are handled by TDengine instead of the application layer.
This implementation reduces the amount of data transmitted and the complexity of applications. +To use TDengine data subscription, you define topics like in Kafka. However, a topic in TDengine is based on query conditions for an existing supertable, table, or subtable - in other words, a SELECT statement. You can use SQL to filter data by tag, table name, column, or expression and then perform a scalar function or user-defined function on the data. Aggregate functions are not supported. This gives TDengine data subscription more flexibility than similar products. The granularity of data can be controlled on demand by applications, while filtering and preprocessing are handled by TDengine instead of the application layer. This implementation reduces the amount of data transmitted and the complexity of applications. By subscribing to a topic, a consumer can obtain the latest data in that topic in real time. Multiple consumers can be formed into a consumer group that consumes messages together. Consumer groups enable faster speed through multi-threaded, distributed data consumption. Note that consumers in different groups that are subscribed to the same topic do not consume messages together. A single consumer can subscribe to multiple topics. If the data in a supertable is sharded across multiple vnodes, consumer groups can consume it much more efficiently than single consumers. TDengine also includes an acknowledgement mechanism that ensures at-least-once delivery in complicated environments where machines may crash or restart. diff --git a/docs/en/10-deployment/05-helm.md b/docs/en/10-deployment/05-helm.md index 48cd9df32c..302730f1b5 100644 --- a/docs/en/10-deployment/05-helm.md +++ b/docs/en/10-deployment/05-helm.md @@ -170,71 +170,21 @@ taoscfg: # number of replications, for cluster only TAOS_REPLICA: "1" - - # number of days per DB file - # TAOS_DAYS: "10" - - # number of days to keep DB file, default is 10 years. - #TAOS_KEEP: "3650" - - # cache block size (Mbyte) - #TAOS_CACHE: "16" - - # number of cache blocks per vnode - #TAOS_BLOCKS: "6" - - # minimum rows of records in file block - #TAOS_MIN_ROWS: "100" - - # maximum rows of records in file block - #TAOS_MAX_ROWS: "4096" - # - # TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core - #TAOS_NUM_OF_THREADS_PER_CORE: "1.0" + # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC + #TAOS_NUM_OF_RPC_THREADS: "2" + # # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data #TAOS_NUM_OF_COMMIT_THREADS: "4" - # - # TAOS_RATIO_OF_QUERY_CORES: - # the proportion of total CPU cores available for query processing - # 2.0: the query threads will be set to double of the CPU cores. - # 1.0: all CPU cores are available for query processing [default]. - # 0.5: only half of the CPU cores are available for query. - # 0.0: only one core available. 
- #TAOS_RATIO_OF_QUERY_CORES: "1.0" - - # - # TAOS_KEEP_COLUMN_NAME: - # the last_row/first/last aggregator will not change the original column name in the result fields - #TAOS_KEEP_COLUMN_NAME: "0" - - # enable/disable backuping vnode directory when removing vnode - #TAOS_VNODE_BAK: "1" - # enable/disable installation / usage report #TAOS_TELEMETRY_REPORTING: "1" - # enable/disable load balancing - #TAOS_BALANCE: "1" - - # max timer control blocks - #TAOS_MAX_TMR_CTRL: "512" - # time interval of system monitor, seconds #TAOS_MONITOR_INTERVAL: "30" - # number of seconds allowed for a dnode to be offline, for cluster only - #TAOS_OFFLINE_THRESHOLD: "8640000" - - # RPC re-try timer, millisecond - #TAOS_RPC_TIMER: "1000" - - # RPC maximum time for ack, seconds. - #TAOS_RPC_MAX_TIME: "600" - # time interval of dnode status reporting to mnode, seconds, for cluster only #TAOS_STATUS_INTERVAL: "1" @@ -245,37 +195,7 @@ taoscfg: #TAOS_MIN_SLIDING_TIME: "10" # minimum time window, milli-second - #TAOS_MIN_INTERVAL_TIME: "10" - - # maximum delay before launching a stream computation, milli-second - #TAOS_MAX_STREAM_COMP_DELAY: "20000" - - # maximum delay before launching a stream computation for the first time, milli-second - #TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000" - - # retry delay when a stream computation fails, milli-second - #TAOS_RETRY_STREAM_COMP_DELAY: "10" - - # the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 - #TAOS_STREAM_COMP_DELAY_RATIO: "0.1" - - # max number of vgroups per db, 0 means configured automatically - #TAOS_MAX_VGROUPS_PER_DB: "0" - - # max number of tables per vnode - #TAOS_MAX_TABLES_PER_VNODE: "1000000" - - # the number of acknowledgments required for successful data writing - #TAOS_QUORUM: "1" - - # enable/disable compression - #TAOS_COMP: "2" - - # write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync - #TAOS_WAL_LEVEL: "1" - - # if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away - #TAOS_FSYNC: "3000" + #TAOS_MIN_INTERVAL_TIME: "1" # the compressed rpc message, option: # -1 (no compression) @@ -283,17 +203,8 @@ taoscfg: # > 0 (rpc message body which larger than this value will be compressed) #TAOS_COMPRESS_MSG_SIZE: "-1" - # max length of an SQL - #TAOS_MAX_SQL_LENGTH: "1048576" - - # the maximum number of records allowed for super table time sorting - #TAOS_MAX_NUM_OF_ORDERED_RES: "100000" - # max number of connections allowed in dnode - #TAOS_MAX_SHELL_CONNS: "5000" - - # max number of connections allowed in client - #TAOS_MAX_CONNECTIONS: "5000" + #TAOS_MAX_SHELL_CONNS: "50000" # stop writing logs when the disk size of the log folder is less than this value #TAOS_MINIMAL_LOG_DIR_G_B: "0.1" @@ -313,21 +224,8 @@ taoscfg: # enable/disable system monitor #TAOS_MONITOR: "1" - # enable/disable recording the SQL statements via restful interface - #TAOS_HTTP_ENABLE_RECORD_SQL: "0" - - # number of threads used to process http requests - #TAOS_HTTP_MAX_THREADS: "2" - - # maximum number of rows returned by the restful interface - #TAOS_RESTFUL_ROW_LIMIT: "10240" - - # The following parameter is used to limit the maximum number of lines in log files. 
- # max number of lines per log filters - # numOfLogLines 10000000 - # enable/disable async log - #TAOS_ASYNC_LOG: "0" + #TAOS_ASYNC_LOG: "1" # # time of keeping log files, days @@ -344,25 +242,8 @@ taoscfg: # debug flag for all log type, take effect when non-zero value\ #TAOS_DEBUG_FLAG: "143" - # enable/disable recording the SQL in taos client - #TAOS_ENABLE_RECORD_SQL: "0" - # generate core file when service crash #TAOS_ENABLE_CORE_FILE: "1" - - # maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden - #TAOS_MAX_BINARY_DISPLAY_WIDTH: "30" - - # enable/disable stream (continuous query) - #TAOS_STREAM: "1" - - # in retrieve blocking model, only in 50% query threads will be used in query processing in dnode - #TAOS_RETRIEVE_BLOCKING_MODEL: "0" - - # the maximum allowed query buffer size in MB during query processing for each data node - # -1 no limit (default) - # 0 no query allowed, queries are disabled - #TAOS_QUERY_BUFFER_SIZE: "-1" ``` ## Scaling Out diff --git a/docs/en/12-taos-sql/01-data-type.md b/docs/en/12-taos-sql/01-data-type.md index b830994ac9..876de50f35 100644 --- a/docs/en/12-taos-sql/01-data-type.md +++ b/docs/en/12-taos-sql/01-data-type.md @@ -11,7 +11,7 @@ When using TDengine to store and query data, the most important part of the data - The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128` - Internal function `now` can be used to get the current timestamp on the client side - The current timestamp of the client side is applied when `now` is used to insert data -- Epoch Time:timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from 1970-01-01 00:00:00.000 (UTC/GMT) +- Epoch Time: timestamp can also be a long integer number, which represents the number of seconds, milliseconds or nanoseconds, depending on the time precision, from 1970-01-01 00:00:00 UTC. - Add/subtract operations can be carried out on timestamps. For example `now-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations. Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanonseconds. diff --git a/docs/en/12-taos-sql/24-show.md b/docs/en/12-taos-sql/24-show.md index 96503c9598..6b56161322 100644 --- a/docs/en/12-taos-sql/24-show.md +++ b/docs/en/12-taos-sql/24-show.md @@ -3,7 +3,7 @@ sidebar_label: SHOW Statement title: SHOW Statement for Metadata --- -In addition to running SELECT statements on INFORMATION_SCHEMA, you can also use SHOW to obtain system metadata, information, and status. +The `SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in the `INFORMATION_SCHEMA` database.
## SHOW ACCOUNTS diff --git a/docs/en/13-operation/01-pkg-install.md b/docs/en/13-operation/01-pkg-install.md index a8d8d7b474..b6cc0582bc 100644 --- a/docs/en/13-operation/01-pkg-install.md +++ b/docs/en/13-operation/01-pkg-install.md @@ -15,6 +15,27 @@ About details of installing TDenine, please refer to [Installation Guide](../../ ## Uninstall + + +Apt-get package of TDengine can be uninstalled as below: + +```bash +$ sudo apt-get remove tdengine +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following packages will be REMOVED: + tdengine +0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded. +After this operation, 68.3 MB disk space will be freed. +Do you want to continue? [Y/n] y +(Reading database ... 135625 files and directories currently installed.) +Removing tdengine (3.0.0.0) ... +TDengine is removed successfully! + +``` + + Deb package of TDengine can be uninstalled as below: diff --git a/docs/en/14-reference/02-rest-api/02-rest-api.mdx b/docs/en/14-reference/02-rest-api/02-rest-api.mdx index 8d4186a36b..74ba78b7fc 100644 --- a/docs/en/14-reference/02-rest-api/02-rest-api.mdx +++ b/docs/en/14-reference/02-rest-api/02-rest-api.mdx @@ -10,7 +10,7 @@ One difference from the native connector is that the REST interface is stateless ## Installation -The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol. +The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol. The REST interface is provided by [taosAdapter](../taosadapter). To use the REST interface, you need to make sure `taosAdapter` is running properly. ## Verification diff --git a/docs/en/14-reference/03-connector/cpp.mdx b/docs/en/14-reference/03-connector/03-cpp.mdx similarity index 99% rename from docs/en/14-reference/03-connector/cpp.mdx rename to docs/en/14-reference/03-connector/03-cpp.mdx index 5839ed4af8..02d7df48db 100644 --- a/docs/en/14-reference/03-connector/cpp.mdx +++ b/docs/en/14-reference/03-connector/03-cpp.mdx @@ -1,5 +1,4 @@ --- -sidebar_position: 1 sidebar_label: C/C++ title: C/C++ Connector --- diff --git a/docs/en/14-reference/03-connector/java.mdx b/docs/en/14-reference/03-connector/04-java.mdx similarity index 99% rename from docs/en/14-reference/03-connector/java.mdx rename to docs/en/14-reference/03-connector/04-java.mdx index 39514c37eb..0f977393f1 100644 --- a/docs/en/14-reference/03-connector/java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 2 sidebar_label: Java title: TDengine Java Connector description: The TDengine Java Connector is implemented on the standard JDBC API and provides native and REST connectors.
diff --git a/docs/en/14-reference/03-connector/go.mdx b/docs/en/14-reference/03-connector/05-go.mdx similarity index 99% rename from docs/en/14-reference/03-connector/go.mdx rename to docs/en/14-reference/03-connector/05-go.mdx index 2926355040..00e3bc1bc3 100644 --- a/docs/en/14-reference/03-connector/go.mdx +++ b/docs/en/14-reference/03-connector/05-go.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 4 sidebar_label: Go title: TDengine Go Connector --- diff --git a/docs/en/14-reference/03-connector/rust.mdx b/docs/en/14-reference/03-connector/06-rust.mdx similarity index 99% rename from docs/en/14-reference/03-connector/rust.mdx rename to docs/en/14-reference/03-connector/06-rust.mdx index e9b16ba94d..1184c98a28 100644 --- a/docs/en/14-reference/03-connector/rust.mdx +++ b/docs/en/14-reference/03-connector/06-rust.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 5 sidebar_label: Rust title: TDengine Rust Connector --- diff --git a/docs/en/14-reference/03-connector/python.mdx b/docs/en/14-reference/03-connector/07-python.mdx similarity index 99% rename from docs/en/14-reference/03-connector/python.mdx rename to docs/en/14-reference/03-connector/07-python.mdx index e183bbee22..fc95033baa 100644 --- a/docs/en/14-reference/03-connector/python.mdx +++ b/docs/en/14-reference/03-connector/07-python.mdx @@ -1,5 +1,4 @@ --- -sidebar_position: 3 sidebar_label: Python title: TDengine Python Connector description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. tasopy wraps both the native and REST interfaces of TDengine, corresponding to the two submodules of tasopy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Data Access Specification (PEP 249), making it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas." 
diff --git a/docs/en/14-reference/03-connector/node.mdx b/docs/en/14-reference/03-connector/08-node.mdx similarity index 99% rename from docs/en/14-reference/03-connector/node.mdx rename to docs/en/14-reference/03-connector/08-node.mdx index d170044435..f93632b417 100644 --- a/docs/en/14-reference/03-connector/node.mdx +++ b/docs/en/14-reference/03-connector/08-node.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 6 sidebar_label: Node.js title: TDengine Node.js Connector --- diff --git a/docs/en/14-reference/03-connector/csharp.mdx b/docs/en/14-reference/03-connector/09-csharp.mdx similarity index 99% rename from docs/en/14-reference/03-connector/csharp.mdx rename to docs/en/14-reference/03-connector/09-csharp.mdx index 388ae49d09..c745b8dd1a 100644 --- a/docs/en/14-reference/03-connector/csharp.mdx +++ b/docs/en/14-reference/03-connector/09-csharp.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 7 sidebar_label: C# title: C# Connector --- diff --git a/docs/en/14-reference/03-connector/php.mdx b/docs/en/14-reference/03-connector/10-php.mdx similarity index 98% rename from docs/en/14-reference/03-connector/php.mdx rename to docs/en/14-reference/03-connector/10-php.mdx index 08cf34495f..820f703759 100644 --- a/docs/en/14-reference/03-connector/php.mdx +++ b/docs/en/14-reference/03-connector/10-php.mdx @@ -1,6 +1,5 @@ --- -sidebar_position: 1 -sidebar_label: PHP (community contribution) +sidebar_label: PHP title: PHP Connector --- diff --git a/docs/en/14-reference/03-connector/03-connector.mdx b/docs/en/14-reference/03-connector/index.mdx similarity index 100% rename from docs/en/14-reference/03-connector/03-connector.mdx rename to docs/en/14-reference/03-connector/index.mdx diff --git a/docs/zh/02-intro.md b/docs/zh/02-intro.md index f726b4ea92..f6779b8776 100644 --- a/docs/zh/02-intro.md +++ b/docs/zh/02-intro.md @@ -1,5 +1,6 @@ --- title: 产品简介 +description: 简要介绍 TDengine 的主要功能 toc_max_heading_level: 2 --- diff --git a/docs/zh/04-concept/index.md b/docs/zh/04-concept/index.md index 8e97d4a2f4..89d3df9c97 100644 --- a/docs/zh/04-concept/index.md +++ b/docs/zh/04-concept/index.md @@ -1,5 +1,7 @@ --- +sidebar_label: 基本概念 title: 数据模型和基本概念 +description: TDengine 的数据模型和基本概念 --- 为了便于解释基本概念,便于撰写示例程序,整个 TDengine 文档以智能电表作为典型时序数据场景。假设每个智能电表采集电流、电压、相位三个量,有多个智能电表,每个电表有位置 location 和分组 group ID 的静态属性. 
其采集的数据类似如下的表格: @@ -104,15 +106,15 @@ title: 数据模型和基本概念 ## 采集量 (Metric) -采集量是指传感器、设备或其他类型采集点采集的物理量,比如电流、电压、温度、压力、GPS 位置等,是随时间变化的,数据类型可以是整型、浮点型、布尔型,也可是字符串。随着时间的推移,存储的采集量的数据量越来越大。 +采集量是指传感器、设备或其他类型采集点采集的物理量,比如电流、电压、温度、压力、GPS 位置等,是随时间变化的,数据类型可以是整型、浮点型、布尔型,也可是字符串。随着时间的推移,存储的采集量的数据量越来越大。智能电表示例中的电流、电压、相位就是采集量。 ## 标签 (Label/Tag) -标签是指传感器、设备或其他类型采集点的静态属性,不是随时间变化的,比如设备型号、颜色、设备的所在地等,数据类型可以是任何类型。虽然是静态的,但 TDengine 容许用户修改、删除或增加标签值。与采集量不一样的是,随时间的推移,存储的标签的数据量不会有什么变化。 +标签是指传感器、设备或其他类型采集点的静态属性,不是随时间变化的,比如设备型号、颜色、设备的所在地等,数据类型可以是任何类型。虽然是静态的,但 TDengine 容许用户修改、删除或增加标签值。与采集量不一样的是,随时间的推移,存储的标签的数据量不会有什么变化。智能电表示例中的 location 与 groupId 就是标签。 ## 数据采集点 (Data Collection Point) -数据采集点是指按照预设时间周期或受事件触发采集物理量的硬件或软件。一个数据采集点可以采集一个或多个采集量,**但这些采集量都是同一时刻采集的,具有相同的时间戳**。对于复杂的设备,往往有多个数据采集点,每个数据采集点采集的周期都可能不一样,而且完全独立,不同步。比如对于一台汽车,有数据采集点专门采集 GPS 位置,有数据采集点专门采集发动机状态,有数据采集点专门采集车内的环境,这样一台汽车就有三个数据采集点。 +数据采集点是指按照预设时间周期或受事件触发采集物理量的硬件或软件。一个数据采集点可以采集一个或多个采集量,**但这些采集量都是同一时刻采集的,具有相同的时间戳**。对于复杂的设备,往往有多个数据采集点,每个数据采集点采集的周期都可能不一样,而且完全独立,不同步。比如对于一台汽车,有数据采集点专门采集 GPS 位置,有数据采集点专门采集发动机状态,有数据采集点专门采集车内的环境,这样一台汽车就有三个数据采集点。智能电表示例中的 d1001、d1002、d1003、d1004 等就是数据采集点。 ## 表 (Table) @@ -131,13 +133,14 @@ TDengine 建议用数据采集点的名字(如上表中的 D1001)来做表 对于复杂的设备,比如汽车,它有多个数据采集点,那么就需要为一台汽车建立多张表。 + ## 超级表 (STable) 由于一个数据采集点一张表,导致表的数量巨增,难以管理,而且应用经常需要做采集点之间的聚合操作,聚合的操作也变得复杂起来。为解决这个问题,TDengine 引入超级表(Super Table,简称为 STable)的概念。 超级表是指某一特定类型的数据采集点的集合。同一类型的数据采集点,其表的结构是完全一样的,但每个表(数据采集点)的静态属性(标签)是不一样的。描述一个超级表(某一特定类型的数据采集点的集合),除需要定义采集量的表结构之外,还需要定义其标签的 schema,标签的数据类型可以是整数、浮点数、字符串,标签可以有多个,可以事后增加、删除或修改。如果整个系统有 N 个不同类型的数据采集点,就需要建立 N 个超级表。 -在 TDengine 的设计里,**表用来代表一个具体的数据采集点,超级表用来代表一组相同类型的数据采集点集合**。 +在 TDengine 的设计里,**表用来代表一个具体的数据采集点,超级表用来代表一组相同类型的数据采集点集合**。智能电表示例中,我们可以创建一个超级表 meters。 ## 子表 (Subtable) @@ -156,7 +159,9 @@ 查询既可以在表上进行,也可以在超级表上进行。针对超级表的查询,TDengine 将把所有子表中的数据视为一个整体数据集进行处理,会先把满足标签过滤条件的表从超级表中找出来,然后再扫描这些表的时序数据,进行聚合操作,这样需要扫描的数据集会大幅减少,从而显著提高查询的性能。本质上,TDengine 通过对超级表查询的支持,实现了多个同类数据采集点的高效聚合。 -TDengine系统建议给一个数据采集点建表,需要通过超级表建表,而不是建普通表。 +TDengine系统建议给一个数据采集点建表,需要通过超级表建表,而不是建普通表。在智能电表的示例中,我们可以通过超级表 meters 创建子表 d1001、d1002、d1003、d1004 等。 + +为了更好地理解超级表与子表的关系,可以参考下面关于智能电表数据模型的示意图。 ![智能电表数据模型示意图](./supertable.webp) ## 库 (database) diff --git a/docs/zh/04-concept/supertable.webp b/docs/zh/04-concept/supertable.webp new file mode 100644 index 0000000000..764b8f3de7 Binary files /dev/null and b/docs/zh/04-concept/supertable.webp differ diff --git a/docs/zh/05-get-started/01-docker.md b/docs/zh/05-get-started/01-docker.md index f0f09d4c7e..e2be419517 100644 --- a/docs/zh/05-get-started/01-docker.md +++ b/docs/zh/05-get-started/01-docker.md @@ -1,6 +1,7 @@ --- sidebar_label: Docker title: 通过 Docker 快速体验 TDengine +description: 使用 Docker 快速体验 TDengine 的高效写入和查询 --- 本节首先介绍如何通过 Docker 快速体验 TDengine,然后介绍如何在 Docker 环境下体验 TDengine 的写入和查询功能。如果你不熟悉 Docker,请使用[安装包的方式快速体验](../../get-started/package/)。如果您希望为 TDengine 贡献代码或对内部技术实现感兴趣,请参考 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
diff --git a/docs/zh/05-get-started/03-package.md b/docs/zh/05-get-started/03-package.md index a1c1802d77..3e0fb056a5 100644 --- a/docs/zh/05-get-started/03-package.md +++ b/docs/zh/05-get-started/03-package.md @@ -1,6 +1,7 @@ --- sidebar_label: 安装包 title: 使用安装包立即开始 +description: 使用安装包快速体验 TDengine --- import Tabs from "@theme/Tabs"; diff --git a/docs/zh/07-develop/01-connect/index.md b/docs/zh/07-develop/01-connect/index.md index 3e44e6c5da..075d99cfee 100644 --- a/docs/zh/07-develop/01-connect/index.md +++ b/docs/zh/07-develop/01-connect/index.md @@ -1,6 +1,6 @@ --- title: 建立连接 -description: "本节介绍如何使用连接器建立与 TDengine 的连接,给出连接器安装、连接的简单说明。" +description: 使用连接器建立与 TDengine 的连接,以及连接器的安装和连接 --- import Tabs from "@theme/Tabs"; diff --git a/docs/zh/07-develop/02-model/index.mdx b/docs/zh/07-develop/02-model/index.mdx index 1609eb5362..634c8a98d4 100644 --- a/docs/zh/07-develop/02-model/index.mdx +++ b/docs/zh/07-develop/02-model/index.mdx @@ -1,5 +1,7 @@ --- +sidebar_label: 数据建模 title: TDengine 数据建模 +description: TDengine 中如何建立数据模型 --- TDengine 采用类关系型数据模型,需要建库、建表。因此对于一个具体的应用场景,需要考虑库、超级表和普通表的设计。本节不讨论细致的语法规则,只介绍概念。 diff --git a/docs/zh/07-develop/03-insert-data/index.md b/docs/zh/07-develop/03-insert-data/index.md index 55a28e4a8b..f1e5ada4df 100644 --- a/docs/zh/07-develop/03-insert-data/index.md +++ b/docs/zh/07-develop/03-insert-data/index.md @@ -1,5 +1,7 @@ --- +sidebar_label: 写入数据 title: 写入数据 +description: TDengine 的各种写入方式 --- TDengine 支持多种写入协议,包括 SQL,InfluxDB Line 协议, OpenTSDB Telnet 协议,OpenTSDB JSON 格式协议。数据可以单条插入,也可以批量插入,可以插入一个数据采集点的数据,也可以同时插入多个数据采集点的数据。同时,TDengine 支持多线程插入,支持时间乱序数据插入,也支持历史数据插入。InfluxDB Line 协议、OpenTSDB Telnet 协议和 OpenTSDB JSON 格式协议是 TDengine 支持的三种无模式写入协议。使用无模式方式写入无需提前创建超级表和子表,并且引擎能自适用数据对表结构做调整。 diff --git a/docs/zh/07-develop/04-query-data/index.mdx b/docs/zh/07-develop/04-query-data/index.mdx index 2631d147a5..c083c30c2c 100644 --- a/docs/zh/07-develop/04-query-data/index.mdx +++ b/docs/zh/07-develop/04-query-data/index.mdx @@ -1,4 +1,5 @@ --- +sidebar_label: 查询数据 title: 查询数据 description: "主要查询功能,通过连接器执行同步查询和异步查询" --- diff --git a/docs/zh/07-develop/index.md b/docs/zh/07-develop/index.md index 20c0170844..efaffaea71 100644 --- a/docs/zh/07-develop/index.md +++ b/docs/zh/07-develop/index.md @@ -1,5 +1,7 @@ --- title: 开发指南 +sidebar_label: 开发指南 +description: 让开发者能够快速上手的指南 --- 开发一个应用,如果你准备采用TDengine作为时序数据处理的工具,那么有如下几个事情要做: diff --git a/docs/zh/08-connector/02-rest-api.mdx b/docs/zh/08-connector/02-rest-api.mdx index 4b9171c07d..e254244657 100644 --- a/docs/zh/08-connector/02-rest-api.mdx +++ b/docs/zh/08-connector/02-rest-api.mdx @@ -1,5 +1,7 @@ --- title: REST API +sidebar_label: REST API +description: 详细介绍 TDengine 提供的 RESTful API. 
--- 为支持各种不同类型平台的开发,TDengine 提供符合 REST 设计标准的 API,即 REST API。为最大程度降低学习成本,不同于其他数据库 REST API 的设计方法,TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见 [视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。 @@ -10,7 +12,7 @@ title: REST API ## 安装 -RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安装任何 TDengine 的库,只要客户端的开发语言支持 HTTP 协议即可。 +RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安装任何 TDengine 的库,只要客户端的开发语言支持 HTTP 协议即可。TDengine 的 RESTful API 由 [taosAdapter](../../reference/taosadapter) 提供,在使用 RESTful API 之前需要确保 `taosAdapter` 正常运行。 ## 验证 diff --git a/docs/zh/08-connector/03-cpp.mdx b/docs/zh/08-connector/03-cpp.mdx index d27eeb7dfb..c0bd33f129 100644 --- a/docs/zh/08-connector/03-cpp.mdx +++ b/docs/zh/08-connector/03-cpp.mdx @@ -1,5 +1,4 @@ --- -sidebar_position: 1 sidebar_label: C/C++ title: C/C++ Connector --- diff --git a/docs/zh/08-connector/04-java.mdx b/docs/zh/08-connector/04-java.mdx index 20d2e4fabd..6b1715f8c6 100644 --- a/docs/zh/08-connector/04-java.mdx +++ b/docs/zh/08-connector/04-java.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 2 sidebar_label: Java title: TDengine Java Connector description: TDengine Java 连接器基于标准 JDBC API 实现, 并提供原生连接与 REST连接两种连接器。 diff --git a/docs/zh/08-connector/06-rust.mdx b/docs/zh/08-connector/06-rust.mdx index 187e2f0b33..26f53c82d6 100644 --- a/docs/zh/08-connector/06-rust.mdx +++ b/docs/zh/08-connector/06-rust.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 5 sidebar_label: Rust title: TDengine Rust Connector --- diff --git a/docs/zh/08-connector/07-python.mdx b/docs/zh/08-connector/07-python.mdx index 88a5d4f84d..0242486d3b 100644 --- a/docs/zh/08-connector/07-python.mdx +++ b/docs/zh/08-connector/07-python.mdx @@ -1,5 +1,4 @@ --- -sidebar_position: 3 sidebar_label: Python title: TDengine Python Connector description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API, 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块:tasos 和 taosrest。除了对原生接口和 REST 接口的封装,taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas" diff --git a/docs/zh/08-connector/08-node.mdx b/docs/zh/08-connector/08-node.mdx index 63d690e554..167ae069d6 100644 --- a/docs/zh/08-connector/08-node.mdx +++ b/docs/zh/08-connector/08-node.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 6 sidebar_label: Node.js title: TDengine Node.js Connector --- diff --git a/docs/zh/08-connector/09-csharp.mdx b/docs/zh/08-connector/09-csharp.mdx index 8214717583..be27bfb685 100644 --- a/docs/zh/08-connector/09-csharp.mdx +++ b/docs/zh/08-connector/09-csharp.mdx @@ -1,6 +1,5 @@ --- toc_max_heading_level: 4 -sidebar_position: 7 sidebar_label: C# title: C# Connector --- diff --git a/docs/zh/08-connector/10-php.mdx b/docs/zh/08-connector/10-php.mdx index 53611c0274..5e32c709de 100644 --- a/docs/zh/08-connector/10-php.mdx +++ b/docs/zh/08-connector/10-php.mdx @@ -1,6 +1,5 @@ --- -sidebar_position: 1 -sidebar_label: PHP(社区贡献) +sidebar_label: PHP title: PHP Connector --- diff --git a/docs/zh/08-connector/_01-error-code.md b/docs/zh/08-connector/_01-error-code.md index 53e006e108..3111d4bbf8 100644 --- a/docs/zh/08-connector/_01-error-code.md +++ b/docs/zh/08-connector/_01-error-code.md @@ -1,6 +1,7 @@ --- sidebar_label: 错误码 title: TDengine C/C++ 连接器错误码 +description: C/C++ 连接器的错误码列表和详细说明 --- 本文中详细列举了在使用 TDengine C/C++ 连接器时客户端可能得到的错误码以及所要采取的相应动作。其它语言的连接器在使用原生连接方式时也会所得到的返回码返回给连接器的调用者。 diff --git a/docs/zh/10-deployment/01-deploy.md b/docs/zh/10-deployment/01-deploy.md index 
22a9c2ff8e..8d8a2eb6d8 100644 --- a/docs/zh/10-deployment/01-deploy.md +++ b/docs/zh/10-deployment/01-deploy.md @@ -1,6 +1,7 @@ --- sidebar_label: 手动部署 title: 集群部署和管理 +description: 使用命令行工具手动部署 TDengine 集群 --- ## 准备工作 diff --git a/docs/zh/10-deployment/03-k8s.md b/docs/zh/10-deployment/03-k8s.md index 396b834324..5d512700b6 100644 --- a/docs/zh/10-deployment/03-k8s.md +++ b/docs/zh/10-deployment/03-k8s.md @@ -1,6 +1,7 @@ --- sidebar_label: Kubernetes title: 在 Kubernetes 上部署 TDengine 集群 +description: 利用 Kubernetes 部署 TDengine 集群的详细指南 --- 作为面向云原生架构设计的时序数据库,TDengine 支持 Kubernetes 部署。这里介绍如何使用 YAML 文件一步一步从头创建一个 TDengine 集群,并重点介绍 Kubernetes 环境下 TDengine 的常用操作。 diff --git a/docs/zh/10-deployment/05-helm.md b/docs/zh/10-deployment/05-helm.md index 9a723ff62f..9a3b21f092 100644 --- a/docs/zh/10-deployment/05-helm.md +++ b/docs/zh/10-deployment/05-helm.md @@ -1,6 +1,7 @@ --- sidebar_label: Helm title: 使用 Helm 部署 TDengine 集群 +description: 使用 Helm 部署 TDengine 集群的详细指南 --- Helm 是 Kubernetes 的包管理器,上一节使用 Kubernets 部署 TDengine 集群的操作已经足够简单,但 Helm 依然可以提供更强大的能力。 @@ -171,70 +172,19 @@ taoscfg: TAOS_REPLICA: "1" - # number of days per DB file - # TAOS_DAYS: "10" - - # number of days to keep DB file, default is 10 years. - #TAOS_KEEP: "3650" - - # cache block size (Mbyte) - #TAOS_CACHE: "16" - - # number of cache blocks per vnode - #TAOS_BLOCKS: "6" - - # minimum rows of records in file block - #TAOS_MIN_ROWS: "100" - - # maximum rows of records in file block - #TAOS_MAX_ROWS: "4096" - - # - # TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core - #TAOS_NUM_OF_THREADS_PER_CORE: "1.0" + # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC + #TAOS_NUM_OF_RPC_THREADS: "2" # # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data #TAOS_NUM_OF_COMMIT_THREADS: "4" - # - # TAOS_RATIO_OF_QUERY_CORES: - # the proportion of total CPU cores available for query processing - # 2.0: the query threads will be set to double of the CPU cores. - # 1.0: all CPU cores are available for query processing [default]. - # 0.5: only half of the CPU cores are available for query. - # 0.0: only one core available. - #TAOS_RATIO_OF_QUERY_CORES: "1.0" - - # - # TAOS_KEEP_COLUMN_NAME: - # the last_row/first/last aggregator will not change the original column name in the result fields - #TAOS_KEEP_COLUMN_NAME: "0" - - # enable/disable backuping vnode directory when removing vnode - #TAOS_VNODE_BAK: "1" - # enable/disable installation / usage report #TAOS_TELEMETRY_REPORTING: "1" - # enable/disable load balancing - #TAOS_BALANCE: "1" - - # max timer control blocks - #TAOS_MAX_TMR_CTRL: "512" - # time interval of system monitor, seconds #TAOS_MONITOR_INTERVAL: "30" - # number of seconds allowed for a dnode to be offline, for cluster only - #TAOS_OFFLINE_THRESHOLD: "8640000" - - # RPC re-try timer, millisecond - #TAOS_RPC_TIMER: "1000" - - # RPC maximum time for ack, seconds. 
- #TAOS_RPC_MAX_TIME: "600" - # time interval of dnode status reporting to mnode, seconds, for cluster only #TAOS_STATUS_INTERVAL: "1" @@ -245,37 +195,7 @@ taoscfg: #TAOS_MIN_SLIDING_TIME: "10" # minimum time window, milli-second - #TAOS_MIN_INTERVAL_TIME: "10" - - # maximum delay before launching a stream computation, milli-second - #TAOS_MAX_STREAM_COMP_DELAY: "20000" - - # maximum delay before launching a stream computation for the first time, milli-second - #TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000" - - # retry delay when a stream computation fails, milli-second - #TAOS_RETRY_STREAM_COMP_DELAY: "10" - - # the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 - #TAOS_STREAM_COMP_DELAY_RATIO: "0.1" - - # max number of vgroups per db, 0 means configured automatically - #TAOS_MAX_VGROUPS_PER_DB: "0" - - # max number of tables per vnode - #TAOS_MAX_TABLES_PER_VNODE: "1000000" - - # the number of acknowledgments required for successful data writing - #TAOS_QUORUM: "1" - - # enable/disable compression - #TAOS_COMP: "2" - - # write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync - #TAOS_WAL_LEVEL: "1" - - # if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away - #TAOS_FSYNC: "3000" + #TAOS_MIN_INTERVAL_TIME: "1" # the compressed rpc message, option: # -1 (no compression) @@ -283,17 +203,8 @@ taoscfg: # > 0 (rpc message body which larger than this value will be compressed) #TAOS_COMPRESS_MSG_SIZE: "-1" - # max length of an SQL - #TAOS_MAX_SQL_LENGTH: "1048576" - - # the maximum number of records allowed for super table time sorting - #TAOS_MAX_NUM_OF_ORDERED_RES: "100000" - # max number of connections allowed in dnode - #TAOS_MAX_SHELL_CONNS: "5000" - - # max number of connections allowed in client - #TAOS_MAX_CONNECTIONS: "5000" + #TAOS_MAX_SHELL_CONNS: "50000" # stop writing logs when the disk size of the log folder is less than this value #TAOS_MINIMAL_LOG_DIR_G_B: "0.1" @@ -313,21 +224,8 @@ taoscfg: # enable/disable system monitor #TAOS_MONITOR: "1" - # enable/disable recording the SQL statements via restful interface - #TAOS_HTTP_ENABLE_RECORD_SQL: "0" - - # number of threads used to process http requests - #TAOS_HTTP_MAX_THREADS: "2" - - # maximum number of rows returned by the restful interface - #TAOS_RESTFUL_ROW_LIMIT: "10240" - - # The following parameter is used to limit the maximum number of lines in log files. - # max number of lines per log filters - # numOfLogLines 10000000 - # enable/disable async log - #TAOS_ASYNC_LOG: "0" + #TAOS_ASYNC_LOG: "1" # # time of keeping log files, days @@ -344,25 +242,8 @@ taoscfg: # debug flag for all log type, take effect when non-zero value\ #TAOS_DEBUG_FLAG: "143" - # enable/disable recording the SQL in taos client - #TAOS_ENABLE_RECORD_SQL: "0" - # generate core file when service crash #TAOS_ENABLE_CORE_FILE: "1" - - # maximum display width of binary and nchar fields in the shell. 
The parts exceeding this limit will be hidden - #TAOS_MAX_BINARY_DISPLAY_WIDTH: "30" - - # enable/disable stream (continuous query) - #TAOS_STREAM: "1" - - # in retrieve blocking model, only in 50% query threads will be used in query processing in dnode - #TAOS_RETRIEVE_BLOCKING_MODEL: "0" - - # the maximum allowed query buffer size in MB during query processing for each data node - # -1 no limit (default) - # 0 no query allowed, queries are disabled - #TAOS_QUERY_BUFFER_SIZE: "-1" ``` ## 扩容 diff --git a/docs/zh/10-deployment/index.md b/docs/zh/10-deployment/index.md index 96ac7b176d..4ff1add779 100644 --- a/docs/zh/10-deployment/index.md +++ b/docs/zh/10-deployment/index.md @@ -1,5 +1,7 @@ --- +sidebar_label: 部署集群 title: 部署集群 +description: 部署 TDengine 集群的多种方式 --- TDengine 支持集群,提供水平扩展的能力。如果需要获得更高的处理能力,只需要多增加节点即可。TDengine 采用虚拟节点技术,将一个节点虚拟化为多个虚拟节点,以实现负载均衡。同时,TDengine可以将多个节点上的虚拟节点组成虚拟节点组,通过多副本机制,以保证供系统的高可用。TDengine的集群功能完全开源。 diff --git a/docs/zh/12-taos-sql/01-data-type.md b/docs/zh/12-taos-sql/01-data-type.md index 628086f5a9..b8ef050fb7 100644 --- a/docs/zh/12-taos-sql/01-data-type.md +++ b/docs/zh/12-taos-sql/01-data-type.md @@ -11,7 +11,7 @@ description: "TDengine 支持的数据类型: 时间戳、浮点型、JSON 类 - 时间格式为 `YYYY-MM-DD HH:mm:ss.MS`,默认时间分辨率为毫秒。比如:`2017-08-12 18:25:58.128` - 内部函数 now 是客户端的当前时间 - 插入记录时,如果时间戳为 now,插入数据时使用提交这条记录的客户端的当前时间 -- Epoch Time:时间戳也可以是一个长整数,表示从格林威治时间 1970-01-01 00:00:00.000 (UTC/GMT) 开始的毫秒数(相应地,如果所在 Database 的时间精度设置为“微秒”,则长整型格式的时间戳含义也就对应于从格林威治时间 1970-01-01 00:00:00.000 (UTC/GMT) 开始的微秒数;纳秒精度逻辑类似。) +- Epoch Time:时间戳也可以是一个长整数,表示从 UTC 时间 1970-01-01 00:00:00 开始的毫秒数。相应地,如果所在 Database 的时间精度设置为“微秒”,则长整型格式的时间戳含义也就对应于从 UTC 时间 1970-01-01 00:00:00 开始的微秒数;纳秒精度逻辑类似。 - 时间可以加减,比如 now-2h,表明查询时刻向前推 2 个小时(最近 2 小时)。数字后面的时间单位可以是 b(纳秒)、u(微秒)、a(毫秒)、s(秒)、m(分)、h(小时)、d(天)、w(周)。 比如 `select * from t1 where ts > now-2w and ts <= now-1w`,表示查询两周前整整一周的数据。在指定降采样操作(down sampling)的时间窗口(interval)时,时间单位还可以使用 n (自然月) 和 y (自然年)。 TDengine 缺省的时间戳精度是毫秒,但通过在 `CREATE DATABASE` 时传递的 PRECISION 参数也可以支持微秒和纳秒。 diff --git a/docs/zh/12-taos-sql/03-table.md b/docs/zh/12-taos-sql/03-table.md index 0e104bb7b6..a93b010c4c 100644 --- a/docs/zh/12-taos-sql/03-table.md +++ b/docs/zh/12-taos-sql/03-table.md @@ -1,5 +1,7 @@ --- title: 表管理 +sidebar_label: 表 +description: 对表的各种管理操作 --- ## 创建表 diff --git a/docs/zh/12-taos-sql/04-stable.md b/docs/zh/12-taos-sql/04-stable.md index 59d9657694..450ff07fd8 100644 --- a/docs/zh/12-taos-sql/04-stable.md +++ b/docs/zh/12-taos-sql/04-stable.md @@ -1,6 +1,7 @@ --- sidebar_label: 超级表管理 title: 超级表 STable 管理 +description: 对超级表的各种管理操作 --- ## 创建超级表 diff --git a/docs/zh/12-taos-sql/05-insert.md b/docs/zh/12-taos-sql/05-insert.md index c91e70c481..59af9c55ed 100644 --- a/docs/zh/12-taos-sql/05-insert.md +++ b/docs/zh/12-taos-sql/05-insert.md @@ -1,6 +1,7 @@ --- sidebar_label: 数据写入 title: 数据写入 +description: 写入数据的详细语法 --- ## 写入语法 diff --git a/docs/zh/12-taos-sql/06-select.md b/docs/zh/12-taos-sql/06-select.md index 5312d7d2f3..0c305231e0 100644 --- a/docs/zh/12-taos-sql/06-select.md +++ b/docs/zh/12-taos-sql/06-select.md @@ -1,6 +1,7 @@ --- sidebar_label: 数据查询 title: 数据查询 +description: 查询数据的详细语法 --- ## 查询语法 diff --git a/docs/zh/12-taos-sql/10-function.md b/docs/zh/12-taos-sql/10-function.md index e99d915101..af31a1d4bd 100644 --- a/docs/zh/12-taos-sql/10-function.md +++ b/docs/zh/12-taos-sql/10-function.md @@ -1,6 +1,7 @@ --- sidebar_label: 函数 title: 函数 +description: TDengine 支持的函数列表 toc_max_heading_level: 4 --- diff --git a/docs/zh/12-taos-sql/12-distinguished.md b/docs/zh/12-taos-sql/12-distinguished.md index 
2dad49ece9..b9e06033d6 100644 --- a/docs/zh/12-taos-sql/12-distinguished.md +++ b/docs/zh/12-taos-sql/12-distinguished.md @@ -1,6 +1,7 @@ --- sidebar_label: 时序数据特色查询 title: 时序数据特色查询 +description: TDengine 提供的时序数据特有的查询功能 --- TDengine 是专为时序数据而研发的大数据平台,存储和计算都针对时序数据的特定进行了量身定制,在支持标准 SQL 的基础之上,还提供了一系列贴合时序业务场景的特色查询语法,极大的方便时序场景的应用开发。 diff --git a/docs/zh/12-taos-sql/13-tmq.md b/docs/zh/12-taos-sql/13-tmq.md index b05d2bf680..571300ad8c 100644 --- a/docs/zh/12-taos-sql/13-tmq.md +++ b/docs/zh/12-taos-sql/13-tmq.md @@ -1,6 +1,7 @@ --- sidebar_label: 数据订阅 title: 数据订阅 +description: TDengine 消息队列提供的数据订阅功能 --- TDengine 3.0.0.0 开始对消息队列做了大幅的优化和增强以简化用户的解决方案。 diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md index 28f52be59a..70b062a6ca 100644 --- a/docs/zh/12-taos-sql/14-stream.md +++ b/docs/zh/12-taos-sql/14-stream.md @@ -1,6 +1,7 @@ --- sidebar_label: 流式计算 title: 流式计算 +description: 流式计算的相关 SQL 的详细语法 --- diff --git a/docs/zh/12-taos-sql/16-operators.md b/docs/zh/12-taos-sql/16-operators.md index 22b78455fb..48e9991799 100644 --- a/docs/zh/12-taos-sql/16-operators.md +++ b/docs/zh/12-taos-sql/16-operators.md @@ -1,6 +1,7 @@ --- sidebar_label: 运算符 title: 运算符 +description: TDengine 支持的所有运算符 --- ## 算术运算符 diff --git a/docs/zh/12-taos-sql/17-json.md b/docs/zh/12-taos-sql/17-json.md index 4a4a8cca73..4cbd8eef36 100644 --- a/docs/zh/12-taos-sql/17-json.md +++ b/docs/zh/12-taos-sql/17-json.md @@ -1,6 +1,7 @@ --- sidebar_label: JSON 类型使用说明 title: JSON 类型使用说明 +description: 对 JSON 类型如何使用的详细说明 --- diff --git a/docs/zh/12-taos-sql/18-escape.md b/docs/zh/12-taos-sql/18-escape.md index d478340599..7e543743a3 100644 --- a/docs/zh/12-taos-sql/18-escape.md +++ b/docs/zh/12-taos-sql/18-escape.md @@ -1,5 +1,7 @@ --- title: 转义字符说明 +sidebar_label: 转义字符 +description: TDengine 中使用转义字符的详细规则 --- ## 转义字符表 diff --git a/docs/zh/12-taos-sql/19-limit.md b/docs/zh/12-taos-sql/19-limit.md index ff552fc977..473bb29c1c 100644 --- a/docs/zh/12-taos-sql/19-limit.md +++ b/docs/zh/12-taos-sql/19-limit.md @@ -1,6 +1,7 @@ --- sidebar_label: 命名与边界限制 title: 命名与边界限制 +description: 合法字符集和命名中的限制规则 --- ## 名称命名规则 diff --git a/docs/zh/12-taos-sql/20-keywords.md b/docs/zh/12-taos-sql/20-keywords.md index cac29d7863..047c6b08c9 100644 --- a/docs/zh/12-taos-sql/20-keywords.md +++ b/docs/zh/12-taos-sql/20-keywords.md @@ -1,6 +1,7 @@ --- sidebar_label: 保留关键字 title: TDengine 保留关键字 +description: TDengine 保留关键字的详细列表 --- ## 保留关键字 diff --git a/docs/zh/12-taos-sql/21-node.md b/docs/zh/12-taos-sql/21-node.md index 4816daf420..d47dc8198f 100644 --- a/docs/zh/12-taos-sql/21-node.md +++ b/docs/zh/12-taos-sql/21-node.md @@ -1,6 +1,7 @@ --- sidebar_label: 集群管理 title: 集群管理 +description: 管理集群的 SQL 命令的详细解析 --- 组成 TDengine 集群的物理实体是 dnode (data node 的缩写),它是一个运行在操作系统之上的进程。在 dnode 中可以建立负责时序数据存储的 vnode (virtual node),在多节点集群环境下当某个数据库的 replica 为 3 时,该数据库中的每个 vgroup 由 3 个 vnode 组成;当数据库的 replica 为 1 时,该数据库中的每个 vgroup 由 1 个 vnode 组成。如果要想配置某个数据库为多副本,则集群中的 dnode 数量至少为 3。在 dnode 还可以创建 mnode (management node),单个集群中最多可以创建三个 mnode。在 TDengine 3.0.0.0 中为了支持存算分离,引入了一种新的逻辑节点 qnode (query node),qnode 和 vnode 既可以共存在一个 dnode 中,也可以完全分离在不同的 dnode 上。 diff --git a/docs/zh/12-taos-sql/22-meta.md b/docs/zh/12-taos-sql/22-meta.md index 8139b2fc55..e9cda45b0f 100644 --- a/docs/zh/12-taos-sql/22-meta.md +++ b/docs/zh/12-taos-sql/22-meta.md @@ -1,6 +1,7 @@ --- sidebar_label: 元数据 title: 存储元数据的 Information_Schema 数据库 +description: Information_Schema 数据库中存储了系统中所有的元数据信息 --- TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数据库元数据、数据库系统信息和状态的访问,例如数据库或表的名称,当前执行的 SQL 语句等。该数据库存储有关 
TDengine 维护的所有其他数据库的信息。它包含多个只读表。实际上,这些表都是视图,而不是基表,因此没有与它们关联的文件。所以对这些表只能查询,不能进行 INSERT 等写入操作。`INFORMATION_SCHEMA` 数据库旨在以一种更一致的方式来提供对 TDengine 支持的各种 SHOW 语句(如 SHOW TABLES、SHOW DATABASES)所提供的信息的访问。与 SHOW 语句相比,使用 SELECT ... FROM INFORMATION_SCHEMA.tablename 具有以下优点: diff --git a/docs/zh/12-taos-sql/23-perf.md b/docs/zh/12-taos-sql/23-perf.md index ac852ee150..e6ff4960a7 100644 --- a/docs/zh/12-taos-sql/23-perf.md +++ b/docs/zh/12-taos-sql/23-perf.md @@ -1,6 +1,7 @@ --- sidebar_label: 统计数据 title: 存储统计数据的 Performance_Schema 数据库 +description: Performance_Schema 数据库中存储了系统中的各种统计信息 --- TDengine 3.0 版本开始提供一个内置数据库 `performance_schema`,其中存储了与性能有关的统计数据。本节详细介绍其中的表和表结构。 diff --git a/docs/zh/12-taos-sql/24-show.md b/docs/zh/12-taos-sql/24-show.md index 781f94324c..14b51fb4c1 100644 --- a/docs/zh/12-taos-sql/24-show.md +++ b/docs/zh/12-taos-sql/24-show.md @@ -1,9 +1,10 @@ --- sidebar_label: SHOW 命令 title: 使用 SHOW 命令查看系统元数据 +description: SHOW 命令的完整列表 --- -除了使用 `select` 语句查询 `INFORMATION_SCHEMA` 数据库中的表获得系统中的各种元数据、系统信息和状态之外,也可以用 `SHOW` 命令来实现同样的目的。 +SHOW 命令可以用来获取简要的系统信息。若想获取系统中详细的各种元数据、系统信息和状态,请使用 select 语句查询 INFORMATION_SCHEMA 数据库中的表。 ## SHOW ACCOUNTS diff --git a/docs/zh/12-taos-sql/25-grant.md b/docs/zh/12-taos-sql/25-grant.md index c41a3fcfc9..6f7024d32e 100644 --- a/docs/zh/12-taos-sql/25-grant.md +++ b/docs/zh/12-taos-sql/25-grant.md @@ -1,6 +1,7 @@ --- sidebar_label: 权限管理 title: 权限管理 +description: 企业版中才具有的权限管理功能 --- 本节讲述如何在 TDengine 中进行权限管理的相关操作。 diff --git a/docs/zh/12-taos-sql/26-udf.md b/docs/zh/12-taos-sql/26-udf.md index 7ddcad298b..764fde6e1f 100644 --- a/docs/zh/12-taos-sql/26-udf.md +++ b/docs/zh/12-taos-sql/26-udf.md @@ -1,6 +1,7 @@ --- sidebar_label: 自定义函数 title: 用户自定义函数 +description: 使用 UDF 的详细指南 --- 除了 TDengine 的内置函数以外,用户还可以编写自己的函数逻辑并加入TDengine系统中。 diff --git a/docs/zh/12-taos-sql/27-index.md b/docs/zh/12-taos-sql/27-index.md index 2c0907723e..f88c6cf4ff 100644 --- a/docs/zh/12-taos-sql/27-index.md +++ b/docs/zh/12-taos-sql/27-index.md @@ -1,6 +1,7 @@ --- sidebar_label: 索引 title: 使用索引 +description: 索引功能的使用细节 --- TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 FULLTEXT 索引。 diff --git a/docs/zh/12-taos-sql/28-recovery.md b/docs/zh/12-taos-sql/28-recovery.md index 72b220b8ff..582c373907 100644 --- a/docs/zh/12-taos-sql/28-recovery.md +++ b/docs/zh/12-taos-sql/28-recovery.md @@ -1,6 +1,7 @@ --- sidebar_label: 异常恢复 title: 异常恢复 +description: 如何终止出现问题的连接、查询和事务以使系统恢复正常 --- 在一个复杂的应用场景中,连接和查询任务等有可能进入一种错误状态或者耗时过长迟迟无法结束,此时需要有能够终止这些连接或任务的方法。 diff --git a/docs/zh/14-reference/07-tdinsight/index.mdx b/docs/zh/14-reference/07-tdinsight/index.mdx index 9548922e65..ecd6362143 100644 --- a/docs/zh/14-reference/07-tdinsight/index.mdx +++ b/docs/zh/14-reference/07-tdinsight/index.mdx @@ -1,6 +1,7 @@ --- -title: TDinsight - 基于Grafana的TDengine零依赖监控解决方案 +title: TDinsight sidebar_label: TDinsight +description: 基于Grafana的TDengine零依赖监控解决方案 --- TDinsight 是使用监控数据库和 [Grafana] 对 TDengine 进行监控的解决方案。 diff --git a/docs/zh/14-reference/12-config/index.md b/docs/zh/14-reference/12-config/index.md index d2efc5baf3..7b31e10572 100644 --- a/docs/zh/14-reference/12-config/index.md +++ b/docs/zh/14-reference/12-config/index.md @@ -698,122 +698,123 @@ charset 的有效值是 UTF-8。 | 45 | numOfVnodeFetchThreads | 否 | 是 | | 46 | numOfVnodeWriteThreads | 否 | 是 | | 47 | numOfVnodeSyncThreads | 否 | 是 | -| 48 | numOfQnodeQueryThreads | 否 | 是 | -| 49 | numOfQnodeFetchThreads | 否 | 是 | -| 50 | numOfSnodeSharedThreads | 否 | 是 | -| 51 | numOfSnodeUniqueThreads | 否 | 是 | -| 52 | rpcQueueMemoryAllowed | 否 | 是 | -| 53 | logDir | 是 | 是 | -| 54 | minimalLogDirGB | 
是 | 是 | -| 55 | numOfLogLines | 是 | 是 | -| 56 | asyncLog | 是 | 是 | -| 57 | logKeepDays | 是 | 是 | -| 58 | debugFlag | 是 | 是 | -| 59 | tmrDebugFlag | 是 | 是 | -| 60 | uDebugFlag | 是 | 是 | -| 61 | rpcDebugFlag | 是 | 是 | -| 62 | jniDebugFlag | 是 | 是 | -| 63 | qDebugFlag | 是 | 是 | -| 64 | cDebugFlag | 是 | 是 | -| 65 | dDebugFlag | 是 | 是 | -| 66 | vDebugFlag | 是 | 是 | -| 67 | mDebugFlag | 是 | 是 | -| 68 | wDebugFlag | 是 | 是 | -| 69 | sDebugFlag | 是 | 是 | -| 70 | tsdbDebugFlag | 是 | 是 | -| 71 | tqDebugFlag | 否 | 是 | -| 72 | fsDebugFlag | 是 | 是 | -| 73 | udfDebugFlag | 否 | 是 | -| 74 | smaDebugFlag | 否 | 是 | -| 75 | idxDebugFlag | 否 | 是 | -| 76 | tdbDebugFlag | 否 | 是 | -| 77 | metaDebugFlag | 否 | 是 | -| 78 | timezone | 是 | 是 | -| 79 | locale | 是 | 是 | -| 80 | charset | 是 | 是 | -| 81 | udf | 是 | 是 | -| 82 | enableCoreFile | 是 | 是 | -| 83 | arbitrator | 是 | 否 | -| 84 | numOfThreadsPerCore | 是 | 否 | -| 85 | numOfMnodes | 是 | 否 | -| 86 | vnodeBak | 是 | 否 | -| 87 | balance | 是 | 否 | -| 88 | balanceInterval | 是 | 否 | -| 89 | offlineThreshold | 是 | 否 | -| 90 | role | 是 | 否 | -| 91 | dnodeNopLoop | 是 | 否 | -| 92 | keepTimeOffset | 是 | 否 | -| 93 | rpcTimer | 是 | 否 | -| 94 | rpcMaxTime | 是 | 否 | -| 95 | rpcForceTcp | 是 | 否 | -| 96 | tcpConnTimeout | 是 | 否 | -| 97 | syncCheckInterval | 是 | 否 | -| 98 | maxTmrCtrl | 是 | 否 | -| 99 | monitorReplica | 是 | 否 | -| 100 | smlTagNullName | 是 | 否 | -| 101 | keepColumnName | 是 | 否 | -| 102 | ratioOfQueryCores | 是 | 否 | -| 103 | maxStreamCompDelay | 是 | 否 | -| 104 | maxFirstStreamCompDelay | 是 | 否 | -| 105 | retryStreamCompDelay | 是 | 否 | -| 106 | streamCompDelayRatio | 是 | 否 | -| 107 | maxVgroupsPerDb | 是 | 否 | -| 108 | maxTablesPerVnode | 是 | 否 | -| 109 | minTablesPerVnode | 是 | 否 | -| 110 | tableIncStepPerVnode | 是 | 否 | -| 111 | cache | 是 | 否 | -| 112 | blocks | 是 | 否 | -| 113 | days | 是 | 否 | -| 114 | keep | 是 | 否 | -| 115 | minRows | 是 | 否 | -| 116 | maxRows | 是 | 否 | -| 117 | quorum | 是 | 否 | -| 118 | comp | 是 | 否 | -| 119 | walLevel | 是 | 否 | -| 120 | fsync | 是 | 否 | -| 121 | replica | 是 | 否 | -| 122 | partitions | 是 | 否 | -| 123 | quorum | 是 | 否 | -| 124 | update | 是 | 否 | -| 125 | cachelast | 是 | 否 | -| 126 | maxSQLLength | 是 | 否 | -| 127 | maxWildCardsLength | 是 | 否 | -| 128 | maxRegexStringLen | 是 | 否 | -| 129 | maxNumOfOrderedRes | 是 | 否 | -| 130 | maxConnections | 是 | 否 | -| 131 | mnodeEqualVnodeNum | 是 | 否 | -| 132 | http | 是 | 否 | -| 133 | httpEnableRecordSql | 是 | 否 | -| 134 | httpMaxThreads | 是 | 否 | -| 135 | restfulRowLimit | 是 | 否 | -| 136 | httpDbNameMandatory | 是 | 否 | -| 137 | httpKeepAlive | 是 | 否 | -| 138 | enableRecordSql | 是 | 否 | -| 139 | maxBinaryDisplayWidth | 是 | 否 | -| 140 | stream | 是 | 否 | -| 141 | retrieveBlockingModel | 是 | 否 | -| 142 | tsdbMetaCompactRatio | 是 | 否 | -| 143 | defaultJSONStrType | 是 | 否 | -| 144 | walFlushSize | 是 | 否 | -| 145 | keepTimeOffset | 是 | 否 | -| 146 | flowctrl | 是 | 否 | -| 147 | slaveQuery | 是 | 否 | -| 148 | adjustMaster | 是 | 否 | -| 149 | topicBinaryLen | 是 | 否 | -| 150 | telegrafUseFieldNum | 是 | 否 | -| 151 | deadLockKillQuery | 是 | 否 | -| 152 | clientMerge | 是 | 否 | -| 153 | sdbDebugFlag | 是 | 否 | -| 154 | odbcDebugFlag | 是 | 否 | -| 155 | httpDebugFlag | 是 | 否 | -| 156 | monDebugFlag | 是 | 否 | -| 157 | cqDebugFlag | 是 | 否 | -| 158 | shortcutFlag | 是 | 否 | -| 159 | probeSeconds | 是 | 否 | -| 160 | probeKillSeconds | 是 | 否 | -| 161 | probeInterval | 是 | 否 | -| 162 | lossyColumns | 是 | 否 | -| 163 | fPrecision | 是 | 否 | -| 164 | dPrecision | 是 | 否 | -| 165 | maxRange | 是 | 否 | -| 166 | range | 是 | 否 | +| 48 | 
numOfVnodeRsmaThreads | 否 | 是 | +| 49 | numOfQnodeQueryThreads | 否 | 是 | +| 50 | numOfQnodeFetchThreads | 否 | 是 | +| 51 | numOfSnodeSharedThreads | 否 | 是 | +| 52 | numOfSnodeUniqueThreads | 否 | 是 | +| 53 | rpcQueueMemoryAllowed | 否 | 是 | +| 54 | logDir | 是 | 是 | +| 55 | minimalLogDirGB | 是 | 是 | +| 56 | numOfLogLines | 是 | 是 | +| 57 | asyncLog | 是 | 是 | +| 58 | logKeepDays | 是 | 是 | +| 59 | debugFlag | 是 | 是 | +| 60 | tmrDebugFlag | 是 | 是 | +| 61 | uDebugFlag | 是 | 是 | +| 62 | rpcDebugFlag | 是 | 是 | +| 63 | jniDebugFlag | 是 | 是 | +| 64 | qDebugFlag | 是 | 是 | +| 65 | cDebugFlag | 是 | 是 | +| 66 | dDebugFlag | 是 | 是 | +| 67 | vDebugFlag | 是 | 是 | +| 68 | mDebugFlag | 是 | 是 | +| 69 | wDebugFlag | 是 | 是 | +| 70 | sDebugFlag | 是 | 是 | +| 71 | tsdbDebugFlag | 是 | 是 | +| 72 | tqDebugFlag | 否 | 是 | +| 73 | fsDebugFlag | 是 | 是 | +| 74 | udfDebugFlag | 否 | 是 | +| 75 | smaDebugFlag | 否 | 是 | +| 76 | idxDebugFlag | 否 | 是 | +| 77 | tdbDebugFlag | 否 | 是 | +| 78 | metaDebugFlag | 否 | 是 | +| 79 | timezone | 是 | 是 | +| 80 | locale | 是 | 是 | +| 81 | charset | 是 | 是 | +| 82 | udf | 是 | 是 | +| 83 | enableCoreFile | 是 | 是 | +| 84 | arbitrator | 是 | 否 | +| 85 | numOfThreadsPerCore | 是 | 否 | +| 86 | numOfMnodes | 是 | 否 | +| 87 | vnodeBak | 是 | 否 | +| 88 | balance | 是 | 否 | +| 89 | balanceInterval | 是 | 否 | +| 90 | offlineThreshold | 是 | 否 | +| 91 | role | 是 | 否 | +| 92 | dnodeNopLoop | 是 | 否 | +| 93 | keepTimeOffset | 是 | 否 | +| 94 | rpcTimer | 是 | 否 | +| 95 | rpcMaxTime | 是 | 否 | +| 96 | rpcForceTcp | 是 | 否 | +| 97 | tcpConnTimeout | 是 | 否 | +| 98 | syncCheckInterval | 是 | 否 | +| 99 | maxTmrCtrl | 是 | 否 | +| 100 | monitorReplica | 是 | 否 | +| 101 | smlTagNullName | 是 | 否 | +| 102 | keepColumnName | 是 | 否 | +| 103 | ratioOfQueryCores | 是 | 否 | +| 104 | maxStreamCompDelay | 是 | 否 | +| 105 | maxFirstStreamCompDelay | 是 | 否 | +| 106 | retryStreamCompDelay | 是 | 否 | +| 107 | streamCompDelayRatio | 是 | 否 | +| 108 | maxVgroupsPerDb | 是 | 否 | +| 109 | maxTablesPerVnode | 是 | 否 | +| 110 | minTablesPerVnode | 是 | 否 | +| 111 | tableIncStepPerVnode | 是 | 否 | +| 112 | cache | 是 | 否 | +| 113 | blocks | 是 | 否 | +| 114 | days | 是 | 否 | +| 115 | keep | 是 | 否 | +| 116 | minRows | 是 | 否 | +| 117 | maxRows | 是 | 否 | +| 118 | quorum | 是 | 否 | +| 119 | comp | 是 | 否 | +| 120 | walLevel | 是 | 否 | +| 121 | fsync | 是 | 否 | +| 122 | replica | 是 | 否 | +| 123 | partitions | 是 | 否 | +| 124 | quorum | 是 | 否 | +| 125 | update | 是 | 否 | +| 126 | cachelast | 是 | 否 | +| 127 | maxSQLLength | 是 | 否 | +| 128 | maxWildCardsLength | 是 | 否 | +| 129 | maxRegexStringLen | 是 | 否 | +| 130 | maxNumOfOrderedRes | 是 | 否 | +| 131 | maxConnections | 是 | 否 | +| 132 | mnodeEqualVnodeNum | 是 | 否 | +| 133 | http | 是 | 否 | +| 134 | httpEnableRecordSql | 是 | 否 | +| 135 | httpMaxThreads | 是 | 否 | +| 136 | restfulRowLimit | 是 | 否 | +| 137 | httpDbNameMandatory | 是 | 否 | +| 138 | httpKeepAlive | 是 | 否 | +| 139 | enableRecordSql | 是 | 否 | +| 140 | maxBinaryDisplayWidth | 是 | 否 | +| 141 | stream | 是 | 否 | +| 142 | retrieveBlockingModel | 是 | 否 | +| 143 | tsdbMetaCompactRatio | 是 | 否 | +| 144 | defaultJSONStrType | 是 | 否 | +| 145 | walFlushSize | 是 | 否 | +| 146 | keepTimeOffset | 是 | 否 | +| 147 | flowctrl | 是 | 否 | +| 148 | slaveQuery | 是 | 否 | +| 149 | adjustMaster | 是 | 否 | +| 150 | topicBinaryLen | 是 | 否 | +| 151 | telegrafUseFieldNum | 是 | 否 | +| 152 | deadLockKillQuery | 是 | 否 | +| 153 | clientMerge | 是 | 否 | +| 154 | sdbDebugFlag | 是 | 否 | +| 155 | odbcDebugFlag | 是 | 否 | +| 156 | httpDebugFlag | 是 | 否 | +| 157 | monDebugFlag | 是 | 否 | +| 158 | cqDebugFlag | 是 | 否 | 
+| 159 | shortcutFlag | 是 | 否 | +| 160 | probeSeconds | 是 | 否 | +| 161 | probeKillSeconds | 是 | 否 | +| 162 | probeInterval | 是 | 否 | +| 163 | lossyColumns | 是 | 否 | +| 164 | fPrecision | 是 | 否 | +| 165 | dPrecision | 是 | 否 | +| 166 | maxRange | 是 | 否 | +| 167 | range | 是 | 否 | diff --git a/docs/zh/14-reference/index.md b/docs/zh/14-reference/index.md index e9c0c4fe23..9d0a44af57 100644 --- a/docs/zh/14-reference/index.md +++ b/docs/zh/14-reference/index.md @@ -1,5 +1,6 @@ --- title: 参考手册 +description: TDengine 中的各种组件的详细说明 --- 参考手册是对 TDengine 本身、 TDengine 各语言连接器及自带的工具最详细的介绍。 diff --git a/docs/zh/17-operation/01-pkg-install.md b/docs/zh/17-operation/01-pkg-install.md index 5e4cc93130..671dc00cee 100644 --- a/docs/zh/17-operation/01-pkg-install.md +++ b/docs/zh/17-operation/01-pkg-install.md @@ -47,7 +47,23 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/ -内容 TBD +卸载命令如下: + +``` +$ sudo apt-get remove tdengine +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following packages will be REMOVED: + tdengine +0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded. +After this operation, 68.3 MB disk space will be freed. +Do you want to continue? [Y/n] y +(Reading database ... 135625 files and directories currently installed.) +Removing tdengine (3.0.0.0) ... +TDengine is removed successfully! + +``` @@ -57,7 +73,7 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/ ``` $ sudo dpkg -r tdengine (Reading database ... 120119 files and directories currently installed.) -Removing tdengine (3.0.0.10002) ... +Removing tdengine (3.0.0.0) ... TDengine is removed successfully! ``` diff --git a/docs/zh/17-operation/02-planning.mdx b/docs/zh/17-operation/02-planning.mdx index 0d63c4eaf3..28e3f54020 100644 --- a/docs/zh/17-operation/02-planning.mdx +++ b/docs/zh/17-operation/02-planning.mdx @@ -1,6 +1,7 @@ --- sidebar_label: 容量规划 title: 容量规划 +description: 如何规划一个 TDengine 集群所需的物理资源 --- 使用 TDengine 来搭建一个物联网大数据平台,计算资源、存储资源需要根据业务场景进行规划。下面分别讨论系统运行所需要的内存、CPU 以及硬盘空间。 diff --git a/docs/zh/17-operation/03-tolerance.md b/docs/zh/17-operation/03-tolerance.md index 1ce485b042..79cf10c39a 100644 --- a/docs/zh/17-operation/03-tolerance.md +++ b/docs/zh/17-operation/03-tolerance.md @@ -1,5 +1,7 @@ --- title: 容错和灾备 +sidebar_label: 容错和灾备 +description: TDengine 的容错和灾备功能 --- ## 容错 diff --git a/docs/zh/17-operation/07-import.md b/docs/zh/17-operation/07-import.md index 7dee05720d..17945be595 100644 --- a/docs/zh/17-operation/07-import.md +++ b/docs/zh/17-operation/07-import.md @@ -1,5 +1,6 @@ --- title: 数据导入 +description: 如何导入外部数据到 TDengine --- TDengine 提供多种方便的数据导入功能,一种按脚本文件导入,一种按数据文件导入,一种是 taosdump 工具导入本身导出的文件。 diff --git a/docs/zh/17-operation/08-export.md b/docs/zh/17-operation/08-export.md index 042ecc7ba2..ecc3b2f110 100644 --- a/docs/zh/17-operation/08-export.md +++ b/docs/zh/17-operation/08-export.md @@ -1,5 +1,6 @@ --- title: 数据导出 +description: 如何导出 TDengine 中的数据 --- 为方便数据导出,TDengine 提供了两种导出方式,分别是按表导出和用 taosdump 导出。 diff --git a/docs/zh/17-operation/10-monitor.md b/docs/zh/17-operation/10-monitor.md index 9f0f06fde2..e936f35dca 100644 --- a/docs/zh/17-operation/10-monitor.md +++ b/docs/zh/17-operation/10-monitor.md @@ -1,5 +1,6 @@ --- title: 系统监控 +description: 监控 TDengine 的运行状态 --- TDengine 通过 [taosKeeper](/reference/taosKeeper/) 将服务器的 CPU、内存、硬盘空间、带宽、请求数、磁盘读写速度等信息定时写入指定数据库。TDengine 还将重要的系统操作(比如登录、创建、删除数据库等)日志以及各种错误报警信息进行记录。系统管理员可以从 CLI 直接查看这个数据库,也可以在 WEB 通过图形化界面查看这些监测信息。 diff --git a/docs/zh/17-operation/17-diagnose.md 
b/docs/zh/17-operation/17-diagnose.md index e6e9be7153..ec529096a7 100644 --- a/docs/zh/17-operation/17-diagnose.md +++ b/docs/zh/17-operation/17-diagnose.md @@ -1,5 +1,6 @@ --- title: 诊断及其他 +description: 一些常见问题的诊断技巧 --- ## 网络连接诊断 diff --git a/docs/zh/20-third-party/01-grafana.mdx b/docs/zh/20-third-party/01-grafana.mdx index becb1a70a9..83f3f8bb25 100644 --- a/docs/zh/20-third-party/01-grafana.mdx +++ b/docs/zh/20-third-party/01-grafana.mdx @@ -1,6 +1,7 @@ --- sidebar_label: Grafana title: Grafana +description: 使用 Grafana 与 TDengine 的详细说明 --- import Tabs from "@theme/Tabs"; diff --git a/docs/zh/20-third-party/02-prometheus.md b/docs/zh/20-third-party/02-prometheus.md index 0fe534b8df..eb6c3bf1d0 100644 --- a/docs/zh/20-third-party/02-prometheus.md +++ b/docs/zh/20-third-party/02-prometheus.md @@ -1,6 +1,7 @@ --- sidebar_label: Prometheus title: Prometheus +description: 使用 Prometheus 访问 TDengine --- import Prometheus from "../14-reference/_prometheus.mdx" diff --git a/docs/zh/20-third-party/03-telegraf.md b/docs/zh/20-third-party/03-telegraf.md index 88a69211c0..84883e665a 100644 --- a/docs/zh/20-third-party/03-telegraf.md +++ b/docs/zh/20-third-party/03-telegraf.md @@ -1,6 +1,7 @@ --- sidebar_label: Telegraf title: Telegraf 写入 +description: 使用 Telegraf 向 TDengine 写入数据 --- import Telegraf from "../14-reference/_telegraf.mdx" diff --git a/docs/zh/20-third-party/05-collectd.md b/docs/zh/20-third-party/05-collectd.md index 04892fd42e..cc2235f260 100644 --- a/docs/zh/20-third-party/05-collectd.md +++ b/docs/zh/20-third-party/05-collectd.md @@ -1,6 +1,7 @@ --- sidebar_label: collectd title: collectd 写入 +description: 使用 collectd 向 TDengine 写入数据 --- import CollectD from "../14-reference/_collectd.mdx" diff --git a/docs/zh/20-third-party/06-statsd.md b/docs/zh/20-third-party/06-statsd.md index 260d011835..122c9fd94c 100644 --- a/docs/zh/20-third-party/06-statsd.md +++ b/docs/zh/20-third-party/06-statsd.md @@ -1,6 +1,7 @@ --- sidebar_label: StatsD title: StatsD 直接写入 +description: 使用 StatsD 向 TDengine 写入数据 --- import StatsD from "../14-reference/_statsd.mdx" diff --git a/docs/zh/20-third-party/07-icinga2.md b/docs/zh/20-third-party/07-icinga2.md index ed1f1404a7..06ead57655 100644 --- a/docs/zh/20-third-party/07-icinga2.md +++ b/docs/zh/20-third-party/07-icinga2.md @@ -1,6 +1,7 @@ --- sidebar_label: icinga2 title: icinga2 写入 +description: 使用 icinga2 写入 TDengine --- import Icinga2 from "../14-reference/_icinga2.mdx" diff --git a/docs/zh/20-third-party/08-tcollector.md b/docs/zh/20-third-party/08-tcollector.md index a1245e8c27..78d0b4a5df 100644 --- a/docs/zh/20-third-party/08-tcollector.md +++ b/docs/zh/20-third-party/08-tcollector.md @@ -1,6 +1,7 @@ --- sidebar_label: TCollector title: TCollector 写入 +description: 使用 TCollector 写入 TDengine --- import TCollector from "../14-reference/_tcollector.mdx" diff --git a/docs/zh/20-third-party/09-emq-broker.md b/docs/zh/20-third-party/09-emq-broker.md index f252e520a7..782a139e22 100644 --- a/docs/zh/20-third-party/09-emq-broker.md +++ b/docs/zh/20-third-party/09-emq-broker.md @@ -1,6 +1,7 @@ --- sidebar_label: EMQX Broker title: EMQX Broker 写入 +description: 使用 EMQX Broker 写入 TDengine --- MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/emqx)是一开源的 MQTT Broker 软件,无需任何代码,只需要在 EMQX Dashboard 里使用“规则”做简单配置,即可将 MQTT 的数据直接写入 TDengine。EMQX 支持通过 发送到 Web 服务的方式保存数据到 TDengine,也在企业版上提供原生的 TDengine 驱动实现直接保存。 diff --git a/docs/zh/20-third-party/10-hive-mq-broker.md b/docs/zh/20-third-party/10-hive-mq-broker.md index f75ed793d6..a388ff6daf 100644 ---
a/docs/zh/20-third-party/10-hive-mq-broker.md +++ b/docs/zh/20-third-party/10-hive-mq-broker.md @@ -1,6 +1,7 @@ --- sidebar_label: HiveMQ Broker title: HiveMQ Broker 写入 +description: 使用 HiveMQ Broker 写入 TDengine --- [HiveMQ](https://www.hivemq.com/) 是一个提供免费个人版和企业版的 MQTT 代理,主要用于企业和新兴的机器到机器 M2M 通讯和内部传输,满足可伸缩性、易管理和安全特性。HiveMQ 提供了开源的插件开发包。可以通过 HiveMQ extension - TDengine 保存数据到 TDengine。详细使用方法请参考 [HiveMQ extension - TDengine 说明文档](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README.md)。 diff --git a/docs/zh/20-third-party/11-kafka.md b/docs/zh/20-third-party/11-kafka.md index 09bda4664f..1172f4fbc5 100644 --- a/docs/zh/20-third-party/11-kafka.md +++ b/docs/zh/20-third-party/11-kafka.md @@ -1,6 +1,7 @@ --- sidebar_label: Kafka -title: TDengine Kafka Connector 使用教程 +title: TDengine Kafka Connector +description: 使用 TDengine Kafka Connector 的详细指南 --- TDengine Kafka Connector 包含两个插件: TDengine Source Connector 和 TDengine Sink Connector。用户只需提供简单的配置文件,就可以将 Kafka 中指定 topic 的数据(批量或实时)同步到 TDengine, 或将 TDengine 中指定数据库的数据(批量或实时)同步到 Kafka。 diff --git a/docs/zh/21-tdinternal/01-arch.md b/docs/zh/21-tdinternal/01-arch.md index a910c584d6..d74366d129 100644 --- a/docs/zh/21-tdinternal/01-arch.md +++ b/docs/zh/21-tdinternal/01-arch.md @@ -1,6 +1,7 @@ --- sidebar_label: 整体架构 title: 整体架构 +description: TDengine 架构设计,包括:集群、存储、缓存与持久化、数据备份、多级存储等 --- ## 集群与基本逻辑单元 diff --git a/docs/zh/21-tdinternal/03-high-availability.md b/docs/zh/21-tdinternal/03-high-availability.md index ba056b6f16..4cdf04f6d1 100644 --- a/docs/zh/21-tdinternal/03-high-availability.md +++ b/docs/zh/21-tdinternal/03-high-availability.md @@ -1,5 +1,6 @@ --- title: 高可用 +description: TDengine 的高可用设计 --- ## Vnode 的高可用性 diff --git a/docs/zh/21-tdinternal/05-load-balance.md b/docs/zh/21-tdinternal/05-load-balance.md index 2376dd3e61..07af2328d5 100644 --- a/docs/zh/21-tdinternal/05-load-balance.md +++ b/docs/zh/21-tdinternal/05-load-balance.md @@ -1,5 +1,6 @@ --- title: 负载均衡 +description: TDengine 的负载均衡设计 --- TDengine 中的负载均衡主要指对时序数据的处理的负载均衡。TDengine 采用 Hash 一致性算法将一个数据库中的所有表和子表的数据均衡分散在属于该数据库的所有 vgroup 中,每张表或子表只能由一个 vgroup 处理,一个 vgroup 可能负责处理多个表或子表。 @@ -7,7 +8,7 @@ TDengine 中的负载均衡主要指对时序数据的处理的负载均衡。TD 创建数据库时可以指定其中的 vgroup 的数量: ```sql -create database db0 vgroups 100; +create database db0 vgroups 20; ``` 如何指定合适的 vgroup 的数量,这取决于系统资源。假定系统中只计划建立一个数据库,则 vgroup 数量由集群中所有 dnode 所能使用的资源决定。原则上可用的 CPU 和 Memory 越多,可建立的 vgroup 也越多。但也要考虑到磁盘性能,过多的 vgroup 在磁盘性能达到上限后反而会拖累整个系统的性能。假如系统中会建立多个数据库,则多个数据库的 vgroup 之和取决于系统中可用资源的数量。要综合考虑多个数据库之间表的数量、写入频率、数据量等多个因素在多个数据库之间分配 vgroup。实际中建议首先根据系统资源配置选择一个初始的 vgroup 数量,比如 CPU 总核数的 2 倍,以此为起点通过测试找到最佳的 vgroup 数量配置,此为系统中的 vgroup 总数。如果有多个数据库的话,再根据各个数据库的表数和数据量对 vgroup 进行分配。 diff --git a/docs/zh/21-tdinternal/index.md b/docs/zh/21-tdinternal/index.md index 63a746623e..21f106edc9 100644 --- a/docs/zh/21-tdinternal/index.md +++ b/docs/zh/21-tdinternal/index.md @@ -1,5 +1,6 @@ --- title: 技术内幕 +description: TDengine 的内部设计 --- ```mdx-code-block diff --git a/docs/zh/25-application/01-telegraf.md b/docs/zh/25-application/01-telegraf.md index a949fa9721..4e9597f964 100644 --- a/docs/zh/25-application/01-telegraf.md +++ b/docs/zh/25-application/01-telegraf.md @@ -1,6 +1,7 @@ --- sidebar_label: TDengine + Telegraf + Grafana -title: 使用 TDengine + Telegraf + Grafana 快速搭建 IT 运维展示系统 +title: TDengine + Telegraf + Grafana +description: 使用 TDengine + Telegraf + Grafana 快速搭建 IT 运维展示系统 --- ## 背景介绍 diff --git a/docs/zh/25-application/02-collectd.md b/docs/zh/25-application/02-collectd.md index 6bdebd5030..c6230f48ab
100644 --- a/docs/zh/25-application/02-collectd.md +++ b/docs/zh/25-application/02-collectd.md @@ -1,6 +1,7 @@ --- sidebar_label: TDengine + collectd/StatsD + Grafana -title: 使用 TDengine + collectd/StatsD + Grafana 快速搭建 IT 运维监控系统 +title: TDengine + collectd/StatsD + Grafana +description: 使用 TDengine + collectd/StatsD + Grafana 快速搭建 IT 运维监控系统 --- ## 背景介绍 diff --git a/docs/zh/25-application/index.md b/docs/zh/25-application/index.md index 1305cf230f..76aa179927 100644 --- a/docs/zh/25-application/index.md +++ b/docs/zh/25-application/index.md @@ -1,5 +1,6 @@ --- title: 应用实践 +description: TDengine 配合其它开源组件的一些应用示例 --- ```mdx-code-block diff --git a/docs/zh/27-train-faq/01-faq.md b/docs/zh/27-train-faq/01-faq.md index 04ee011b93..2fd9dff80b 100644 --- a/docs/zh/27-train-faq/01-faq.md +++ b/docs/zh/27-train-faq/01-faq.md @@ -1,5 +1,6 @@ --- title: 常见问题及反馈 +description: 一些常见问题的解决方法汇总 --- ## 问题反馈 diff --git a/docs/zh/27-train-faq/index.md b/docs/zh/27-train-faq/index.md index b42bff0288..e7159d98c8 100644 --- a/docs/zh/27-train-faq/index.md +++ b/docs/zh/27-train-faq/index.md @@ -1,5 +1,6 @@ --- title: FAQ 及其他 +description: 用户经常遇到的问题 --- ```mdx-code-block diff --git a/docs/zh/28-releases/01-tdengine.md b/docs/zh/28-releases/01-tdengine.md index a64798caa0..e3e1463131 100644 --- a/docs/zh/28-releases/01-tdengine.md +++ b/docs/zh/28-releases/01-tdengine.md @@ -1,6 +1,7 @@ --- sidebar_label: TDengine 发布历史 title: TDengine 发布历史 +description: TDengine 发布历史、Release Notes 及下载链接 --- import Release from "/components/ReleaseV3"; @@ -9,7 +10,7 @@ import Release from "/components/ReleaseV3"; -## 3.0.0.0 + diff --git a/docs/zh/28-releases/02-tools.md b/docs/zh/28-releases/02-tools.md index 7513333040..61129d74e5 100644 --- a/docs/zh/28-releases/02-tools.md +++ b/docs/zh/28-releases/02-tools.md @@ -1,6 +1,7 @@ --- sidebar_label: taosTools 发布历史 title: taosTools 发布历史 +description: taosTools 的发布历史、Release Notes 和下载链接 --- import Release from "/components/ReleaseV3"; diff --git a/examples/c/stream_demo.c b/examples/c/stream_demo.c index 55556f21a1..1c9d11b755 100644 --- a/examples/c/stream_demo.c +++ b/examples/c/stream_demo.c @@ -96,7 +96,7 @@ int32_t create_stream() { taos_free_result(pRes); pRes = taos_query(pConn, - "create stream stream1 trigger at_once watermark 10s into outstb as select _wstart start, k from st1 partition by tbname state_window(k)"); + "create stream stream1 trigger at_once watermark 10s into outstb as select _wstart start, avg(k) from st1 partition by tbname interval(10s)"); if (taos_errno(pRes) != 0) { printf("failed to create stream stream1, reason:%s\n", taos_errstr(pRes)); return -1; diff --git a/include/common/systable.h b/include/common/systable.h index ed2e6a46c3..01c9807627 100644 --- a/include/common/systable.h +++ b/include/common/systable.h @@ -22,27 +22,27 @@ extern "C" { #ifndef TDENGINE_SYSTABLE_H #define TDENGINE_SYSTABLE_H -#define TSDB_INFORMATION_SCHEMA_DB "information_schema" -#define TSDB_INS_TABLE_DNODES "ins_dnodes" -#define TSDB_INS_TABLE_MNODES "ins_mnodes" -#define TSDB_INS_TABLE_MODULES "ins_modules" -#define TSDB_INS_TABLE_QNODES "ins_qnodes" -#define TSDB_INS_TABLE_BNODES "ins_bnodes" -#define TSDB_INS_TABLE_SNODES "ins_snodes" -#define TSDB_INS_TABLE_CLUSTER "ins_cluster" -#define TSDB_INS_TABLE_DATABASES "ins_databases" -#define TSDB_INS_TABLE_FUNCTIONS "ins_functions" -#define TSDB_INS_TABLE_INDEXES "ins_indexes" -#define TSDB_INS_TABLE_STABLES "ins_stables" -#define TSDB_INS_TABLE_TABLES "ins_tables" -#define TSDB_INS_TABLE_TAGS "ins_tags" -#define 
TSDB_INS_TABLE_TABLE_DISTRIBUTED "ins_table_distributed" -#define TSDB_INS_TABLE_USERS "ins_users" -#define TSDB_INS_TABLE_LICENCES "ins_grants" -#define TSDB_INS_TABLE_VGROUPS "ins_vgroups" -#define TSDB_INS_TABLE_VNODES "ins_vnodes" -#define TSDB_INS_TABLE_CONFIGS "ins_configs" -#define TSDB_INS_TABLE_DNODE_VARIABLES "ins_dnode_variables" +#define TSDB_INFORMATION_SCHEMA_DB "information_schema" +#define TSDB_INS_TABLE_DNODES "ins_dnodes" +#define TSDB_INS_TABLE_MNODES "ins_mnodes" +#define TSDB_INS_TABLE_MODULES "ins_modules" +#define TSDB_INS_TABLE_QNODES "ins_qnodes" +#define TSDB_INS_TABLE_BNODES "ins_bnodes" +#define TSDB_INS_TABLE_SNODES "ins_snodes" +#define TSDB_INS_TABLE_CLUSTER "ins_cluster" +#define TSDB_INS_TABLE_DATABASES "ins_databases" +#define TSDB_INS_TABLE_FUNCTIONS "ins_functions" +#define TSDB_INS_TABLE_INDEXES "ins_indexes" +#define TSDB_INS_TABLE_STABLES "ins_stables" +#define TSDB_INS_TABLE_TABLES "ins_tables" +#define TSDB_INS_TABLE_TAGS "ins_tags" +#define TSDB_INS_TABLE_TABLE_DISTRIBUTED "ins_table_distributed" +#define TSDB_INS_TABLE_USERS "ins_users" +#define TSDB_INS_TABLE_LICENCES "ins_grants" +#define TSDB_INS_TABLE_VGROUPS "ins_vgroups" +#define TSDB_INS_TABLE_VNODES "ins_vnodes" +#define TSDB_INS_TABLE_CONFIGS "ins_configs" +#define TSDB_INS_TABLE_DNODE_VARIABLES "ins_dnode_variables" #define TSDB_PERFORMANCE_SCHEMA_DB "performance_schema" #define TSDB_PERFS_TABLE_SMAS "perf_smas" @@ -60,16 +60,20 @@ typedef struct SSysDbTableSchema { const char* name; const int32_t type; const int32_t bytes; + const bool sysInfo; } SSysDbTableSchema; typedef struct SSysTableMeta { const char* name; const SSysDbTableSchema* schema; const int32_t colNum; + const bool sysInfo; } SSysTableMeta; void getInfosDbMeta(const SSysTableMeta** pInfosTableMeta, size_t* size); void getPerfDbMeta(const SSysTableMeta** pPerfsTableMeta, size_t* size); +void getVisibleInfosTablesNum(bool sysInfo, size_t* size); +bool invisibleColumn(bool sysInfo, int8_t tableType, int8_t flags); #ifdef __cplusplus } diff --git a/include/common/tcommon.h b/include/common/tcommon.h index 14efec8757..03672f96f3 100644 --- a/include/common/tcommon.h +++ b/include/common/tcommon.h @@ -44,6 +44,30 @@ enum { ) // clang-format on +typedef struct { + TSKEY ts; + uint64_t groupId; +} SWinKey; + +static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) { + SWinKey* pWin1 = (SWinKey*)pKey1; + SWinKey* pWin2 = (SWinKey*)pKey2; + + if (pWin1->groupId > pWin2->groupId) { + return 1; + } else if (pWin1->groupId < pWin2->groupId) { + return -1; + } + + if (pWin1->ts > pWin2->ts) { + return 1; + } else if (pWin1->ts < pWin2->ts) { + return -1; + } + + return 0; +} + enum { TMQ_MSG_TYPE__DUMMY = 0, TMQ_MSG_TYPE__POLL_RSP, @@ -182,7 +206,7 @@ typedef struct SColumn { int16_t slotId; char name[TSDB_COL_NAME_LEN]; - int8_t flag; // column type: normal column, tag, or user-input column (integer/float/string) + int16_t colType; // column type: normal column, tag, or window column int16_t type; int32_t bytes; uint8_t precision; diff --git a/include/common/tglobal.h b/include/common/tglobal.h index f872d1dbc2..03e15ed8e7 100644 --- a/include/common/tglobal.h +++ b/include/common/tglobal.h @@ -66,6 +66,7 @@ extern int32_t tsNumOfVnodeStreamThreads; extern int32_t tsNumOfVnodeFetchThreads; extern int32_t tsNumOfVnodeWriteThreads; extern int32_t tsNumOfVnodeSyncThreads; +extern int32_t tsNumOfVnodeRsmaThreads; extern int32_t tsNumOfQnodeQueryThreads; extern int32_t tsNumOfQnodeFetchThreads; extern 
int32_t tsNumOfSnodeSharedThreads; diff --git a/include/common/tmsg.h b/include/common/tmsg.h index 37a0ecf79b..d09b5eecfa 100644 --- a/include/common/tmsg.h +++ b/include/common/tmsg.h @@ -268,6 +268,35 @@ STSRow* tGetSubmitBlkNext(SSubmitBlkIter* pIter); // for debug int32_t tPrintFixedSchemaSubmitReq(SSubmitReq* pReq, STSchema* pSchema); +struct SSchema { + int8_t type; + int8_t flags; + col_id_t colId; + int32_t bytes; + char name[TSDB_COL_NAME_LEN]; +}; + + +typedef struct { + char tbName[TSDB_TABLE_NAME_LEN]; + char stbName[TSDB_TABLE_NAME_LEN]; + char dbFName[TSDB_DB_FNAME_LEN]; + int64_t dbId; + int32_t numOfTags; + int32_t numOfColumns; + int8_t precision; + int8_t tableType; + int32_t sversion; + int32_t tversion; + uint64_t suid; + uint64_t tuid; + int32_t vgId; + int8_t sysInfo; + SSchema* pSchemas; +} STableMetaRsp; + + + typedef struct { int32_t code; int8_t hashMeta; @@ -276,6 +305,7 @@ typedef struct { int32_t numOfRows; int32_t affectedRows; int64_t sver; + STableMetaRsp* pMeta; } SSubmitBlkRsp; typedef struct { @@ -290,19 +320,14 @@ typedef struct { int32_t tEncodeSSubmitRsp(SEncoder* pEncoder, const SSubmitRsp* pRsp); int32_t tDecodeSSubmitRsp(SDecoder* pDecoder, SSubmitRsp* pRsp); +void tFreeSSubmitBlkRsp(void* param); void tFreeSSubmitRsp(SSubmitRsp* pRsp); -#define COL_SMA_ON ((int8_t)0x1) -#define COL_IDX_ON ((int8_t)0x2) -#define COL_SET_NULL ((int8_t)0x10) -#define COL_SET_VAL ((int8_t)0x20) -struct SSchema { - int8_t type; - int8_t flags; - col_id_t colId; - int32_t bytes; - char name[TSDB_COL_NAME_LEN]; -}; +#define COL_SMA_ON ((int8_t)0x1) +#define COL_IDX_ON ((int8_t)0x2) +#define COL_SET_NULL ((int8_t)0x10) +#define COL_SET_VAL ((int8_t)0x20) +#define COL_IS_SYSINFO ((int8_t)0x40) #define COL_IS_SET(FLG) (((FLG) & (COL_SET_VAL | COL_SET_NULL)) != 0) #define COL_CLR_SET(FLG) ((FLG) &= (~(COL_SET_VAL | COL_SET_NULL))) @@ -472,6 +497,14 @@ int32_t tSerializeSMCreateStbReq(void* buf, int32_t bufLen, SMCreateStbReq* pReq int32_t tDeserializeSMCreateStbReq(void* buf, int32_t bufLen, SMCreateStbReq* pReq); void tFreeSMCreateStbReq(SMCreateStbReq* pReq); +typedef struct { + STableMetaRsp* pMeta; +} SMCreateStbRsp; + +int32_t tEncodeSMCreateStbRsp(SEncoder* pEncoder, const SMCreateStbRsp* pRsp); +int32_t tDecodeSMCreateStbRsp(SDecoder* pDecoder, SMCreateStbRsp* pRsp); +void tFreeSMCreateStbRsp(SMCreateStbRsp* pRsp); + typedef struct { char name[TSDB_TABLE_FNAME_LEN]; int8_t igNotExists; @@ -530,6 +563,7 @@ typedef struct { uint32_t connId; int32_t dnodeNum; int8_t superUser; + int8_t sysInfo; int8_t connType; SEpSet epSet; int32_t svrTimestamp; @@ -1239,23 +1273,6 @@ typedef struct { SVgroupInfo vgroups[]; } SVgroupsInfo; -typedef struct { - char tbName[TSDB_TABLE_NAME_LEN]; - char stbName[TSDB_TABLE_NAME_LEN]; - char dbFName[TSDB_DB_FNAME_LEN]; - int64_t dbId; - int32_t numOfTags; - int32_t numOfColumns; - int8_t precision; - int8_t tableType; - int32_t sversion; - int32_t tversion; - uint64_t suid; - uint64_t tuid; - int32_t vgId; - SSchema* pSchemas; -} STableMetaRsp; - typedef struct { STableMetaRsp* pMeta; } SMAlterStbRsp; @@ -1266,7 +1283,7 @@ void tFreeSMAlterStbRsp(SMAlterStbRsp* pRsp); int32_t tSerializeSTableMetaRsp(void* buf, int32_t bufLen, STableMetaRsp* pRsp); int32_t tDeserializeSTableMetaRsp(void* buf, int32_t bufLen, STableMetaRsp* pRsp); -void tFreeSTableMetaRsp(STableMetaRsp* pRsp); +void tFreeSTableMetaRsp(void* pRsp); void tFreeSTableIndexRsp(void* info); typedef struct { @@ -2028,11 +2045,13 @@ int tEncodeSVCreateTbBatchReq(SEncoder* pCoder, const 
SVCreateTbBatchReq* pReq); int tDecodeSVCreateTbBatchReq(SDecoder* pCoder, SVCreateTbBatchReq* pReq); typedef struct { - int32_t code; + int32_t code; + STableMetaRsp* pMeta; } SVCreateTbRsp, SVUpdateTbRsp; int tEncodeSVCreateTbRsp(SEncoder* pCoder, const SVCreateTbRsp* pRsp); int tDecodeSVCreateTbRsp(SDecoder* pCoder, SVCreateTbRsp* pRsp); +void tFreeSVCreateTbRsp(void* param); int32_t tSerializeSVCreateTbReq(void** buf, SVCreateTbReq* pReq); void* tDeserializeSVCreateTbReq(void* buf, SVCreateTbReq* pReq); diff --git a/include/libs/command/command.h b/include/libs/command/command.h index 8a4ecad37d..b3339a417b 100644 --- a/include/libs/command/command.h +++ b/include/libs/command/command.h @@ -17,12 +17,12 @@ #define TDENGINE_COMMAND_H #include "cmdnodes.h" -#include "tmsg.h" #include "plannodes.h" +#include "tmsg.h" typedef struct SExplainCtx SExplainCtx; -int32_t qExecCommand(SNode* pStmt, SRetrieveTableRsp** pRsp); +int32_t qExecCommand(bool sysInfoUser, SNode *pStmt, SRetrieveTableRsp **pRsp); int32_t qExecStaticExplain(SQueryPlan *pDag, SRetrieveTableRsp **pRsp); int32_t qExecExplainBegin(SQueryPlan *pDag, SExplainCtx **pCtx, int64_t startTs); diff --git a/include/libs/nodes/plannodes.h b/include/libs/nodes/plannodes.h index 8661baceb2..6fd6a316eb 100644 --- a/include/libs/nodes/plannodes.h +++ b/include/libs/nodes/plannodes.h @@ -317,6 +317,7 @@ typedef struct SSystemTableScanPhysiNode { SEpSet mgmtEpSet; bool showRewrite; int32_t accountId; + bool sysInfo; } SSystemTableScanPhysiNode; typedef struct STableScanPhysiNode { diff --git a/include/libs/nodes/querynodes.h b/include/libs/nodes/querynodes.h index cec6f1a691..3a1eaf289e 100644 --- a/include/libs/nodes/querynodes.h +++ b/include/libs/nodes/querynodes.h @@ -57,7 +57,9 @@ typedef enum EColumnType { COLUMN_TYPE_COLUMN = 1, COLUMN_TYPE_TAG, COLUMN_TYPE_TBNAME, - COLUMN_TYPE_WINDOW_PC, + COLUMN_TYPE_WINDOW_START, + COLUMN_TYPE_WINDOW_END, + COLUMN_TYPE_WINDOW_DURATION, COLUMN_TYPE_GROUP_KEY } EColumnType; diff --git a/include/libs/parser/parser.h b/include/libs/parser/parser.h index 717278d51d..95bde85864 100644 --- a/include/libs/parser/parser.h +++ b/include/libs/parser/parser.h @@ -49,6 +49,7 @@ typedef struct SParseContext { SStmtCallback* pStmtCb; const char* pUser; bool isSuperUser; + bool enableSysInfo; bool async; int8_t schemalessType; const char* svrVer; diff --git a/include/libs/planner/planner.h b/include/libs/planner/planner.h index d1a5c5db10..05caa7a7bb 100644 --- a/include/libs/planner/planner.h +++ b/include/libs/planner/planner.h @@ -38,6 +38,7 @@ typedef struct SPlanContext { char* pMsg; int32_t msgLen; const char* pUser; + bool sysInfo; } SPlanContext; // Create the physical plan for the query, according to the AST. 
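The header changes above thread a per-connection system-info privilege from `SParseContext.enableSysInfo` through `SPlanContext.sysInfo` and the system-table scan node down to `qExecCommand()`, alongside the new `COL_IS_SYSINFO` column flag and the `invisibleColumn()` declaration added earlier in this patch. The sketch below is illustrative only: it reuses the `COL_IS_SYSINFO` value (0x40) introduced here, but `TSDB_SYSTEM_TABLE_SKETCH`, the function name, and the body are assumptions, not the actual TDengine implementation.

```c
/*
 * Minimal sketch (not the real implementation): how a per-connection sysInfo
 * flag and a per-column COL_IS_SYSINFO bit could combine to hide system-info
 * columns from non-privileged users. COL_IS_SYSINFO comes from this patch;
 * TSDB_SYSTEM_TABLE_SKETCH is a hypothetical stand-in for the real table-type
 * constant.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define COL_IS_SYSINFO           ((int8_t)0x40)
#define TSDB_SYSTEM_TABLE_SKETCH 1 /* hypothetical table-type value */

static bool sketchInvisibleColumn(bool sysInfoUser, int8_t tableType, int8_t flags) {
  /* Columns are only hidden on system tables, and only when the connected
   * user does not carry the sysInfo privilege. */
  if (sysInfoUser || tableType != TSDB_SYSTEM_TABLE_SKETCH) {
    return false;
  }
  return (flags & COL_IS_SYSINFO) != 0;
}

int main(void) {
  int8_t flags = COL_IS_SYSINFO;
  printf("privileged user sees column: %d\n",
         !sketchInvisibleColumn(true, TSDB_SYSTEM_TABLE_SKETCH, flags));
  printf("ordinary user sees column:   %d\n",
         !sketchInvisibleColumn(false, TSDB_SYSTEM_TABLE_SKETCH, flags));
  return 0;
}
```

Presumably such a check is kept as a pure function of the privilege, table type, and column flags so that the `.sysInfo`-tagged schema definitions in systable.c and the scan path can share it, but that design detail is not spelled out in this patch.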
diff --git a/include/libs/qcom/query.h b/include/libs/qcom/query.h index 34d870397f..1fa7dca7dc 100644 --- a/include/libs/qcom/query.h +++ b/include/libs/qcom/query.h @@ -215,6 +215,7 @@ void initQueryModuleMsgHandle(); const SSchema* tGetTbnameColumnSchema(); bool tIsValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags); +int32_t queryCreateCTableMetaFromMsg(STableMetaRsp *msg, SCTableMeta *pMeta); int32_t queryCreateTableMetaFromMsg(STableMetaRsp* msg, bool isSuperTable, STableMeta** pMeta); char* jobTaskStatusStr(int32_t status); diff --git a/include/libs/stream/tstream.h b/include/libs/stream/tstream.h index 16b259cf59..2c27509008 100644 --- a/include/libs/stream/tstream.h +++ b/include/libs/stream/tstream.h @@ -551,16 +551,17 @@ typedef struct { } SStreamStateCur; #if 1 -int32_t streamStatePut(SStreamState* pState, const void* key, int32_t kLen, const void* value, int32_t vLen); -int32_t streamStateGet(SStreamState* pState, const void* key, int32_t kLen, void** pVal, int32_t* pVLen); -int32_t streamStateDel(SStreamState* pState, const void* key, int32_t kLen); +int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen); +int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen); +int32_t streamStateDel(SStreamState* pState, const SWinKey* key); +void streamFreeVal(void* val); -SStreamStateCur* streamStateGetCur(SStreamState* pState, const void* key, int32_t kLen); -SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const void* key, int32_t kLen); -SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const void* key, int32_t kLen); +SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key); +SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key); +SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const SWinKey* key); void streamStateFreeCur(SStreamStateCur* pCur); -int32_t streamGetKVByCur(SStreamStateCur* pCur, void** pKey, int32_t* pKLen, void** pVal, int32_t* pVLen); +int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen); int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur); int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur); diff --git a/include/util/thash.h b/include/util/thash.h index 781c22a56a..f4d09eb090 100644 --- a/include/util/thash.h +++ b/include/util/thash.h @@ -210,6 +210,8 @@ void taosHashSetEqualFp(SHashObj *pHashObj, _equal_fn_t fp); */ void taosHashSetFreeFp(SHashObj *pHashObj, _hash_free_fn_t fp); +int64_t taosHashGetCompTimes(SHashObj *pHashObj); + #ifdef __cplusplus } #endif diff --git a/packaging/release.bat b/packaging/release.bat index ffd3a68048..591227382f 100644 --- a/packaging/release.bat +++ b/packaging/release.bat @@ -40,10 +40,12 @@ if not exist %work_dir%\debug\ver-%2-x86 ( ) cd %work_dir%\debug\ver-%2-x64 call vcvarsall.bat x64 -cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DVERNUMBER=%2 -DCPUTYPE=x64 +cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DBUILD_TEST=false -DVERNUMBER=%2 -DCPUTYPE=x64 cmake --build . rd /s /Q C:\TDengine cmake --install . 
+for /r c:\TDengine %%i in (*.dll) do signtool sign /f D:\\123.pfx /p taosdata %%i +for /r c:\TDengine %%i in (*.exe) do signtool sign /f D:\\123.pfx /p taosdata %%i if not %errorlevel% == 0 ( call :RUNFAILED build x64 failed & exit /b 1) cd %package_dir% iscc /DMyAppInstallName="%packagServerName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release @@ -51,19 +53,7 @@ if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x64% faile iscc /DMyAppInstallName="%packagClientName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x64% failed & exit /b 1) -cd %work_dir%\debug\ver-%2-x86 -call vcvarsall.bat x86 -cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DVERNUMBER=%2 -DCPUTYPE=x86 -cmake --build . -rd /s /Q C:\TDengine -cmake --install . -if not %errorlevel% == 0 ( call :RUNFAILED build x86 failed & exit /b 1) -cd %package_dir% -@REM iscc /DMyAppInstallName="%packagServerName_x86%" /DMyAppVersion="%2" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release -@REM if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x86% failed & exit /b 1) -iscc /DMyAppInstallName="%packagClientName_x86%" /DMyAppVersion="%2" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release -if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x86% failed & exit /b 1) - +for /r ..\release %%i in (*.exe) do signtool sign /f d:\\123.pfx /p taosdata %%i goto EXIT0 :USAGE diff --git a/source/client/inc/clientInt.h b/source/client/inc/clientInt.h index 855dfb15ee..07e5f75e87 100644 --- a/source/client/inc/clientInt.h +++ b/source/client/inc/clientInt.h @@ -95,15 +95,15 @@ typedef struct { } SClientHbMgr; typedef struct SQueryExecMetric { - int64_t start; // start timestamp, us + int64_t start; // start timestamp, us int64_t syntaxStart; // start to parse, us - int64_t syntaxEnd; // end to parse, us - int64_t ctgStart; // start to parse, us - int64_t ctgEnd; // end to parse, us + int64_t syntaxEnd; // end to parse, us + int64_t ctgStart; // start to parse, us + int64_t ctgEnd; // end to parse, us int64_t semanticEnd; int64_t execEnd; - int64_t send; // start to send to server, us - int64_t rsp; // receive response from server, us + int64_t send; // start to send to server, us + int64_t rsp; // receive response from server, us } SQueryExecMetric; struct SAppInstInfo { @@ -137,6 +137,7 @@ typedef struct STscObj { char db[TSDB_DB_FNAME_LEN]; char sVer[TSDB_VERSION_LEN]; char sDetailVer[128]; + int8_t sysInfo; int8_t connType; int32_t acctId; uint32_t connId; @@ -257,7 +258,7 @@ SRequestObj* execQuery(uint64_t connId, const char* sql, int sqlLen, bool valida TAOS_RES* taosQueryImpl(TAOS* taos, const char* sql, bool validateOnly); void taosAsyncQueryImpl(uint64_t connId, const char* sql, __taos_async_fn_t fp, void* param, bool validateOnly); -int32_t getVersion1BlockMetaSize(const char* p, int32_t numOfCols); +int32_t getVersion1BlockMetaSize(const char* p, int32_t numOfCols); static FORCE_INLINE SReqResultInfo* tmqGetCurResInfo(TAOS_RES* res) { SMqRspObj* msg = (SMqRspObj*)res; @@ -368,8 +369,9 @@ void launchAsyncQuery(SRequestObj* pRequest, SQuery* pQuery, SMetaData* int32_t refreshMeta(STscObj* pTscObj, SRequestObj* pRequest); int32_t updateQnodeList(SAppInstInfo* pInfo, SArray* pNodeList); void doAsyncQuery(SRequestObj* pRequest, bool forceUpdateMeta); -int32_t removeMeta(STscObj* 
pTscObj, SArray* tbList); // todo move to clientImpl.c and become a static function -int32_t handleAlterTbExecRes(void* res, struct SCatalog* pCatalog); // todo move to xxx +int32_t removeMeta(STscObj* pTscObj, SArray* tbList); +int32_t handleAlterTbExecRes(void* res, struct SCatalog* pCatalog); +int32_t handleCreateTbExecRes(void* res, SCatalog* pCatalog); bool qnodeRequired(SRequestObj* pRequest); #ifdef __cplusplus diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index ae92d2dc7c..1342e89b52 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -404,7 +404,9 @@ void taos_init_imp(void) { schedulerInit(); tscDebug("starting to initialize TAOS driver"); +#ifndef WINDOWS taosSetCoreDump(true); +#endif initTaskQueue(); fmFuncMgtInit(); diff --git a/source/client/src/clientImpl.c b/source/client/src/clientImpl.c index 998b9cee5c..0480c49ca1 100644 --- a/source/client/src/clientImpl.c +++ b/source/client/src/clientImpl.c @@ -215,6 +215,7 @@ int32_t parseSql(SRequestObj* pRequest, bool topicQuery, SQuery** pQuery, SStmtC .pUser = pTscObj->user, .schemalessType = pTscObj->schemalessType, .isSuperUser = (0 == strcmp(pTscObj->user, TSDB_DEFAULT_USER)), + .enableSysInfo = pTscObj->sysInfo, .svrVer = pTscObj->sVer, .nodeOffline = (pTscObj->pAppInfo->onlineDnodes < pTscObj->pAppInfo->totalDnodes)}; @@ -246,7 +247,7 @@ int32_t parseSql(SRequestObj* pRequest, bool topicQuery, SQuery** pQuery, SStmtC int32_t execLocalCmd(SRequestObj* pRequest, SQuery* pQuery) { SRetrieveTableRsp* pRsp = NULL; - int32_t code = qExecCommand(pQuery->pRoot, &pRsp); + int32_t code = qExecCommand(pRequest->pTscObj->sysInfo, pQuery->pRoot, &pRsp); if (TSDB_CODE_SUCCESS == code && NULL != pRsp) { code = setQueryResultFromRsp(&pRequest->body.resInfo, pRsp, false, true); } @@ -284,7 +285,7 @@ void asyncExecLocalCmd(SRequestObj* pRequest, SQuery* pQuery) { return; } - int32_t code = qExecCommand(pQuery->pRoot, &pRsp); + int32_t code = qExecCommand(pRequest->pTscObj->sysInfo, pQuery->pRoot, &pRsp); if (TSDB_CODE_SUCCESS == code && NULL != pRsp) { code = setQueryResultFromRsp(&pRequest->body.resInfo, pRsp, false, true); } @@ -419,7 +420,8 @@ int32_t getPlan(SRequestObj* pRequest, SQuery* pQuery, SQueryPlan** pPlan, SArra .showRewrite = pQuery->showRewrite, .pMsg = pRequest->msgBuf, .msgLen = ERROR_MSG_BUF_DEFAULT_SIZE, - .pUser = pRequest->pTscObj->user}; + .pUser = pRequest->pTscObj->user, + .sysInfo = pRequest->pTscObj->sysInfo}; return qCreateQueryPlan(&cxt, pPlan, pNodeList); } @@ -721,6 +723,12 @@ int32_t handleSubmitExecRes(SRequestObj* pRequest, void* res, SCatalog* pCatalog for (int32_t i = 0; i < pRsp->nBlocks; ++i) { SSubmitBlkRsp* blk = pRsp->pBlocks + i; + if (blk->pMeta) { + handleCreateTbExecRes(blk->pMeta, pCatalog); + tFreeSTableMetaRsp(blk->pMeta); + taosMemoryFreeClear(blk->pMeta); + } + if (NULL == blk->tblFName || 0 == blk->tblFName[0]) { continue; } @@ -780,6 +788,10 @@ int32_t handleAlterTbExecRes(void* res, SCatalog* pCatalog) { return catalogUpdateTableMeta(pCatalog, (STableMetaRsp*)res); } +int32_t handleCreateTbExecRes(void* res, SCatalog* pCatalog) { + return catalogUpdateTableMeta(pCatalog, (STableMetaRsp*)res); +} + int32_t handleQueryExecRsp(SRequestObj* pRequest) { if (NULL == pRequest->body.resInfo.execRes.res) { return TSDB_CODE_SUCCESS; @@ -802,6 +814,19 @@ int32_t handleQueryExecRsp(SRequestObj* pRequest) { code = handleAlterTbExecRes(pRes->res, pCatalog); break; } + case TDMT_VND_CREATE_TABLE: { + SArray* pList = (SArray*)pRes->res; + int32_t 
num = taosArrayGetSize(pList); + for (int32_t i = 0; i < num; ++i) { + void* res = taosArrayGetP(pList, i); + code = handleCreateTbExecRes(res, pCatalog); + } + break; + } + case TDMT_MND_CREATE_STB: { + code = handleCreateTbExecRes(pRes->res, pCatalog); + break; + } case TDMT_VND_SUBMIT: { atomic_add_fetch_64((int64_t*)&pAppInfo->summary.insertBytes, pRes->numOfBytes); @@ -861,17 +886,13 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) { return; } - if (code == TSDB_CODE_SUCCESS) { - code = handleQueryExecRsp(pRequest); - ASSERT(pRequest->code == TSDB_CODE_SUCCESS); - pRequest->code = code; - } - tscDebug("schedulerExecCb request type %s", TMSG_INFO(pRequest->type)); - if (NEED_CLIENT_RM_TBLMETA_REQ(pRequest->type)) { + if (NEED_CLIENT_RM_TBLMETA_REQ(pRequest->type) && NULL == pRequest->body.resInfo.execRes.res) { removeMeta(pTscObj, pRequest->targetTableList); } + handleQueryExecRsp(pRequest); + // return to client pRequest->body.queryFp(pRequest->body.param, pRequest, code); } @@ -932,6 +953,10 @@ SRequestObj* launchQueryImpl(SRequestObj* pRequest, SQuery* pQuery, bool keepQue qDestroyQuery(pQuery); } + if (NEED_CLIENT_RM_TBLMETA_REQ(pRequest->type) && NULL == pRequest->body.resInfo.execRes.res) { + removeMeta(pRequest->pTscObj, pRequest->targetTableList); + } + handleQueryExecRsp(pRequest); if (NULL != pRequest && TSDB_CODE_SUCCESS != code) { @@ -992,7 +1017,8 @@ void launchAsyncQuery(SRequestObj* pRequest, SQuery* pQuery, SMetaData* pResultM .showRewrite = pQuery->showRewrite, .pMsg = pRequest->msgBuf, .msgLen = ERROR_MSG_BUF_DEFAULT_SIZE, - .pUser = pRequest->pTscObj->user}; + .pUser = pRequest->pTscObj->user, + .sysInfo = pRequest->pTscObj->sysInfo}; SAppInstInfo* pAppInfo = getAppInfo(pRequest); SQueryPlan* pDag = NULL; @@ -1129,10 +1155,6 @@ SRequestObj* execQuery(uint64_t connId, const char* sql, int sqlLen, bool valida inRetry = true; } while (retryNum++ < REQUEST_TOTAL_EXEC_TIMES); - if (NEED_CLIENT_RM_TBLMETA_REQ(pRequest->type)) { - removeMeta(pRequest->pTscObj, pRequest->targetTableList); - } - return pRequest; } @@ -1577,10 +1599,11 @@ static int32_t doConvertUCS4(SReqResultInfo* pResultInfo, int32_t numOfRows, int } int32_t getVersion1BlockMetaSize(const char* p, int32_t numOfCols) { - int32_t cols = *(int32_t*) (p + sizeof(int32_t) * 3); + int32_t cols = *(int32_t*)(p + sizeof(int32_t) * 3); ASSERT(numOfCols == cols); - return sizeof(int32_t) + sizeof(int32_t) + sizeof(int32_t)*3 + sizeof(uint64_t) + numOfCols * (sizeof(int8_t) + sizeof(int32_t)); + return sizeof(int32_t) + sizeof(int32_t) + sizeof(int32_t) * 3 + sizeof(uint64_t) + + numOfCols * (sizeof(int8_t) + sizeof(int32_t)); } static int32_t estimateJsonLen(SReqResultInfo* pResultInfo, int32_t numOfCols, int32_t numOfRows) { diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index 31ae443d5b..9ceb6e0683 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -759,6 +759,7 @@ int32_t createParseContext(const SRequestObj *pRequest, SParseContext **pCxt) { .pUser = pTscObj->user, .schemalessType = pTscObj->schemalessType, .isSuperUser = (0 == strcmp(pTscObj->user, TSDB_DEFAULT_USER)), + .enableSysInfo = pTscObj->sysInfo, .async = true, .svrVer = pTscObj->sVer, .nodeOffline = (pTscObj->pAppInfo->onlineDnodes < pTscObj->pAppInfo->totalDnodes)}; diff --git a/source/client/src/clientMsgHandler.c b/source/client/src/clientMsgHandler.c index 0c4cf23c4e..a7a16d484c 100644 --- a/source/client/src/clientMsgHandler.c +++ 
b/source/client/src/clientMsgHandler.c @@ -96,6 +96,7 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) { connectRsp.epSet.eps[i].fqdn, connectRsp.epSet.eps[i].port, pTscObj->id); } + pTscObj->sysInfo = connectRsp.sysInfo; pTscObj->connId = connectRsp.connId; pTscObj->acctId = connectRsp.acctId; tstrncpy(pTscObj->sVer, connectRsp.sVer, tListLen(pTscObj->sVer)); @@ -232,13 +233,36 @@ int32_t processCreateSTableRsp(void* param, SDataBuf* pMsg, int32_t code) { assert(pMsg != NULL && param != NULL); SRequestObj* pRequest = param; - taosMemoryFree(pMsg->pData); if (code != TSDB_CODE_SUCCESS) { setErrno(pRequest, code); + } else { + SMCreateStbRsp createRsp = {0}; + SDecoder coder = {0}; + tDecoderInit(&coder, pMsg->pData, pMsg->len); + tDecodeSMCreateStbRsp(&coder, &createRsp); + tDecoderClear(&coder); + + pRequest->body.resInfo.execRes.msgType = TDMT_MND_CREATE_STB; + pRequest->body.resInfo.execRes.res = createRsp.pMeta; } + taosMemoryFree(pMsg->pData); + if (pRequest->body.queryFp != NULL) { - removeMeta(pRequest->pTscObj, pRequest->tableList); + SExecResult* pRes = &pRequest->body.resInfo.execRes; + + if (code == TSDB_CODE_SUCCESS) { + SCatalog* pCatalog = NULL; + int32_t ret = catalogGetHandle(pRequest->pTscObj->pAppInfo->clusterId, &pCatalog); + if (pRes->res != NULL) { + ret = handleCreateTbExecRes(pRes->res, pCatalog); + } + + if (ret != TSDB_CODE_SUCCESS) { + code = ret; + } + } + pRequest->body.queryFp(pRequest->body.param, pRequest, code); } else { tsem_post(&pRequest->body.rspSem); diff --git a/source/common/src/systable.c b/source/common/src/systable.c index 68a77a9f33..0465f1f3f4 100644 --- a/source/common/src/systable.c +++ b/source/common/src/systable.c @@ -15,343 +15,345 @@ #include "systable.h" #include "taos.h" +#include "taosdef.h" #include "tdef.h" #include "tgrant.h" +#include "tmsg.h" #include "types.h" #define SYSTABLE_SCH_TABLE_NAME_LEN ((TSDB_TABLE_NAME_LEN - 1) + VARSTR_HEADER_SIZE) #define SYSTABLE_SCH_DB_NAME_LEN ((TSDB_DB_NAME_LEN - 1) + VARSTR_HEADER_SIZE) #define SYSTABLE_SCH_COL_NAME_LEN ((TSDB_COL_NAME_LEN - 1) + VARSTR_HEADER_SIZE) +// clang-format off static const SSysDbTableSchema dnodesSchema[] = { - {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "vnodes", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT}, - {.name = "support_vnodes", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT}, - {.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "note", .bytes = 256 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "vnodes", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true}, + {.name = "support_vnodes", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true}, + {.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true}, + {.name = "note", .bytes = 256 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, }; static const SSysDbTableSchema mnodesSchema[] = { - {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "endpoint", .bytes = 
TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "role", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "status", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "role", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "status", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true}, }; static const SSysDbTableSchema modulesSchema[] = { - {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "endpoint", .bytes = 134 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "module", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "endpoint", .bytes = 134 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "module", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, }; static const SSysDbTableSchema qnodesSchema[] = { - {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true}, }; static const SSysDbTableSchema snodesSchema[] = { - {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true}, }; static const SSysDbTableSchema bnodesSchema[] = { - {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true}, }; static const SSysDbTableSchema clusterSchema[] = { - {.name = "id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "name", .bytes = TSDB_CLUSTER_ID_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "uptime", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true}, + {.name = "name", .bytes = TSDB_CLUSTER_ID_LEN + 
VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "uptime", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true}, }; static const SSysDbTableSchema userDBSchema[] = { - {.name = "name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "vgroups", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "ntables", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "replica", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT}, - {.name = "strict", .bytes = TSDB_DB_STRICT_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "duration", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "keep", .bytes = 32 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "buffer", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "pagesize", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "pages", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "minrows", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "maxrows", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "comp", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT}, - {.name = "precision", .bytes = 2 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "retentions", .bytes = 60 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "single_stable", .bytes = 1, .type = TSDB_DATA_TYPE_BOOL}, - {.name = "cachemodel", .bytes = TSDB_CACHE_MODEL_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "cachesize", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "wal_level", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT}, - {.name = "wal_fsync_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "wal_retention_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "wal_retention_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "wal_roll_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "wal_segment_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, + {.name = "name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "vgroups", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "ntables", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "replica", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true}, + {.name = "strict", .bytes = TSDB_DB_STRICT_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "duration", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "keep", .bytes = 32 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "buffer", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "pagesize", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "pages", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "minrows", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "maxrows", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "comp", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true}, + {.name = "precision", 
.bytes = 2 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "retentions", .bytes = 60 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "single_stable", .bytes = 1, .type = TSDB_DATA_TYPE_BOOL, .sysInfo = true}, + {.name = "cachemodel", .bytes = TSDB_CACHE_MODEL_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "cachesize", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "wal_level", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true}, + {.name = "wal_fsync_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "wal_retention_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "wal_retention_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true}, + {.name = "wal_roll_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "wal_segment_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true}, }; static const SSysDbTableSchema userFuncSchema[] = { - {.name = "name", .bytes = TSDB_FUNC_NAME_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "comment", .bytes = PATH_MAX - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "aggregate", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "output_type", .bytes = TSDB_TYPE_STR_MAX_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "code_len", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "bufsize", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, + {.name = "name", .bytes = TSDB_FUNC_NAME_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "comment", .bytes = PATH_MAX - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "aggregate", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "output_type", .bytes = TSDB_TYPE_STR_MAX_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "code_len", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "bufsize", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, }; static const SSysDbTableSchema userIdxSchema[] = { - {.name = "index_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "table_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "index_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "table_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, }; static const SSysDbTableSchema userStbsSchema[] = { - {.name = 
"stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "columns", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "tags", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "last_update", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "table_comment", .bytes = TSDB_TB_COMMENT_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "watermark", .bytes = 64 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "max_delay", .bytes = 64 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "rollup", .bytes = 128 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "columns", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "tags", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "last_update", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "table_comment", .bytes = TSDB_TB_COMMENT_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "watermark", .bytes = 64 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "max_delay", .bytes = 64 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "rollup", .bytes = 128 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, }; static const SSysDbTableSchema streamSchema[] = { - {.name = "stream_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "status", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "source_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "target_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "target_table", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "watermark", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "trigger", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "stream_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "status", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "source_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "target_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "target_table", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "watermark", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = 
"trigger", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, }; static const SSysDbTableSchema userTblsSchema[] = { - {.name = "table_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "columns", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "uid", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "ttl", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "table_comment", .bytes = TSDB_TB_COMMENT_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "type", .bytes = 21 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "table_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "columns", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "uid", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "ttl", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "table_comment", .bytes = TSDB_TB_COMMENT_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "type", .bytes = 21 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, }; static const SSysDbTableSchema userTagsSchema[] = { - {.name = "table_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "tag_name", .bytes = TSDB_COL_NAME_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "tag_type", .bytes = 32 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "tag_value", .bytes = TSDB_MAX_TAGS_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "table_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "tag_name", .bytes = TSDB_COL_NAME_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "tag_type", .bytes = 32 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "tag_value", .bytes = TSDB_MAX_TAGS_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, }; static const SSysDbTableSchema userTblDistSchema[] = { - {.name = "db_name", .bytes = 32 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "table_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = 
"distributed_histogram", .bytes = 500 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "min_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "max_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "avg_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "stddev_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "rows", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "blocks", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "storage_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "compression_ratio", .bytes = 8, .type = TSDB_DATA_TYPE_DOUBLE}, - {.name = "rows_in_mem", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "seek_header_time", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, + {.name = "db_name", .bytes = 32 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "table_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "distributed_histogram", .bytes = 500 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "min_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "max_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "avg_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "stddev_of_rows", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "rows", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true}, + {.name = "blocks", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "storage_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true}, + {.name = "compression_ratio", .bytes = 8, .type = TSDB_DATA_TYPE_DOUBLE, .sysInfo = true}, + {.name = "rows_in_mem", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "seek_header_time", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, }; static const SSysDbTableSchema userUsersSchema[] = { - {.name = "name", .bytes = TSDB_USER_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "super", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT}, - {.name = "enable", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT}, - {.name = "sysinfo", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "name", .bytes = TSDB_USER_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "super", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = false}, + {.name = "enable", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = false}, + {.name = "sysinfo", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, }; GRANTS_SCHEMA; static const SSysDbTableSchema vgroupsSchema[] = { - {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "tables", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "v1_dnode", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "v1_status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "v2_dnode", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "v2_status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "v3_dnode", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "v3_status", .bytes = 10 + 
VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "status", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "nfiles", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "file_size", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "tsma", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT}, + {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "tables", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "v1_dnode", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "v1_status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "v2_dnode", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "v2_status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "v3_dnode", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "v3_status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "status", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "nfiles", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "file_size", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, + {.name = "tsma", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true}, }; static const SSysDbTableSchema smaSchema[] = { - {.name = "sma_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, + {.name = "sma_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true}, }; static const SSysDbTableSchema transSchema[] = { - {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "stage", .bytes = TSDB_TRANS_STAGE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "db1", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "db2", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "failed_times", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "last_exec_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "last_action_info", - .bytes = (TSDB_TRANS_ERROR_LEN - 1) + VARSTR_HEADER_SIZE, - .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "stage", .bytes = TSDB_TRANS_STAGE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "db1", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "db2", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "failed_times", .bytes 
= 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "last_exec_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "last_action_info", .bytes = (TSDB_TRANS_ERROR_LEN - 1) + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, }; static const SSysDbTableSchema configSchema[] = { - {.name = "name", .bytes = TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "value", .bytes = TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "name", .bytes = TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "value", .bytes = TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, }; static const SSysDbTableSchema variablesSchema[] = { {.name = "dnode_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "name", .bytes = TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "value", .bytes = TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "name", .bytes = TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "value", .bytes = TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, }; static const SSysTableMeta infosMeta[] = { - {TSDB_INS_TABLE_DNODES, dnodesSchema, tListLen(dnodesSchema)}, - {TSDB_INS_TABLE_MNODES, mnodesSchema, tListLen(mnodesSchema)}, - {TSDB_INS_TABLE_MODULES, modulesSchema, tListLen(modulesSchema)}, - {TSDB_INS_TABLE_QNODES, qnodesSchema, tListLen(qnodesSchema)}, + {TSDB_INS_TABLE_DNODES, dnodesSchema, tListLen(dnodesSchema), true}, + {TSDB_INS_TABLE_MNODES, mnodesSchema, tListLen(mnodesSchema), true}, + {TSDB_INS_TABLE_MODULES, modulesSchema, tListLen(modulesSchema), true}, + {TSDB_INS_TABLE_QNODES, qnodesSchema, tListLen(qnodesSchema), true}, // {TSDB_INS_TABLE_SNODES, snodesSchema, tListLen(snodesSchema)}, // {TSDB_INS_TABLE_BNODES, bnodesSchema, tListLen(bnodesSchema)}, - {TSDB_INS_TABLE_CLUSTER, clusterSchema, tListLen(clusterSchema)}, - {TSDB_INS_TABLE_DATABASES, userDBSchema, tListLen(userDBSchema)}, - {TSDB_INS_TABLE_FUNCTIONS, userFuncSchema, tListLen(userFuncSchema)}, - {TSDB_INS_TABLE_INDEXES, userIdxSchema, tListLen(userIdxSchema)}, - {TSDB_INS_TABLE_STABLES, userStbsSchema, tListLen(userStbsSchema)}, - {TSDB_INS_TABLE_TABLES, userTblsSchema, tListLen(userTblsSchema)}, - {TSDB_INS_TABLE_TAGS, userTagsSchema, tListLen(userTagsSchema)}, + {TSDB_INS_TABLE_CLUSTER, clusterSchema, tListLen(clusterSchema), true}, + {TSDB_INS_TABLE_DATABASES, userDBSchema, tListLen(userDBSchema), false}, + {TSDB_INS_TABLE_FUNCTIONS, userFuncSchema, tListLen(userFuncSchema), false}, + {TSDB_INS_TABLE_INDEXES, userIdxSchema, tListLen(userIdxSchema), false}, + {TSDB_INS_TABLE_STABLES, userStbsSchema, tListLen(userStbsSchema), false}, + {TSDB_INS_TABLE_TABLES, userTblsSchema, tListLen(userTblsSchema), false}, + {TSDB_INS_TABLE_TAGS, userTagsSchema, tListLen(userTagsSchema), false}, // {TSDB_INS_TABLE_TABLE_DISTRIBUTED, userTblDistSchema, tListLen(userTblDistSchema)}, - {TSDB_INS_TABLE_USERS, userUsersSchema, tListLen(userUsersSchema)}, - {TSDB_INS_TABLE_LICENCES, grantsSchema, tListLen(grantsSchema)}, - {TSDB_INS_TABLE_VGROUPS, vgroupsSchema, tListLen(vgroupsSchema)}, - {TSDB_INS_TABLE_CONFIGS, configSchema, tListLen(configSchema)}, - {TSDB_INS_TABLE_DNODE_VARIABLES, variablesSchema, tListLen(variablesSchema)}, + 
{TSDB_INS_TABLE_USERS, userUsersSchema, tListLen(userUsersSchema), false}, + {TSDB_INS_TABLE_LICENCES, grantsSchema, tListLen(grantsSchema), true}, + {TSDB_INS_TABLE_VGROUPS, vgroupsSchema, tListLen(vgroupsSchema), true}, + {TSDB_INS_TABLE_CONFIGS, configSchema, tListLen(configSchema), true}, + {TSDB_INS_TABLE_DNODE_VARIABLES, variablesSchema, tListLen(variablesSchema), true}, }; static const SSysDbTableSchema connectionsSchema[] = { - {.name = "conn_id", .bytes = 4, .type = TSDB_DATA_TYPE_UINT}, - {.name = "user", .bytes = TSDB_USER_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "app", .bytes = TSDB_APP_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "pid", .bytes = 4, .type = TSDB_DATA_TYPE_UINT}, - {.name = "end_point", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "login_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "last_access", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "conn_id", .bytes = 4, .type = TSDB_DATA_TYPE_UINT, .sysInfo = false}, + {.name = "user", .bytes = TSDB_USER_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "app", .bytes = TSDB_APP_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "pid", .bytes = 4, .type = TSDB_DATA_TYPE_UINT, .sysInfo = false}, + {.name = "end_point", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "login_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "last_access", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, }; static const SSysDbTableSchema topicSchema[] = { - {.name = "topic_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, + {.name = "topic_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, // TODO config }; static const SSysDbTableSchema consumerSchema[] = { - {.name = "consumer_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "consumer_group", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "client_id", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "status", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "topics", .bytes = TSDB_TOPIC_FNAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, - /*{.name = "end_point", .bytes = TSDB_IPv4ADDR_LEN + 6 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},*/ - {.name = "up_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "subscribe_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "rebalance_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "consumer_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "consumer_group", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = 
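The `infosMeta` entries above gain a fourth field that marks whole system tables as sysinfo-only, with the column count taken from the schema array length via `tListLen`. A simplified stand-in of that registry pattern (names, macros, and types here are illustrative, not the real definitions):

```c
/* Sketch of a system-table registry: each entry records a schema pointer, a
 * column count computed from the array length at compile time, and whether the
 * table is restricted to sysinfo users. Not the TDengine definitions. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define DEMO_LIST_LEN(a) (sizeof(a) / sizeof((a)[0]))

typedef struct { const char *name; int bytes; } DemoCol;
typedef struct { const char *tbName; const DemoCol *schema; size_t colNum; bool sysInfo; } DemoTableMeta;

static const DemoCol demoUsersSchema[]   = {{"name", 24}, {"super", 1}, {"enable", 1}};
static const DemoCol demoVgroupsSchema[] = {{"vgroup_id", 4}, {"db_name", 64}, {"tables", 4}};

static const DemoTableMeta demoInfosMeta[] = {
    {"ins_users",   demoUsersSchema,   DEMO_LIST_LEN(demoUsersSchema),   false},
    {"ins_vgroups", demoVgroupsSchema, DEMO_LIST_LEN(demoVgroupsSchema), true},
};

int main(void) {
  for (size_t i = 0; i < DEMO_LIST_LEN(demoInfosMeta); ++i)
    printf("%s: %zu columns, sysInfo=%d\n", demoInfosMeta[i].tbName,
           demoInfosMeta[i].colNum, demoInfosMeta[i].sysInfo);
  return 0;
}
```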
TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "client_id", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "status", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "topics", .bytes = TSDB_TOPIC_FNAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + /*{.name = "end_point", .bytes = TSDB_IPv4ADDR_LEN + 6 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false},*/ + {.name = "up_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "subscribe_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "rebalance_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, }; static const SSysDbTableSchema subscriptionSchema[] = { - {.name = "topic_name", .bytes = TSDB_TOPIC_FNAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "consumer_group", .bytes = TSDB_CGROUP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "consumer_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, + {.name = "topic_name", .bytes = TSDB_TOPIC_FNAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "consumer_group", .bytes = TSDB_CGROUP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "consumer_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, }; static const SSysDbTableSchema offsetSchema[] = { - {.name = "topic_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "group_id", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY}, - {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "committed_offset", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "current_offset", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "skip_log_cnt", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, + {.name = "topic_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "group_id", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false}, + {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "committed_offset", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "current_offset", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "skip_log_cnt", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, }; static const SSysDbTableSchema querySchema[] = { - {.name = "kill_id", .bytes = TSDB_QUERY_ID_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "query_id", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "conn_id", .bytes = 4, .type = TSDB_DATA_TYPE_UINT}, - {.name = "app", .bytes = TSDB_APP_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "pid", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "user", .bytes = TSDB_USER_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "end_point", .bytes = TSDB_IPv4ADDR_LEN + 6 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "exec_usec", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT}, - {.name = "stable_query", 
.bytes = 1, .type = TSDB_DATA_TYPE_BOOL}, - {.name = "sub_num", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "sub_status", .bytes = TSDB_SHOW_SUBQUERY_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, + {.name = "kill_id", .bytes = TSDB_QUERY_ID_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "query_id", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "conn_id", .bytes = 4, .type = TSDB_DATA_TYPE_UINT, .sysInfo = false}, + {.name = "app", .bytes = TSDB_APP_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "pid", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "user", .bytes = TSDB_USER_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "end_point", .bytes = TSDB_IPv4ADDR_LEN + 6 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "exec_usec", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "stable_query", .bytes = 1, .type = TSDB_DATA_TYPE_BOOL, .sysInfo = false}, + {.name = "sub_num", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "sub_status", .bytes = TSDB_SHOW_SUBQUERY_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, }; static const SSysDbTableSchema appSchema[] = { - {.name = "app_id", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "ip", .bytes = TSDB_IPv4ADDR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "pid", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, - {.name = "name", .bytes = TSDB_APP_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "start_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, - {.name = "insert_req", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "insert_row", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "insert_time", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "insert_bytes", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "fetch_bytes", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "query_time", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "slow_query", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "total_req", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "current_req", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT}, - {.name = "last_access", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, + {.name = "app_id", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "ip", .bytes = TSDB_IPv4ADDR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "pid", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, + {.name = "name", .bytes = TSDB_APP_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, + {.name = "start_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, + {.name = "insert_req", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "insert_row", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "insert_time", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = 
"insert_bytes", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "fetch_bytes", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "query_time", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "slow_query", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "total_req", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "current_req", .bytes = 8, .type = TSDB_DATA_TYPE_UBIGINT, .sysInfo = false}, + {.name = "last_access", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, }; static const SSysTableMeta perfsMeta[] = { - {TSDB_PERFS_TABLE_CONNECTIONS, connectionsSchema, tListLen(connectionsSchema)}, - {TSDB_PERFS_TABLE_QUERIES, querySchema, tListLen(querySchema)}, - {TSDB_PERFS_TABLE_TOPICS, topicSchema, tListLen(topicSchema)}, - {TSDB_PERFS_TABLE_CONSUMERS, consumerSchema, tListLen(consumerSchema)}, - {TSDB_PERFS_TABLE_SUBSCRIPTIONS, subscriptionSchema, tListLen(subscriptionSchema)}, + {TSDB_PERFS_TABLE_CONNECTIONS, connectionsSchema, tListLen(connectionsSchema), false}, + {TSDB_PERFS_TABLE_QUERIES, querySchema, tListLen(querySchema), false}, + {TSDB_PERFS_TABLE_TOPICS, topicSchema, tListLen(topicSchema), false}, + {TSDB_PERFS_TABLE_CONSUMERS, consumerSchema, tListLen(consumerSchema), false}, + {TSDB_PERFS_TABLE_SUBSCRIPTIONS, subscriptionSchema, tListLen(subscriptionSchema), false}, // {TSDB_PERFS_TABLE_OFFSETS, offsetSchema, tListLen(offsetSchema)}, - {TSDB_PERFS_TABLE_TRANS, transSchema, tListLen(transSchema)}, - {TSDB_PERFS_TABLE_SMAS, smaSchema, tListLen(smaSchema)}, - {TSDB_PERFS_TABLE_STREAMS, streamSchema, tListLen(streamSchema)}, - {TSDB_PERFS_TABLE_APPS, appSchema, tListLen(appSchema)}}; + {TSDB_PERFS_TABLE_TRANS, transSchema, tListLen(transSchema), false}, + {TSDB_PERFS_TABLE_SMAS, smaSchema, tListLen(smaSchema), false}, + {TSDB_PERFS_TABLE_STREAMS, streamSchema, tListLen(streamSchema), false}, + {TSDB_PERFS_TABLE_APPS, appSchema, tListLen(appSchema), false}}; +// clang-format on void getInfosDbMeta(const SSysTableMeta** pInfosTableMeta, size_t* size) { if (pInfosTableMeta) { @@ -370,3 +372,26 @@ void getPerfDbMeta(const SSysTableMeta** pPerfsTableMeta, size_t* size) { *size = tListLen(perfsMeta); } } + +void getVisibleInfosTablesNum(bool sysInfo, size_t* size) { + if (sysInfo) { + getInfosDbMeta(NULL, size); + return; + } + *size = 0; + const SSysTableMeta* pMeta = NULL; + size_t totalNum = 0; + getInfosDbMeta(&pMeta, &totalNum); + for (size_t i = 0; i < totalNum; ++i) { + if (!pMeta[i].sysInfo) { + ++(*size); + } + } +} + +bool invisibleColumn(bool sysInfo, int8_t tableType, int8_t flags) { + if (sysInfo || TSDB_SYSTEM_TABLE != tableType) { + return false; + } + return 0 != (flags & COL_IS_SYSINFO); +} diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index a628f551f4..ee9d751555 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -61,6 +61,7 @@ int32_t tsNumOfVnodeStreamThreads = 2; int32_t tsNumOfVnodeFetchThreads = 4; int32_t tsNumOfVnodeWriteThreads = 2; int32_t tsNumOfVnodeSyncThreads = 2; +int32_t tsNumOfVnodeRsmaThreads = 2; int32_t tsNumOfQnodeQueryThreads = 4; int32_t tsNumOfQnodeFetchThreads = 4; int32_t tsNumOfSnodeSharedThreads = 2; @@ -76,7 +77,7 @@ bool tsMonitorComp = false; // telem bool tsEnableTelem = true; -int32_t tsTelemInterval = 86400; +int32_t tsTelemInterval = 43200; char tsTelemServer[TSDB_FQDN_LEN] = "telemetry.taosdata.com"; uint16_t tsTelemPort = 80; @@ -378,6 +379,10 @@ 
static int32_t taosAddServerCfg(SConfig *pCfg) { tsNumOfVnodeSyncThreads = TMAX(tsNumOfVnodeSyncThreads, 16); if (cfgAddInt32(pCfg, "numOfVnodeSyncThreads", tsNumOfVnodeSyncThreads, 1, 1024, 0) != 0) return -1; + tsNumOfVnodeRsmaThreads = tsNumOfCores; + tsNumOfVnodeRsmaThreads = TMAX(tsNumOfVnodeRsmaThreads, 4); + if (cfgAddInt32(pCfg, "numOfVnodeRsmaThreads", tsNumOfVnodeRsmaThreads, 1, 1024, 0) != 0) return -1; + tsNumOfQnodeQueryThreads = tsNumOfCores * 2; tsNumOfQnodeQueryThreads = TMAX(tsNumOfQnodeQueryThreads, 4); if (cfgAddInt32(pCfg, "numOfQnodeQueryThreads", tsNumOfQnodeQueryThreads, 1, 1024, 0) != 0) return -1; @@ -415,6 +420,7 @@ static int32_t taosAddServerCfg(SConfig *pCfg) { if (cfgAddInt32(pCfg, "mqRebalanceInterval", tsMqRebalanceInterval, 1, 10000, 1) != 0) return -1; if (cfgAddInt32(pCfg, "ttlUnit", tsTtlUnit, 1, 86400 * 365, 1) != 0) return -1; if (cfgAddInt32(pCfg, "ttlPushInterval", tsTtlPushInterval, 1, 100000, 1) != 0) return -1; + if (cfgAddInt32(pCfg, "uptimeInterval", tsUptimeInterval, 1, 100000, 1) != 0) return -1; if (cfgAddBool(pCfg, "udf", tsStartUdfd, 0) != 0) return -1; GRANT_CFG_ADD; @@ -539,6 +545,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) { tsNumOfVnodeFetchThreads = cfgGetItem(pCfg, "numOfVnodeFetchThreads")->i32; tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32; tsNumOfVnodeSyncThreads = cfgGetItem(pCfg, "numOfVnodeSyncThreads")->i32; + tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32; tsNumOfQnodeQueryThreads = cfgGetItem(pCfg, "numOfQnodeQueryThreads")->i32; tsNumOfQnodeFetchThreads = cfgGetItem(pCfg, "numOfQnodeFetchThreads")->i32; tsNumOfSnodeSharedThreads = cfgGetItem(pCfg, "numOfSnodeSharedThreads")->i32; @@ -561,6 +568,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) { tsMqRebalanceInterval = cfgGetItem(pCfg, "mqRebalanceInterval")->i32; tsTtlUnit = cfgGetItem(pCfg, "ttlUnit")->i32; tsTtlPushInterval = cfgGetItem(pCfg, "ttlPushInterval")->i32; + tsUptimeInterval = cfgGetItem(pCfg, "uptimeInterval")->i32; tsStartUdfd = cfgGetItem(pCfg, "udf")->bval; @@ -783,6 +791,8 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) { tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32; } else if (strcasecmp("numOfVnodeSyncThreads", name) == 0) { tsNumOfVnodeSyncThreads = cfgGetItem(pCfg, "numOfVnodeSyncThreads")->i32; + } else if (strcasecmp("numOfVnodeRsmaThreads", name) == 0) { + tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32; } else if (strcasecmp("numOfQnodeQueryThreads", name) == 0) { tsNumOfQnodeQueryThreads = cfgGetItem(pCfg, "numOfQnodeQueryThreads")->i32; } else if (strcasecmp("numOfQnodeFetchThreads", name) == 0) { diff --git a/source/common/src/tmsg.c b/source/common/src/tmsg.c index 22dcd6ba2c..8dc4931573 100644 --- a/source/common/src/tmsg.c +++ b/source/common/src/tmsg.c @@ -3196,12 +3196,16 @@ static int32_t tDecodeSTableMetaRsp(SDecoder *pDecoder, STableMetaRsp *pRsp) { if (tDecodeI32(pDecoder, &pRsp->vgId) < 0) return -1; int32_t totalCols = pRsp->numOfTags + pRsp->numOfColumns; - pRsp->pSchemas = taosMemoryMalloc(sizeof(SSchema) * totalCols); - if (pRsp->pSchemas == NULL) return -1; + if (totalCols > 0) { + pRsp->pSchemas = taosMemoryMalloc(sizeof(SSchema) * totalCols); + if (pRsp->pSchemas == NULL) return -1; - for (int32_t i = 0; i < totalCols; ++i) { - SSchema *pSchema = &pRsp->pSchemas[i]; - if (tDecodeSSchema(pDecoder, pSchema) < 0) return -1; + for (int32_t i = 0; i < totalCols; ++i) { + SSchema *pSchema = &pRsp->pSchemas[i]; + if 
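The `numOfVnodeRsmaThreads` option added above follows the same pattern as the other thread-count settings: the default starts from the detected core count, is floored at a minimum, and is registered with a [1, 1024] range before being read back in `taosSetServerCfg`/`taosSetCfg`. A hedged sketch of just the default-and-clamp step, with stand-in macro names rather than the real `TMAX`/cfg API:

```c
/* Illustrative only: derive a worker-pool default from the core count, floor
 * it, then clamp to the range the option is registered with. */
#include <stdint.h>
#include <stdio.h>

#define DEMO_MAX(a, b) ((a) > (b) ? (a) : (b))
#define DEMO_MIN(a, b) ((a) < (b) ? (a) : (b))

static int32_t demoDefaultRsmaThreads(int32_t numOfCores) {
  int32_t n = numOfCores;                 /* start from the detected core count */
  n = DEMO_MAX(n, 4);                     /* never fewer than 4 workers */
  n = DEMO_MIN(DEMO_MAX(n, 1), 1024);     /* keep inside the registered [1, 1024] range */
  return n;
}

int main(void) {
  printf("%d\n", demoDefaultRsmaThreads(2));  /* 4  */
  printf("%d\n", demoDefaultRsmaThreads(64)); /* 64 */
  return 0;
}
```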
(tDecodeSSchema(pDecoder, pSchema) < 0) return -1; + } + } else { + pRsp->pSchemas = NULL; } return 0; @@ -3326,7 +3330,7 @@ int32_t tDeserializeSSTbHbRsp(void *buf, int32_t bufLen, SSTbHbRsp *pRsp) { return 0; } -void tFreeSTableMetaRsp(STableMetaRsp *pRsp) { taosMemoryFreeClear(pRsp->pSchemas); } +void tFreeSTableMetaRsp(void *pRsp) { taosMemoryFreeClear(((STableMetaRsp*)pRsp)->pSchemas); } void tFreeSTableIndexRsp(void *info) { if (NULL == info) { @@ -3630,6 +3634,7 @@ int32_t tSerializeSConnectRsp(void *buf, int32_t bufLen, SConnectRsp *pRsp) { if (tEncodeU32(&encoder, pRsp->connId) < 0) return -1; if (tEncodeI32(&encoder, pRsp->dnodeNum) < 0) return -1; if (tEncodeI8(&encoder, pRsp->superUser) < 0) return -1; + if (tEncodeI8(&encoder, pRsp->sysInfo) < 0) return -1; if (tEncodeI8(&encoder, pRsp->connType) < 0) return -1; if (tEncodeSEpSet(&encoder, &pRsp->epSet) < 0) return -1; if (tEncodeI32(&encoder, pRsp->svrTimestamp) < 0) return -1; @@ -3652,6 +3657,7 @@ int32_t tDeserializeSConnectRsp(void *buf, int32_t bufLen, SConnectRsp *pRsp) { if (tDecodeU32(&decoder, &pRsp->connId) < 0) return -1; if (tDecodeI32(&decoder, &pRsp->dnodeNum) < 0) return -1; if (tDecodeI8(&decoder, &pRsp->superUser) < 0) return -1; + if (tDecodeI8(&decoder, &pRsp->sysInfo) < 0) return -1; if (tDecodeI8(&decoder, &pRsp->connType) < 0) return -1; if (tDecodeSEpSet(&decoder, &pRsp->epSet) < 0) return -1; if (tDecodeI32(&decoder, &pRsp->svrTimestamp) < 0) return -1; @@ -5090,6 +5096,10 @@ int tEncodeSVCreateTbRsp(SEncoder *pCoder, const SVCreateTbRsp *pRsp) { if (tStartEncode(pCoder) < 0) return -1; if (tEncodeI32(pCoder, pRsp->code) < 0) return -1; + if (tEncodeI32(pCoder, pRsp->pMeta ? 1 : 0) < 0) return -1; + if (pRsp->pMeta) { + if (tEncodeSTableMetaRsp(pCoder, pRsp->pMeta) < 0) return -1; + } tEndEncode(pCoder); return 0; @@ -5100,10 +5110,32 @@ int tDecodeSVCreateTbRsp(SDecoder *pCoder, SVCreateTbRsp *pRsp) { if (tDecodeI32(pCoder, &pRsp->code) < 0) return -1; + int32_t meta = 0; + if (tDecodeI32(pCoder, &meta) < 0) return -1; + if (meta) { + pRsp->pMeta = taosMemoryCalloc(1, sizeof(STableMetaRsp)); + if (NULL == pRsp->pMeta) return -1; + if (tDecodeSTableMetaRsp(pCoder, pRsp->pMeta) < 0) return -1; + } else { + pRsp->pMeta = NULL; + } + tEndDecode(pCoder); return 0; } +void tFreeSVCreateTbRsp(void* param) { + if (NULL == param) { + return; + } + + SVCreateTbRsp* pRsp = (SVCreateTbRsp*)param; + if (pRsp->pMeta) { + taosMemoryFree(pRsp->pMeta->pSchemas); + taosMemoryFree(pRsp->pMeta); + } +} + // TDMT_VND_DROP_TABLE ================= static int32_t tEncodeSVDropTbReq(SEncoder *pCoder, const SVDropTbReq *pReq) { if (tStartEncode(pCoder) < 0) return -1; @@ -5292,6 +5324,10 @@ static int32_t tEncodeSSubmitBlkRsp(SEncoder *pEncoder, const SSubmitBlkRsp *pBl if (tEncodeI32v(pEncoder, pBlock->numOfRows) < 0) return -1; if (tEncodeI32v(pEncoder, pBlock->affectedRows) < 0) return -1; if (tEncodeI64v(pEncoder, pBlock->sver) < 0) return -1; + if (tEncodeI32(pEncoder, pBlock->pMeta ? 
1 : 0) < 0) return -1; + if (pBlock->pMeta) { + if (tEncodeSTableMetaRsp(pEncoder, pBlock->pMeta) < 0) return -1; + } tEndEncode(pEncoder); return 0; @@ -5309,6 +5345,16 @@ static int32_t tDecodeSSubmitBlkRsp(SDecoder *pDecoder, SSubmitBlkRsp *pBlock) { if (tDecodeI32v(pDecoder, &pBlock->numOfRows) < 0) return -1; if (tDecodeI32v(pDecoder, &pBlock->affectedRows) < 0) return -1; if (tDecodeI64v(pDecoder, &pBlock->sver) < 0) return -1; + + int32_t meta = 0; + if (tDecodeI32(pDecoder, &meta) < 0) return -1; + if (meta) { + pBlock->pMeta = taosMemoryCalloc(1, sizeof(STableMetaRsp)); + if (NULL == pBlock->pMeta) return -1; + if (tDecodeSTableMetaRsp(pDecoder, pBlock->pMeta) < 0) return -1; + } else { + pBlock->pMeta = NULL; + } tEndDecode(pDecoder); return 0; @@ -5347,6 +5393,21 @@ int32_t tDecodeSSubmitRsp(SDecoder *pDecoder, SSubmitRsp *pRsp) { return 0; } +void tFreeSSubmitBlkRsp(void* param) { + if (NULL == param) { + return; + } + + SSubmitBlkRsp* pRsp = (SSubmitBlkRsp*)param; + + taosMemoryFree(pRsp->tblFName); + if (pRsp->pMeta) { + taosMemoryFree(pRsp->pMeta->pSchemas); + taosMemoryFree(pRsp->pMeta); + } +} + + void tFreeSSubmitRsp(SSubmitRsp *pRsp) { if (NULL == pRsp) return; @@ -5558,6 +5619,60 @@ void tFreeSMAlterStbRsp(SMAlterStbRsp *pRsp) { } } + +int32_t tEncodeSMCreateStbRsp(SEncoder *pEncoder, const SMCreateStbRsp *pRsp) { + if (tStartEncode(pEncoder) < 0) return -1; + if (tEncodeI32(pEncoder, pRsp->pMeta->pSchemas ? 1 : 0) < 0) return -1; + if (pRsp->pMeta->pSchemas) { + if (tEncodeSTableMetaRsp(pEncoder, pRsp->pMeta) < 0) return -1; + } + tEndEncode(pEncoder); + return 0; +} + +int32_t tDecodeSMCreateStbRsp(SDecoder *pDecoder, SMCreateStbRsp *pRsp) { + int32_t meta = 0; + if (tStartDecode(pDecoder) < 0) return -1; + if (tDecodeI32(pDecoder, &meta) < 0) return -1; + if (meta) { + pRsp->pMeta = taosMemoryCalloc(1, sizeof(STableMetaRsp)); + if (NULL == pRsp->pMeta) return -1; + if (tDecodeSTableMetaRsp(pDecoder, pRsp->pMeta) < 0) return -1; + } + tEndDecode(pDecoder); + return 0; +} + +int32_t tDeserializeSMCreateStbRsp(void *buf, int32_t bufLen, SMCreateStbRsp *pRsp) { + int32_t meta = 0; + SDecoder decoder = {0}; + tDecoderInit(&decoder, buf, bufLen); + + if (tStartDecode(&decoder) < 0) return -1; + if (tDecodeI32(&decoder, &meta) < 0) return -1; + if (meta) { + pRsp->pMeta = taosMemoryCalloc(1, sizeof(STableMetaRsp)); + if (NULL == pRsp->pMeta) return -1; + if (tDecodeSTableMetaRsp(&decoder, pRsp->pMeta) < 0) return -1; + } + tEndDecode(&decoder); + tDecoderClear(&decoder); + return 0; +} + +void tFreeSMCreateStbRsp(SMCreateStbRsp *pRsp) { + if (NULL == pRsp) { + return; + } + + if (pRsp->pMeta) { + taosMemoryFree(pRsp->pMeta->pSchemas); + taosMemoryFree(pRsp->pMeta); + } +} + + + int32_t tEncodeSTqOffsetVal(SEncoder *pEncoder, const STqOffsetVal *pOffsetVal) { if (tEncodeI8(pEncoder, pOffsetVal->type) < 0) return -1; if (pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_DATA || pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_META) { diff --git a/source/dnode/mnode/impl/inc/mndInfoSchema.h b/source/dnode/mnode/impl/inc/mndInfoSchema.h index b10d92ee3d..4f98465cd1 100644 --- a/source/dnode/mnode/impl/inc/mndInfoSchema.h +++ b/source/dnode/mnode/impl/inc/mndInfoSchema.h @@ -24,7 +24,8 @@ extern "C" { int32_t mndInitInfos(SMnode *pMnode); void mndCleanupInfos(SMnode *pMnode); -int32_t mndBuildInsTableSchema(SMnode *pMnode, const char *dbFName, const char *tbName, STableMetaRsp *pRsp); +int32_t mndBuildInsTableSchema(SMnode *pMnode, const char *dbFName, const char *tbName, bool sysinfo, + 
STableMetaRsp *pRsp); int32_t mndBuildInsTableCfg(SMnode *pMnode, const char *dbFName, const char *tbName, STableCfgRsp *pRsp); #ifdef __cplusplus diff --git a/source/dnode/mnode/impl/inc/mndStb.h b/source/dnode/mnode/impl/inc/mndStb.h index 010199a89f..8f0d55e100 100644 --- a/source/dnode/mnode/impl/inc/mndStb.h +++ b/source/dnode/mnode/impl/inc/mndStb.h @@ -35,6 +35,7 @@ SDbObj *mndAcquireDbByStb(SMnode *pMnode, const char *stbName); int32_t mndBuildStbFromReq(SMnode *pMnode, SStbObj *pDst, SMCreateStbReq *pCreate, SDbObj *pDb); int32_t mndAddStbToTrans(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SStbObj *pStb); void mndFreeStb(SStbObj *pStb); +int32_t mndBuildSMCreateStbRsp(SMnode *pMnode, char* dbFName, char* stbFName, void **pCont, int32_t *pLen); void mndExtractDbNameFromStbFullName(const char *stbFullName, char *dst); void mndExtractTbNameFromStbFullName(const char *stbFullName, char *dst, int32_t dstSize); diff --git a/source/dnode/mnode/impl/src/mndDb.c b/source/dnode/mnode/impl/src/mndDb.c index 853ace79fd..8c1c3ba873 100644 --- a/source/dnode/mnode/impl/src/mndDb.c +++ b/source/dnode/mnode/impl/src/mndDb.c @@ -1731,7 +1731,7 @@ static int32_t mndRetrieveDbs(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBloc SDbObj infoschemaDb = {0}; setInformationSchemaDbCfg(&infoschemaDb); size_t numOfTables = 0; - getInfosDbMeta(NULL, &numOfTables); + getVisibleInfosTablesNum(sysinfo, &numOfTables); mndDumpDbInfoData(pMnode, pBlock, &infoschemaDb, pShow, numOfRows, numOfTables, true, 0, 1); numOfRows += 1; diff --git a/source/dnode/mnode/impl/src/mndInfoSchema.c b/source/dnode/mnode/impl/src/mndInfoSchema.c index bf33cf603f..09172115f8 100644 --- a/source/dnode/mnode/impl/src/mndInfoSchema.c +++ b/source/dnode/mnode/impl/src/mndInfoSchema.c @@ -14,8 +14,8 @@ */ #define _DEFAULT_SOURCE -#include "systable.h" #include "mndInt.h" +#include "systable.h" static int32_t mndInitInfosTableSchema(const SSysDbTableSchema *pSrc, int32_t colNum, SSchema **pDst) { SSchema *schema = taosMemoryCalloc(colNum, sizeof(SSchema)); @@ -29,6 +29,9 @@ static int32_t mndInitInfosTableSchema(const SSysDbTableSchema *pSrc, int32_t co schema[i].type = pSrc[i].type; schema[i].colId = i + 1; schema[i].bytes = pSrc[i].bytes; + if (pSrc[i].sysInfo) { + schema[i].flags |= COL_IS_SYSINFO; + } } *pDst = schema; @@ -43,13 +46,14 @@ static int32_t mndInsInitMeta(SHashObj *hash) { meta.sversion = 1; meta.tversion = 1; - size_t size = 0; - const SSysTableMeta* pInfosTableMeta = NULL; + size_t size = 0; + const SSysTableMeta *pInfosTableMeta = NULL; getInfosDbMeta(&pInfosTableMeta, &size); for (int32_t i = 0; i < size; ++i) { tstrncpy(meta.tbName, pInfosTableMeta[i].name, sizeof(meta.tbName)); meta.numOfColumns = pInfosTableMeta[i].colNum; + meta.sysInfo = pInfosTableMeta[i].sysInfo; if (mndInitInfosTableSchema(pInfosTableMeta[i].schema, pInfosTableMeta[i].colNum, &meta.pSchemas)) { return -1; @@ -64,14 +68,15 @@ static int32_t mndInsInitMeta(SHashObj *hash) { return 0; } -int32_t mndBuildInsTableSchema(SMnode *pMnode, const char *dbFName, const char *tbName, STableMetaRsp *pRsp) { +int32_t mndBuildInsTableSchema(SMnode *pMnode, const char *dbFName, const char *tbName, bool sysinfo, + STableMetaRsp *pRsp) { if (NULL == pMnode->infosMeta) { terrno = TSDB_CODE_APP_NOT_READY; return -1; } STableMetaRsp *pMeta = taosHashGet(pMnode->infosMeta, tbName, strlen(tbName)); - if (NULL == pMeta) { + if (NULL == pMeta || (!sysinfo && pMeta->sysInfo)) { mError("invalid information schema table name:%s", tbName); terrno = 
TSDB_CODE_MND_INVALID_SYS_TABLENAME; return -1; @@ -121,7 +126,6 @@ int32_t mndBuildInsTableCfg(SMnode *pMnode, const char *dbFName, const char *tbN return 0; } - int32_t mndInitInfos(SMnode *pMnode) { pMnode->infosMeta = taosHashInit(20, taosGetDefaultHashFunction(TSDB_DATA_TYPE_VARCHAR), false, HASH_NO_LOCK); if (pMnode->infosMeta == NULL) { diff --git a/source/dnode/mnode/impl/src/mndMain.c b/source/dnode/mnode/impl/src/mndMain.c index 65a539bc90..2221718023 100644 --- a/source/dnode/mnode/impl/src/mndMain.c +++ b/source/dnode/mnode/impl/src/mndMain.c @@ -132,7 +132,7 @@ static void *mndThreadFp(void *param) { mndCalMqRebalance(pMnode); } - if (lastTime % (tsTelemInterval * 10) == 1) { + if (lastTime % (tsTelemInterval * 10) == ((tsTelemInterval - 1) * 10)) { mndPullupTelem(pMnode); } diff --git a/source/dnode/mnode/impl/src/mndProfile.c b/source/dnode/mnode/impl/src/mndProfile.c index e55c562e38..e8737e30c9 100644 --- a/source/dnode/mnode/impl/src/mndProfile.c +++ b/source/dnode/mnode/impl/src/mndProfile.c @@ -270,6 +270,7 @@ static int32_t mndProcessConnectReq(SRpcMsg *pReq) { SConnectRsp connectRsp = {0}; connectRsp.acctId = pUser->acctId; connectRsp.superUser = pUser->superUser; + connectRsp.sysInfo = pUser->sysInfo; connectRsp.clusterId = pMnode->clusterId; connectRsp.connId = pConn->id; connectRsp.connType = connReq.connType; diff --git a/source/dnode/mnode/impl/src/mndStb.c b/source/dnode/mnode/impl/src/mndStb.c index fa7d0b4856..dc8285740a 100644 --- a/source/dnode/mnode/impl/src/mndStb.c +++ b/source/dnode/mnode/impl/src/mndStb.c @@ -1774,6 +1774,67 @@ static int32_t mndBuildSMAlterStbRsp(SDbObj *pDb, SStbObj *pObj, void **pCont, i return 0; } +int32_t mndBuildSMCreateStbRsp(SMnode *pMnode, char* dbFName, char* stbFName, void **pCont, int32_t *pLen) { + int32_t ret = -1; + SDbObj *pDb = mndAcquireDb(pMnode, dbFName); + if (NULL == pDb) { + return -1; + } + + SStbObj *pObj = mndAcquireStb(pMnode, stbFName); + if (NULL == pObj) { + goto _OVER; + } + + SEncoder ec = {0}; + uint32_t contLen = 0; + SMCreateStbRsp stbRsp = {0}; + SName name = {0}; + tNameFromString(&name, pObj->name, T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE); + + stbRsp.pMeta = taosMemoryCalloc(1, sizeof(STableMetaRsp)); + if (NULL == stbRsp.pMeta) { + terrno = TSDB_CODE_OUT_OF_MEMORY; + goto _OVER; + } + + ret = mndBuildStbSchemaImp(pDb, pObj, name.tname, stbRsp.pMeta); + if (ret) { + tFreeSMCreateStbRsp(&stbRsp); + goto _OVER; + } + + tEncodeSize(tEncodeSMCreateStbRsp, &stbRsp, contLen, ret); + if (ret) { + tFreeSMCreateStbRsp(&stbRsp); + goto _OVER; + } + + void *cont = taosMemoryMalloc(contLen); + tEncoderInit(&ec, cont, contLen); + tEncodeSMCreateStbRsp(&ec, &stbRsp); + tEncoderClear(&ec); + + tFreeSMCreateStbRsp(&stbRsp); + + *pCont = cont; + *pLen = contLen; + + ret = 0; + +_OVER: + if (pObj) { + mndReleaseStb(pMnode, pObj); + } + + if (pDb) { + mndReleaseDb(pMnode, pDb); + } + + return ret; +} + + static int32_t mndAlterStbImp(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbObj *pStb, bool needRsp, void *alterOriData, int32_t alterOriDataLen) { int32_t code = -1; @@ -2157,6 +2218,10 @@ static int32_t mndProcessTableMetaReq(SRpcMsg *pReq) { STableInfoReq infoReq = {0}; STableMetaRsp metaRsp = {0}; + SUserObj *pUser = mndAcquireUser(pMnode, pReq->info.conn.user); + if (pUser == NULL) return 0; + bool sysinfo = pUser->sysInfo; + if (tDeserializeSTableInfoReq(pReq->pCont, pReq->contLen, &infoReq) != 0) { terrno = TSDB_CODE_INVALID_MSG; goto _OVER; @@ -2164,7 +2229,7 @@ static int32_t mndProcessTableMetaReq(SRpcMsg 
*pReq) { if (0 == strcmp(infoReq.dbFName, TSDB_INFORMATION_SCHEMA_DB)) { mDebug("information_schema table:%s.%s, start to retrieve meta", infoReq.dbFName, infoReq.tbName); - if (mndBuildInsTableSchema(pMnode, infoReq.dbFName, infoReq.tbName, &metaRsp) != 0) { + if (mndBuildInsTableSchema(pMnode, infoReq.dbFName, infoReq.tbName, sysinfo, &metaRsp) != 0) { goto _OVER; } } else if (0 == strcmp(infoReq.dbFName, TSDB_PERFORMANCE_SCHEMA_DB)) { @@ -2203,6 +2268,7 @@ _OVER: mError("stb:%s.%s, failed to retrieve meta since %s", infoReq.dbFName, infoReq.tbName, terrstr()); } + mndReleaseUser(pMnode, pUser); tFreeSTableMetaRsp(&metaRsp); return code; } diff --git a/source/dnode/mnode/impl/src/mndTelem.c b/source/dnode/mnode/impl/src/mndTelem.c index 27814fe5be..93f7531a27 100644 --- a/source/dnode/mnode/impl/src/mndTelem.c +++ b/source/dnode/mnode/impl/src/mndTelem.c @@ -131,7 +131,9 @@ static int32_t mndProcessTelemTimer(SRpcMsg* pReq) { char* pCont = mndBuildTelemetryReport(pMnode); if (pCont != NULL) { if (taosSendHttpReport(tsTelemServer, tsTelemPort, pCont, strlen(pCont), HTTP_FLAT) != 0) { - mError("failed to send telemetry msg"); + mError("failed to send telemetry report"); + } else { + mTrace("succeed to send telemetry report"); } taosMemoryFree(pCont); } diff --git a/source/dnode/mnode/impl/src/mndTrans.c b/source/dnode/mnode/impl/src/mndTrans.c index 17b4336465..1d8d62e534 100644 --- a/source/dnode/mnode/impl/src/mndTrans.c +++ b/source/dnode/mnode/impl/src/mndTrans.c @@ -17,6 +17,7 @@ #include "mndTrans.h" #include "mndConsumer.h" #include "mndDb.h" +#include "mndStb.h" #include "mndPrivilege.h" #include "mndShow.h" #include "mndSync.h" @@ -900,15 +901,6 @@ static void mndTransSendRpcRsp(SMnode *pMnode, STrans *pTrans) { } SRpcMsg rspMsg = {.code = code, .info = *pInfo}; - if (pTrans->rpcRspLen != 0) { - void *rpcCont = rpcMallocCont(pTrans->rpcRspLen); - if (rpcCont != NULL) { - memcpy(rpcCont, pTrans->rpcRsp, pTrans->rpcRspLen); - rspMsg.pCont = rpcCont; - rspMsg.contLen = pTrans->rpcRspLen; - } - } - if (pTrans->originRpcType == TDMT_MND_CREATE_DB) { mDebug("trans:%d, origin msgtype:%s", pTrans->id, TMSG_INFO(pTrans->originRpcType)); SDbObj *pDb = mndAcquireDb(pMnode, pTrans->dbname1); @@ -924,6 +916,21 @@ static void mndTransSendRpcRsp(SMnode *pMnode, STrans *pTrans) { } } mndReleaseDb(pMnode, pDb); + } else if (pTrans->originRpcType == TDMT_MND_CREATE_STB) { + void *pCont = NULL; + int32_t contLen = 0; + if (0 == mndBuildSMCreateStbRsp(pMnode, pTrans->dbname1, pTrans->dbname2, &pCont, &contLen) != 0) { + mndTransSetRpcRsp(pTrans, pCont, contLen); + } + } + + if (pTrans->rpcRspLen != 0) { + void *rpcCont = rpcMallocCont(pTrans->rpcRspLen); + if (rpcCont != NULL) { + memcpy(rpcCont, pTrans->rpcRsp, pTrans->rpcRspLen); + rspMsg.pCont = rpcCont; + rspMsg.contLen = pTrans->rpcRspLen; + } } tmsgSendRsp(&rspMsg); @@ -1308,7 +1315,7 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) { if (pTrans->policy == TRN_POLICY_ROLLBACK) { if (pTrans->lastAction != 0) { STransAction *pAction = taosArrayGet(pTrans->redoActions, pTrans->lastAction); - if (pAction->retryCode != 0 && pAction->retryCode != pAction->errCode) { + if (pAction->retryCode != 0 && pAction->retryCode == pAction->errCode) { if (pTrans->failedTimes < 6) { mError("trans:%d, stage keep on redoAction since action:%d code:0x%x not 0x%x, failedTimes:%d", pTrans->id, pTrans->lastAction, pTrans->code, pAction->retryCode, pTrans->failedTimes); diff --git a/source/dnode/vnode/src/inc/sma.h 
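The new `mndBuildSMCreateStbRsp` above follows the repository's usual acquire/goto-cleanup shape: acquire the database and stable objects, build and encode the response, and release whatever was acquired at a single `_OVER:` label regardless of which step failed. A generic sketch of that single-exit cleanup pattern; the resource types and functions below are hypothetical stand-ins, not the mndStb.c APIs:

```c
/* Sketch of single-exit cleanup via goto. */
#include <stdlib.h>
#include <string.h>

typedef struct { char name[32]; } DemoObj;

static DemoObj *demoAcquire(const char *name) {
  DemoObj *p = calloc(1, sizeof(DemoObj));
  if (p) strncpy(p->name, name, sizeof(p->name) - 1);
  return p;
}
static void demoRelease(DemoObj *p) { free(p); }

static int demoBuildRsp(const char *dbName, const char *stbName, void **pCont, int *pLen) {
  int      ret  = -1;
  DemoObj *pDb  = demoAcquire(dbName);
  DemoObj *pStb = NULL;
  if (!pDb) return -1;

  pStb = demoAcquire(stbName);
  if (!pStb) goto _OVER;                  /* bail out; pDb is still released below */

  *pLen  = (int)(strlen(stbName) + 1);
  *pCont = malloc((size_t)*pLen);
  if (!*pCont) goto _OVER;
  memcpy(*pCont, stbName, (size_t)*pLen);
  ret = 0;

_OVER:                                    /* one exit path releases everything acquired */
  if (pStb) demoRelease(pStb);
  if (pDb) demoRelease(pDb);
  return ret;
}

int main(void) {
  void *cont = NULL;
  int   len  = 0;
  if (demoBuildRsp("db1", "stb1", &cont, &len) == 0) free(cont);
  return 0;
}
```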
b/source/dnode/vnode/src/inc/sma.h index ca77042bb2..abfffc045f 100644 --- a/source/dnode/vnode/src/inc/sma.h +++ b/source/dnode/vnode/src/inc/sma.h @@ -33,7 +33,6 @@ extern "C" { // clang-format on #define RSMA_TASK_INFO_HASH_SLOT (8) -#define RSMA_EXECUTOR_MAX (1) typedef struct SSmaEnv SSmaEnv; typedef struct SSmaStat SSmaStat; @@ -49,9 +48,12 @@ typedef struct SQTaskFWriter SQTaskFWriter; struct SSmaEnv { SRWLatch lock; int8_t type; + int8_t flag; // 0x01 inClose SSmaStat *pStat; }; +#define SMA_ENV_FLG_CLOSE ((int8_t)0x1) + typedef struct { int8_t inited; int32_t rsetId; @@ -93,7 +95,6 @@ struct SRSmaStat { int64_t refId; // shared by fetch tasks volatile int64_t nBufItems; // number of items in queue buffer SRWLatch lock; // r/w lock for rsma fs(e.g. qtaskinfo) - volatile int8_t nExecutor; // [1, max(half of query threads, 4)] int8_t triggerStat; // shared by fetch tasks int8_t commitStat; // 0 not in committing, 1 in committing SArray *aTaskFile; // qTaskFiles committed recently(for recovery/snapshot r/w) @@ -107,6 +108,7 @@ struct SSmaStat { SRSmaStat rsmaStat; // rollup sma }; T_REF_DECLARE() + char data[]; }; #define SMA_STAT_TSMA(s) (&(s)->tsmaStat) diff --git a/source/dnode/vnode/src/inc/vnodeInt.h b/source/dnode/vnode/src/inc/vnodeInt.h index 39c5f3873e..9b252df58b 100644 --- a/source/dnode/vnode/src/inc/vnodeInt.h +++ b/source/dnode/vnode/src/inc/vnodeInt.h @@ -102,7 +102,7 @@ int metaCommit(SMeta* pMeta); int metaCreateSTable(SMeta* pMeta, int64_t version, SVCreateStbReq* pReq); int metaAlterSTable(SMeta* pMeta, int64_t version, SVCreateStbReq* pReq); int metaDropSTable(SMeta* pMeta, int64_t verison, SVDropStbReq* pReq, SArray* tbUidList); -int metaCreateTable(SMeta* pMeta, int64_t version, SVCreateTbReq* pReq); +int metaCreateTable(SMeta* pMeta, int64_t version, SVCreateTbReq* pReq, STableMetaRsp **pMetaRsp); int metaDropTable(SMeta* pMeta, int64_t version, SVDropTbReq* pReq, SArray* tbUids); int metaTtlDropTable(SMeta* pMeta, int64_t ttl, SArray* tbUids); int metaAlterTable(SMeta* pMeta, int64_t version, SVAlterTbReq* pReq, STableMetaRsp* pMetaRsp); @@ -189,7 +189,6 @@ SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchem int32_t smaInit(); void smaCleanUp(); int32_t smaOpen(SVnode* pVnode); -int32_t smaPreClose(SSma* pSma); int32_t smaClose(SSma* pSma); int32_t smaBegin(SSma* pSma); int32_t smaSyncPreCommit(SSma* pSma); @@ -199,7 +198,6 @@ int32_t smaAsyncPreCommit(SSma* pSma); int32_t smaAsyncCommit(SSma* pSma); int32_t smaAsyncPostCommit(SSma* pSma); int32_t smaDoRetention(SSma* pSma, int64_t now); -int32_t smaProcessExec(SSma* pSma, void* pMsg); int32_t tdProcessTSmaCreate(SSma* pSma, int64_t version, const char* msg); int32_t tdProcessTSmaInsert(SSma* pSma, int64_t indexUid, const char* msg); @@ -323,7 +321,6 @@ struct SVnode { TdThreadMutex lock; bool blocked; bool restored; - bool inClose; tsem_t syncSem; SQHandle* pQuery; }; diff --git a/source/dnode/vnode/src/meta/metaTable.c b/source/dnode/vnode/src/meta/metaTable.c index cf9b9d1205..21a5d9a000 100644 --- a/source/dnode/vnode/src/meta/metaTable.c +++ b/source/dnode/vnode/src/meta/metaTable.c @@ -369,7 +369,7 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) { return 0; } -int metaCreateTable(SMeta *pMeta, int64_t version, SVCreateTbReq *pReq) { +int metaCreateTable(SMeta *pMeta, int64_t version, SVCreateTbReq *pReq, STableMetaRsp **pMetaRsp) { SMetaEntry me = {0}; SMetaReader mr = {0}; @@ -448,6 +448,21 @@ int metaCreateTable(SMeta *pMeta, int64_t version, 
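The sma.h hunk above appends a trailing `char data[]` member to SSmaStat; the later allocation sizes the struct for one `TdThread` handle per rsma worker so the thread array lives in the same allocation as the stat object. A sketch of that flexible-array-member idiom with stand-in types (the `int64_t` counter below is only there to keep the tail 8-byte aligned in this toy example):

```c
/* Sketch of co-allocating a worker-thread array behind a struct via a
 * flexible array member; sizes and types are illustrative. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
  int64_t refCount;
  char    data[];         /* flexible array member: storage for N pthread_t handles */
} DemoStat;

static DemoStat *demoStatNew(int nThreads) {
  /* one allocation holds the struct plus room for the thread array */
  return calloc(1, sizeof(DemoStat) + sizeof(pthread_t) * (size_t)nThreads);
}

int main(void) {
  int       n    = 4;
  DemoStat *stat = demoStatNew(n);
  if (!stat) return 1;
  pthread_t *threads = (pthread_t *)stat->data;   /* view the tail as the thread array */
  printf("allocated %zu bytes for %d handles\n",
         sizeof(DemoStat) + sizeof(pthread_t) * (size_t)n, n);
  (void)threads;
  free(stat);
  return 0;
}
```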
SVCreateTbReq *pReq) { if (metaHandleEntry(pMeta, &me) < 0) goto _err; + if (pMetaRsp) { + *pMetaRsp = taosMemoryCalloc(1, sizeof(STableMetaRsp)); + + if (*pMetaRsp) { + if (me.type == TSDB_CHILD_TABLE) { + (*pMetaRsp)->tableType = TSDB_CHILD_TABLE; + (*pMetaRsp)->tuid = pReq->uid; + (*pMetaRsp)->suid = pReq->ctb.suid; + strcpy((*pMetaRsp)->tbName, pReq->name); + } else { + metaUpdateMetaRsp(pReq->uid, pReq->name, &pReq->ntb.schemaRow, *pMetaRsp); + } + } + } + metaDebug("vgId:%d, table:%s uid %" PRId64 " is created, type:%" PRId8, TD_VID(pMeta->pVnode), pReq->name, pReq->uid, pReq->type); return 0; diff --git a/source/dnode/vnode/src/sma/smaEnv.c b/source/dnode/vnode/src/sma/smaEnv.c index e3b83f9955..32a419022a 100644 --- a/source/dnode/vnode/src/sma/smaEnv.c +++ b/source/dnode/vnode/src/sma/smaEnv.c @@ -23,11 +23,13 @@ extern SSmaMgmt smaMgmt; // declaration of static functions -static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pSma); -static SSmaEnv *tdNewSmaEnv(const SSma *pSma, int8_t smaType, const char *path); -static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, const char *path, SSmaEnv **pEnv); -static void *tdFreeTSmaStat(STSmaStat *pStat); -static void tdDestroyRSmaStat(void *pRSmaStat); +static int32_t tdNewSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv); +static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv); +static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pSma); +static int32_t tdRsmaStartExecutor(const SSma *pSma); +static int32_t tdRsmaStopExecutor(const SSma *pSma); +static void *tdFreeTSmaStat(STSmaStat *pStat); +static void tdDestroyRSmaStat(void *pRSmaStat); /** * @brief rsma init @@ -97,35 +99,42 @@ void smaCleanUp() { } } -static SSmaEnv *tdNewSmaEnv(const SSma *pSma, int8_t smaType, const char *path) { +static int32_t tdNewSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv) { SSmaEnv *pEnv = NULL; pEnv = (SSmaEnv *)taosMemoryCalloc(1, sizeof(SSmaEnv)); + *ppEnv = pEnv; if (!pEnv) { terrno = TSDB_CODE_OUT_OF_MEMORY; - return NULL; + return TSDB_CODE_FAILED; } SMA_ENV_TYPE(pEnv) = smaType; taosInitRWLatch(&(pEnv->lock)); + (smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_store_ptr(&SMA_TSMA_ENV(pSma), *ppEnv) + : atomic_store_ptr(&SMA_RSMA_ENV(pSma), *ppEnv); + if (tdInitSmaStat(&SMA_ENV_STAT(pEnv), smaType, pSma) != TSDB_CODE_SUCCESS) { tdFreeSmaEnv(pEnv); - return NULL; + *ppEnv = NULL; + (smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_store_ptr(&SMA_TSMA_ENV(pSma), NULL) + : atomic_store_ptr(&SMA_RSMA_ENV(pSma), NULL); + return TSDB_CODE_FAILED; } - return pEnv; + return TSDB_CODE_SUCCESS; } -static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, const char *path, SSmaEnv **pEnv) { - if (!pEnv) { +static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv) { + if (!ppEnv) { terrno = TSDB_CODE_INVALID_PTR; return TSDB_CODE_FAILED; } - if (!(*pEnv)) { - if (!(*pEnv = tdNewSmaEnv(pSma, smaType, path))) { + if (!(*ppEnv)) { + if (tdNewSmaEnv(pSma, smaType, ppEnv) != TSDB_CODE_SUCCESS) { return TSDB_CODE_FAILED; } } @@ -199,7 +208,7 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS * tdInitSmaStat invoked in other multithread environment later. 
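`metaCreateTable` above gains an `STableMetaRsp **pMetaRsp` argument and fills it only when the caller passes a non-NULL slot, so callers that do not need the created table's meta pay nothing. A sketch of that optional out-parameter pattern; the types and field names below are stand-ins:

```c
/* Sketch of the optional out-parameter pattern: the callee allocates and fills
 * a response struct only when the caller asks for it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { long uid; char tbName[64]; } DemoMetaRsp;

static int demoCreateTable(const char *name, long uid, DemoMetaRsp **ppRsp) {
  /* ... create the table itself here ... */

  if (ppRsp) {                                   /* caller asked for the meta back */
    *ppRsp = calloc(1, sizeof(DemoMetaRsp));
    if (*ppRsp) {
      (*ppRsp)->uid = uid;
      strncpy((*ppRsp)->tbName, name, sizeof((*ppRsp)->tbName) - 1);
    }
  }
  return 0;
}

int main(void) {
  DemoMetaRsp *rsp = NULL;
  demoCreateTable("d0", 1001, &rsp);             /* caller wants the meta */
  if (rsp) { printf("%s uid=%ld\n", rsp->tbName, rsp->uid); free(rsp); }
  demoCreateTable("d1", 1002, NULL);             /* caller does not */
  return 0;
}
```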
*/ if (!(*pSmaStat)) { - *pSmaStat = (SSmaStat *)taosMemoryCalloc(1, sizeof(SSmaStat)); + *pSmaStat = (SSmaStat *)taosMemoryCalloc(1, sizeof(SSmaStat) + sizeof(TdThread) * tsNumOfVnodeRsmaThreads); if (!(*pSmaStat)) { terrno = TSDB_CODE_OUT_OF_MEMORY; return TSDB_CODE_FAILED; @@ -231,6 +240,10 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS if (!RSMA_INFO_HASH(pRSmaStat)) { return TSDB_CODE_FAILED; } + + if (tdRsmaStartExecutor(pSma) < 0) { + return TSDB_CODE_FAILED; + } } else if (smaType == TSDB_SMA_TYPE_TIME_RANGE) { // TODO } else { @@ -291,6 +304,9 @@ static void tdDestroyRSmaStat(void *pRSmaStat) { } } + // step 4: + tdRsmaStopExecutor(pSma); + // step 5: free pStat taosMemoryFreeClear(pStat); } @@ -381,17 +397,70 @@ int32_t tdCheckAndInitSmaEnv(SSma *pSma, int8_t smaType) { pEnv = (smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_load_ptr(&SMA_TSMA_ENV(pSma)) : atomic_load_ptr(&SMA_RSMA_ENV(pSma)); if (!pEnv) { - char rname[TSDB_FILENAME_LEN] = {0}; - - if (tdInitSmaEnv(pSma, smaType, rname, &pEnv) < 0) { + if (tdInitSmaEnv(pSma, smaType, &pEnv) < 0) { tdUnLockSma(pSma); return TSDB_CODE_FAILED; } - - (smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_store_ptr(&SMA_TSMA_ENV(pSma), pEnv) - : atomic_store_ptr(&SMA_RSMA_ENV(pSma), pEnv); } tdUnLockSma(pSma); return TSDB_CODE_SUCCESS; }; + +void *tdRSmaExecutorFunc(void *param) { + setThreadName("vnode-rsma"); + + tdRSmaProcessExecImpl((SSma *)param, RSMA_EXEC_OVERFLOW); + return NULL; +} + +static int32_t tdRsmaStartExecutor(const SSma *pSma) { + TdThreadAttr thAttr = {0}; + taosThreadAttrInit(&thAttr); + taosThreadAttrSetDetachState(&thAttr, PTHREAD_CREATE_JOINABLE); + + SSmaEnv *pEnv = SMA_RSMA_ENV(pSma); + SSmaStat *pStat = SMA_ENV_STAT(pEnv); + TdThread *pthread = (TdThread *)&pStat->data; + + for (int32_t i = 0; i < tsNumOfVnodeRsmaThreads; ++i) { + if (taosThreadCreate(&pthread[i], &thAttr, tdRSmaExecutorFunc, (void *)pSma) != 0) { + terrno = TAOS_SYSTEM_ERROR(errno); + smaError("vgId:%d, failed to create pthread for rsma since %s", SMA_VID(pSma), terrstr()); + return -1; + } + smaDebug("vgId:%d, success to create pthread for rsma", SMA_VID(pSma)); + } + + taosThreadAttrDestroy(&thAttr); + return 0; +} + +static int32_t tdRsmaStopExecutor(const SSma *pSma) { + if (pSma && VND_IS_RSMA(pSma->pVnode)) { + SSmaEnv *pEnv = NULL; + SSmaStat *pStat = NULL; + SRSmaStat *pRSmaStat = NULL; + TdThread *pthread = NULL; + + if (!(pEnv = SMA_RSMA_ENV(pSma)) || !(pStat = SMA_ENV_STAT(pEnv))) { + return 0; + } + + pEnv->flag |= SMA_ENV_FLG_CLOSE; + pRSmaStat = (SRSmaStat *)pStat; + pthread = (TdThread *)&pStat->data; + + for (int32_t i = 0; i < tsNumOfVnodeRsmaThreads; ++i) { + tsem_post(&(pRSmaStat->notEmpty)); + } + + for (int32_t i = 0; i < tsNumOfVnodeRsmaThreads; ++i) { + if (taosCheckPthreadValid(pthread[i])) { + smaDebug("vgId:%d, start to join pthread for rsma:%" PRId64, SMA_VID(pSma), pthread[i]); + taosThreadJoin(pthread[i], NULL); + } + } + } + return 0; +} \ No newline at end of file diff --git a/source/dnode/vnode/src/sma/smaOpen.c b/source/dnode/vnode/src/sma/smaOpen.c index e2710b26e3..235fb1f941 100644 --- a/source/dnode/vnode/src/sma/smaOpen.c +++ b/source/dnode/vnode/src/sma/smaOpen.c @@ -146,20 +146,6 @@ int32_t smaClose(SSma *pSma) { return 0; } -int32_t smaPreClose(SSma *pSma) { - if (pSma && VND_IS_RSMA(pSma->pVnode)) { - SSmaEnv *pEnv = NULL; - SRSmaStat *pStat = NULL; - if (!(pEnv = SMA_RSMA_ENV(pSma)) || !(pStat = (SRSmaStat *)SMA_ENV_STAT(pEnv))) { - return 0; - } - for (int32_t i = 0; i < 
RSMA_EXECUTOR_MAX; ++i) { - tsem_post(&(pStat->notEmpty)); - } - } - return 0; -} - /** * @brief rsma env restore * diff --git a/source/dnode/vnode/src/sma/smaRollup.c b/source/dnode/vnode/src/sma/smaRollup.c index 448b8ab508..426ab521fd 100644 --- a/source/dnode/vnode/src/sma/smaRollup.c +++ b/source/dnode/vnode/src/sma/smaRollup.c @@ -621,7 +621,7 @@ static int32_t tdFetchSubmitReqSuids(SSubmitReq *pMsg, STbUidStore *pStore) { */ int32_t smaDoRetention(SSma *pSma, int64_t now) { int32_t code = TSDB_CODE_SUCCESS; - if (VND_IS_RSMA(pSma->pVnode)) { + if (!VND_IS_RSMA(pSma->pVnode)) { return code; } @@ -734,10 +734,12 @@ static int32_t tdExecuteRSmaImplAsync(SSma *pSma, const void *pMsg, int32_t inpu SRSmaStat *pRSmaStat = SMA_RSMA_STAT(pSma); - tsem_post(&(pRSmaStat->notEmpty)); - int64_t nItems = atomic_fetch_add_64(&pRSmaStat->nBufItems, 1); + if (atomic_load_8(&pInfo->assigned) == 0) { + tsem_post(&(pRSmaStat->notEmpty)); + } + // smoothing consume int32_t n = nItems / RSMA_QTASKEXEC_SMOOTH_SIZE; if (n > 1) { @@ -911,39 +913,6 @@ static int32_t tdExecuteRSmaAsync(SSma *pSma, const void *pMsg, int32_t inputTyp return TSDB_CODE_SUCCESS; } -static int32_t tdRSmaExecCheck(SSma *pSma) { - SRSmaStat *pRSmaStat = SMA_RSMA_STAT(pSma); - - if (atomic_load_8(&pRSmaStat->nExecutor) >= TMIN(RSMA_EXECUTOR_MAX, tsNumOfVnodeQueryThreads / 2)) { - return TSDB_CODE_SUCCESS; - } - - SRSmaExecMsg fetchMsg; - int32_t contLen = sizeof(SMsgHead); - void *pBuf = rpcMallocCont(0 + contLen); - - ((SMsgHead *)pBuf)->vgId = SMA_VID(pSma); - ((SMsgHead *)pBuf)->contLen = sizeof(SMsgHead); - - SRpcMsg rpcMsg = { - .code = 0, - .msgType = TDMT_VND_EXEC_RSMA, - .pCont = pBuf, - .contLen = contLen, - }; - - if ((terrno = tmsgPutToQueue(&pSma->pVnode->msgCb, QUERY_QUEUE, &rpcMsg)) != 0) { - smaError("vgId:%d, failed to put rsma exec msg into query-queue since %s", SMA_VID(pSma), terrstr()); - goto _err; - } - - smaDebug("vgId:%d, success to put rsma fetch msg into query-queue", SMA_VID(pSma)); - - return TSDB_CODE_SUCCESS; -_err: - return TSDB_CODE_FAILED; -} - int32_t tdProcessRSmaSubmit(SSma *pSma, void *pMsg, int32_t inputType) { SSmaEnv *pEnv = SMA_RSMA_ENV(pSma); if (!pEnv) { @@ -974,10 +943,6 @@ int32_t tdProcessRSmaSubmit(SSma *pSma, void *pMsg, int32_t inputType) { goto _err; } } - - if (tdRSmaExecCheck(pSma) < 0) { - goto _err; - } } } tdUidStoreDestory(&uidStore); @@ -1563,7 +1528,9 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) { ASSERT(qItem->level == pItem->level); ASSERT(qItem->fetchLevel == pItem->fetchLevel); #endif - tsem_post(&(pStat->notEmpty)); + if (atomic_load_8(&pRSmaInfo->assigned) == 0) { + tsem_post(&(pStat->notEmpty)); + } smaInfo("vgId:%d, rsma fetch task planned for level:%" PRIi8 " suid:%" PRIi64, SMA_VID(pSma), pItem->level, pRSmaInfo->suid); } break; @@ -1591,9 +1558,11 @@ _end: } static void tdFreeRSmaSubmitItems(SArray *pItems) { + ASSERT(taosArrayGetSize(pItems) > 0); for (int32_t i = 0; i < taosArrayGetSize(pItems); ++i) { taosFreeQitem(*(void **)taosArrayGet(pItems, i)); } + taosArrayClear(pItems); } /** @@ -1703,6 +1672,7 @@ _err: * @param type * @return int32_t */ + int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type) { SVnode *pVnode = pSma->pVnode; SSmaEnv *pEnv = SMA_RSMA_ENV(pSma); @@ -1722,41 +1692,53 @@ int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type) { goto _err; } - bool isBusy = false; while (true) { - isBusy = false; // step 1: rsma exec - consume data in buffer queue for all suids if (type == RSMA_EXEC_OVERFLOW || type == 
RSMA_EXEC_COMMIT) { - void *pIter = taosHashIterate(infoHash, NULL); // infoHash has r/w lock - while (pIter) { + void *pIter = NULL; + while ((pIter = taosHashIterate(infoHash, pIter))) { SRSmaInfo *pInfo = *(SRSmaInfo **)pIter; - int64_t itemSize = 0; - if ((itemSize = taosQueueItemSize(pInfo->queue)) || RSMA_INFO_ITEM(pInfo, 0)->fetchLevel || - RSMA_INFO_ITEM(pInfo, 1)->fetchLevel) { - smaDebug("vgId:%d, queueItemSize is %" PRIi64 " execType:%" PRIi8, SMA_VID(pSma), itemSize, type); - if (atomic_val_compare_exchange_8(&pInfo->assigned, 0, 1) == 0) { - taosReadAllQitems(pInfo->queue, pInfo->qall); // queue has mutex lock - int32_t qallItemSize = taosQallItemSize(pInfo->qall); - if (qallItemSize > 0) { - tdRSmaBatchExec(pSma, pInfo, pInfo->qall, pSubmitArr, type); + if (atomic_val_compare_exchange_8(&pInfo->assigned, 0, 1) == 0) { + if ((taosQueueItemSize(pInfo->queue) > 0) || RSMA_INFO_ITEM(pInfo, 0)->fetchLevel || + RSMA_INFO_ITEM(pInfo, 1)->fetchLevel) { + int32_t batchCnt = -1; + int32_t batchMax = taosHashGetSize(infoHash) / tsNumOfVnodeRsmaThreads; + bool occupied = (batchMax <= 1); + if (batchMax > 1) { + batchMax = 100 / batchMax; } + while (occupied || (++batchCnt < batchMax)) { // greedy mode + taosReadAllQitems(pInfo->queue, pInfo->qall); // queue has mutex lock + int32_t qallItemSize = taosQallItemSize(pInfo->qall); + if (qallItemSize > 0) { + tdRSmaBatchExec(pSma, pInfo, pInfo->qall, pSubmitArr, type); + smaDebug("vgId:%d, batchSize:%d, execType:%" PRIi8, SMA_VID(pSma), qallItemSize, type); + } - if (type == RSMA_EXEC_OVERFLOW) { - tdRSmaFetchAllResult(pSma, pInfo, pSubmitArr); - } + if (type == RSMA_EXEC_OVERFLOW) { + tdRSmaFetchAllResult(pSma, pInfo, pSubmitArr); + } - if (qallItemSize > 0) { - // subtract the item size after the task finished, commit should wait for all items be consumed - atomic_fetch_sub_64(&pRSmaStat->nBufItems, qallItemSize); - isBusy = true; + if (qallItemSize > 0) { + atomic_fetch_sub_64(&pRSmaStat->nBufItems, qallItemSize); + continue; + } else if (RSMA_INFO_ITEM(pInfo, 0)->fetchLevel || RSMA_INFO_ITEM(pInfo, 1)->fetchLevel) { + continue; + } + + break; } - ASSERT(1 == atomic_val_compare_exchange_8(&pInfo->assigned, 1, 0)); } + ASSERT(1 == atomic_val_compare_exchange_8(&pInfo->assigned, 1, 0)); } - pIter = taosHashIterate(infoHash, pIter); } if (type == RSMA_EXEC_COMMIT) { - break; + if (atomic_load_64(&pRSmaStat->nBufItems) <= 0) { + break; + } else { + // commit should wait for all items be consumed + continue; + } } } #if 0 @@ -1790,16 +1772,19 @@ int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type) { } if (atomic_load_64(&pRSmaStat->nBufItems) <= 0) { - if (pVnode->inClose) { + if (pEnv->flag & SMA_ENV_FLG_CLOSE) { break; } + tsem_wait(&pRSmaStat->notEmpty); - if (pVnode->inClose && (atomic_load_64(&pRSmaStat->nBufItems) <= 0)) { - smaInfo("vgId:%d, exec task end, inClose:%d, nBufItems:%" PRIi64, SMA_VID(pSma), pVnode->inClose, + + if ((pEnv->flag & SMA_ENV_FLG_CLOSE) && (atomic_load_64(&pRSmaStat->nBufItems) <= 0)) { + smaInfo("vgId:%d, exec task end, flag:%" PRIi8 ", nBufItems:%" PRIi64, SMA_VID(pSma), pEnv->flag, atomic_load_64(&pRSmaStat->nBufItems)); break; } } + } // end of while(true) _end: @@ -1809,39 +1794,3 @@ _err: taosArrayDestroy(pSubmitArr); return TSDB_CODE_FAILED; } - -/** - * @brief exec rsma level 1data, fetch result of level 2/3 and submit - * - * @param pSma - * @param pMsg - * @return int32_t - */ -int32_t smaProcessExec(SSma *pSma, void *pMsg) { - SRpcMsg *pRpcMsg = (SRpcMsg *)pMsg; - SRSmaStat *pRSmaStat = 
SMA_RSMA_STAT(pSma); - - if (!pRpcMsg || pRpcMsg->contLen < sizeof(SMsgHead)) { - terrno = TSDB_CODE_RSMA_FETCH_MSG_MSSED_UP; - goto _err; - } - smaDebug("vgId:%d, begin to process rsma exec msg by TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId()); - - int8_t nOld = atomic_fetch_add_8(&pRSmaStat->nExecutor, 1); - - if (nOld < TMIN(RSMA_EXECUTOR_MAX, tsNumOfVnodeQueryThreads / 2)) { - if (tdRSmaProcessExecImpl(pSma, RSMA_EXEC_OVERFLOW) < 0) { - goto _err; - } - } else { - atomic_fetch_sub_8(&pRSmaStat->nExecutor, 1); - } - - smaDebug("vgId:%d, success to process rsma exec msg by TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId()); - return TSDB_CODE_SUCCESS; -_err: - atomic_fetch_sub_8(&pRSmaStat->nExecutor, 1); - smaError("vgId:%d, failed to process rsma exec msg by TID:%p since %s", SMA_VID(pSma), (void *)taosGetSelfPthreadId(), - terrstr()); - return TSDB_CODE_FAILED; -} diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c index 3eba7db8d9..26db68a1d4 100644 --- a/source/dnode/vnode/src/tq/tq.c +++ b/source/dnode/vnode/src/tq/tq.c @@ -674,27 +674,33 @@ int32_t tqExpandTask(STQ* pTq, SStreamTask* pTask) { // expand executor if (pTask->taskLevel == TASK_LEVEL__SOURCE) { + pTask->pState = streamStateOpen(pTq->pStreamMeta->path, pTask); + if (pTask->pState == NULL) { + return -1; + } + SReadHandle handle = { .meta = pTq->pVnode->pMeta, .vnode = pTq->pVnode, .initTqReader = 1, + .pStateBackend = pTask->pState, }; pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &handle); ASSERT(pTask->exec.executor); } else if (pTask->taskLevel == TASK_LEVEL__AGG) { + pTask->pState = streamStateOpen(pTq->pStreamMeta->path, pTask); + if (pTask->pState == NULL) { + return -1; + } SReadHandle mgHandle = { .vnode = NULL, .numOfVgroups = (int32_t)taosArrayGetSize(pTask->childEpInfo), + .pStateBackend = pTask->pState, }; pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &mgHandle); ASSERT(pTask->exec.executor); } - pTask->pState = streamStateOpen(pTq->pStreamMeta->path, pTask); - if (pTask->pState == NULL) { - return -1; - } - // sink /*pTask->ahandle = pTq->pVnode;*/ if (pTask->outputType == TASK_OUTPUT__SMA) { diff --git a/source/dnode/vnode/src/tsdb/tsdbCommit.c b/source/dnode/vnode/src/tsdb/tsdbCommit.c index 020f3b0bc6..04a6de8472 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCommit.c +++ b/source/dnode/vnode/src/tsdb/tsdbCommit.c @@ -835,6 +835,9 @@ static int32_t tsdbMergeCommitLast(SCommitter *pCommitter, STbDataIter *pIter) { // set block data schema if need if (pBlockData->suid == 0 && pBlockData->uid == 0) { + code = tsdbCommitterUpdateTableSchema(pCommitter, pTbData->suid, pTbData->uid); + if (code) goto _err; + code = tBlockDataInit(pBlockData, pTbData->suid, pTbData->suid ? 
0 : pTbData->uid, pCommitter->skmTable.pTSchema); if (code) goto _err; @@ -963,7 +966,20 @@ static int32_t tsdbCommitTableData(SCommitter *pCommitter, STbData *pTbData) { pRow = NULL; } - if (pRow == NULL) goto _exit; + if (pRow == NULL) { + if (pCommitter->dReader.pBlockIdx && tTABLEIDCmprFn(pCommitter->dReader.pBlockIdx, pTbData) == 0) { + SBlockIdx blockIdx = {.suid = pTbData->suid, .uid = pTbData->uid}; + code = tsdbWriteBlock(pCommitter->dWriter.pWriter, &pCommitter->dReader.mBlock, &blockIdx); + if (code) goto _err; + + if (taosArrayPush(pCommitter->dWriter.aBlockIdx, &blockIdx) == NULL) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + } + + goto _exit; + } int32_t iBlock = 0; SBlock block; diff --git a/source/dnode/vnode/src/tsdb/tsdbRead.c b/source/dnode/vnode/src/tsdb/tsdbRead.c index f5a6a5a0d9..4265f1b9b6 100644 --- a/source/dnode/vnode/src/tsdb/tsdbRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbRead.c @@ -131,6 +131,7 @@ typedef struct SFileBlockDumpInfo { typedef struct SReaderStatus { bool loadFromFile; // check file stage + bool composedDataBlock; // the returned data block is a composed block or not SHashObj* pTableMap; // SHash STableBlockScanInfo* pTableIter; // table iterator used in building in-memory buffer data blocks. SFileBlockDumpInfo fBlockDumpInfo; @@ -138,7 +139,6 @@ typedef struct SReaderStatus { SBlockData fileBlockData; SFilesetIter fileIter; SDataBlockIter blockIter; - bool composedDataBlock; // the returned data block is a composed block or not } SReaderStatus; struct STsdbReader { @@ -166,7 +166,7 @@ struct STsdbReader { static SFileDataBlockInfo* getCurrentBlockInfo(SDataBlockIter* pBlockIter); static int buildDataBlockFromBufImpl(STableBlockScanInfo* pBlockScanInfo, int64_t endKey, int32_t capacity, STsdbReader* pReader); -static TSDBROW* getValidRow(SIterInfo* pIter, const SArray* pDelList, STsdbReader* pReader); +static TSDBROW* getValidMemRow(SIterInfo* pIter, const SArray* pDelList, STsdbReader* pReader); static int32_t doMergeRowsInFileBlocks(SBlockData* pBlockData, STableBlockScanInfo* pScanInfo, STsdbReader* pReader, SRowMerger* pMerger); static int32_t doMergeRowsInLastBlock(SLastBlockReader* pLastBlockReader, STableBlockScanInfo* pScanInfo, int64_t ts, SRowMerger* pMerger); @@ -513,86 +513,6 @@ _end: return code; } -// void tsdbResetQueryHandleForNewTable(STsdbReader* queryHandle, SQueryTableDataCond* pCond, STableListInfo* tableList, -// int32_t tWinIdx) { -// STsdbReader* pTsdbReadHandle = queryHandle; - -// pTsdbReadHandle->order = pCond->order; -// pTsdbReadHandle->window = pCond->twindows[tWinIdx]; -// pTsdbReadHandle->type = TSDB_QUERY_TYPE_ALL; -// pTsdbReadHandle->cur.fid = -1; -// pTsdbReadHandle->cur.win = TSWINDOW_INITIALIZER; -// pTsdbReadHandle->checkFiles = true; -// pTsdbReadHandle->activeIndex = 0; // current active table index -// pTsdbReadHandle->locateStart = false; -// pTsdbReadHandle->loadExternalRow = pCond->loadExternalRows; - -// if (ASCENDING_TRAVERSE(pCond->order)) { -// assert(pTsdbReadHandle->window.skey <= pTsdbReadHandle->window.ekey); -// } else { -// assert(pTsdbReadHandle->window.skey >= pTsdbReadHandle->window.ekey); -// } - -// // allocate buffer in order to load data blocks from file -// memset(pTsdbReadHandle->suppInfo.pstatis, 0, sizeof(SColumnDataAgg)); -// memset(pTsdbReadHandle->suppInfo.plist, 0, POINTER_BYTES); - -// tsdbInitDataBlockLoadInfo(&pTsdbReadHandle->dataBlockLoadInfo); -// tsdbInitCompBlockLoadInfo(&pTsdbReadHandle->compBlockLoadInfo); - -// SArray* pTable = NULL; -// // STsdbMeta* pMeta 
= tsdbGetMeta(pTsdbReadHandle->pTsdb); - -// // pTsdbReadHandle->pTableCheckInfo = destroyTableCheckInfo(pTsdbReadHandle->pTableCheckInfo); - -// pTsdbReadHandle->pTableCheckInfo = NULL; // createDataBlockScanInfo(pTsdbReadHandle, groupList, pMeta, -// // &pTable); -// if (pTsdbReadHandle->pTableCheckInfo == NULL) { -// // tsdbReaderClose(pTsdbReadHandle); -// terrno = TSDB_CODE_TDB_OUT_OF_MEMORY; -// } - -// // pTsdbReadHandle->prev = doFreeColumnInfoData(pTsdbReadHandle->prev); -// // pTsdbReadHandle->next = doFreeColumnInfoData(pTsdbReadHandle->next); -// } - -// SArray* tsdbGetQueriedTableList(STsdbReader** pHandle) { -// assert(pHandle != NULL); - -// STsdbReader* pTsdbReadHandle = (STsdbReader*)pHandle; - -// size_t size = taosArrayGetSize(pTsdbReadHandle->pTableCheckInfo); -// SArray* res = taosArrayInit(size, POINTER_BYTES); -// return res; -// } - -// static int32_t binarySearchForBlock(SBlock* pBlock, int32_t numOfBlocks, TSKEY skey, int32_t order) { -// int32_t firstSlot = 0; -// int32_t lastSlot = numOfBlocks - 1; - -// int32_t midSlot = firstSlot; - -// while (1) { -// numOfBlocks = lastSlot - firstSlot + 1; -// midSlot = (firstSlot + (numOfBlocks >> 1)); - -// if (numOfBlocks == 1) break; - -// if (skey > pBlock[midSlot].maxKey.ts) { -// if (numOfBlocks == 2) break; -// if ((order == TSDB_ORDER_DESC) && (skey < pBlock[midSlot + 1].minKey.ts)) break; -// firstSlot = midSlot + 1; -// } else if (skey < pBlock[midSlot].minKey.ts) { -// if ((order == TSDB_ORDER_ASC) && (skey > pBlock[midSlot - 1].maxKey.ts)) break; -// lastSlot = midSlot - 1; -// } else { -// break; // got the slot -// } -// } - -// return midSlot; -// } - static int32_t doLoadBlockIndex(STsdbReader* pReader, SDataFReader* pFileReader, SArray* pIndexList) { SArray* aBlockIdx = taosArrayInit(8, sizeof(SBlockIdx)); @@ -861,71 +781,32 @@ static int32_t copyBlockDataToSDataBlock(STsdbReader* pReader, STableBlockScanIn static int32_t doLoadFileBlockData(STsdbReader* pReader, SDataBlockIter* pBlockIter, SBlockData* pBlockData) { int64_t st = taosGetTimestampUs(); - double elapsedTime = 0; - int32_t code = 0; SFileDataBlockInfo* pBlockInfo = getCurrentBlockInfo(pBlockIter); SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo; + ASSERT(pBlockInfo != NULL); - if (pBlockInfo != NULL) { - SBlock* pBlock = getCurrentBlock(pBlockIter); - code = tsdbReadDataBlock(pReader->pFileReader, pBlock, pBlockData); - if (code != TSDB_CODE_SUCCESS) { - tsdbError("%p error occurs in loading file block, global index:%d, table index:%d, brange:%" PRId64 "-%" PRId64 - ", rows:%d, code:%s %s", - pReader, pBlockIter->index, pBlockInfo->tbBlockIdx, pBlock->minKey.ts, pBlock->maxKey.ts, pBlock->nRow, - tstrerror(code), pReader->idStr); - goto _error; - } - - elapsedTime = (taosGetTimestampUs() - st) / 1000.0; - - tsdbDebug("%p load file block into buffer, global index:%d, table index:%d, brange:%" PRId64 "-%" PRId64 - ", rows:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", elapsed time:%.2f ms, %s", + SBlock* pBlock = getCurrentBlock(pBlockIter); + int32_t code = tsdbReadDataBlock(pReader->pFileReader, pBlock, pBlockData); + if (code != TSDB_CODE_SUCCESS) { + tsdbError("%p error occurs in loading file block, global index:%d, table index:%d, brange:%" PRId64 "-%" PRId64 + ", rows:%d, code:%s %s", pReader, pBlockIter->index, pBlockInfo->tbBlockIdx, pBlock->minKey.ts, pBlock->maxKey.ts, pBlock->nRow, - pBlock->minVer, pBlock->maxVer, elapsedTime, pReader->idStr); - } else { -#if 0 - SLastBlockReader* pLastBlockReader = 
pReader->status.fileIter.pLastBlockReader; - - uint64_t uid = pBlockInfo->uid; - SArray* pBlocks = pLastBlockReader->pBlockL; - - pLastBlockReader->currentBlockIndex = -1; - - // find the correct SBlockL - for(int32_t i = 0; i < taosArrayGetSize(pBlocks); ++i) { - SBlockL* pBlock = taosArrayGet(pBlocks, i); - if (pBlock->minUid >= uid && pBlock->maxUid <= uid) { - pLastBlockReader->currentBlockIndex = i; - break; - } - } - -// SBlockL* pBlockL = taosArrayGet(pLastBlockReader->pBlockL, *index); - code = tsdbReadLastBlock(pReader->pFileReader, pBlockL, pBlockData); - if (code != TSDB_CODE_SUCCESS) { - tsdbDebug("%p error occurs in loading last block into buffer, last block index:%d, total:%d brange:%" PRId64 "-%" PRId64 - ", rows:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", code:%s %s", - pReader, *index, pBlockIter->numOfBlocks.numOfLastBlocks, 0, 0, pBlockL->nRow, - pBlockL->minVer, pBlockL->maxVer, tstrerror(code), pReader->idStr); - goto _error; - } - - tsdbDebug("%p load last file block into buffer, last block index:%d, total:%d brange:%" PRId64 "-%" PRId64 - ", rows:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", elapsed time:%.2f ms, %s", - pReader, *index, pBlockIter->numOfBlocks.numOfLastBlocks, 0, 0, pBlockL->nRow, - pBlockL->minVer, pBlockL->maxVer, elapsedTime, pReader->idStr); -#endif + tstrerror(code), pReader->idStr); + return code; } + double elapsedTime = (taosGetTimestampUs() - st) / 1000.0; + + tsdbDebug("%p load file block into buffer, global index:%d, index in table block list:%d, brange:%" PRId64 "-%" PRId64 + ", rows:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", elapsed time:%.2f ms, %s", + pReader, pBlockIter->index, pBlockInfo->tbBlockIdx, pBlock->minKey.ts, pBlock->maxKey.ts, pBlock->nRow, + pBlock->minVer, pBlock->maxVer, elapsedTime, pReader->idStr); + pReader->cost.blockLoadTime += elapsedTime; pDumpInfo->allDumped = false; return TSDB_CODE_SUCCESS; - -_error: - return code; } static void cleanupBlockOrderSupporter(SBlockOrderSupporter* pSup) { @@ -979,10 +860,10 @@ static int32_t fileDataBlockOrderCompar(const void* pLeft, const void* pRight, v } static int32_t doSetCurrentBlock(SDataBlockIter* pBlockIter) { - SFileDataBlockInfo* pFBlock = getCurrentBlockInfo(pBlockIter); - if (pFBlock != NULL) { - STableBlockScanInfo* pScanInfo = taosHashGet(pBlockIter->pTableMap, &pFBlock->uid, sizeof(pFBlock->uid)); - int32_t* mapDataIndex = taosArrayGet(pScanInfo->pBlockList, pFBlock->tbBlockIdx); + SFileDataBlockInfo* pBlockInfo = getCurrentBlockInfo(pBlockIter); + if (pBlockInfo != NULL) { + STableBlockScanInfo* pScanInfo = taosHashGet(pBlockIter->pTableMap, &pBlockInfo->uid, sizeof(pBlockInfo->uid)); + int32_t* mapDataIndex = taosArrayGet(pScanInfo->pBlockList, pBlockInfo->tbBlockIdx); tMapDataGetItemByIdx(&pScanInfo->mapData, *mapDataIndex, &pBlockIter->block, tGetBlock); } @@ -1396,7 +1277,7 @@ static FORCE_INLINE STSchema* doGetSchemaForTSRow(int32_t sversion, STsdbReader* return pReader->pMemSchema; } -static int32_t doMergeBufAndFileRows_Rv(STsdbReader* pReader, STableBlockScanInfo* pBlockScanInfo, TSDBROW* pRow, +static int32_t doMergeBufAndFileRows(STsdbReader* pReader, STableBlockScanInfo* pBlockScanInfo, TSDBROW* pRow, SIterInfo* pIter, int64_t key, SLastBlockReader* pLastBlockReader) { SRowMerger merge = {0}; STSRow* pTSRow = NULL; @@ -1512,6 +1393,33 @@ static int32_t doMergeBufAndFileRows_Rv(STsdbReader* pReader, STableBlockScanInf return TSDB_CODE_SUCCESS; } +static int32_t doMergeFileBlockAndLastBlock(SLastBlockReader* pLastBlockReader, STsdbReader* pReader, + 
STableBlockScanInfo* pBlockScanInfo, SBlockData* pBlockData, + bool mergeBlockData) { + SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData; + int64_t tsLastBlock = getCurrentKeyInLastBlock(pLastBlockReader); + + STSRow* pTSRow = NULL; + SRowMerger merge = {0}; + + TSDBROW fRow = tsdbRowFromBlockData(pLastBlockData, *pLastBlockReader->rowIndex); + + tRowMergerInit(&merge, &fRow, pReader->pSchema); + doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, tsLastBlock, &merge); + + // merge with block data if ts == key + if (mergeBlockData) { + doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge); + } + + tRowMergerGetRow(&merge, &pTSRow); + doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid); + + taosMemoryFree(pTSRow); + tRowMergerClear(&merge); + return TSDB_CODE_SUCCESS; +} + static int32_t mergeFileBlockAndLastBlock(STsdbReader* pReader, SLastBlockReader* pLastBlockReader, int64_t key, STableBlockScanInfo* pBlockScanInfo, SBlockData* pBlockData) { SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo; @@ -1549,55 +1457,23 @@ static int32_t mergeFileBlockAndLastBlock(STsdbReader* pReader, SLastBlockReader return TSDB_CODE_SUCCESS; } } else { // desc order - SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData; - TSDBROW fRow1 = tsdbRowFromBlockData(pLastBlockData, *pLastBlockReader->rowIndex); - - STSRow* pTSRow = NULL; - SRowMerger merge = {0}; - tRowMergerInit(&merge, &fRow1, pReader->pSchema); - doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge); - - if (ts == key) { - doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge); - } - - tRowMergerGetRow(&merge, &pTSRow); - doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid); - - taosMemoryFree(pTSRow); - tRowMergerClear(&merge); - return TSDB_CODE_SUCCESS; + return doMergeFileBlockAndLastBlock(pLastBlockReader, pReader, pBlockScanInfo, pBlockData, true); } } else { // only last block exists - SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData; - int64_t tsLastBlock = getCurrentKeyInLastBlock(pLastBlockReader); - - STSRow* pTSRow = NULL; - SRowMerger merge = {0}; - - TSDBROW fRow = tsdbRowFromBlockData(pLastBlockData, *pLastBlockReader->rowIndex); - - tRowMergerInit(&merge, &fRow, pReader->pSchema); - doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, tsLastBlock, &merge); - tRowMergerGetRow(&merge, &pTSRow); - - doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid); - - taosMemoryFree(pTSRow); - tRowMergerClear(&merge); - return TSDB_CODE_SUCCESS; + return doMergeFileBlockAndLastBlock(pLastBlockReader, pReader, pBlockScanInfo, NULL, false); } } -static int32_t doMergeMultiLevelRowsRv(STsdbReader* pReader, STableBlockScanInfo* pBlockScanInfo, SBlockData* pBlockData, SLastBlockReader* pLastBlockReader) { +static int32_t doMergeMultiLevelRows(STsdbReader* pReader, STableBlockScanInfo* pBlockScanInfo, SBlockData* pBlockData, + SLastBlockReader* pLastBlockReader) { SRowMerger merge = {0}; STSRow* pTSRow = NULL; SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo; SArray* pDelList = pBlockScanInfo->delSkyline; - TSDBROW* pRow = getValidRow(&pBlockScanInfo->iter, pDelList, pReader); - TSDBROW* piRow = getValidRow(&pBlockScanInfo->iiter, pDelList, pReader); + TSDBROW* pRow = getValidMemRow(&pBlockScanInfo->iter, pDelList, pReader); + TSDBROW* piRow = getValidMemRow(&pBlockScanInfo->iiter, pDelList, pReader); ASSERT(pRow != NULL && piRow != NULL); SBlockData* 
pLastBlockData = &pLastBlockReader->lastBlockData; @@ -1611,7 +1487,7 @@ static int32_t doMergeMultiLevelRowsRv(STsdbReader* pReader, STableBlockScanInfo TSDBKEY k = TSDBROW_KEY(pRow); TSDBKEY ik = TSDBROW_KEY(piRow); - int64_t minKey = 0;//INT64_MAX; + int64_t minKey = 0; if (ASCENDING_TRAVERSE(pReader->order)) { minKey = INT64_MAX; // let's find the minimum if (minKey > k.ts) { @@ -1748,8 +1624,8 @@ static int32_t doMergeThreeLevelRows(STsdbReader* pReader, STableBlockScanInfo* SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo; SArray* pDelList = pBlockScanInfo->delSkyline; - TSDBROW* pRow = getValidRow(&pBlockScanInfo->iter, pDelList, pReader); - TSDBROW* piRow = getValidRow(&pBlockScanInfo->iiter, pDelList, pReader); + TSDBROW* pRow = getValidMemRow(&pBlockScanInfo->iter, pDelList, pReader); + TSDBROW* piRow = getValidMemRow(&pBlockScanInfo->iiter, pDelList, pReader); ASSERT(pRow != NULL && piRow != NULL); int64_t key = pBlockData->aTSKEY[pDumpInfo->rowIndex]; @@ -2024,20 +1900,20 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo; int64_t key = (pBlockData->nRow > 0)? pBlockData->aTSKEY[pDumpInfo->rowIndex]:INT64_MIN; - TSDBROW* pRow = getValidRow(&pBlockScanInfo->iter, pBlockScanInfo->delSkyline, pReader); - TSDBROW* piRow = getValidRow(&pBlockScanInfo->iiter, pBlockScanInfo->delSkyline, pReader); + TSDBROW* pRow = getValidMemRow(&pBlockScanInfo->iter, pBlockScanInfo->delSkyline, pReader); + TSDBROW* piRow = getValidMemRow(&pBlockScanInfo->iiter, pBlockScanInfo->delSkyline, pReader); if (pBlockScanInfo->iter.hasVal && pBlockScanInfo->iiter.hasVal) { - return doMergeMultiLevelRowsRv(pReader, pBlockScanInfo, pBlockData, pLastBlockReader); + return doMergeMultiLevelRows(pReader, pBlockScanInfo, pBlockData, pLastBlockReader); } else { // imem + file + last block if (pBlockScanInfo->iiter.hasVal) { - return doMergeBufAndFileRows_Rv(pReader, pBlockScanInfo, piRow, &pBlockScanInfo->iiter, key, pLastBlockReader); + return doMergeBufAndFileRows(pReader, pBlockScanInfo, piRow, &pBlockScanInfo->iiter, key, pLastBlockReader); } // mem + file + last block if (pBlockScanInfo->iter.hasVal) { - return doMergeBufAndFileRows_Rv(pReader, pBlockScanInfo, pRow, &pBlockScanInfo->iter, key, pLastBlockReader); + return doMergeBufAndFileRows(pReader, pBlockScanInfo, pRow, &pBlockScanInfo->iter, key, pLastBlockReader); } // files data blocks + last block @@ -2270,12 +2146,12 @@ static TSDBKEY getCurrentKeyInBuf(STableBlockScanInfo* pScanInfo, STsdbReader* p TSDBKEY key = {.ts = TSKEY_INITIAL_VAL}; initMemDataIterator(pScanInfo, pReader); - TSDBROW* pRow = getValidRow(&pScanInfo->iter, pScanInfo->delSkyline, pReader); + TSDBROW* pRow = getValidMemRow(&pScanInfo->iter, pScanInfo->delSkyline, pReader); if (pRow != NULL) { key = TSDBROW_KEY(pRow); } - pRow = getValidRow(&pScanInfo->iiter, pScanInfo->delSkyline, pReader); + pRow = getValidMemRow(&pScanInfo->iiter, pScanInfo->delSkyline, pReader); if (pRow != NULL) { TSDBKEY k = TSDBROW_KEY(pRow); if (key.ts > k.ts) { @@ -2861,7 +2737,7 @@ bool hasBeenDropped(const SArray* pDelList, int32_t* index, TSDBKEY* pKey, int32 return false; } -TSDBROW* getValidRow(SIterInfo* pIter, const SArray* pDelList, STsdbReader* pReader) { +TSDBROW* getValidMemRow(SIterInfo* pIter, const SArray* pDelList, STsdbReader* pReader) { if (!pIter->hasVal) { return NULL; } @@ -2909,7 +2785,7 @@ int32_t doMergeRowsInBuf(SIterInfo* pIter, uint64_t uid, int64_t ts, SArray* pDe } // 
data exists but not valid - TSDBROW* pRow = getValidRow(pIter, pDelList, pReader); + TSDBROW* pRow = getValidMemRow(pIter, pDelList, pReader); if (pRow == NULL) { break; } @@ -3033,7 +2909,6 @@ int32_t doMergeRowsInFileBlocks(SBlockData* pBlockData, STableBlockScanInfo* pSc return TSDB_CODE_SUCCESS; } -// todo check if the rows are dropped or not int32_t doMergeRowsInLastBlock(SLastBlockReader* pLastBlockReader, STableBlockScanInfo* pScanInfo, int64_t ts, SRowMerger* pMerger) { while(nextRowInLastBlock(pLastBlockReader, pScanInfo)) { int64_t next1 = getCurrentKeyInLastBlock(pLastBlockReader); @@ -3061,7 +2936,7 @@ void doMergeMemTableMultiRows(TSDBROW* pRow, uint64_t uid, SIterInfo* pIter, SAr *freeTSRow = false; return; } else { // has next point in mem/imem - pNextRow = getValidRow(pIter, pDelList, pReader); + pNextRow = getValidMemRow(pIter, pDelList, pReader); if (pNextRow == NULL) { *pTSRow = current.pTSRow; *freeTSRow = false; @@ -3127,8 +3002,8 @@ void doMergeMemIMemRows(TSDBROW* pRow, TSDBROW* piRow, STableBlockScanInfo* pBlo int32_t tsdbGetNextRowInMem(STableBlockScanInfo* pBlockScanInfo, STsdbReader* pReader, STSRow** pTSRow, int64_t endKey, bool* freeTSRow) { - TSDBROW* pRow = getValidRow(&pBlockScanInfo->iter, pBlockScanInfo->delSkyline, pReader); - TSDBROW* piRow = getValidRow(&pBlockScanInfo->iiter, pBlockScanInfo->delSkyline, pReader); + TSDBROW* pRow = getValidMemRow(&pBlockScanInfo->iter, pBlockScanInfo->delSkyline, pReader); + TSDBROW* piRow = getValidMemRow(&pBlockScanInfo->iiter, pBlockScanInfo->delSkyline, pReader); SArray* pDelList = pBlockScanInfo->delSkyline; uint64_t uid = pBlockScanInfo->uid; diff --git a/source/dnode/vnode/src/vnd/vnodeOpen.c b/source/dnode/vnode/src/vnd/vnodeOpen.c index dcfbd33b90..a4fd984fb7 100644 --- a/source/dnode/vnode/src/vnd/vnodeOpen.c +++ b/source/dnode/vnode/src/vnd/vnodeOpen.c @@ -87,7 +87,6 @@ SVnode *vnodeOpen(const char *path, STfs *pTfs, SMsgCb msgCb) { pVnode->msgCb = msgCb; taosThreadMutexInit(&pVnode->lock, NULL); pVnode->blocked = false; - pVnode->inClose = false; tsem_init(&pVnode->syncSem, 0, 0); tsem_init(&(pVnode->canCommit), 0, 1); @@ -182,8 +181,6 @@ _err: void vnodePreClose(SVnode *pVnode) { if (pVnode) { syncLeaderTransfer(pVnode->sync); - pVnode->inClose = true; - smaPreClose(pVnode->pSma); } } diff --git a/source/dnode/vnode/src/vnd/vnodeSvr.c b/source/dnode/vnode/src/vnd/vnodeSvr.c index 7a8d168f4f..85feecff1a 100644 --- a/source/dnode/vnode/src/vnd/vnodeSvr.c +++ b/source/dnode/vnode/src/vnd/vnodeSvr.c @@ -301,8 +301,6 @@ int32_t vnodeProcessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg) { return qWorkerProcessQueryMsg(&handle, pVnode->pQuery, pMsg, 0); case TDMT_SCH_QUERY_CONTINUE: return qWorkerProcessCQueryMsg(&handle, pVnode->pQuery, pMsg, 0); - case TDMT_VND_EXEC_RSMA: - return smaProcessExec(pVnode->pSma, pMsg); default: vError("unknown msg type:%d in query queue", pMsg->msgType); return TSDB_CODE_VND_APP_ERROR; @@ -370,6 +368,10 @@ void smaHandleRes(void *pVnode, int64_t smaId, const SArray *data) { } void vnodeUpdateMetaRsp(SVnode *pVnode, STableMetaRsp *pMetaRsp) { + if (NULL == pMetaRsp) { + return; + } + strcpy(pMetaRsp->dbFName, pVnode->config.dbname); pMetaRsp->dbId = pVnode->config.dbId; pMetaRsp->vgId = TD_VID(pVnode); @@ -380,14 +382,14 @@ static int32_t vnodeProcessTrimReq(SVnode *pVnode, int64_t version, void *pReq, int32_t code = 0; SVTrimDbReq trimReq = {0}; - vInfo("vgId:%d, trim vnode request will be processed, time:%d", pVnode->config.vgId, trimReq.timestamp); - // decode if 
(tDeserializeSVTrimDbReq(pReq, len, &trimReq) != 0) { code = TSDB_CODE_INVALID_MSG; goto _exit; } + vInfo("vgId:%d, trim vnode request will be processed, time:%d", pVnode->config.vgId, trimReq.timestamp); + // process code = tsdbDoRetention(pVnode->pTsdb, trimReq.timestamp); if (code) goto _exit; @@ -514,7 +516,7 @@ static int32_t vnodeProcessCreateTbReq(SVnode *pVnode, int64_t version, void *pR } // do create table - if (metaCreateTable(pVnode->pMeta, version, pCreateReq) < 0) { + if (metaCreateTable(pVnode->pMeta, version, pCreateReq, &cRsp.pMeta) < 0) { if (pCreateReq->flags & TD_CREATE_IF_NOT_EXISTS && terrno == TSDB_CODE_TDB_TABLE_ALREADY_EXIST) { cRsp.code = TSDB_CODE_SUCCESS; } else { @@ -524,6 +526,7 @@ static int32_t vnodeProcessCreateTbReq(SVnode *pVnode, int64_t version, void *pR cRsp.code = TSDB_CODE_SUCCESS; tdFetchTbUidList(pVnode->pSma, &pStore, pCreateReq->ctb.suid, pCreateReq->uid); taosArrayPush(tbUids, &pCreateReq->uid); + vnodeUpdateMetaRsp(pVnode, cRsp.pMeta); } taosArrayPush(rsp.pArray, &cRsp); @@ -552,7 +555,7 @@ _exit: pCreateReq = req.pReqs + iReq; taosArrayDestroy(pCreateReq->ctb.tagName); } - taosArrayDestroy(rsp.pArray); + taosArrayDestroyEx(rsp.pArray, tFreeSVCreateTbRsp); taosArrayDestroy(tbUids); tDecoderClear(&decoder); tEncoderClear(&encoder); @@ -864,7 +867,7 @@ static int32_t vnodeProcessSubmitReq(SVnode *pVnode, int64_t version, void *pReq goto _exit; } - if (metaCreateTable(pVnode->pMeta, version, &createTbReq) < 0) { + if (metaCreateTable(pVnode->pMeta, version, &createTbReq, &submitBlkRsp.pMeta) < 0) { if (terrno != TSDB_CODE_TDB_TABLE_ALREADY_EXIST) { submitBlkRsp.code = terrno; pRsp->code = terrno; @@ -872,6 +875,10 @@ static int32_t vnodeProcessSubmitReq(SVnode *pVnode, int64_t version, void *pReq taosArrayDestroy(createTbReq.ctb.tagName); goto _exit; } + } else { + if (NULL != submitBlkRsp.pMeta) { + vnodeUpdateMetaRsp(pVnode, submitBlkRsp.pMeta); + } } taosArrayPush(newTbUids, &createTbReq.uid); @@ -915,11 +922,7 @@ _exit: tEncodeSSubmitRsp(&encoder, &submitRsp); tEncoderClear(&encoder); - for (int32_t i = 0; i < taosArrayGetSize(submitRsp.pArray); i++) { - taosMemoryFree(((SSubmitBlkRsp *)taosArrayGet(submitRsp.pArray, i))[0].tblFName); - } - - taosArrayDestroy(submitRsp.pArray); + taosArrayDestroyEx(submitRsp.pArray, tFreeSSubmitBlkRsp); // TODO: the partial success scenario and the error case // => If partial success, extract the success submitted rows and reconstruct a new submit msg, and push to level diff --git a/source/libs/catalog/src/catalog.c b/source/libs/catalog/src/catalog.c index b6e958e192..7b32eadcd4 100644 --- a/source/libs/catalog/src/catalog.c +++ b/source/libs/catalog/src/catalog.c @@ -270,13 +270,22 @@ int32_t ctgUpdateTbMeta(SCatalog* pCtg, STableMetaRsp* rspMsg, bool syncOp) { int32_t code = 0; strcpy(output->dbFName, rspMsg->dbFName); - strcpy(output->tbName, rspMsg->tbName); output->dbId = rspMsg->dbId; - SET_META_TYPE_TABLE(output->metaType); + if (TSDB_CHILD_TABLE == rspMsg->tableType && NULL == rspMsg->pSchemas) { + strcpy(output->ctbName, rspMsg->tbName); - CTG_ERR_JRET(queryCreateTableMetaFromMsg(rspMsg, rspMsg->tableType == TSDB_SUPER_TABLE, &output->tbMeta)); + SET_META_TYPE_CTABLE(output->metaType); + + CTG_ERR_JRET(queryCreateCTableMetaFromMsg(rspMsg, &output->ctbMeta)); + } else { + strcpy(output->tbName, rspMsg->tbName); + + SET_META_TYPE_TABLE(output->metaType); + + CTG_ERR_JRET(queryCreateTableMetaFromMsg(rspMsg, rspMsg->tableType == TSDB_SUPER_TABLE, &output->tbMeta)); + } 
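The branch above is the catalog-side counterpart of the new create-table responses: when the vnode reports a child table it returns only identifiers (tbName, tuid, suid) and leaves `pSchemas` empty, so the catalog records it through the `SET_META_TYPE_CTABLE` path instead of building a full table meta from the message. A minimal sketch of that decision, assuming only `tableType` and `pSchemas` are inspected; `ExampleMetaRsp` and `isChildTableOnlyRsp` are illustrative names, not part of this patch:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors just the two STableMetaRsp fields the catalog check relies on. */
typedef struct ExampleMetaRsp {
  int8_t tableType;  /* e.g. TSDB_CHILD_TABLE, TSDB_NORMAL_TABLE, TSDB_SUPER_TABLE */
  void  *pSchemas;   /* NULL when the vnode returned child-table identifiers only */
} ExampleMetaRsp;

/* true  -> the response names a child table by identifiers only, so the
 *          catalog takes the ctbMeta/SET_META_TYPE_CTABLE path
 * false -> the response carries schemas, so a full table meta is built */
static bool isChildTableOnlyRsp(const ExampleMetaRsp *pRsp, int8_t childTableType) {
  return pRsp->tableType == childTableType && pRsp->pSchemas == NULL;
}
```

Either way, the resulting output is still queued through `ctgUpdateTbMetaEnqueue`, as the unchanged line below shows.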
CTG_ERR_JRET(ctgUpdateTbMetaEnqueue(pCtg, output, syncOp)); diff --git a/source/libs/command/src/command.c b/source/libs/command/src/command.c index 1b2489acd6..7d259fe06c 100644 --- a/source/libs/command/src/command.c +++ b/source/libs/command/src/command.c @@ -17,6 +17,7 @@ #include "catalog.h" #include "commandInt.h" #include "scheduler.h" +#include "systable.h" #include "tdatablock.h" #include "tglobal.h" #include "tgrant.h" @@ -75,46 +76,41 @@ static SSDataBlock* buildDescResultDataBlock() { return pBlock; } -static void setDescResultIntoDataBlock(SSDataBlock* pBlock, int32_t numOfRows, STableMeta* pMeta) { +static void setDescResultIntoDataBlock(bool sysInfoUser, SSDataBlock* pBlock, int32_t numOfRows, STableMeta* pMeta) { blockDataEnsureCapacity(pBlock, numOfRows); - pBlock->info.rows = numOfRows; + pBlock->info.rows = 0; // field SColumnInfoData* pCol1 = taosArrayGet(pBlock->pDataBlock, 0); - char buf[DESCRIBE_RESULT_FIELD_LEN] = {0}; - for (int32_t i = 0; i < numOfRows; ++i) { - STR_TO_VARSTR(buf, pMeta->schema[i].name); - colDataAppend(pCol1, i, buf, false); - } - // Type SColumnInfoData* pCol2 = taosArrayGet(pBlock->pDataBlock, 1); - for (int32_t i = 0; i < numOfRows; ++i) { - STR_TO_VARSTR(buf, tDataTypes[pMeta->schema[i].type].name); - colDataAppend(pCol2, i, buf, false); - } - // Length SColumnInfoData* pCol3 = taosArrayGet(pBlock->pDataBlock, 2); - for (int32_t i = 0; i < numOfRows; ++i) { - int32_t bytes = getSchemaBytes(pMeta->schema + i); - colDataAppend(pCol3, i, (const char*)&bytes, false); - } - // Note SColumnInfoData* pCol4 = taosArrayGet(pBlock->pDataBlock, 3); + char buf[DESCRIBE_RESULT_FIELD_LEN] = {0}; for (int32_t i = 0; i < numOfRows; ++i) { + if (invisibleColumn(sysInfoUser, pMeta->tableType, pMeta->schema[i].flags)) { + continue; + } + STR_TO_VARSTR(buf, pMeta->schema[i].name); + colDataAppend(pCol1, pBlock->info.rows, buf, false); + STR_TO_VARSTR(buf, tDataTypes[pMeta->schema[i].type].name); + colDataAppend(pCol2, pBlock->info.rows, buf, false); + int32_t bytes = getSchemaBytes(pMeta->schema + i); + colDataAppend(pCol3, pBlock->info.rows, (const char*)&bytes, false); STR_TO_VARSTR(buf, i >= pMeta->tableInfo.numOfColumns ? 
"TAG" : ""); - colDataAppend(pCol4, i, buf, false); + colDataAppend(pCol4, pBlock->info.rows, buf, false); + ++(pBlock->info.rows); } } -static int32_t execDescribe(SNode* pStmt, SRetrieveTableRsp** pRsp) { +static int32_t execDescribe(bool sysInfoUser, SNode* pStmt, SRetrieveTableRsp** pRsp) { SDescribeStmt* pDesc = (SDescribeStmt*)pStmt; int32_t numOfRows = TABLE_TOTAL_COL_NUM(pDesc->pMeta); SSDataBlock* pBlock = buildDescResultDataBlock(); - setDescResultIntoDataBlock(pBlock, numOfRows, pDesc->pMeta); + setDescResultIntoDataBlock(sysInfoUser, pBlock, numOfRows, pDesc->pMeta); return buildRetrieveTableRsp(pBlock, DESCRIBE_RESULT_COLS, pRsp); } @@ -665,10 +661,10 @@ static int32_t execSelectWithoutFrom(SSelectStmt* pSelect, SRetrieveTableRsp** p return code; } -int32_t qExecCommand(SNode* pStmt, SRetrieveTableRsp** pRsp) { +int32_t qExecCommand(bool sysInfoUser, SNode* pStmt, SRetrieveTableRsp** pRsp) { switch (nodeType(pStmt)) { case QUERY_NODE_DESCRIBE_STMT: - return execDescribe(pStmt, pRsp); + return execDescribe(sysInfoUser, pStmt, pRsp); case QUERY_NODE_RESET_QUERY_CACHE_STMT: return execResetQueryCache(); case QUERY_NODE_SHOW_CREATE_DATABASE_STMT: diff --git a/source/libs/executor/inc/executil.h b/source/libs/executor/inc/executil.h index 3365960325..a25933d15e 100644 --- a/source/libs/executor/inc/executil.h +++ b/source/libs/executor/inc/executil.h @@ -23,6 +23,12 @@ #include "tcommon.h" #include "tpagedbuf.h" +#define T_LONG_JMP(_obj, _c) \ + do { \ + ASSERT((_c) != -1); \ + longjmp((_obj), (_c)); \ + } while (0); + #define SET_RES_WINDOW_KEY(_k, _ori, _len, _uid) \ do { \ assert(sizeof(_uid) == sizeof(uint64_t)); \ diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h index d03c06790c..dbe35361bd 100644 --- a/source/libs/executor/inc/executorimpl.h +++ b/source/libs/executor/inc/executorimpl.h @@ -122,7 +122,7 @@ typedef int32_t (*__optr_decode_fn_t)(struct SOperatorInfo* pOperator, char* res typedef int32_t (*__optr_open_fn_t)(struct SOperatorInfo* pOptr); typedef SSDataBlock* (*__optr_fn_t)(struct SOperatorInfo* pOptr); -typedef void (*__optr_close_fn_t)(void* param, int32_t num); +typedef void (*__optr_close_fn_t)(void* param); typedef int32_t (*__optr_explain_fn_t)(struct SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len); typedef struct STaskIdInfo { @@ -512,6 +512,7 @@ typedef struct SSysTableScanInfo { SReadHandle readHandle; int32_t accountId; const char* pUser; + bool sysInfo; bool showRewrite; SNode* pCondition; // db_name filter condition, to discard data that are not in current database SMTbCursor* pCur; // cursor for iterate the local table meta store. 
diff --git a/source/libs/executor/src/cachescanoperator.c b/source/libs/executor/src/cachescanoperator.c index 94e4384b30..b31fa279e5 100644 --- a/source/libs/executor/src/cachescanoperator.c +++ b/source/libs/executor/src/cachescanoperator.c @@ -24,10 +24,9 @@ #include "tcompare.h" #include "thash.h" #include "ttypes.h" -#include "executorInt.h" static SSDataBlock* doScanLastrow(SOperatorInfo* pOperator); -static void destroyLastrowScanOperator(void* param, int32_t numOfOutput); +static void destroyLastrowScanOperator(void* param); static int32_t extractTargetSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds); SOperatorInfo* createLastrowScanOperator(SLastRowScanPhysiNode* pScanNode, SReadHandle* readHandle, SExecTaskInfo* pTaskInfo) { @@ -211,7 +210,7 @@ SSDataBlock* doScanLastrow(SOperatorInfo* pOperator) { } } -void destroyLastrowScanOperator(void* param, int32_t numOfOutput) { +void destroyLastrowScanOperator(void* param) { SLastrowScanInfo* pInfo = (SLastrowScanInfo*)param; blockDataDestroy(pInfo->pRes); taosMemoryFreeClear(param); diff --git a/source/libs/executor/src/executil.c b/source/libs/executor/src/executil.c index cb382824f3..b021b5107a 100644 --- a/source/libs/executor/src/executil.c +++ b/source/libs/executor/src/executil.c @@ -284,17 +284,17 @@ int32_t isQualifiedTable(STableKeyInfo* info, SNode* pTagCond, void* metaHandle, return TSDB_CODE_SUCCESS; } -typedef struct tagFilterAssist{ - SHashObj *colHash; +typedef struct tagFilterAssist { + SHashObj* colHash; int32_t index; - SArray *cInfoList; -}tagFilterAssist; + SArray* cInfoList; +} tagFilterAssist; static EDealRes getColumn(SNode** pNode, void* pContext) { SColumnNode* pSColumnNode = NULL; if (QUERY_NODE_COLUMN == nodeType((*pNode))) { pSColumnNode = *(SColumnNode**)pNode; - }else if(QUERY_NODE_FUNCTION == nodeType((*pNode))){ + } else if (QUERY_NODE_FUNCTION == nodeType((*pNode))) { SFunctionNode* pFuncNode = *(SFunctionNode**)(pNode); if (pFuncNode->funcType == FUNCTION_TYPE_TBNAME) { pSColumnNode = (SColumnNode*)nodesMakeNode(QUERY_NODE_COLUMN); @@ -307,24 +307,26 @@ static EDealRes getColumn(SNode** pNode, void* pContext) { pSColumnNode->node.resType.bytes = TSDB_TABLE_FNAME_LEN - 1 + VARSTR_HEADER_SIZE; nodesDestroyNode(*pNode); *pNode = (SNode*)pSColumnNode; - }else{ + } else { return DEAL_RES_CONTINUE; } - }else{ + } else { return DEAL_RES_CONTINUE; } - tagFilterAssist *pData = (tagFilterAssist *)pContext; - void *data = taosHashGet(pData->colHash, &pSColumnNode->colId, sizeof(pSColumnNode->colId)); - if(!data){ + tagFilterAssist* pData = (tagFilterAssist*)pContext; + void* data = taosHashGet(pData->colHash, &pSColumnNode->colId, sizeof(pSColumnNode->colId)); + if (!data) { taosHashPut(pData->colHash, &pSColumnNode->colId, sizeof(pSColumnNode->colId), pNode, sizeof((*pNode))); pSColumnNode->slotId = pData->index++; - SColumnInfo cInfo = {.colId = pSColumnNode->colId, .type = pSColumnNode->node.resType.type, .bytes = pSColumnNode->node.resType.bytes}; + SColumnInfo cInfo = {.colId = pSColumnNode->colId, + .type = pSColumnNode->node.resType.type, + .bytes = pSColumnNode->node.resType.bytes}; #if TAG_FILTER_DEBUG qDebug("tagfilter build column info, slotId:%d, colId:%d, type:%d", pSColumnNode->slotId, cInfo.colId, cInfo.type); #endif taosArrayPush(pData->cInfoList, &cInfo); - }else{ + } else { SColumnNode* col = *(SColumnNode**)data; pSColumnNode->slotId = col->slotId; } @@ -339,9 +341,9 @@ static int32_t createResultData(SDataType* pType, int32_t numOfRows, SScalarPara return 
terrno; } - pColumnData->info.type = pType->type; - pColumnData->info.bytes = pType->bytes; - pColumnData->info.scale = pType->scale; + pColumnData->info.type = pType->type; + pColumnData->info.bytes = pType->bytes; + pColumnData->info.scale = pType->scale; pColumnData->info.precision = pType->precision; int32_t code = colInfoDataEnsureCapacity(pColumnData, numOfRows); @@ -356,28 +358,28 @@ static int32_t createResultData(SDataType* pType, int32_t numOfRows, SScalarPara return TSDB_CODE_SUCCESS; } -static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray* uidList, SNode* pTagCond){ - int32_t code = TSDB_CODE_SUCCESS; - SArray* pBlockList = NULL; +static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray* uidList, SNode* pTagCond) { + int32_t code = TSDB_CODE_SUCCESS; + SArray* pBlockList = NULL; SSDataBlock* pResBlock = NULL; - SHashObj * tags = NULL; + SHashObj* tags = NULL; SScalarParam output = {0}; tagFilterAssist ctx = {0}; ctx.colHash = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_SMALLINT), false, HASH_NO_LOCK); - if(ctx.colHash == NULL){ + if (ctx.colHash == NULL) { terrno = TSDB_CODE_OUT_OF_MEMORY; goto end; } ctx.index = 0; ctx.cInfoList = taosArrayInit(4, sizeof(SColumnInfo)); - if(ctx.cInfoList == NULL){ + if (ctx.cInfoList == NULL) { terrno = TSDB_CODE_OUT_OF_MEMORY; goto end; } - nodesRewriteExprPostOrder(&pTagCond, getColumn, (void *)&ctx); + nodesRewriteExprPostOrder(&pTagCond, getColumn, (void*)&ctx); pResBlock = createDataBlock(); if (pResBlock == NULL) { @@ -391,7 +393,7 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray blockDataAppendColInfo(pResBlock, &colInfo); } -// int64_t stt = taosGetTimestampUs(); + // int64_t stt = taosGetTimestampUs(); tags = taosHashInit(32, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK); code = metaGetTableTags(metaHandle, suid, uidList, tags); if (code != TSDB_CODE_SUCCESS) { @@ -401,11 +403,11 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray } int32_t rows = taosArrayGetSize(uidList); - if(rows == 0){ + if (rows == 0) { goto end; } -// int64_t stt1 = taosGetTimestampUs(); -// qDebug("generate tag meta rows:%d, cost:%ld us", rows, stt1-stt); + // int64_t stt1 = taosGetTimestampUs(); + // qDebug("generate tag meta rows:%d, cost:%ld us", rows, stt1-stt); code = blockDataEnsureCapacity(pResBlock, rows); if (code != TSDB_CODE_SUCCESS) { @@ -413,46 +415,46 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray goto end; } -// int64_t st = taosGetTimestampUs(); + // int64_t st = taosGetTimestampUs(); for (int32_t i = 0; i < rows; i++) { int64_t* uid = taosArrayGet(uidList, i); - for(int32_t j = 0; j < taosArrayGetSize(pResBlock->pDataBlock); j++){ + for (int32_t j = 0; j < taosArrayGetSize(pResBlock->pDataBlock); j++) { SColumnInfoData* pColInfo = (SColumnInfoData*)taosArrayGet(pResBlock->pDataBlock, j); - if(pColInfo->info.colId == -1){ // tbname + if (pColInfo->info.colId == -1) { // tbname char str[TSDB_TABLE_FNAME_LEN + VARSTR_HEADER_SIZE] = {0}; metaGetTableNameByUid(metaHandle, *uid, str); colDataAppend(pColInfo, i, str, false); #if TAG_FILTER_DEBUG - qDebug("tagfilter uid:%ld, tbname:%s", *uid, str+2); + qDebug("tagfilter uid:%ld, tbname:%s", *uid, str + 2); #endif - }else{ + } else { void* tag = taosHashGet(tags, uid, sizeof(int64_t)); ASSERT(tag); STagVal tagVal = {0}; tagVal.cid = pColInfo->info.colId; const char* p = metaGetTableTagVal(tag, 
pColInfo->info.type, &tagVal); - if (p == NULL || (pColInfo->info.type == TSDB_DATA_TYPE_JSON && ((STag*)p)->nTag == 0)){ + if (p == NULL || (pColInfo->info.type == TSDB_DATA_TYPE_JSON && ((STag*)p)->nTag == 0)) { colDataAppend(pColInfo, i, p, true); } else if (pColInfo->info.type == TSDB_DATA_TYPE_JSON) { colDataAppend(pColInfo, i, p, false); } else if (IS_VAR_DATA_TYPE(pColInfo->info.type)) { - char *tmp = taosMemoryCalloc(tagVal.nData + VARSTR_HEADER_SIZE + 1, 1); + char* tmp = taosMemoryCalloc(tagVal.nData + VARSTR_HEADER_SIZE + 1, 1); varDataSetLen(tmp, tagVal.nData); memcpy(tmp + VARSTR_HEADER_SIZE, tagVal.pData, tagVal.nData); colDataAppend(pColInfo, i, tmp, false); #if TAG_FILTER_DEBUG - qDebug("tagfilter varch:%s", tmp+2); + qDebug("tagfilter varch:%s", tmp + 2); #endif taosMemoryFree(tmp); } else { colDataAppend(pColInfo, i, (const char*)&tagVal.i64, false); #if TAG_FILTER_DEBUG - if(pColInfo->info.type == TSDB_DATA_TYPE_INT){ + if (pColInfo->info.type == TSDB_DATA_TYPE_INT) { qDebug("tagfilter int:%d", *(int*)(&tagVal.i64)); - }else if(pColInfo->info.type == TSDB_DATA_TYPE_DOUBLE){ - qDebug("tagfilter double:%f", *(double *)(&tagVal.i64)); + } else if (pColInfo->info.type == TSDB_DATA_TYPE_DOUBLE) { + qDebug("tagfilter double:%f", *(double*)(&tagVal.i64)); } #endif } @@ -461,8 +463,8 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray } pResBlock->info.rows = rows; -// int64_t st1 = taosGetTimestampUs(); -// qDebug("generate tag block rows:%d, cost:%ld us", rows, st1-st); + // int64_t st1 = taosGetTimestampUs(); + // qDebug("generate tag block rows:%d, cost:%ld us", rows, st1-st); pBlockList = taosArrayInit(2, POINTER_BYTES); taosArrayPush(pBlockList, &pResBlock); @@ -477,13 +479,13 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray } code = scalarCalculate(pTagCond, pBlockList, &output); - if(code != TSDB_CODE_SUCCESS){ + if (code != TSDB_CODE_SUCCESS) { qError("failed to calculate scalar, reason:%s", tstrerror(code)); terrno = code; goto end; } -// int64_t st2 = taosGetTimestampUs(); -// qDebug("calculate tag block rows:%d, cost:%ld us", rows, st2-st1); + // int64_t st2 = taosGetTimestampUs(); + // qDebug("calculate tag block rows:%d, cost:%ld us", rows, st2-st1); end: taosHashCleanup(tags); @@ -495,43 +497,43 @@ end: } static void releaseColInfoData(void* pCol) { - if(pCol){ - SColumnInfoData* col = (SColumnInfoData*) pCol; + if (pCol) { + SColumnInfoData* col = (SColumnInfoData*)pCol; colDataDestroy(col); taosMemoryFree(col); } } -int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableListInfo* pTableListInfo){ - int32_t code = TSDB_CODE_SUCCESS; - SArray *pBlockList = NULL; - SSDataBlock *pResBlock = NULL; - SHashObj *tags = NULL; - SArray *uidList = NULL; - void *keyBuf = NULL; - SArray *groupData = NULL; +int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableListInfo* pTableListInfo) { + int32_t code = TSDB_CODE_SUCCESS; + SArray* pBlockList = NULL; + SSDataBlock* pResBlock = NULL; + SHashObj* tags = NULL; + SArray* uidList = NULL; + void* keyBuf = NULL; + SArray* groupData = NULL; int32_t rows = taosArrayGetSize(pTableListInfo->pTableList); - if(rows == 0){ + if (rows == 0) { return TDB_CODE_SUCCESS; } tagFilterAssist ctx = {0}; ctx.colHash = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_SMALLINT), false, HASH_NO_LOCK); - if(ctx.colHash == NULL){ + if (ctx.colHash == NULL) { code = TSDB_CODE_OUT_OF_MEMORY; goto end; } ctx.index = 0; ctx.cInfoList = 
taosArrayInit(4, sizeof(SColumnInfo)); - if(ctx.cInfoList == NULL){ + if (ctx.cInfoList == NULL) { code = TSDB_CODE_OUT_OF_MEMORY; goto end; } - SNode* pNode = NULL; + SNode* pNode = NULL; FOREACH(pNode, group) { - nodesRewriteExprPostOrder(&pNode, getColumn, (void *)&ctx); + nodesRewriteExprPostOrder(&pNode, getColumn, (void*)&ctx); REPLACE_NODE(pNode); } @@ -553,61 +555,61 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis taosArrayPush(uidList, &pkeyInfo->uid); } -// int64_t stt = taosGetTimestampUs(); + // int64_t stt = taosGetTimestampUs(); tags = taosHashInit(32, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK); code = metaGetTableTags(metaHandle, pTableListInfo->suid, uidList, tags); if (code != TSDB_CODE_SUCCESS) { goto end; } -// int64_t stt1 = taosGetTimestampUs(); -// qDebug("generate tag meta rows:%d, cost:%ld us", rows, stt1-stt); + // int64_t stt1 = taosGetTimestampUs(); + // qDebug("generate tag meta rows:%d, cost:%ld us", rows, stt1-stt); code = blockDataEnsureCapacity(pResBlock, rows); if (code != TSDB_CODE_SUCCESS) { goto end; } -// int64_t st = taosGetTimestampUs(); + // int64_t st = taosGetTimestampUs(); for (int32_t i = 0; i < rows; i++) { int64_t* uid = taosArrayGet(uidList, i); - for(int32_t j = 0; j < taosArrayGetSize(pResBlock->pDataBlock); j++){ + for (int32_t j = 0; j < taosArrayGetSize(pResBlock->pDataBlock); j++) { SColumnInfoData* pColInfo = (SColumnInfoData*)taosArrayGet(pResBlock->pDataBlock, j); - if(pColInfo->info.colId == -1){ // tbname + if (pColInfo->info.colId == -1) { // tbname char str[TSDB_TABLE_FNAME_LEN + VARSTR_HEADER_SIZE] = {0}; metaGetTableNameByUid(metaHandle, *uid, str); colDataAppend(pColInfo, i, str, false); #if TAG_FILTER_DEBUG - qDebug("tagfilter uid:%ld, tbname:%s", *uid, str+2); + qDebug("tagfilter uid:%ld, tbname:%s", *uid, str + 2); #endif - }else{ + } else { void* tag = taosHashGet(tags, uid, sizeof(int64_t)); ASSERT(tag); STagVal tagVal = {0}; tagVal.cid = pColInfo->info.colId; const char* p = metaGetTableTagVal(tag, pColInfo->info.type, &tagVal); - if (p == NULL || (pColInfo->info.type == TSDB_DATA_TYPE_JSON && ((STag*)p)->nTag == 0)){ + if (p == NULL || (pColInfo->info.type == TSDB_DATA_TYPE_JSON && ((STag*)p)->nTag == 0)) { colDataAppend(pColInfo, i, p, true); } else if (pColInfo->info.type == TSDB_DATA_TYPE_JSON) { colDataAppend(pColInfo, i, p, false); } else if (IS_VAR_DATA_TYPE(pColInfo->info.type)) { - char *tmp = taosMemoryCalloc(tagVal.nData + VARSTR_HEADER_SIZE + 1, 1); + char* tmp = taosMemoryCalloc(tagVal.nData + VARSTR_HEADER_SIZE + 1, 1); varDataSetLen(tmp, tagVal.nData); memcpy(tmp + VARSTR_HEADER_SIZE, tagVal.pData, tagVal.nData); colDataAppend(pColInfo, i, tmp, false); #if TAG_FILTER_DEBUG - qDebug("tagfilter varch:%s", tmp+2); + qDebug("tagfilter varch:%s", tmp + 2); #endif taosMemoryFree(tmp); } else { colDataAppend(pColInfo, i, (const char*)&tagVal.i64, false); #if TAG_FILTER_DEBUG - if(pColInfo->info.type == TSDB_DATA_TYPE_INT){ + if (pColInfo->info.type == TSDB_DATA_TYPE_INT) { qDebug("tagfilter int:%d", *(int*)(&tagVal.i64)); - }else if(pColInfo->info.type == TSDB_DATA_TYPE_DOUBLE){ - qDebug("tagfilter double:%f", *(double *)(&tagVal.i64)); + } else if (pColInfo->info.type == TSDB_DATA_TYPE_DOUBLE) { + qDebug("tagfilter double:%f", *(double*)(&tagVal.i64)); } #endif } @@ -616,8 +618,8 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis } pResBlock->info.rows = rows; -// int64_t st1 = taosGetTimestampUs(); -// 
qDebug("generate tag block rows:%d, cost:%ld us", rows, st1-st); + // int64_t st1 = taosGetTimestampUs(); + // qDebug("generate tag block rows:%d, cost:%ld us", rows, st1-st); pBlockList = taosArrayInit(2, POINTER_BYTES); taosArrayPush(pBlockList, &pResBlock); @@ -631,7 +633,7 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis break; case QUERY_NODE_COLUMN: case QUERY_NODE_OPERATOR: - case QUERY_NODE_FUNCTION:{ + case QUERY_NODE_FUNCTION: { SExprNode* expNode = (SExprNode*)pNode; code = createResultData(&expNode->resType, rows, &output); if (code != TSDB_CODE_SUCCESS) { @@ -643,16 +645,16 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis code = TSDB_CODE_OPS_NOT_SUPPORT; goto end; } - if(nodeType(pNode) == QUERY_NODE_COLUMN){ - SColumnNode* pSColumnNode = (SColumnNode*)pNode; + if (nodeType(pNode) == QUERY_NODE_COLUMN) { + SColumnNode* pSColumnNode = (SColumnNode*)pNode; SColumnInfoData* pColInfo = (SColumnInfoData*)taosArrayGet(pResBlock->pDataBlock, pSColumnNode->slotId); code = colDataAssign(output.columnData, pColInfo, rows, NULL); - }else if(nodeType(pNode) == QUERY_NODE_VALUE){ + } else if (nodeType(pNode) == QUERY_NODE_VALUE) { continue; - }else{ + } else { code = scalarCalculate(pNode, pBlockList, &output); } - if(code != TSDB_CODE_SUCCESS){ + if (code != TSDB_CODE_SUCCESS) { releaseColInfoData(output.columnData); goto end; } @@ -660,7 +662,7 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis } int32_t keyLen = 0; - SNode* node; + SNode* node; FOREACH(node, group) { SExprNode* pExpr = (SExprNode*)node; keyLen += pExpr->resType.bytes; @@ -674,12 +676,12 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis code = TSDB_CODE_OUT_OF_MEMORY; goto end; } - for(int i = 0; i < rows; i++){ + for (int i = 0; i < rows; i++) { STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i); char* isNull = (char*)keyBuf; char* pStart = (char*)keyBuf + sizeof(int8_t) * LIST_LENGTH(group); - for(int j = 0; j < taosArrayGetSize(groupData); j++){ + for (int j = 0; j < taosArrayGetSize(groupData); j++) { SColumnInfoData* pValue = (SColumnInfoData*)taosArrayGetP(groupData, j); if (colDataIsNull_s(pValue, i)) { @@ -692,7 +694,7 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis code = TSDB_CODE_QRY_JSON_IN_GROUP_ERROR; goto end; } - if(tTagIsJsonNull(data)){ + if (tTagIsJsonNull(data)) { isNull[j] = 1; continue; } @@ -714,10 +716,10 @@ int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableLis taosHashPut(pTableListInfo->map, &(info->uid), sizeof(uint64_t), &info->groupId, sizeof(uint64_t)); } -// int64_t st2 = taosGetTimestampUs(); -// qDebug("calculate tag block rows:%d, cost:%ld us", rows, st2-st1); + // int64_t st2 = taosGetTimestampUs(); + // qDebug("calculate tag block rows:%d, cost:%ld us", rows, st2-st1); - end: +end: taosMemoryFreeClear(keyBuf); taosHashCleanup(tags); taosHashCleanup(ctx.colHash); @@ -747,7 +749,7 @@ int32_t getTableList(void* metaHandle, void* pVnode, SScanPhysiNode* pScanNode, SIndexMetaArg metaArg = { .metaEx = metaHandle, .idx = tsdbGetIdx(metaHandle), .ivtIdx = tsdbGetIvtIdx(metaHandle), .suid = tableUid}; -// int64_t stt = taosGetTimestampUs(); + // int64_t stt = taosGetTimestampUs(); SIdxFltStatus status = SFLT_NOT_INDEX; code = doFilterTag(pTagIndexCond, &metaArg, res, &status); if (code != 0 || status == SFLT_NOT_INDEX) { @@ -755,13 +757,13 @@ int32_t getTableList(void* metaHandle, 
void* pVnode, SScanPhysiNode* pScanNode, code = TDB_CODE_SUCCESS; } -// int64_t stt1 = taosGetTimestampUs(); -// qDebug("generate table list, cost:%ld us", stt1-stt); - }else if(!pTagCond){ + // int64_t stt1 = taosGetTimestampUs(); + // qDebug("generate table list, cost:%ld us", stt1-stt); + } else if (!pTagCond) { vnodeGetCtbIdList(pVnode, pScanNode->suid, res); } } else { // Create one table group. - if(metaIsTableExist(metaHandle, tableUid)){ + if (metaIsTableExist(metaHandle, tableUid)) { taosArrayPush(res, &tableUid); } } @@ -769,7 +771,7 @@ int32_t getTableList(void* metaHandle, void* pVnode, SScanPhysiNode* pScanNode, if (pTagCond) { terrno = TDB_CODE_SUCCESS; SColumnInfoData* pColInfoData = getColInfoResult(metaHandle, pListInfo->suid, res, pTagCond); - if(terrno != TDB_CODE_SUCCESS){ + if (terrno != TDB_CODE_SUCCESS) { colDataDestroy(pColInfoData); taosMemoryFreeClear(pColInfoData); taosArrayDestroy(res); @@ -831,7 +833,7 @@ size_t getTableTagsBufLen(const SNodeList* pGroups) { int32_t getGroupIdFromTagsVal(void* pMeta, uint64_t uid, SNodeList* pGroupNode, char* keyBuf, uint64_t* pGroupId) { SMetaReader mr = {0}; metaReaderInit(&mr, pMeta, 0); - if(metaGetTableEntryByUid(&mr, uid) != 0){ // table not exist + if (metaGetTableEntryByUid(&mr, uid) != 0) { // table not exist metaReaderClear(&mr); return TSDB_CODE_PAR_TABLE_NOT_EXIST; } @@ -989,7 +991,7 @@ static SResSchema createResSchema(int32_t type, int32_t bytes, int32_t slotId, i return s; } -static SColumn* createColumn(int32_t blockId, int32_t slotId, int32_t colId, SDataType* pType) { +static SColumn* createColumn(int32_t blockId, int32_t slotId, int32_t colId, SDataType* pType, EColumnType colType) { SColumn* pCol = taosMemoryCalloc(1, sizeof(SColumn)); if (pCol == NULL) { terrno = TSDB_CODE_OUT_OF_MEMORY; @@ -1003,7 +1005,7 @@ static SColumn* createColumn(int32_t blockId, int32_t slotId, int32_t colId, SDa pCol->scale = pType->scale; pCol->precision = pType->precision; pCol->dataBlockId = blockId; - + pCol->colType = colType; return pCol; } @@ -1047,7 +1049,8 @@ SExprInfo* createExprInfo(SNodeList* pNodeList, SNodeList* pGroupKeys, int32_t* SDataType* pType = &pColNode->node.resType; pExp->base.resSchema = createResSchema(pType->type, pType->bytes, pTargetNode->slotId, pType->scale, pType->precision, pColNode->colName); - pExp->base.pParam[0].pCol = createColumn(pColNode->dataBlockId, pColNode->slotId, pColNode->colId, pType); + pExp->base.pParam[0].pCol = + createColumn(pColNode->dataBlockId, pColNode->slotId, pColNode->colId, pType, pColNode->colType); pExp->base.pParam[0].type = FUNC_PARAM_TYPE_COLUMN; } else if (type == QUERY_NODE_VALUE) { pExp->pExpr->nodeType = QUERY_NODE_VALUE; @@ -1099,7 +1102,8 @@ SExprInfo* createExprInfo(SNodeList* pNodeList, SNodeList* pGroupKeys, int32_t* SColumnNode* pcn = (SColumnNode*)p1; pExp->base.pParam[j].type = FUNC_PARAM_TYPE_COLUMN; - pExp->base.pParam[j].pCol = createColumn(pcn->dataBlockId, pcn->slotId, pcn->colId, &pcn->node.resType); + pExp->base.pParam[j].pCol = + createColumn(pcn->dataBlockId, pcn->slotId, pcn->colId, &pcn->node.resType, pcn->colType); } else if (p1->type == QUERY_NODE_VALUE) { SValueNode* pvn = (SValueNode*)p1; pExp->base.pParam[j].type = FUNC_PARAM_TYPE_VALUE; diff --git a/source/libs/executor/src/executorimpl.c b/source/libs/executor/src/executorimpl.c index 113ea9ea0e..4ef6a5df26 100644 --- a/source/libs/executor/src/executorimpl.c +++ b/source/libs/executor/src/executorimpl.c @@ -76,12 +76,6 @@ static UNUSED_FUNC void* u_realloc(void* p, size_t __size) { 
#define realloc u_realloc #endif -#define T_LONG_JMP(_obj, _c) \ - do { \ - assert((_c) != -1); \ - longjmp((_obj), (_c)); \ - } while (0); - #define CLEAR_QUERY_STATUS(q, st) ((q)->status &= (~(st))) #define QUERY_IS_INTERVAL_QUERY(_q) ((_q)->interval.interval > 0) @@ -92,19 +86,17 @@ static int32_t getExprFunctionId(SExprInfo* pExprInfo) { return 0; } -static void doSetTagValueToResultBuf(char* output, const char* val, int16_t type, int16_t bytes); - -static void setBlockSMAInfo(SqlFunctionCtx* pCtx, SExprInfo* pExpr, SSDataBlock* pSDataBlock); +static void setBlockSMAInfo(SqlFunctionCtx* pCtx, SExprInfo* pExpr, SSDataBlock* pBlock); static void releaseQueryBuf(size_t numOfTables); -static void destroyFillOperatorInfo(void* param, int32_t numOfOutput); -static void destroyProjectOperatorInfo(void* param, int32_t numOfOutput); -static void destroyOrderOperatorInfo(void* param, int32_t numOfOutput); -static void destroyAggOperatorInfo(void* param, int32_t numOfOutput); +static void destroyFillOperatorInfo(void* param); +static void destroyProjectOperatorInfo(void* param); +static void destroyOrderOperatorInfo(void* param); +static void destroyAggOperatorInfo(void* param); -static void destroyIntervalOperatorInfo(void* param, int32_t numOfOutput); -static void destroyExchangeOperatorInfo(void* param, int32_t numOfOutput); +static void destroyIntervalOperatorInfo(void* param); +static void destroyExchangeOperatorInfo(void* param); static void destroyOperatorInfo(SOperatorInfo* pOperator); @@ -148,20 +140,6 @@ static int32_t doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock, static void initCtxOutputBuffer(SqlFunctionCtx* pCtx, int32_t size); static void doSetTableGroupOutputBuf(SOperatorInfo* pOperator, int32_t numOfOutput, uint64_t groupId); -// setup the output buffer for each operator -static bool hasNull(SColumn* pColumn, SColumnDataAgg* pStatis) { - if (TSDB_COL_IS_TAG(pColumn->flag) || TSDB_COL_IS_UD_COL(pColumn->flag) || - pColumn->colId == PRIMARYKEY_TIMESTAMP_COL_ID) { - return false; - } - - if (pStatis != NULL && pStatis->numOfNull == 0) { - return false; - } - - return true; -} - #if 0 static bool chkResultRowFromKey(STaskRuntimeEnv* pRuntimeEnv, SResultRowInfo* pResultRowInfo, char* pData, int16_t bytes, bool masterscan, uint64_t uid) { @@ -278,9 +256,6 @@ SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pR // 1. 
close current opened time window if (pResultRowInfo->cur.pageId != -1 && ((pResult == NULL) || (pResult->pageId != pResultRowInfo->cur.pageId))) { -#ifdef BUF_PAGE_DEBUG - qDebug("page_1"); -#endif SResultRowPosition pos = pResultRowInfo->cur; SFilePage* pPage = getBufPage(pResultBuf, pos.pageId); releaseBufPage(pResultBuf, pPage); @@ -308,7 +283,7 @@ SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pR // too many time window in query if (pTaskInfo->execModel == OPTR_EXEC_MODEL_BATCH && taosHashGetSize(pSup->pResultRowHashTable) > MAX_INTERVAL_TIME_WINDOW) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW); } return pResult; @@ -434,7 +409,7 @@ void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, SColumnInfo if (code != TSDB_CODE_SUCCESS) { qError("%s apply functions error, code: %s", GET_TASKID(taskInfo), tstrerror(code)); taskInfo->code = code; - longjmp(taskInfo->env, code); + T_LONG_JMP(taskInfo->env, code); } } @@ -1152,7 +1127,7 @@ int32_t loadDataBlockOnDemand(SExecTaskInfo* pTaskInfo, STableScanInfo* pTableSc if (setResultOutputBufByKey(pRuntimeEnv, pTableScanInfo->pResultRowInfo, pBlock->info.uid, &win, masterScan, &pResult, groupId, pTableScanInfo->pCtx, pTableScanInfo->numOfOutput, pTableScanInfo->rowEntryInfoOffset) != TSDB_CODE_SUCCESS) { - longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } } } else if (pQueryAttr->stableQuery && (!pQueryAttr->tsCompQuery) && (!pQueryAttr->diffQuery)) { // stable aggregate, not interval aggregate or normal column aggregate @@ -1203,7 +1178,7 @@ int32_t loadDataBlockOnDemand(SExecTaskInfo* pTaskInfo, STableScanInfo* pTableSc if (setResultOutputBufByKey(pRuntimeEnv, pTableScanInfo->pResultRowInfo, pBlock->info.uid, &win, masterScan, &pResult, groupId, pTableScanInfo->pCtx, pTableScanInfo->numOfOutput, pTableScanInfo->rowEntryInfoOffset) != TSDB_CODE_SUCCESS) { - longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } } } @@ -1495,7 +1470,7 @@ int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosi if (TAOS_FAILED(code)) { releaseBufPage(pBuf, page); qError("%s ensure result data capacity failed, code %s", GET_TASKID(pTaskInfo), tstrerror(code)); - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } @@ -1507,7 +1482,7 @@ int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosi int32_t code = pCtx[j].fpSet.finalize(&pCtx[j], pBlock); if (TAOS_FAILED(code)) { qError("%s build result data block error, code %s", GET_TASKID(pTaskInfo), tstrerror(code)); - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } else if (strcmp(pCtx[j].pExpr->pExpr->_function.functionName, "_select_value") == 0) { // do nothing, todo refactor @@ -1581,7 +1556,7 @@ int32_t doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock, SExprS int32_t code = pCtx[j].fpSet.finalize(&pCtx[j], pBlock); if (TAOS_FAILED(code)) { qError("%s build result data block error, code %s", GET_TASKID(pTaskInfo), tstrerror(code)); - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } else if (strcmp(pCtx[j].pExpr->pExpr->_function.functionName, "_select_value") == 0) { // do nothing, todo refactor @@ -1736,7 +1711,7 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) { // SDataBlockInfo blockInfo = SDATA_BLOCK_INITIALIZER; // while 
(tsdbNextDataBlock(pTsdbReadHandle)) { // if (isTaskKilled(pRuntimeEnv->qinfo)) { -// longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED); +// T_LONG_JMP(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED); // } // // tsdbRetrieveDataBlockInfo(pTsdbReadHandle, &blockInfo); @@ -1755,7 +1730,7 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) { // } // // if (terrno != TSDB_CODE_SUCCESS) { -// longjmp(pRuntimeEnv->env, terrno); +// T_LONG_JMP(pRuntimeEnv->env, terrno); // } // } @@ -1919,7 +1894,7 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) { // // // check for error // if (terrno != TSDB_CODE_SUCCESS) { -// longjmp(pRuntimeEnv->env, terrno); +// T_LONG_JMP(pRuntimeEnv->env, terrno); // } // // return true; @@ -2771,7 +2746,7 @@ static SSDataBlock* doSortedMerge(SOperatorInfo* pOperator) { int32_t code = tsortOpen(pInfo->pSortHandle); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } pOperator->status = OP_RES_TO_RETURN; @@ -2966,7 +2941,7 @@ static int32_t doOpenAggregateOptr(SOperatorInfo* pOperator) { int32_t code = getTableScanInfo(pOperator, &order, &scanFlag); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } // there is an scalar expression that needs to be calculated before apply the group aggregation. @@ -2974,7 +2949,7 @@ static int32_t doOpenAggregateOptr(SOperatorInfo* pOperator) { SExprSupp* pSup1 = &pAggInfo->scalarExprSup; code = projectApplyFunctions(pSup1->pExprInfo, pBlock, pBlock, pSup1->pCtx, pSup1->numOfExprs, NULL); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } @@ -2983,7 +2958,7 @@ static int32_t doOpenAggregateOptr(SOperatorInfo* pOperator) { setInputDataBlock(pOperator, pSup->pCtx, pBlock, order, scanFlag, true); code = doAggregateImpl(pOperator, pSup->pCtx); if (code != 0) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } @@ -3440,7 +3415,7 @@ static void destroyOperatorInfo(SOperatorInfo* pOperator) { } if (pOperator->fpSet.closeFn != NULL) { - pOperator->fpSet.closeFn(pOperator->info, pOperator->exprSupp.numOfExprs); + pOperator->fpSet.closeFn(pOperator->info); } if (pOperator->pDownstream != NULL) { @@ -3632,7 +3607,7 @@ SOperatorInfo* createAggregateOperatorInfo(SOperatorInfo* downstream, SExprInfo* return pOperator; _error: - destroyAggOperatorInfo(pInfo, numOfCols); + destroyAggOperatorInfo(pInfo); taosMemoryFreeClear(pOperator); pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY; return NULL; @@ -3657,7 +3632,7 @@ static void freeItem(void* pItem) { } } -void destroyAggOperatorInfo(void* param, int32_t numOfOutput) { +void destroyAggOperatorInfo(void* param) { SAggOperatorInfo* pInfo = (SAggOperatorInfo*)param; cleanupBasicInfo(&pInfo->binfo); @@ -3667,7 +3642,7 @@ void destroyAggOperatorInfo(void* param, int32_t numOfOutput) { taosMemoryFreeClear(param); } -void destroyFillOperatorInfo(void* param, int32_t numOfOutput) { +void destroyFillOperatorInfo(void* param) { SFillOperatorInfo* pInfo = (SFillOperatorInfo*)param; pInfo->pFillInfo = taosDestroyFillInfo(pInfo->pFillInfo); pInfo->pRes = blockDataDestroy(pInfo->pRes); @@ -3683,7 +3658,7 @@ void destroyFillOperatorInfo(void* param, int32_t numOfOutput) { taosMemoryFreeClear(param); } -void destroyExchangeOperatorInfo(void* param, int32_t numOfOutput) { +void destroyExchangeOperatorInfo(void* param) { SExchangeInfo* pExInfo = (SExchangeInfo*)param; taosRemoveRef(exchangeObjRefPool, pExInfo->self); } @@ -4672,27 
+4647,6 @@ void doDestroyTask(SExecTaskInfo* pTaskInfo) { taosMemoryFreeClear(pTaskInfo); } -static void doSetTagValueToResultBuf(char* output, const char* val, int16_t type, int16_t bytes) { - if (val == NULL) { - setNull(output, type, bytes); - return; - } - - if (IS_VAR_DATA_TYPE(type)) { - // Binary data overflows for sort of unknown reasons. Let trim the overflow data - if (varDataTLen(val) > bytes) { - int32_t maxLen = bytes - VARSTR_HEADER_SIZE; - int32_t len = (varDataLen(val) > maxLen) ? maxLen : varDataLen(val); - memcpy(varDataVal(output), varDataVal(val), len); - varDataSetLen(output, len); - } else { - varDataCopy(output, val); - } - } else { - memcpy(output, val, bytes); - } -} - static int64_t getQuerySupportBufSize(size_t numOfTables) { size_t s1 = sizeof(STableQueryInfo); // size_t s3 = sizeof(STableCheckInfo); buffer consumption in tsdb diff --git a/source/libs/executor/src/groupoperator.c b/source/libs/executor/src/groupoperator.c index 05dffc658b..53709c7dcc 100644 --- a/source/libs/executor/src/groupoperator.c +++ b/source/libs/executor/src/groupoperator.c @@ -36,8 +36,12 @@ static void freeGroupKey(void* param) { taosMemoryFree(pKey->pData); } -static void destroyGroupOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyGroupOperatorInfo(void* param) { SGroupbyOperatorInfo* pInfo = (SGroupbyOperatorInfo*)param; + if (pInfo == NULL) { + return; + } + cleanupBasicInfo(&pInfo->binfo); taosMemoryFreeClear(pInfo->keyBuf); taosArrayDestroy(pInfo->pGroupCols); @@ -247,7 +251,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) { if (!pInfo->isInit) { recordNewGroupKeys(pInfo->pGroupCols, pInfo->pGroupColVals, pBlock, j); if (terrno != TSDB_CODE_SUCCESS) { // group by json error - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } pInfo->isInit = true; num++; @@ -265,7 +269,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) { num++; recordNewGroupKeys(pInfo->pGroupCols, pInfo->pGroupColVals, pBlock, j); if (terrno != TSDB_CODE_SUCCESS) { // group by json error - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } continue; } @@ -273,7 +277,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) { len = buildGroupKeys(pInfo->keyBuf, pInfo->pGroupColVals); int32_t ret = setGroupResultOutputBuf(pOperator, &(pInfo->binfo), pOperator->exprSupp.numOfExprs, pInfo->keyBuf, len, pBlock->info.groupId, pInfo->aggSup.pResultBuf, &pInfo->aggSup); if (ret != TSDB_CODE_SUCCESS) { // null data, too many state code - longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); } int32_t rowIndex = j - num; @@ -291,7 +295,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) { setGroupResultOutputBuf(pOperator, &(pInfo->binfo), pOperator->exprSupp.numOfExprs, pInfo->keyBuf, len, pBlock->info.groupId, pInfo->aggSup.pResultBuf, &pInfo->aggSup); if (ret != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); } int32_t rowIndex = pBlock->info.rows - num; @@ -350,7 +354,7 @@ static SSDataBlock* hashGroupbyAggregate(SOperatorInfo* pOperator) { int32_t code = getTableScanInfo(pOperator, &order, &scanFlag); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } // the pDataBlock are always the same one, no need to call this again @@ -360,7 +364,7 @@ static SSDataBlock* 
hashGroupbyAggregate(SOperatorInfo* pOperator) { if (pInfo->scalarSup.pExprInfo != NULL) { pTaskInfo->code = projectApplyFunctions(pInfo->scalarSup.pExprInfo, pBlock, pBlock, pInfo->scalarSup.pCtx, pInfo->scalarSup.numOfExprs, NULL); if (pTaskInfo->code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, pTaskInfo->code); + T_LONG_JMP(pTaskInfo->env, pTaskInfo->code); } } @@ -413,7 +417,11 @@ SOperatorInfo* createGroupOperatorInfo(SOperatorInfo* downstream, SExprInfo* pEx } initResultSizeInfo(&pOperator->resultInfo, 4096); - initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, pInfo->groupKeyLen, pTaskInfo->id.str); + code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, pInfo->groupKeyLen, pTaskInfo->id.str); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + initBasicInfo(&pInfo->binfo, pResultBlock); initResultRowInfo(&pInfo->binfo.resultRowInfo); @@ -426,11 +434,15 @@ SOperatorInfo* createGroupOperatorInfo(SOperatorInfo* downstream, SExprInfo* pEx pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, hashGroupbyAggregate, NULL, NULL, destroyGroupOperatorInfo, aggEncodeResultRow, aggDecodeResultRow, NULL); code = appendDownstream(pOperator, &downstream, 1); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + return pOperator; _error: pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY; - taosMemoryFreeClear(pInfo); + destroyGroupOperatorInfo(pInfo); taosMemoryFreeClear(pOperator); return NULL; } @@ -678,14 +690,14 @@ static SSDataBlock* hashPartition(SOperatorInfo* pOperator) { if (pInfo->scalarSup.pExprInfo != NULL) { pTaskInfo->code = projectApplyFunctions(pInfo->scalarSup.pExprInfo, pBlock, pBlock, pInfo->scalarSup.pCtx, pInfo->scalarSup.numOfExprs, NULL); if (pTaskInfo->code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, pTaskInfo->code); + T_LONG_JMP(pTaskInfo->env, pTaskInfo->code); } } terrno = TSDB_CODE_SUCCESS; doHashPartition(pOperator, pBlock); if (terrno != TSDB_CODE_SUCCESS) { // group by json error - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } } @@ -710,7 +722,7 @@ static SSDataBlock* hashPartition(SOperatorInfo* pOperator) { return buildPartitionResult(pOperator); } -static void destroyPartitionOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyPartitionOperatorInfo(void* param) { SPartitionOperatorInfo* pInfo = (SPartitionOperatorInfo*)param; cleanupBasicInfo(&pInfo->binfo); taosArrayDestroy(pInfo->pGroupCols); diff --git a/source/libs/executor/src/joinoperator.c b/source/libs/executor/src/joinoperator.c index 7d2b84d0f0..1bc7d458e0 100644 --- a/source/libs/executor/src/joinoperator.c +++ b/source/libs/executor/src/joinoperator.c @@ -25,7 +25,7 @@ static void setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode); static SSDataBlock* doMergeJoin(struct SOperatorInfo* pOperator); -static void destroyMergeJoinOperator(void* param, int32_t numOfOutput); +static void destroyMergeJoinOperator(void* param); static void extractTimeCondition(SJoinOperatorInfo* pInfo, SOperatorInfo** pDownstream, int32_t numOfDownstream, SSortMergeJoinPhysiNode* pJoinNode); @@ -128,12 +128,11 @@ void setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode) { pColumn->scale = pColumnNode->node.resType.scale; } -void destroyMergeJoinOperator(void* param, int32_t numOfOutput) { +void destroyMergeJoinOperator(void* param) { SJoinOperatorInfo* pJoinOperator = (SJoinOperatorInfo*)param; nodesDestroyNode(pJoinOperator->pCondAfterMerge); pJoinOperator->pRes = 
blockDataDestroy(pJoinOperator->pRes); - taosMemoryFreeClear(param); } diff --git a/source/libs/executor/src/projectoperator.c b/source/libs/executor/src/projectoperator.c index 94da3e23e1..0661ccd390 100644 --- a/source/libs/executor/src/projectoperator.c +++ b/source/libs/executor/src/projectoperator.c @@ -23,7 +23,7 @@ static SArray* setRowTsColumnOutputInfo(SqlFunctionCtx* pCtx, int32_t numOf static void setFunctionResultOutput(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, SAggSupporter* pSup, int32_t stage, int32_t numOfExprs); -static void destroyProjectOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyProjectOperatorInfo(void* param) { if (NULL == param) { return; } @@ -37,10 +37,13 @@ static void destroyProjectOperatorInfo(void* param, int32_t numOfOutput) { taosMemoryFreeClear(param); } -static void destroyIndefinitOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyIndefinitOperatorInfo(void* param) { SIndefOperatorInfo* pInfo = (SIndefOperatorInfo*)param; - cleanupBasicInfo(&pInfo->binfo); + if (pInfo == NULL) { + return; + } + cleanupBasicInfo(&pInfo->binfo); taosArrayDestroy(pInfo->pPseudoColInfo); cleanupAggSup(&pInfo->aggSup); cleanupExprSupp(&pInfo->scalarSup); @@ -112,7 +115,7 @@ SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhys return pOperator; _error: - destroyProjectOperatorInfo(pInfo, numOfCols); + destroyProjectOperatorInfo(pInfo); taosMemoryFree(pOperator); pTaskInfo->code = code; return NULL; @@ -268,7 +271,7 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) { // the pDataBlock are always the same one, no need to call this again int32_t code = getTableScanInfo(downstream, &order, &scanFlag); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } setInputDataBlock(pOperator, pSup->pCtx, pBlock, order, scanFlag, false); @@ -277,7 +280,7 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) { code = projectApplyFunctions(pSup->pExprInfo, pInfo->pRes, pBlock, pSup->pCtx, pSup->numOfExprs, pProjectInfo->pPseudoColInfo); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } status = doIngroupLimitOffset(pLimitInfo, pBlock->info.groupId, pInfo->pRes, pOperator); @@ -371,9 +374,12 @@ SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhy initResultSizeInfo(&pOperator->resultInfo, numOfRows); - initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfExpr, keyBufSize, pTaskInfo->id.str); - initBasicInfo(&pInfo->binfo, pResBlock); + int32_t code = initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfExpr, keyBufSize, pTaskInfo->id.str); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + initBasicInfo(&pInfo->binfo, pResBlock); setFunctionResultOutput(pOperator, &pInfo->binfo, &pInfo->aggSup, MAIN_SCAN, numOfExpr); pInfo->binfo.pRes = pResBlock; @@ -389,7 +395,7 @@ SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhy pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doApplyIndefinitFunction, NULL, NULL, destroyIndefinitOperatorInfo, NULL, NULL, NULL); - int32_t code = appendDownstream(pOperator, &downstream, 1); + code = appendDownstream(pOperator, &downstream, 1); if (code != TSDB_CODE_SUCCESS) { goto _error; } @@ -397,7 +403,7 @@ SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhy return pOperator; _error: - taosMemoryFree(pInfo); + destroyIndefinitOperatorInfo(pInfo); taosMemoryFree(pOperator); 
pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY; return NULL; @@ -415,7 +421,7 @@ static void doHandleDataBlock(SOperatorInfo* pOperator, SSDataBlock* pBlock, SOp // the pDataBlock are always the same one, no need to call this again int32_t code = getTableScanInfo(downstream, &order, &scanFlag); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } // there is an scalar expression that needs to be calculated before apply the group aggregation. @@ -424,7 +430,7 @@ static void doHandleDataBlock(SOperatorInfo* pOperator, SSDataBlock* pBlock, SOp code = projectApplyFunctions(pScalarSup->pExprInfo, pBlock, pBlock, pScalarSup->pCtx, pScalarSup->numOfExprs, pIndefInfo->pPseudoColInfo); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } @@ -434,7 +440,7 @@ static void doHandleDataBlock(SOperatorInfo* pOperator, SSDataBlock* pBlock, SOp code = projectApplyFunctions(pSup->pExprInfo, pInfo->pRes, pBlock, pSup->pCtx, pSup->numOfExprs, pIndefInfo->pPseudoColInfo); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index 08e1c1eee5..3e2a813066 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -36,8 +36,8 @@ #define SWITCH_ORDER(n) (((n) = ((n) == TSDB_ORDER_ASC) ? TSDB_ORDER_DESC : TSDB_ORDER_ASC)) static int32_t buildSysDbTableInfo(const SSysTableScanInfo* pInfo, int32_t capacity); -static int32_t buildDbTableInfoBlock(const SSDataBlock* p, const SSysTableMeta* pSysDbTableMeta, size_t size, - const char* dbName); +static int32_t buildDbTableInfoBlock(bool sysInfo, const SSDataBlock* p, const SSysTableMeta* pSysDbTableMeta, + size_t size, const char* dbName); static bool processBlockWithProbability(const SSampleExecInfo* pInfo); @@ -250,7 +250,7 @@ static bool doLoadBlockSMA(STableScanInfo* pTableScanInfo, SSDataBlock* pBlock, int32_t code = tsdbRetrieveDatablockSMA(pTableScanInfo->dataReader, &pColAgg, &allColumnsHaveAgg); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } if (!allColumnsHaveAgg) { @@ -264,7 +264,7 @@ static bool doLoadBlockSMA(STableScanInfo* pTableScanInfo, SSDataBlock* pBlock, if (pBlock->pBlockAgg == NULL) { pBlock->pBlockAgg = taosMemoryCalloc(numOfCols, POINTER_BYTES); if (pBlock->pBlockAgg == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_OUT_OF_MEMORY); } } @@ -374,7 +374,7 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pSup->pExprInfo, pSup->numOfExprs, pBlock, GET_TASKID(pTaskInfo)); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } @@ -495,7 +495,7 @@ static SSDataBlock* doTableScanImpl(SOperatorInfo* pOperator) { while (tsdbNextDataBlock(pTableScanInfo->dataReader)) { if (isTaskKilled(pTaskInfo)) { - longjmp(pTaskInfo->env, TSDB_CODE_TSC_QUERY_CANCELLED); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_TSC_QUERY_CANCELLED); } // process this data block based on the probabilities @@ -523,7 +523,7 @@ static SSDataBlock* doTableScanImpl(SOperatorInfo* pOperator) { int32_t code = loadDataBlock(pOperator, pTableScanInfo, pBlock, &status); // int32_t code = loadDataBlockOnDemand(pOperator->pRuntimeEnv, pTableScanInfo, pBlock, 
&status); if (code != TSDB_CODE_SUCCESS) { - longjmp(pOperator->pTaskInfo->env, code); + T_LONG_JMP(pOperator->pTaskInfo->env, code); } // current block is filter out according to filter condition, continue load the next block @@ -649,7 +649,7 @@ static SSDataBlock* doTableScan(SOperatorInfo* pOperator) { int32_t code = tsdbReaderOpen(pInfo->readHandle.vnode, &pInfo->cond, tableList, (STsdbReader**)&pInfo->dataReader, GET_TASKID(pTaskInfo)); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); return NULL; } } @@ -689,7 +689,7 @@ static int32_t getTableScannerExecInfo(struct SOperatorInfo* pOptr, void** pOptr return 0; } -static void destroyTableScanOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyTableScanOperatorInfo(void* param) { STableScanInfo* pTableScanInfo = (STableScanInfo*)param; blockDataDestroy(pTableScanInfo->pResBlock); cleanupQueryTableDataCond(&pTableScanInfo->cond); @@ -837,7 +837,7 @@ static SSDataBlock* doBlockInfoScan(SOperatorInfo* pOperator) { int32_t code = doGetTableRowSize(pBlockScanInfo->readHandle.meta, pBlockScanInfo->uid, &blockDistInfo.rowSize, GET_TASKID(pTaskInfo)); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } tsdbGetFileBlocksDistInfo(pBlockScanInfo->pHandle, &blockDistInfo); @@ -863,7 +863,7 @@ static SSDataBlock* doBlockInfoScan(SOperatorInfo* pOperator) { return pBlock; } -static void destroyBlockDistScanOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyBlockDistScanOperatorInfo(void* param) { SBlockDistInfo* pDistInfo = (SBlockDistInfo*)param; blockDataDestroy(pDistInfo->pResBlock); tsdbReaderClose(pDistInfo->pHandle); @@ -1266,7 +1266,7 @@ static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock GET_TASKID(pTaskInfo)); if (code != TSDB_CODE_SUCCESS) { blockDataFreeRes((SSDataBlock*)pBlock); - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } @@ -1281,6 +1281,42 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) { SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; SStreamScanInfo* pInfo = pOperator->info; +#if 0 + SStreamState* pState = pTaskInfo->streamInfo.pState; + if (pState) { + printf(">>>>>>>> stream write backend\n"); + SWinKey key = { + .ts = 1, + .groupId = 2, + }; + char tmp[100] = "abcdefg1"; + if (streamStatePut(pState, &key, &tmp, strlen(tmp) + 1) < 0) { + ASSERT(0); + } + + key.ts = 2; + char tmp2[100] = "abcdefg2"; + if (streamStatePut(pState, &key, &tmp2, strlen(tmp2) + 1) < 0) { + ASSERT(0); + } + + key.groupId = 5; + key.ts = 1; + char tmp3[100] = "abcdefg3"; + if (streamStatePut(pState, &key, &tmp3, strlen(tmp3) + 1) < 0) { + ASSERT(0); + } + + char* val2 = NULL; + int32_t sz; + if (streamStateGet(pState, &key, (void**)&val2, &sz) < 0) { + ASSERT(0); + } + printf("stream read %s %d\n", val2, sz); + streamFreeVal(val2); + } +#endif + qDebug("stream scan called"); if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__LOG) { while (1) { @@ -1673,11 +1709,11 @@ SOperatorInfo* createRawScanOperatorInfo(SReadHandle* pHandle, SExecTaskInfo* pT return pOperator; } -static void destroyStreamScanOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyStreamScanOperatorInfo(void* param) { SStreamScanInfo* pStreamScan = (SStreamScanInfo*)param; if (pStreamScan->pTableScanOp && pStreamScan->pTableScanOp->info) { STableScanInfo* pTableScanInfo = pStreamScan->pTableScanOp->info; - destroyTableScanOperatorInfo(pTableScanInfo, 
numOfOutput); + destroyTableScanOperatorInfo(pTableScanInfo); taosMemoryFreeClear(pStreamScan->pTableScanOp); } if (pStreamScan->tqReader) { @@ -1833,7 +1869,7 @@ _error: return NULL; } -static void destroySysScanOperator(void* param, int32_t numOfOutput) { +static void destroySysScanOperator(void* param) { SSysTableScanInfo* pInfo = (SSysTableScanInfo*)param; tsem_destroy(&pInfo->ready); blockDataDestroy(pInfo->pRes); @@ -2091,7 +2127,7 @@ static SSDataBlock* sysTableScanUserTags(SOperatorInfo* pOperator) { metaReaderClear(&smr); metaCloseTbCursor(pInfo->pCur); pInfo->pCur = NULL; - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } char stableName[TSDB_TABLE_NAME_LEN + VARSTR_HEADER_SIZE] = {0}; @@ -2294,7 +2330,7 @@ static SSDataBlock* sysTableScanUserTables(SOperatorInfo* pOperator) { metaReaderClear(&mr); metaCloseTbCursor(pInfo->pCur); pInfo->pCur = NULL; - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } // number of columns @@ -2526,10 +2562,10 @@ int32_t buildSysDbTableInfo(const SSysTableScanInfo* pInfo, int32_t capacity) { const SSysTableMeta* pSysDbTableMeta = NULL; getInfosDbMeta(&pSysDbTableMeta, &size); - p->info.rows = buildDbTableInfoBlock(p, pSysDbTableMeta, size, TSDB_INFORMATION_SCHEMA_DB); + p->info.rows = buildDbTableInfoBlock(pInfo->sysInfo, p, pSysDbTableMeta, size, TSDB_INFORMATION_SCHEMA_DB); getPerfDbMeta(&pSysDbTableMeta, &size); - p->info.rows = buildDbTableInfoBlock(p, pSysDbTableMeta, size, TSDB_PERFORMANCE_SCHEMA_DB); + p->info.rows = buildDbTableInfoBlock(pInfo->sysInfo, p, pSysDbTableMeta, size, TSDB_PERFORMANCE_SCHEMA_DB); pInfo->pRes->info.rows = p->info.rows; relocateColumnData(pInfo->pRes, pInfo->scanCols, p->pDataBlock, false); @@ -2538,13 +2574,16 @@ int32_t buildSysDbTableInfo(const SSysTableScanInfo* pInfo, int32_t capacity) { return pInfo->pRes->info.rows; } -int32_t buildDbTableInfoBlock(const SSDataBlock* p, const SSysTableMeta* pSysDbTableMeta, size_t size, +int32_t buildDbTableInfoBlock(bool sysInfo, const SSDataBlock* p, const SSysTableMeta* pSysDbTableMeta, size_t size, const char* dbName) { char n[TSDB_TABLE_FNAME_LEN + VARSTR_HEADER_SIZE] = {0}; int32_t numOfRows = p->info.rows; for (int32_t i = 0; i < size; ++i) { const SSysTableMeta* pm = &pSysDbTableMeta[i]; + if (!sysInfo && pm->sysInfo) { + continue; + } SColumnInfoData* pColInfoData = taosArrayGet(p->pDataBlock, 0); @@ -2598,6 +2637,7 @@ SOperatorInfo* createSysTableScanOperatorInfo(void* readHandle, SSystemTableScan pInfo->accountId = pScanPhyNode->accountId; pInfo->pUser = taosMemoryStrDup((void*)pUser); + pInfo->sysInfo = pScanPhyNode->sysInfo; pInfo->showRewrite = pScanPhyNode->showRewrite; pInfo->pRes = pResBlock; pInfo->pCondition = pScanNode->node.pConditions; @@ -2668,7 +2708,7 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) { qError("failed to get table meta, uid:0x%" PRIx64 ", code:%s, %s", item->uid, tstrerror(terrno), GET_TASKID(pTaskInfo)); metaReaderClear(&mr); - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) { @@ -2718,12 +2758,10 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) { return (pRes->info.rows == 0) ? 
NULL : pInfo->pRes; } -static void destroyTagScanOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyTagScanOperatorInfo(void* param) { STagScanInfo* pInfo = (STagScanInfo*)param; pInfo->pRes = blockDataDestroy(pInfo->pRes); - taosArrayDestroy(pInfo->pColMatchInfo); - taosMemoryFreeClear(param); } @@ -2919,7 +2957,7 @@ static int32_t loadDataBlockFromOneTable(SOperatorInfo* pOperator, STableMergeSc int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pseudoSup.pExprInfo, pTableScanInfo->pseudoSup.numOfExprs, pBlock, GET_TASKID(pTaskInfo)); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } @@ -2962,7 +3000,7 @@ static SSDataBlock* getTableDataBlock(void* param) { STsdbReader* reader = taosArrayGetP(pTableScanInfo->dataReaders, readerIdx); while (tsdbNextDataBlock(reader)) { if (isTaskKilled(pOperator->pTaskInfo)) { - longjmp(pOperator->pTaskInfo->env, TSDB_CODE_TSC_QUERY_CANCELLED); + T_LONG_JMP(pOperator->pTaskInfo->env, TSDB_CODE_TSC_QUERY_CANCELLED); } // process this data block based on the probabilities @@ -2985,7 +3023,7 @@ static SSDataBlock* getTableDataBlock(void* param) { int32_t code = loadDataBlockFromOneTable(pOperator, pTableScanInfo, readerIdx, pBlock, &status); // int32_t code = loadDataBlockOnDemand(pOperator->pRuntimeEnv, pTableScanInfo, pBlock, &status); if (code != TSDB_CODE_SUCCESS) { - longjmp(pOperator->pTaskInfo->env, code); + T_LONG_JMP(pOperator->pTaskInfo->env, code); } // current block is filter out according to filter condition, continue load the next block @@ -3078,7 +3116,7 @@ int32_t startGroupTableMergeScan(SOperatorInfo* pOperator) { int32_t code = tsortOpen(pInfo->pSortHandle); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } return TSDB_CODE_SUCCESS; @@ -3148,7 +3186,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) { int32_t code = pOperator->fpSet._openFn(pOperator); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } size_t tableListSize = taosArrayGetSize(pInfo->tableListInfo->pTableList); if (!pInfo->hasGroupId) { @@ -3186,7 +3224,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) { return pBlock; } -void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) { +void destroyTableMergeScanOperatorInfo(void* param) { STableMergeScanInfo* pTableScanInfo = (STableMergeScanInfo*)param; cleanupQueryTableDataCond(&pTableScanInfo->cond); taosArrayDestroy(pTableScanInfo->sortSourceParams); diff --git a/source/libs/executor/src/sortoperator.c b/source/libs/executor/src/sortoperator.c index 4dd5e4ec15..e2014ec973 100644 --- a/source/libs/executor/src/sortoperator.c +++ b/source/libs/executor/src/sortoperator.c @@ -20,7 +20,7 @@ static SSDataBlock* doSort(SOperatorInfo* pOperator); static int32_t doOpenSortOperator(SOperatorInfo* pOperator); static int32_t getExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len); -static void destroyOrderOperatorInfo(void* param, int32_t numOfOutput); +static void destroyOrderOperatorInfo(void* param); // todo add limit/offset impl SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortNode, SExecTaskInfo* pTaskInfo) { @@ -156,7 +156,7 @@ void applyScalarFunction(SSDataBlock* pBlock, void* param) { int32_t code = projectApplyFunctions(pOperator->exprSupp.pExprInfo, pBlock, pBlock, pOperator->exprSupp.pCtx, 
pOperator->exprSupp.numOfExprs, NULL); if (code != TSDB_CODE_SUCCESS) { - longjmp(pOperator->pTaskInfo->env, code); + T_LONG_JMP(pOperator->pTaskInfo->env, code); } } } @@ -184,7 +184,7 @@ int32_t doOpenSortOperator(SOperatorInfo* pOperator) { taosMemoryFreeClear(ps); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } pOperator->cost.openCost = (taosGetTimestampUs() - pInfo->startTs) / 1000.0; @@ -204,7 +204,7 @@ SSDataBlock* doSort(SOperatorInfo* pOperator) { int32_t code = pOperator->fpSet._openFn(pOperator); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } SSDataBlock* pBlock = NULL; @@ -250,7 +250,7 @@ SSDataBlock* doSort(SOperatorInfo* pOperator) { return blockDataGetNumOfRows(pBlock) > 0 ? pBlock : NULL; } -void destroyOrderOperatorInfo(void* param, int32_t numOfOutput) { +void destroyOrderOperatorInfo(void* param) { SSortOperatorInfo* pInfo = (SSortOperatorInfo*)param; pInfo->binfo.pRes = blockDataDestroy(pInfo->binfo.pRes); @@ -388,7 +388,7 @@ int32_t beginSortGroup(SOperatorInfo* pOperator) { taosMemoryFreeClear(ps); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } return TSDB_CODE_SUCCESS; @@ -420,7 +420,7 @@ SSDataBlock* doGroupSort(SOperatorInfo* pOperator) { int32_t code = pOperator->fpSet._openFn(pOperator); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } if (!pInfo->hasGroupId) { @@ -468,7 +468,7 @@ int32_t getGroupSortExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, u return TSDB_CODE_SUCCESS; } -void destroyGroupSortOperatorInfo(void* param, int32_t numOfOutput) { +void destroyGroupSortOperatorInfo(void* param) { SGroupSortOperatorInfo* pInfo = (SGroupSortOperatorInfo*)param; pInfo->binfo.pRes = blockDataDestroy(pInfo->binfo.pRes); @@ -575,7 +575,7 @@ int32_t doOpenMultiwayMergeOperator(SOperatorInfo* pOperator) { int32_t code = tsortOpen(pInfo->pSortHandle); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, terrno); + T_LONG_JMP(pTaskInfo->env, terrno); } pOperator->cost.openCost = (taosGetTimestampUs() - pInfo->startTs) / 1000.0; @@ -672,7 +672,7 @@ SSDataBlock* doMultiwayMerge(SOperatorInfo* pOperator) { int32_t code = pOperator->fpSet._openFn(pOperator); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } SSDataBlock* pBlock = getMultiwaySortedBlockData(pInfo->pSortHandle, pInfo->binfo.pRes, @@ -685,7 +685,7 @@ SSDataBlock* doMultiwayMerge(SOperatorInfo* pOperator) { return pBlock; } -void destroyMultiwayMergeOperatorInfo(void* param, int32_t numOfOutput) { +void destroyMultiwayMergeOperatorInfo(void* param) { SMultiwayMergeOperatorInfo* pInfo = (SMultiwayMergeOperatorInfo*)param; pInfo->binfo.pRes = blockDataDestroy(pInfo->binfo.pRes); pInfo->pInputBlock = blockDataDestroy(pInfo->pInputBlock); diff --git a/source/libs/executor/src/tfill.c b/source/libs/executor/src/tfill.c index 59dd58070d..f23552c5a7 100644 --- a/source/libs/executor/src/tfill.c +++ b/source/libs/executor/src/tfill.c @@ -36,6 +36,7 @@ #define GET_DEST_SLOT_ID(_p) ((_p)->pExpr->base.resSchema.slotId) static void doSetVal(SColumnInfoData* pDstColInfoData, int32_t rowIndex, const SGroupKeys* pKey); +static bool fillIfWindowPseudoColumn(SFillInfo* pFillInfo, SFillColInfo* pCol, SColumnInfoData* pDstColInfoData, int32_t rowIndex); static void setNullRow(SSDataBlock* pBlock, SFillInfo* pFillInfo, int32_t rowIndex) 
{ for(int32_t i = 0; i < pFillInfo->numOfCols; ++i) { @@ -43,9 +44,8 @@ static void setNullRow(SSDataBlock* pBlock, SFillInfo* pFillInfo, int32_t rowInd int32_t dstSlotId = GET_DEST_SLOT_ID(pCol); SColumnInfoData* pDstColInfo = taosArrayGet(pBlock->pDataBlock, dstSlotId); if (pCol->notFillCol) { - if (pDstColInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP) { - colDataAppend(pDstColInfo, rowIndex, (const char*)&pFillInfo->currentKey, false); - } else { + bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstColInfo, rowIndex); + if (!filled) { SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal; SGroupKeys* pKey = taosArrayGet(p, i); doSetVal(pDstColInfo, rowIndex, pKey); @@ -76,6 +76,35 @@ static void doSetUserSpecifiedValue(SColumnInfoData* pDst, SVariant* pVar, int32 } } +//fill windows pseudo column, _wstart, _wend, _wduration and return true, otherwise return false +static bool fillIfWindowPseudoColumn(SFillInfo* pFillInfo, SFillColInfo* pCol, SColumnInfoData* pDstColInfoData, int32_t rowIndex) { + if (!pCol->notFillCol) { + return false; + } + if (pCol->pExpr->pExpr->nodeType == QUERY_NODE_COLUMN) { + if (pCol->pExpr->base.numOfParams != 1) { + return false; + } + if (pCol->pExpr->base.pParam[0].pCol->colType == COLUMN_TYPE_WINDOW_START) { + colDataAppend(pDstColInfoData, rowIndex, (const char*)&pFillInfo->currentKey, false); + return true; + } else if (pCol->pExpr->base.pParam[0].pCol->colType == COLUMN_TYPE_WINDOW_END) { + //TODO: include endpoint + SInterval* pInterval = &pFillInfo->interval; + int32_t step = (pFillInfo->order == TSDB_ORDER_ASC) ? 1 : -1; + int64_t windowEnd = + taosTimeAdd(pFillInfo->currentKey, pInterval->sliding * step, pInterval->slidingUnit, pInterval->precision); + colDataAppend(pDstColInfoData, rowIndex, (const char*)&windowEnd, false); + return true; + } else if (pCol->pExpr->base.pParam[0].pCol->colType == COLUMN_TYPE_WINDOW_DURATION) { + //TODO: include endpoint + colDataAppend(pDstColInfoData, rowIndex, (const char*)&pFillInfo->interval.sliding, false); + return true; + } + } + return false; +} + static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock* pSrcBlock, int64_t ts, bool outOfBound) { SPoint point1, point2, point; @@ -92,10 +121,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock* SFillColInfo* pCol = &pFillInfo->pFillCol[i]; SColumnInfoData* pDstColInfoData = taosArrayGet(pBlock->pDataBlock, GET_DEST_SLOT_ID(pCol)); - - if (pDstColInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP) { - colDataAppend(pDstColInfoData, index, (const char*)&pFillInfo->currentKey, false); - } else { + bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstColInfoData, index); + if (!filled) { SGroupKeys* pKey = taosArrayGet(p, i); doSetVal(pDstColInfoData, index, pKey); } @@ -106,10 +133,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock* for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) { SFillColInfo* pCol = &pFillInfo->pFillCol[i]; SColumnInfoData* pDstColInfoData = taosArrayGet(pBlock->pDataBlock, GET_DEST_SLOT_ID(pCol)); - - if (pDstColInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP) { - colDataAppend(pDstColInfoData, index, (const char*)&pFillInfo->currentKey, false); - } else { + bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstColInfoData, index); + if (!filled) { SGroupKeys* pKey = taosArrayGet(p, i); doSetVal(pDstColInfoData, index, pKey); } @@ -127,9 +152,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, 
SSDataBlock* pBlock, SSDataBlock* int16_t type = pDstCol->info.type; if (pCol->notFillCol) { - if (type == TSDB_DATA_TYPE_TIMESTAMP) { - colDataAppend(pDstCol, index, (const char*)&pFillInfo->currentKey, false); - } else { + bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstCol, index); + if (!filled) { SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal; SGroupKeys* pKey = taosArrayGet(p, i); doSetVal(pDstCol, index, pKey); @@ -170,9 +194,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock* SColumnInfoData* pDst = taosArrayGet(pBlock->pDataBlock, slotId); if (pCol->notFillCol) { - if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) { - colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false); - } else { + bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDst, index); + if (!filled) { SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal; SGroupKeys* pKey = taosArrayGet(p, i); doSetVal(pDst, index, pKey); diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index 3d54b791a8..cadaf4a9d5 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -15,6 +15,7 @@ #include "executorimpl.h" #include "function.h" #include "functionMgt.h" +#include "tcommon.h" #include "tcompare.h" #include "tdatablock.h" #include "tfill.h" @@ -27,11 +28,6 @@ typedef enum SResultTsInterpType { #define IS_FINAL_OP(op) ((op)->isFinal) -typedef struct SWinRes { - TSKEY ts; - uint64_t groupId; -} SWinRes; - typedef struct SPullWindowInfo { STimeWindow window; uint64_t groupId; @@ -628,7 +624,7 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num int32_t ret = setTimeWindowOutputBuf(pResultRowInfo, &w, (scanFlag == MAIN_SCAN), &pResult, groupId, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } ASSERT(!isResultRowInterpolated(pResult, RESULT_ROW_END_INTERP)); @@ -641,7 +637,8 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num setResultRowInterpo(pResult, RESULT_ROW_END_INTERP); setNotInterpoWindowKey(pSup->pCtx, numOfExprs, RESULT_ROW_START_INTERP); - doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, 0, pBlock->info.rows, numOfExprs); + doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, 0, pBlock->info.rows, + numOfExprs); if (isResultRowInterpolated(pResult, RESULT_ROW_END_INTERP)) { closeResultRow(pr); @@ -812,7 +809,7 @@ static int32_t savePullWindow(SPullWindowInfo* pPullInfo, SArray* pPullWins) { int32_t compareResKey(void* pKey, void* data, int32_t index) { SArray* res = (SArray*)data; SResKeyPos* pos = taosArrayGetP(res, index); - SWinRes* pData = (SWinRes*)pKey; + SWinKey* pData = (SWinKey*)pKey; if (pData->ts == *(int64_t*)pos->key) { if (pData->groupId > pos->groupId) { return 1; @@ -828,7 +825,7 @@ int32_t compareResKey(void* pKey, void* data, int32_t index) { static int32_t saveResult(int64_t ts, int32_t pageId, int32_t offset, uint64_t groupId, SArray* pUpdated) { int32_t size = taosArrayGetSize(pUpdated); - SWinRes data = {.ts = ts, .groupId = groupId}; + SWinKey data = {.ts = ts, .groupId = groupId}; int32_t index = binarySearchCom(pUpdated, size, &data, 
TSDB_ORDER_DESC, compareResKey); if (index == -1) { index = 0; @@ -861,8 +858,8 @@ static int32_t saveWinResult(int64_t ts, int32_t pageId, int32_t offset, uint64_ newPos->groupId = groupId; newPos->pos = (SResultRowPosition){.pageId = pageId, .offset = offset}; *(int64_t*)newPos->key = ts; - SWinRes key = {.ts = ts, .groupId = groupId}; - if (taosHashPut(pUpdatedMap, &key, sizeof(SWinRes), &newPos, sizeof(void*)) != TSDB_CODE_SUCCESS) { + SWinKey key = {.ts = ts, .groupId = groupId}; + if (taosHashPut(pUpdatedMap, &key, sizeof(SWinKey), &newPos, sizeof(void*)) != TSDB_CODE_SUCCESS) { taosMemoryFree(newPos); } return TSDB_CODE_SUCCESS; @@ -879,20 +876,20 @@ static int32_t saveResultRow(SResultRow* result, uint64_t groupId, SArray* pUpda static void removeResults(SArray* pWins, SHashObj* pUpdatedMap) { int32_t size = taosArrayGetSize(pWins); for (int32_t i = 0; i < size; i++) { - SWinRes* pW = taosArrayGet(pWins, i); - taosHashRemove(pUpdatedMap, pW, sizeof(SWinRes)); + SWinKey* pW = taosArrayGet(pWins, i); + taosHashRemove(pUpdatedMap, pW, sizeof(SWinKey)); } } int64_t getWinReskey(void* data, int32_t index) { SArray* res = (SArray*)data; - SWinRes* pos = taosArrayGet(res, index); + SWinKey* pos = taosArrayGet(res, index); return pos->ts; } int32_t compareWinRes(void* pKey, void* data, int32_t index) { SArray* res = (SArray*)data; - SWinRes* pos = taosArrayGetP(res, index); + SWinKey* pos = taosArrayGetP(res, index); SResKeyPos* pData = (SResKeyPos*)pKey; if (*(int64_t*)pData->key == pos->ts) { if (pData->groupId > pos->groupId) { @@ -952,7 +949,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul ret = setTimeWindowOutputBuf(pResultRowInfo, &win, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM && pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) { @@ -975,7 +972,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul ret = setTimeWindowOutputBuf(pResultRowInfo, &win, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } // window start key interpolation @@ -985,8 +982,8 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul if ((!pInfo->ignoreExpiredData || !isCloseWindow(&win, &pInfo->twAggSup)) && inSlidingWindow(&pInfo->interval, &win, &pBlock->info)) { updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &win, true); - doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, - pBlock->info.rows, numOfOutput); + doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, pBlock->info.rows, + numOfOutput); } doCloseWindow(pResultRowInfo, pInfo, pResult); @@ -1009,7 +1006,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul int32_t code = setTimeWindowOutputBuf(pResultRowInfo, &nextWin, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (code != TSDB_CODE_SUCCESS || pResult == NULL) { - 
longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM && pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) { @@ -1025,8 +1022,8 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul doWindowBorderInterpolation(pInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pSup); updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true); - doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, - pBlock->info.rows, numOfOutput); + doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, pBlock->info.rows, + numOfOutput); doCloseWindow(pResultRowInfo, pInfo, pResult); } @@ -1185,7 +1182,7 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI int32_t ret = setTimeWindowOutputBuf(&pInfo->binfo.resultRowInfo, &window, masterScan, &pResult, gid, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS) { // null data, too many state code - longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); } updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &window, false); @@ -1210,12 +1207,12 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI int32_t ret = setTimeWindowOutputBuf(&pInfo->binfo.resultRowInfo, &pRowSup->win, masterScan, &pResult, gid, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS) { // null data, too many state code - longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); } updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pRowSup->win, false); - doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex, - pRowSup->numOfRows, pBlock->info.rows, numOfOutput); + doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex, pRowSup->numOfRows, + pBlock->info.rows, numOfOutput); } static SSDataBlock* doStateWindowAgg(SOperatorInfo* pOperator) { @@ -1418,7 +1415,7 @@ void doDeleteSpecifyIntervalWindow(SAggSupporter* pAggSup, SSDataBlock* pBlock, STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, tsStarts[i], pInterval, TSDB_ORDER_ASC); doDeleteIntervalWindow(pAggSup, win.skey, groupIds[i]); if (pUpWins) { - SWinRes winRes = {.ts = win.skey, .groupId = groupIds[i]}; + SWinKey winRes = {.ts = win.skey, .groupId = groupIds[i]}; taosArrayPush(pUpWins, &winRes); } } @@ -1445,7 +1442,7 @@ static void doClearWindows(SAggSupporter* pAggSup, SExprSupp* pSup1, SInterval* uint64_t winGpId = pGpDatas ? 
pGpDatas[startPos] : pBlock->info.groupId; bool res = doClearWindow(pAggSup, pSup1, (char*)&win.skey, sizeof(TSKEY), winGpId, numOfOutput); if (pUpWins && res) { - SWinRes winRes = {.ts = win.skey, .groupId = winGpId}; + SWinKey winRes = {.ts = win.skey, .groupId = winGpId}; taosArrayPush(pUpWins, &winRes); } getNextTimeWindow(pInterval, pInterval->precision, TSDB_ORDER_ASC, &win); @@ -1484,11 +1481,11 @@ static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup, STimeWindow win; win.skey = ts; win.ekey = taosTimeAdd(win.skey, pInterval->interval, pInterval->intervalUnit, pInterval->precision) - 1; - SWinRes winRe = { + SWinKey winRe = { .ts = win.skey, .groupId = groupId, }; - void* chIds = taosHashGet(pPullDataMap, &winRe, sizeof(SWinRes)); + void* chIds = taosHashGet(pPullDataMap, &winRe, sizeof(SWinKey)); if (isCloseWindow(&win, pSup)) { if (chIds && pPullDataMap) { SArray* chAy = *(SArray**)chIds; @@ -1555,7 +1552,7 @@ static void doBuildDeleteResult(SArray* pWins, int32_t* index, SSDataBlock* pBlo SColumnInfoData* pTsCol = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX); SColumnInfoData* pGroupCol = taosArrayGet(pBlock->pDataBlock, GROUPID_COLUMN_INDEX); for (int32_t i = *index; i < size; i++) { - SWinRes* pWin = taosArrayGet(pWins, i); + SWinKey* pWin = taosArrayGet(pWins, i); colDataAppend(pTsCol, pBlock->info.rows, (const char*)&pWin->ts, false); colDataAppend(pGroupCol, pBlock->info.rows, (const char*)&pWin->groupId, false); pBlock->info.rows++; @@ -1595,6 +1592,9 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) { SArray* pUpdated = taosArrayInit(4, POINTER_BYTES); // SResKeyPos _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); SHashObj* pUpdatedMap = taosHashInit(1024, hashFn, false, HASH_NO_LOCK); + + SStreamState* pState = pTaskInfo->streamInfo.pState; + while (1) { SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream); if (pBlock == NULL) { @@ -1639,6 +1639,35 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) { hashIntervalAgg(pOperator, &pInfo->binfo.resultRowInfo, pBlock, MAIN_SCAN, pUpdatedMap); } +#if 0 + if (pState) { + printf(">>>>>>>> stream read backend\n"); + SWinKey key = { + .ts = 1, + .groupId = 2, + }; + char* val = NULL; + int32_t sz; + if (streamStateGet(pState, &key, (void**)&val, &sz) < 0) { + ASSERT(0); + } + printf("stream read %s %d\n", val, sz); + streamFreeVal(val); + + SStreamStateCur* pCur = streamStateGetCur(pState, &key); + ASSERT(pCur); + while (streamStateCurNext(pState, pCur) == 0) { + SWinKey key1; + const void* val1; + if (streamStateGetKVByCur(pCur, &key1, &val1, &sz) < 0) { + break; + } + printf("stream iter key groupId:%d ts:%d, value %s %d\n", key1.groupId, key1.ts, val1, sz); + } + streamStateFreeCur(pCur); + } +#endif + pOperator->status = OP_RES_TO_RETURN; closeIntervalWindow(pInfo->aggSup.pResultRowHashTable, &pInfo->twAggSup, &pInfo->interval, NULL, pUpdatedMap, pInfo->pRecycledPages, pInfo->aggSup.pResultBuf); @@ -1664,7 +1693,7 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) { return pInfo->binfo.pRes->info.rows == 0 ? 
NULL : pInfo->binfo.pRes; } -static void destroyStateWindowOperatorInfo(void* param, int32_t numOfOutput) { +static void destroyStateWindowOperatorInfo(void* param) { SStateWindowOperatorInfo* pInfo = (SStateWindowOperatorInfo*)param; cleanupBasicInfo(&pInfo->binfo); taosMemoryFreeClear(pInfo->stateKey.pData); @@ -1677,7 +1706,7 @@ static void freeItem(void* param) { taosMemoryFree(pKey->pData); } -void destroyIntervalOperatorInfo(void* param, int32_t numOfOutput) { +void destroyIntervalOperatorInfo(void* param) { SIntervalAggOperatorInfo* pInfo = (SIntervalAggOperatorInfo*)param; cleanupBasicInfo(&pInfo->binfo); cleanupAggSup(&pInfo->aggSup); @@ -1694,7 +1723,7 @@ void destroyIntervalOperatorInfo(void* param, int32_t numOfOutput) { taosMemoryFreeClear(param); } -void destroyStreamFinalIntervalOperatorInfo(void* param, int32_t numOfOutput) { +void destroyStreamFinalIntervalOperatorInfo(void* param) { SStreamFinalIntervalOperatorInfo* pInfo = (SStreamFinalIntervalOperatorInfo*)param; cleanupBasicInfo(&pInfo->binfo); cleanupAggSup(&pInfo->aggSup); @@ -1711,7 +1740,7 @@ void destroyStreamFinalIntervalOperatorInfo(void* param, int32_t numOfOutput) { int32_t size = taosArrayGetSize(pInfo->pChildren); for (int32_t i = 0; i < size; i++) { SOperatorInfo* pChildOp = taosArrayGetP(pInfo->pChildren, i); - destroyStreamFinalIntervalOperatorInfo(pChildOp->info, numOfOutput); + destroyStreamFinalIntervalOperatorInfo(pChildOp->info); taosMemoryFree(pChildOp->pDownstream); cleanupExprSupp(&pChildOp->exprSupp); taosMemoryFreeClear(pChildOp); @@ -1836,6 +1865,10 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* initResultSizeInfo(&pOperator->resultInfo, 4096); int32_t code = initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + initBasicInfo(&pInfo->binfo, pResBlock); if (isStream) { @@ -1856,8 +1889,9 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* goto _error; } } + pInfo->pRecycledPages = taosArrayInit(4, sizeof(int32_t)); - pInfo->pDelWins = taosArrayInit(4, sizeof(SWinRes)); + pInfo->pDelWins = taosArrayInit(4, sizeof(SWinKey)); pInfo->delIndex = 0; pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT); initResultRowInfo(&pInfo->binfo.resultRowInfo); @@ -1885,7 +1919,7 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* return pOperator; _error: - destroyIntervalOperatorInfo(pInfo, numOfCols); + destroyIntervalOperatorInfo(pInfo); taosMemoryFreeClear(pOperator); pTaskInfo->code = code; return NULL; @@ -1918,8 +1952,8 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator if (gid != pRowSup->groupId || pInfo->winSup.prevTs == INT64_MIN) { doKeepNewWindowStartInfo(pRowSup, tsList, j, gid); doKeepTuple(pRowSup, tsList[j], gid); - } else if ((tsList[j] - pRowSup->prevTs >= 0) && tsList[j] - pRowSup->prevTs <= gap || - (pRowSup->prevTs - tsList[j] >= 0) && (pRowSup->prevTs - tsList[j] <= gap)) { + } else if (((tsList[j] - pRowSup->prevTs >= 0) && (tsList[j] - pRowSup->prevTs <= gap)) || + ((pRowSup->prevTs - tsList[j] >= 0) && (pRowSup->prevTs - tsList[j] <= gap))) { // The gap is less than the threshold, so it belongs to current session window that has been opened already. 
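/*
 * Editor's illustration (not part of the patch): the session-window condition rewritten
 * just above only adds explicit parentheses; its behavior is unchanged. The intent is
 * that a row joins the currently open session window when the absolute distance between
 * its timestamp and the previous row's timestamp is within the session gap. A minimal
 * sketch of that check, assuming TSKEY is the signed 64-bit timestamp type used elsewhere
 * in this file; the helper name belongsToOpenSession is hypothetical and is not defined
 * anywhere in the patch:
 */
static bool belongsToOpenSession(TSKEY ts, TSKEY prevTs, int64_t gap) {
  /* |ts - prevTs| without assuming which of the two timestamps is larger */
  int64_t delta = (ts >= prevTs) ? (ts - prevTs) : (prevTs - ts);
  return delta <= gap; /* within the gap threshold => same session window */
}
/*
 * Design note: the pre-patch expression relied on && binding tighter than ||; the patched
 * form states that grouping explicitly, which reads more clearly and silences the usual
 * compiler warning about mixing && and || without parentheses.
 */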
doKeepTuple(pRowSup, tsList[j], gid); if (j == 0 && pRowSup->startRowIndex != 0) { @@ -1935,7 +1969,7 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator int32_t ret = setTimeWindowOutputBuf(&pInfo->binfo.resultRowInfo, &window, masterScan, &pResult, gid, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS) { // null data, too many state code - longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); } // pInfo->numOfRows data belong to the current session window @@ -1954,12 +1988,12 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator int32_t ret = setTimeWindowOutputBuf(&pInfo->binfo.resultRowInfo, &pRowSup->win, masterScan, &pResult, gid, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS) { // null data, too many state code - longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR); } updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pRowSup->win, false); - doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex, - pRowSup->numOfRows, pBlock->info.rows, numOfOutput); + doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex, pRowSup->numOfRows, + pBlock->info.rows, numOfOutput); } static SSDataBlock* doSessionWindowAgg(SOperatorInfo* pOperator) { @@ -2112,6 +2146,7 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp // todo set the correct primary timestamp column // output the result + bool hasInterp = true; for (int32_t j = 0; j < pExprSup->numOfExprs; ++j) { SExprInfo* pExprInfo = &pExprSup->pExprInfo[j]; int32_t srcSlot = pExprInfo->base.pParam[0].pCol->slotId; @@ -2123,7 +2158,6 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp switch (pSliceInfo->fillType) { case TSDB_FILL_NULL: { colDataAppendNULL(pDst, rows); - pResBlock->info.rows += 1; break; } @@ -2143,7 +2177,6 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i); colDataAppend(pDst, rows, (char*)&v, false); } - pResBlock->info.rows += 1; break; } @@ -2157,6 +2190,7 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp // before interp range, do not fill if (start.key == INT64_MIN || end.key == INT64_MAX) { + hasInterp = false; break; } @@ -2168,28 +2202,27 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp } taosMemoryFree(current.val); - pResBlock->info.rows += 1; break; } case TSDB_FILL_PREV: { if (!pSliceInfo->isPrevRowSet) { + hasInterp = false; break; } SGroupKeys* pkey = taosArrayGet(pSliceInfo->pPrevRow, srcSlot); colDataAppend(pDst, rows, pkey->pData, false); - pResBlock->info.rows += 1; break; } case TSDB_FILL_NEXT: { if (!pSliceInfo->isNextRowSet) { + hasInterp = false; break; } SGroupKeys* pkey = taosArrayGet(pSliceInfo->pNextRow, srcSlot); colDataAppend(pDst, rows, pkey->pData, false); - pResBlock->info.rows += 1; break; } @@ -2198,6 +2231,11 @@ static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp break; } } + + if (hasInterp) { + pResBlock->info.rows += 1; + } + } static int32_t initPrevRowsKeeper(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock) { @@ -2342,7 +2380,7 @@ static SSDataBlock* doTimeslice(SOperatorInfo* 
pOperator) { int32_t code = initKeeperInfo(pSliceInfo, pBlock); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } // the pDataBlock are always the same one, no need to call this again @@ -2378,6 +2416,11 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) { SColumnInfoData* pSrc = taosArrayGet(pBlock->pDataBlock, srcSlot); SColumnInfoData* pDst = taosArrayGet(pResBlock->pDataBlock, dstSlot); + if (colDataIsNull_s(pSrc, i)) { + colDataAppendNULL(pDst, pResBlock->info.rows); + continue; + } + char* v = colDataGetData(pSrc, i); colDataAppend(pDst, pResBlock->info.rows, v, false); } @@ -2570,7 +2613,7 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) { return pResBlock->info.rows == 0 ? NULL : pResBlock; } -void destroyTimeSliceOperatorInfo(void* param, int32_t numOfOutput) { +void destroyTimeSliceOperatorInfo(void* param) { STimeSliceOperatorInfo* pInfo = (STimeSliceOperatorInfo*)param; pInfo->pRes = blockDataDestroy(pInfo->pRes); @@ -2678,7 +2721,11 @@ SOperatorInfo* createStatewindowOperatorInfo(SOperatorInfo* downstream, SExprInf size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES; initResultSizeInfo(&pOperator->resultInfo, 4096); - initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExpr, numOfCols, keyBufSize, pTaskInfo->id.str); + int32_t code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExpr, numOfCols, keyBufSize, pTaskInfo->id.str); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + initBasicInfo(&pInfo->binfo, pResBlock); initResultRowInfo(&pInfo->binfo.resultRowInfo); @@ -2699,18 +2746,27 @@ SOperatorInfo* createStatewindowOperatorInfo(SOperatorInfo* downstream, SExprInf pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doStateWindowAgg, NULL, NULL, destroyStateWindowOperatorInfo, aggEncodeResultRow, aggDecodeResultRow, NULL); - int32_t code = appendDownstream(pOperator, &downstream, 1); + code = appendDownstream(pOperator, &downstream, 1); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + return pOperator; _error: - pTaskInfo->code = TSDB_CODE_SUCCESS; + destroyStateWindowOperatorInfo(pInfo); + taosMemoryFreeClear(pOperator); + pTaskInfo->code = code; return NULL; } -void destroySWindowOperatorInfo(void* param, int32_t numOfOutput) { +void destroySWindowOperatorInfo(void* param) { SSessionAggOperatorInfo* pInfo = (SSessionAggOperatorInfo*)param; - cleanupBasicInfo(&pInfo->binfo); + if (pInfo == NULL) { + return; + } + cleanupBasicInfo(&pInfo->binfo); colDataDestroy(&pInfo->twAggSup.timeWindowData); cleanupAggSup(&pInfo->aggSup); @@ -2764,15 +2820,15 @@ SOperatorInfo* createSessionAggOperatorInfo(SOperatorInfo* downstream, SSessionW pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doSessionWindowAgg, NULL, NULL, destroySWindowOperatorInfo, aggEncodeResultRow, aggDecodeResultRow, NULL); pOperator->pTaskInfo = pTaskInfo; - code = appendDownstream(pOperator, &downstream, 1); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + return pOperator; _error: - if (pInfo != NULL) { - destroySWindowOperatorInfo(pInfo, numOfCols); - } - + destroySWindowOperatorInfo(pInfo); taosMemoryFreeClear(pOperator); pTaskInfo->code = code; return NULL; @@ -2790,7 +2846,7 @@ void compactFunctions(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int3 if (code != TSDB_CODE_SUCCESS) { qError("%s apply functions error, code: %s", GET_TASKID(pTaskInfo), tstrerror(code)); pTaskInfo->code = code; - longjmp(pTaskInfo->env, code); + T_LONG_JMP(pTaskInfo->env, code); } } } @@ 
-2811,7 +2867,7 @@ static void rebuildIntervalWindow(SStreamFinalIntervalOperatorInfo* pInfo, SExpr return; } for (int32_t i = 0; i < size; i++) { - SWinRes* pWinRes = taosArrayGet(pWinArray, i); + SWinKey* pWinRes = taosArrayGet(pWinArray, i); SResultRow* pCurResult = NULL; STimeWindow ParentWin = {.skey = pWinRes->ts, .ekey = pWinRes->ts + 1}; setTimeWindowOutputBuf(&pInfo->binfo.resultRowInfo, &ParentWin, true, &pCurResult, pWinRes->groupId, pSup->pCtx, @@ -2854,12 +2910,12 @@ int32_t getNexWindowPos(SInterval* pInterval, SDataBlockInfo* pBlockInfo, TSKEY* return getNextQualifiedWindow(pInterval, pNextWin, pBlockInfo, tsCols, prevEndPos, TSDB_ORDER_ASC); } -void addPullWindow(SHashObj* pMap, SWinRes* pWinRes, int32_t size) { +void addPullWindow(SHashObj* pMap, SWinKey* pWinRes, int32_t size) { SArray* childIds = taosArrayInit(8, sizeof(int32_t)); for (int32_t i = 0; i < size; i++) { taosArrayPush(childIds, &i); } - taosHashPut(pMap, pWinRes, sizeof(SWinRes), &childIds, sizeof(void*)); + taosHashPut(pMap, pWinRes, sizeof(SWinKey), &childIds, sizeof(void*)); } static int32_t getChildIndex(SSDataBlock* pBlock) { return pBlock->info.childId; } @@ -2906,11 +2962,11 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc } if (IS_FINAL_OP(pInfo) && isClosed && pInfo->pChildren) { bool ignore = true; - SWinRes winRes = { + SWinKey winRes = { .ts = nextWin.skey, .groupId = tableGroupId, }; - void* chIds = taosHashGet(pInfo->pPullDataMap, &winRes, sizeof(SWinRes)); + void* chIds = taosHashGet(pInfo->pPullDataMap, &winRes, sizeof(SWinKey)); if (isDeletedWindow(&nextWin, tableGroupId, &pInfo->aggSup) && !chIds) { SPullWindowInfo pull = {.window = nextWin, .groupId = tableGroupId}; // add pull data request @@ -2944,7 +3000,7 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc int32_t code = setTimeWindowOutputBuf(pResultRowInfo, &nextWin, true, &pResult, tableGroupId, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &pInfo->aggSup, pTaskInfo); if (code != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } if (IS_FINAL_OP(pInfo)) { @@ -3038,8 +3094,8 @@ void processPullOver(SSDataBlock* pBlock, SHashObj* pMap) { uint64_t* groupIdData = (uint64_t*)pGroupCol->pData; int32_t chId = getChildIndex(pBlock); for (int32_t i = 0; i < pBlock->info.rows; i++) { - SWinRes winRes = {.ts = tsData[i], .groupId = groupIdData[i]}; - void* chIds = taosHashGet(pMap, &winRes, sizeof(SWinRes)); + SWinKey winRes = {.ts = tsData[i], .groupId = groupIdData[i]}; + void* chIds = taosHashGet(pMap, &winRes, sizeof(SWinKey)); if (chIds) { SArray* chArray = *(SArray**)chIds; int32_t index = taosArraySearchIdx(chArray, &chId, compareInt32Val, TD_EQ); @@ -3048,7 +3104,7 @@ void processPullOver(SSDataBlock* pBlock, SHashObj* pMap) { taosArrayRemove(chArray, index); if (taosArrayGetSize(chArray) == 0) { // pull data is over - taosHashRemove(pMap, &winRes, sizeof(SWinRes)); + taosHashRemove(pMap, &winRes, sizeof(SWinKey)); } } } @@ -3132,7 +3188,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) { if (pBlock->info.type == STREAM_NORMAL || pBlock->info.type == STREAM_PULL_DATA) { pInfo->binfo.pRes->info.type = pBlock->info.type; } else if (pBlock->info.type == STREAM_CLEAR) { - SArray* pUpWins = taosArrayInit(8, sizeof(SWinRes)); + SArray* pUpWins = taosArrayInit(8, sizeof(SWinKey)); doClearWindows(&pInfo->aggSup, pSup, &pInfo->interval, 
pOperator->exprSupp.numOfExprs, pBlock, pUpWins); if (IS_FINAL_OP(pInfo)) { int32_t childIndex = getChildIndex(pBlock); @@ -3170,7 +3226,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) { getAllIntervalWindow(pInfo->aggSup.pResultRowHashTable, pUpdatedMap); continue; } else if (pBlock->info.type == STREAM_RETRIEVE && !IS_FINAL_OP(pInfo)) { - SArray* pUpWins = taosArrayInit(8, sizeof(SWinRes)); + SArray* pUpWins = taosArrayInit(8, sizeof(SWinKey)); doClearWindows(&pInfo->aggSup, pSup, &pInfo->interval, pOperator->exprSupp.numOfExprs, pBlock, pUpWins); removeResults(pUpWins, pUpdatedMap); taosArrayDestroy(pUpWins); @@ -3196,7 +3252,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) { for (int32_t i = 0; i < chIndex + 1 - size; i++) { SOperatorInfo* pChildOp = createStreamFinalIntervalOperatorInfo(NULL, pInfo->pPhyNode, pOperator->pTaskInfo, 0); if (!pChildOp) { - longjmp(pOperator->pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pOperator->pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } SStreamFinalIntervalOperatorInfo* pTmpInfo = pChildOp->info; pTmpInfo->twAggSup.calTrigger = STREAM_TRIGGER_AT_ONCE; @@ -3335,15 +3391,17 @@ SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream, SSDataBlock* pResBlock = createResDataBlock(pPhyNode->pOutputDataBlockDesc); int32_t code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + initStreamFunciton(pOperator->exprSupp.pCtx, pOperator->exprSupp.numOfExprs); initBasicInfo(&pInfo->binfo, pResBlock); ASSERT(numOfCols > 0); increaseTs(pOperator->exprSupp.pCtx); initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window); - if (code != TSDB_CODE_SUCCESS) { - goto _error; - } + initResultRowInfo(&pInfo->binfo.resultRowInfo); pInfo->pChildren = NULL; if (numOfChild > 0) { @@ -3385,7 +3443,7 @@ SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream, pInfo->ignoreExpiredData = pIntervalPhyNode->window.igExpired; pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT); pInfo->delIndex = 0; - pInfo->pDelWins = taosArrayInit(4, sizeof(SWinRes)); + pInfo->pDelWins = taosArrayInit(4, sizeof(SWinKey)); pInfo->pRecycledPages = taosArrayInit(4, sizeof(int32_t)); pOperator->operatorType = pPhyNode->type; @@ -3409,7 +3467,7 @@ SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream, return pOperator; _error: - destroyStreamFinalIntervalOperatorInfo(pInfo, numOfCols); + destroyStreamFinalIntervalOperatorInfo(pInfo); taosMemoryFreeClear(pOperator); pTaskInfo->code = code; return NULL; @@ -3447,7 +3505,7 @@ void destroyStateStreamAggSupporter(SStreamAggSupporter* pSup) { blockDataDestroy(pSup->pScanBlock); } -void destroyStreamSessionAggOperatorInfo(void* param, int32_t numOfOutput) { +void destroyStreamSessionAggOperatorInfo(void* param) { SStreamSessionAggOperatorInfo* pInfo = (SStreamSessionAggOperatorInfo*)param; cleanupBasicInfo(&pInfo->binfo); destroyStreamAggSupporter(&pInfo->streamAggSup); @@ -3457,7 +3515,7 @@ void destroyStreamSessionAggOperatorInfo(void* param, int32_t numOfOutput) { for (int32_t i = 0; i < size; i++) { SOperatorInfo* pChild = taosArrayGetP(pInfo->pChildren, i); SStreamSessionAggOperatorInfo* pChInfo = pChild->info; - destroyStreamSessionAggOperatorInfo(pChInfo, numOfOutput); + destroyStreamSessionAggOperatorInfo(pChInfo); taosMemoryFreeClear(pChild); } } @@ -3528,7 +3586,7 @@ 
SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, SPh if (pSessionNode->window.pExprs != NULL) { int32_t numOfScalar = 0; SExprInfo* pScalarExprInfo = createExprInfo(pSessionNode->window.pExprs, NULL, &numOfScalar); - int32_t code = initExprSupp(&pInfo->scalarSupp, pScalarExprInfo, numOfScalar); + code = initExprSupp(&pInfo->scalarSupp, pScalarExprInfo, numOfScalar); if (code != TSDB_CODE_SUCCESS) { goto _error; } @@ -3592,7 +3650,7 @@ SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, SPh _error: if (pInfo != NULL) { - destroyStreamSessionAggOperatorInfo(pInfo, numOfCols); + destroyStreamSessionAggOperatorInfo(pInfo); } taosMemoryFreeClear(pOperator); @@ -3720,8 +3778,8 @@ int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs, TS } if (pWinInfo->win.skey > pStartTs[i]) { if (pStDeleted && pWinInfo->isOutput) { - SWinRes res = {.ts = pWinInfo->win.skey, .groupId = groupId}; - taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinRes)); + SWinKey res = {.ts = pWinInfo->win.skey, .groupId = groupId}; + taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinKey)); pWinInfo->isOutput = false; } pWinInfo->win.skey = pStartTs[i]; @@ -3741,7 +3799,7 @@ static int32_t setWindowOutputBuf(SResultWindowInfo* pWinInfo, SResultRow** pRes // too many time window in query int32_t size = taosArrayGetSize(pAggSup->pCurWins); if (pTaskInfo->execModel == OPTR_EXEC_MODEL_BATCH && size > MAX_INTERVAL_TIME_WINDOW) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW); } if (pWinInfo->pos.pageId == -1) { @@ -3839,8 +3897,8 @@ void compactTimeWindow(SStreamSessionAggOperatorInfo* pInfo, int32_t startIndex, compactFunctions(pSup->pCtx, pInfo->pDummyCtx, numOfOutput, pTaskInfo); taosHashRemove(pStUpdated, &pWinInfo->pos, sizeof(SResultRowPosition)); if (pWinInfo->isOutput) { - SWinRes res = {.ts = pWinInfo->win.skey, .groupId = groupId}; - taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinRes)); + SWinKey res = {.ts = pWinInfo->win.skey, .groupId = groupId}; + taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinKey)); pWinInfo->isOutput = false; } taosArrayRemove(pInfo->streamAggSup.pCurWins, i); @@ -3893,7 +3951,7 @@ static void doStreamSessionAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSData pStDeleted); code = doOneWindowAgg(pInfo, pSDataBlock, pCurWin, &pResult, i, winRows, numOfOutput, pOperator); if (code != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } int32_t winNum = getNumCompactWindow(pAggSup->pCurWins, winIndex, gap); @@ -3902,10 +3960,10 @@ static void doStreamSessionAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSData } pCurWin->isClosed = false; if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE && pStUpdated) { - SWinRes value = {.ts = pCurWin->win.skey, .groupId = groupId}; - code = taosHashPut(pStUpdated, &pCurWin->pos, sizeof(SResultRowPosition), &value, sizeof(SWinRes)); + SWinKey value = {.ts = pCurWin->win.skey, .groupId = groupId}; + code = taosHashPut(pStUpdated, &pCurWin->pos, sizeof(SResultRowPosition), &value, sizeof(SWinKey)); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } pCurWin->isOutput = 
true; } @@ -3955,8 +4013,7 @@ static void doClearSessionWindows(SStreamAggSupporter* pAggSup, SExprSupp* pSup, int32_t step = 0; for (int32_t i = 0; i < pBlock->info.rows; i += step) { int32_t winIndex = 0; - SResultWindowInfo* pCurWin = - getCurSessionWindow(pAggSup, tsCols[i], INT64_MIN, gpCols[i], gap, &winIndex); + SResultWindowInfo* pCurWin = getCurSessionWindow(pAggSup, tsCols[i], INT64_MIN, gpCols[i], gap, &winIndex); if (!pCurWin || pCurWin->pos.pageId == -1) { // window has been closed. step = 1; @@ -3981,9 +4038,9 @@ static int32_t copyUpdateResult(SHashObj* pStUpdated, SArray* pUpdated) { if (pos == NULL) { return TSDB_CODE_QRY_OUT_OF_MEMORY; } - pos->groupId = ((SWinRes*)pData)->groupId; + pos->groupId = ((SWinKey*)pData)->groupId; pos->pos = *(SResultRowPosition*)key; - *(int64_t*)pos->key = ((SWinRes*)pData)->ts; + *(int64_t*)pos->key = ((SWinKey*)pData)->ts; taosArrayPush(pUpdated, &pos); } taosArraySort(pUpdated, resultrowComparAsc); @@ -3999,7 +4056,7 @@ void doBuildDeleteDataBlock(SHashObj* pStDeleted, SSDataBlock* pBlock, void** It blockDataEnsureCapacity(pBlock, size); size_t keyLen = 0; while (((*Ite) = taosHashIterate(pStDeleted, *Ite)) != NULL) { - SWinRes* res = *Ite; + SWinKey* res = *Ite; SColumnInfoData* pTsCol = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX); colDataAppend(pTsCol, pBlock->info.rows, (const char*)&res->ts, false); SColumnInfoData* pGpCol = taosArrayGet(pBlock->pDataBlock, GROUPID_COLUMN_INDEX); @@ -4130,8 +4187,8 @@ static void copyDeleteWindowInfo(SArray* pResWins, SHashObj* pStDeleted) { int32_t size = taosArrayGetSize(pResWins); for (int32_t i = 0; i < size; i++) { SResultWindowInfo* pWinInfo = taosArrayGet(pResWins, i); - SWinRes res = {.ts = pWinInfo->win.skey, .groupId = pWinInfo->groupId}; - taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinRes)); + SWinKey res = {.ts = pWinInfo->win.skey, .groupId = pWinInfo->groupId}; + taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinKey)); } } @@ -4169,14 +4226,14 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) { if (pBlock->info.type == STREAM_CLEAR) { SArray* pWins = taosArrayInit(16, sizeof(SResultWindowInfo)); - doClearSessionWindows(&pInfo->streamAggSup, &pOperator->exprSupp, pBlock, START_TS_COLUMN_INDEX, pOperator->exprSupp.numOfExprs, 0, - pWins); + doClearSessionWindows(&pInfo->streamAggSup, &pOperator->exprSupp, pBlock, START_TS_COLUMN_INDEX, + pOperator->exprSupp.numOfExprs, 0, pWins); if (IS_FINAL_OP(pInfo)) { int32_t childIndex = getChildIndex(pBlock); SOperatorInfo* pChildOp = taosArrayGetP(pInfo->pChildren, childIndex); SStreamSessionAggOperatorInfo* pChildInfo = pChildOp->info; - doClearSessionWindows(&pChildInfo->streamAggSup, &pChildOp->exprSupp, pBlock, START_TS_COLUMN_INDEX, pChildOp->exprSupp.numOfExprs, - 0, NULL); + doClearSessionWindows(&pChildInfo->streamAggSup, &pChildOp->exprSupp, pBlock, START_TS_COLUMN_INDEX, + pChildOp->exprSupp.numOfExprs, 0, NULL); rebuildTimeWindow(pInfo, pWins, pBlock->info.groupId, pOperator->exprSupp.numOfExprs, pOperator); } taosArrayDestroy(pWins); @@ -4216,7 +4273,7 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) { SOperatorInfo* pChildOp = createStreamFinalSessionAggOperatorInfo(NULL, pInfo->pPhyNode, pOperator->pTaskInfo, 0); if (!pChildOp) { - longjmp(pOperator->pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pOperator->pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } taosArrayPush(pInfo->pChildren, &pChildOp); } @@ 
-4422,7 +4479,7 @@ SOperatorInfo* createStreamFinalSessionAggOperatorInfo(SOperatorInfo* downstream _error: if (pInfo != NULL) { - destroyStreamSessionAggOperatorInfo(pInfo, pOperator->exprSupp.numOfExprs); + destroyStreamSessionAggOperatorInfo(pInfo); } taosMemoryFreeClear(pOperator); @@ -4430,7 +4487,7 @@ _error: return NULL; } -void destroyStreamStateOperatorInfo(void* param, int32_t numOfOutput) { +void destroyStreamStateOperatorInfo(void* param) { SStreamStateAggOperatorInfo* pInfo = (SStreamStateAggOperatorInfo*)param; cleanupBasicInfo(&pInfo->binfo); destroyStateStreamAggSupporter(&pInfo->streamAggSup); @@ -4440,7 +4497,7 @@ void destroyStreamStateOperatorInfo(void* param, int32_t numOfOutput) { for (int32_t i = 0; i < size; i++) { SOperatorInfo* pChild = taosArrayGetP(pInfo->pChildren, i); SStreamSessionAggOperatorInfo* pChInfo = pChild->info; - destroyStreamSessionAggOperatorInfo(pChInfo, numOfOutput); + destroyStreamSessionAggOperatorInfo(pChInfo); taosMemoryFreeClear(pChild); taosMemoryFreeClear(pChInfo); } @@ -4578,7 +4635,8 @@ SStateWindowInfo* getStateWindow(SStreamAggSupporter* pAggSup, TSKEY ts, uint64_ } int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, uint64_t groupId, - SColumnInfoData* pKeyCol, int32_t rows, int32_t start, bool* allEqual, SHashObj* pSeDeleted) { + SColumnInfoData* pKeyCol, int32_t rows, int32_t start, bool* allEqual, + SHashObj* pSeDeleted) { *allEqual = true; SStateWindowInfo* pWinInfo = taosArrayGet(pWinInfos, winIndex); for (int32_t i = start; i < rows; ++i) { @@ -4599,9 +4657,8 @@ int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, u } if (pWinInfo->winInfo.win.skey > pTs[i]) { if (pSeDeleted && pWinInfo->winInfo.isOutput) { - SWinRes res = {.ts = pWinInfo->winInfo.win.skey, .groupId = groupId}; - taosHashPut(pSeDeleted, &pWinInfo->winInfo.pos, sizeof(SResultRowPosition), &res, - sizeof(SWinRes)); + SWinKey res = {.ts = pWinInfo->winInfo.win.skey, .groupId = groupId}; + taosHashPut(pSeDeleted, &pWinInfo->winInfo.pos, sizeof(SResultRowPosition), &res, sizeof(SWinKey)); pWinInfo->winInfo.isOutput = false; } pWinInfo->winInfo.win.skey = pTs[i]; @@ -4614,14 +4671,14 @@ int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, u return rows - start; } -static void doClearStateWindows(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock, - SHashObj* pSeUpdated, SHashObj* pSeDeleted) { +static void doClearStateWindows(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock, SHashObj* pSeUpdated, + SHashObj* pSeDeleted) { SColumnInfoData* pTsColInfo = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX); SColumnInfoData* pGroupColInfo = taosArrayGet(pBlock->pDataBlock, GROUPID_COLUMN_INDEX); TSKEY* tsCol = (TSKEY*)pTsColInfo->pData; bool allEqual = false; int32_t step = 1; - uint64_t* gpCol = (uint64_t*) pGroupColInfo->pData; + uint64_t* gpCol = (uint64_t*)pGroupColInfo->pData; for (int32_t i = 0; i < pBlock->info.rows; i += step) { int32_t winIndex = 0; SStateWindowInfo* pCurWin = getStateWindowByTs(pAggSup, tsCol[i], gpCol[i], &winIndex); @@ -4665,27 +4722,26 @@ static void doStreamStateAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBl char* pKeyData = colDataGetData(pKeyColInfo, i); int32_t winIndex = 0; bool allEqual = true; - SStateWindowInfo* pCurWin = - getStateWindow(pAggSup, tsCols[i], groupId, pKeyData, &pInfo->stateCol, &winIndex); - winRows = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCols, groupId, pKeyColInfo, - pSDataBlock->info.rows, i, &allEqual, 
pStDeleted); + SStateWindowInfo* pCurWin = getStateWindow(pAggSup, tsCols[i], groupId, pKeyData, &pInfo->stateCol, &winIndex); + winRows = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCols, groupId, pKeyColInfo, pSDataBlock->info.rows, + i, &allEqual, pStDeleted); if (!allEqual) { - appendOneRow(pAggSup->pScanBlock, &pCurWin->winInfo.win.skey, &pCurWin->winInfo.win.ekey, - GROUPID_COLUMN_INDEX, &groupId); + appendOneRow(pAggSup->pScanBlock, &pCurWin->winInfo.win.skey, &pCurWin->winInfo.win.ekey, GROUPID_COLUMN_INDEX, + &groupId); taosHashRemove(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition)); deleteWindow(pAggSup->pCurWins, winIndex, destroyStateWinInfo); continue; } code = doOneStateWindowAgg(pInfo, pSDataBlock, &pCurWin->winInfo, &pResult, i, winRows, numOfOutput, pOperator); if (code != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } pCurWin->winInfo.isClosed = false; if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) { - SWinRes value = {.ts = pCurWin->winInfo.win.skey, .groupId = groupId}; - code = taosHashPut(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition), &value, sizeof(SWinRes)); + SWinKey value = {.ts = pCurWin->winInfo.win.skey, .groupId = groupId}; + code = taosHashPut(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition), &value, sizeof(SWinKey)); if (code != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } pCurWin->winInfo.isOutput = true; } @@ -4856,16 +4912,15 @@ SOperatorInfo* createStreamStateAggOperatorInfo(SOperatorInfo* downstream, SPhys return pOperator; _error: - destroyStreamStateOperatorInfo(pInfo, numOfCols); + destroyStreamStateOperatorInfo(pInfo); taosMemoryFreeClear(pOperator); pTaskInfo->code = code; return NULL; } -void destroyMergeAlignedIntervalOperatorInfo(void* param, int32_t numOfOutput) { +void destroyMergeAlignedIntervalOperatorInfo(void* param) { SMergeAlignedIntervalAggOperatorInfo* miaInfo = (SMergeAlignedIntervalAggOperatorInfo*)param; - destroyIntervalOperatorInfo(miaInfo->intervalAggOperatorInfo, numOfOutput); - + destroyIntervalOperatorInfo(miaInfo->intervalAggOperatorInfo); taosMemoryFreeClear(param); } @@ -4928,7 +4983,7 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR int32_t ret = setTimeWindowOutputBuf(pResultRowInfo, &win, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &iaInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } int32_t currPos = startPos; @@ -4955,7 +5010,7 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR ret = setTimeWindowOutputBuf(pResultRowInfo, &currWin, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pSup->pCtx, numOfOutput, pSup->rowEntryInfoOffset, &iaInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } miaInfo->curTs = currWin.skey; @@ -5093,8 +5148,11 @@ SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream, int32_t code = initAggInfo(&pOperator->exprSupp, &iaInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str); - 
initBasicInfo(&iaInfo->binfo, pResBlock); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + initBasicInfo(&iaInfo->binfo, pResBlock); initExecTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &iaInfo->win); iaInfo->timeWindowInterpo = timeWindowinterpNeeded(pSup->pCtx, numOfCols, iaInfo); @@ -5102,10 +5160,6 @@ SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream, iaInfo->binfo.resultRowInfo.openWindow = tdListNew(sizeof(SResultRowPosition)); } - if (code != TSDB_CODE_SUCCESS) { - goto _error; - } - initResultRowInfo(&iaInfo->binfo.resultRowInfo); blockDataEnsureCapacity(iaInfo->binfo.pRes, pOperator->resultInfo.capacity); @@ -5129,7 +5183,7 @@ SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream, return pOperator; _error: - destroyMergeAlignedIntervalOperatorInfo(miaInfo, numOfCols); + destroyMergeAlignedIntervalOperatorInfo(miaInfo); taosMemoryFreeClear(pOperator); pTaskInfo->code = code; return NULL; @@ -5152,10 +5206,10 @@ typedef struct SGroupTimeWindow { STimeWindow window; } SGroupTimeWindow; -void destroyMergeIntervalOperatorInfo(void* param, int32_t numOfOutput) { +void destroyMergeIntervalOperatorInfo(void* param) { SMergeIntervalAggOperatorInfo* miaInfo = (SMergeIntervalAggOperatorInfo*)param; tdListFree(miaInfo->groupIntervals); - destroyIntervalOperatorInfo(&miaInfo->intervalAggOperatorInfo, numOfOutput); + destroyIntervalOperatorInfo(&miaInfo->intervalAggOperatorInfo); taosMemoryFreeClear(param); } @@ -5230,7 +5284,7 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo* setTimeWindowOutputBuf(pResultRowInfo, &win, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pExprSup->pCtx, numOfOutput, pExprSup->rowEntryInfoOffset, &iaInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } TSKEY ekey = ascScan ? win.ekey : win.skey; @@ -5247,7 +5301,7 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo* ret = setTimeWindowOutputBuf(pResultRowInfo, &win, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pExprSup->pCtx, numOfOutput, pExprSup->rowEntryInfoOffset, &iaInfo->aggSup, pTaskInfo); if (ret != TSDB_CODE_SUCCESS) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } // window start key interpolation @@ -5276,7 +5330,7 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo* setTimeWindowOutputBuf(pResultRowInfo, &nextWin, (scanFlag == MAIN_SCAN), &pResult, tableGroupId, pExprSup->pCtx, numOfOutput, pExprSup->rowEntryInfoOffset, &iaInfo->aggSup, pTaskInfo); if (code != TSDB_CODE_SUCCESS || pResult == NULL) { - longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); } ekey = ascScan ? 
nextWin.ekey : nextWin.skey; @@ -5399,8 +5453,11 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SExprI initResultSizeInfo(&pOperator->resultInfo, 4096); int32_t code = initAggInfo(pExprSupp, &iaInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str); - initBasicInfo(&iaInfo->binfo, pResBlock); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + initBasicInfo(&iaInfo->binfo, pResBlock); initExecTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &iaInfo->win); iaInfo->timeWindowInterpo = timeWindowinterpNeeded(pExprSupp->pCtx, numOfCols, iaInfo); @@ -5433,7 +5490,7 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SExprI return pOperator; _error: - destroyMergeIntervalOperatorInfo(miaInfo, numOfCols); + destroyMergeIntervalOperatorInfo(miaInfo); taosMemoryFreeClear(pOperator); pTaskInfo->code = code; return NULL; diff --git a/source/libs/executor/src/tlinearhash.c b/source/libs/executor/src/tlinearhash.c index ad97d79f7e..e0752840db 100644 --- a/source/libs/executor/src/tlinearhash.c +++ b/source/libs/executor/src/tlinearhash.c @@ -26,7 +26,7 @@ typedef struct SLHashBucket { int32_t size; // the number of element in this entry } SLHashBucket; -typedef struct SLHashObj { +struct SLHashObj { SDiskbasedBuf *pBuf; _hash_fn_t hashFn; SLHashBucket **pBucket; // entry list @@ -35,7 +35,7 @@ typedef struct SLHashObj { int32_t bits; // the number of bits used in hash int32_t numOfBuckets; // the number of buckets int64_t size; // the number of total items -} SLHashObj; +}; /** * the data struct for each hash node @@ -99,7 +99,7 @@ static int32_t doAddToBucket(SLHashObj* pHashObj, SLHashBucket* pBucket, int32_t int32_t newPageId = -1; SFilePage* pNewPage = getNewBufPage(pHashObj->pBuf, 0, &newPageId); if (pNewPage == NULL) { - return TSDB_CODE_OUT_OF_MEMORY; + return terrno; } taosArrayPush(pBucket->pPageIdList, &newPageId); @@ -138,7 +138,6 @@ static void doRemoveFromBucket(SFilePage* pPage, SLHashNode* pNode, SLHashBucket } setBufPageDirty(pPage, true); - pBucket->size -= 1; } @@ -229,6 +228,10 @@ static int32_t doAddNewBucket(SLHashObj* pHashObj) { int32_t pageId = -1; SFilePage* p = getNewBufPage(pHashObj->pBuf, 0, &pageId); + if (p == NULL) { + return terrno; + } + p->num = sizeof(SFilePage); setBufPageDirty(p, true); @@ -252,7 +255,8 @@ SLHashObj* tHashInit(int32_t inMemPages, int32_t pageSize, _hash_fn_t fn, int32_ printf("tHash Init failed since %s", terrstr(terrno)); return NULL; } - int32_t code = createDiskbasedBuf(&pHashObj->pBuf, pageSize, inMemPages * pageSize, 0, tsTempDir); + + int32_t code = createDiskbasedBuf(&pHashObj->pBuf, pageSize, inMemPages * pageSize, "", tsTempDir); if (code != 0) { terrno = code; return NULL; @@ -389,7 +393,9 @@ char* tHashGet(SLHashObj* pHashObj, const void *key, size_t keyLen) { } SLHashBucket* pBucket = pHashObj->pBucket[bucketId]; - for (int32_t i = 0; i < taosArrayGetSize(pBucket->pPageIdList); ++i) { + int32_t num = taosArrayGetSize(pBucket->pPageIdList); + + for (int32_t i = 0; i < num; ++i) { int32_t pageId = *(int32_t*)taosArrayGet(pBucket->pPageIdList, i); SFilePage* p = getBufPage(pHashObj->pBuf, pageId); diff --git a/source/libs/executor/test/executorTests.cpp b/source/libs/executor/test/executorTests.cpp index bba4b254c5..1c42163349 100644 --- a/source/libs/executor/test/executorTests.cpp +++ b/source/libs/executor/test/executorTests.cpp @@ -26,7 +26,6 @@ #include "executor.h" #include "executorimpl.h" #include "function.h" -#include "stub.h" #include "taos.h" #include 
"tdatablock.h" #include "tdef.h" diff --git a/source/libs/executor/test/lhashTests.cpp b/source/libs/executor/test/lhashTests.cpp index 695552faa0..c9b75395bc 100644 --- a/source/libs/executor/test/lhashTests.cpp +++ b/source/libs/executor/test/lhashTests.cpp @@ -26,40 +26,47 @@ TEST(testCase, linear_hash_Tests) { taosSeedRand(taosGetTimestampSec()); + strcpy(tsTempDir, "/tmp/"); _hash_fn_t fn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT); -#if 0 - SLHashObj* pHashObj = tHashInit(256, 4096, fn, 320); - for(int32_t i = 0; i < 5000000; ++i) { + + int64_t st = taosGetTimestampUs(); + + SLHashObj* pHashObj = tHashInit(4098*4*2, 512, fn, 40); + for(int32_t i = 0; i < 1000000; ++i) { int32_t code = tHashPut(pHashObj, &i, sizeof(i), &i, sizeof(i)); assert(code == 0); } // tHashPrint(pHashObj, LINEAR_HASH_STATIS); + int64_t et = taosGetTimestampUs(); -// for(int32_t i = 0; i < 10000; ++i) { -// char* v = tHashGet(pHashObj, &i, sizeof(i)); -// if (v != NULL) { -//// printf("find value: %d, key:%d\n", *(int32_t*) v, i); -// } else { -// printf("failed to found key:%d in hash\n", i); -// } -// } - - tHashPrint(pHashObj, LINEAR_HASH_STATIS); - tHashCleanup(pHashObj); -#endif - -#if 0 - SHashObj* pHashObj = taosHashInit(1000, fn, false, HASH_NO_LOCK); for(int32_t i = 0; i < 1000000; ++i) { - taosHashPut(pHashObj, &i, sizeof(i), &i, sizeof(i)); + if (i == 950000) { + printf("kf\n"); + } + char* v = tHashGet(pHashObj, &i, sizeof(i)); + if (v != NULL) { +// printf("find value: %d, key:%d\n", *(int32_t*) v, i); + } else { +// printf("failed to found key:%d in hash\n", i); + } } - for(int32_t i = 0; i < 10000; ++i) { - void* v = taosHashGet(pHashObj, &i, sizeof(i)); - } - taosHashCleanup(pHashObj); -#endif +// tHashPrint(pHashObj, LINEAR_HASH_STATIS); + tHashCleanup(pHashObj); + int64_t et1 = taosGetTimestampUs(); + SHashObj* pHashObj1 = taosHashInit(1000, fn, false, HASH_NO_LOCK); + for(int32_t i = 0; i < 1000000; ++i) { + taosHashPut(pHashObj1, &i, sizeof(i), &i, sizeof(i)); + } + + for(int32_t i = 0; i < 1000000; ++i) { + void* v = taosHashGet(pHashObj1, &i, sizeof(i)); + } + taosHashCleanup(pHashObj1); + + int64_t et2 = taosGetTimestampUs(); + printf("linear hash time:%.2f ms, buildHash:%.2f ms, hash:%.2f\n", (et1-st)/1000.0, (et-st)/1000.0, (et2-et1)/1000.0); } \ No newline at end of file diff --git a/source/libs/executor/test/sortTests.cpp b/source/libs/executor/test/sortTests.cpp index 6e244152f2..4ac15670ac 100644 --- a/source/libs/executor/test/sortTests.cpp +++ b/source/libs/executor/test/sortTests.cpp @@ -27,7 +27,6 @@ #include "executorimpl.h" #include "executor.h" -#include "stub.h" #include "taos.h" #include "tdatablock.h" #include "tdef.h" @@ -196,7 +195,7 @@ int32_t docomp(const void* p1, const void* p2, void* param) { } } // namespace -#if 1 +#if 0 TEST(testCase, inMem_sort_Test) { SBlockOrderInfo oi = {0}; oi.order = TSDB_ORDER_ASC; @@ -382,7 +381,7 @@ TEST(testCase, ordered_merge_sort_Test) { } void* v = tsortGetValue(pTupleHandle, 0); - printf("%d: %d\n", row, *(int32_t*) v); +// printf("%d: %d\n", row, *(int32_t*) v); ASSERT_EQ(row++, *(int32_t*) v); } diff --git a/source/libs/index/src/indexFilter.c b/source/libs/index/src/indexFilter.c index 21aeaba70b..75844ce76f 100644 --- a/source/libs/index/src/indexFilter.c +++ b/source/libs/index/src/indexFilter.c @@ -255,6 +255,13 @@ static int32_t sifInitOperParams(SIFParam **params, SOperatorNode *node, SIFCtx if (node->opType == OP_TYPE_JSON_GET_VALUE) { return code; } + if ((node->pLeft != NULL && nodeType(node->pLeft) == 
QUERY_NODE_COLUMN) && + (node->pRight != NULL && nodeType(node->pRight) == QUERY_NODE_VALUE)) { + SColumnNode *cn = (SColumnNode *)(node->pLeft); + if (cn->node.resType.type == TSDB_DATA_TYPE_JSON) { + SIF_ERR_RET(TSDB_CODE_QRY_INVALID_INPUT); + } + } SIFParam *paramList = taosMemoryCalloc(nParam, sizeof(SIFParam)); if (NULL == paramList) { diff --git a/source/libs/nodes/src/nodesCloneFuncs.c b/source/libs/nodes/src/nodesCloneFuncs.c index 9390d129df..83bccbffb4 100644 --- a/source/libs/nodes/src/nodesCloneFuncs.c +++ b/source/libs/nodes/src/nodesCloneFuncs.c @@ -545,6 +545,7 @@ static int32_t physiSysTableScanCopy(const SSystemTableScanPhysiNode* pSrc, SSys COPY_OBJECT_FIELD(mgmtEpSet, sizeof(SEpSet)); COPY_SCALAR_FIELD(showRewrite); COPY_SCALAR_FIELD(accountId); + COPY_SCALAR_FIELD(sysInfo); return TSDB_CODE_SUCCESS; } diff --git a/source/libs/nodes/src/nodesCodeFuncs.c b/source/libs/nodes/src/nodesCodeFuncs.c index 0f32001c47..822bdec365 100644 --- a/source/libs/nodes/src/nodesCodeFuncs.c +++ b/source/libs/nodes/src/nodesCodeFuncs.c @@ -1654,6 +1654,7 @@ static int32_t jsonToPhysiTableScanNode(const SJson* pJson, void* pObj) { static const char* jkSysTableScanPhysiPlanMnodeEpSet = "MnodeEpSet"; static const char* jkSysTableScanPhysiPlanShowRewrite = "ShowRewrite"; static const char* jkSysTableScanPhysiPlanAccountId = "AccountId"; +static const char* jkSysTableScanPhysiPlanSysInfo = "SysInfo"; static int32_t physiSysTableScanNodeToJson(const void* pObj, SJson* pJson) { const SSystemTableScanPhysiNode* pNode = (const SSystemTableScanPhysiNode*)pObj; @@ -1668,6 +1669,9 @@ static int32_t physiSysTableScanNodeToJson(const void* pObj, SJson* pJson) { if (TSDB_CODE_SUCCESS == code) { code = tjsonAddIntegerToObject(pJson, jkSysTableScanPhysiPlanAccountId, pNode->accountId); } + if (TSDB_CODE_SUCCESS == code) { + code = tjsonAddBoolToObject(pJson, jkSysTableScanPhysiPlanSysInfo, pNode->sysInfo); + } return code; } @@ -1684,7 +1688,9 @@ static int32_t jsonToPhysiSysTableScanNode(const SJson* pJson, void* pObj) { } if (TSDB_CODE_SUCCESS == code) { tjsonGetNumberValue(pJson, jkSysTableScanPhysiPlanAccountId, pNode->accountId, code); - ; + } + if (TSDB_CODE_SUCCESS == code) { + code = tjsonGetBoolValue(pJson, jkSysTableScanPhysiPlanSysInfo, &pNode->sysInfo); } return code; diff --git a/source/libs/nodes/src/nodesToSQLFuncs.c b/source/libs/nodes/src/nodesToSQLFuncs.c index e521c57c3d..9325d02886 100644 --- a/source/libs/nodes/src/nodesToSQLFuncs.c +++ b/source/libs/nodes/src/nodesToSQLFuncs.c @@ -135,7 +135,12 @@ int32_t nodesNodeToSQL(SNode *pNode, char *buf, int32_t bufSize, int32_t *len) { NODES_ERR_RET(TSDB_CODE_QRY_APP_ERROR); } - *len += snprintf(buf + *len, bufSize - *len, "%s", t); + int32_t tlen = strlen(t); + if (tlen > 32) { + *len += snprintf(buf + *len, bufSize - *len, "%.*s...%s", 32, t, t + tlen - 1); + } else { + *len += snprintf(buf + *len, bufSize - *len, "%s", t); + } taosMemoryFree(t); return TSDB_CODE_SUCCESS; @@ -199,12 +204,17 @@ int32_t nodesNodeToSQL(SNode *pNode, char *buf, int32_t bufSize, int32_t *len) { SNodeListNode *pListNode = (SNodeListNode *)pNode; SNode *node = NULL; bool first = true; + int32_t num = 0; *len += snprintf(buf + *len, bufSize - *len, "("); FOREACH(node, pListNode->pNodeList) { if (!first) { *len += snprintf(buf + *len, bufSize - *len, ", "); + if (++num >= 10) { + *len += snprintf(buf + *len, bufSize - *len, "..."); + break; + } } NODES_ERR_RET(nodesNodeToSQL(node, buf, bufSize, len)); first = false; diff --git 
a/source/libs/parser/src/parAuthenticator.c b/source/libs/parser/src/parAuthenticator.c index befc822808..fc76d8ffc6 100644 --- a/source/libs/parser/src/parAuthenticator.c +++ b/source/libs/parser/src/parAuthenticator.c @@ -108,6 +108,21 @@ static int32_t authQuery(SAuthCxt* pCxt, SNode* pStmt) { return authDelete(pCxt, (SDeleteStmt*)pStmt); case QUERY_NODE_INSERT_STMT: return authInsert(pCxt, (SInsertStmt*)pStmt); + case QUERY_NODE_SHOW_DNODES_STMT: + case QUERY_NODE_SHOW_MNODES_STMT: + case QUERY_NODE_SHOW_MODULES_STMT: + case QUERY_NODE_SHOW_QNODES_STMT: + case QUERY_NODE_SHOW_SNODES_STMT: + case QUERY_NODE_SHOW_BNODES_STMT: + case QUERY_NODE_SHOW_CLUSTER_STMT: + case QUERY_NODE_SHOW_LICENCES_STMT: + case QUERY_NODE_SHOW_VGROUPS_STMT: + case QUERY_NODE_SHOW_VARIABLES_STMT: + case QUERY_NODE_SHOW_TRANSACTIONS_STMT: + case QUERY_NODE_SHOW_TABLE_DISTRIBUTED_STMT: + case QUERY_NODE_SHOW_VNODES_STMT: + case QUERY_NODE_SHOW_SCORES_STMT: + return !pCxt->pParseCxt->enableSysInfo ? TSDB_CODE_PAR_PERMISSION_DENIED : TSDB_CODE_SUCCESS; default: break; } diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index 8a1d8763bf..5a32c87fc3 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -784,6 +784,9 @@ static int32_t createColumnsByTable(STranslateContext* pCxt, const STableNode* p int32_t nums = pMeta->tableInfo.numOfColumns + (igTags ? 0 : ((TSDB_SUPER_TABLE == pMeta->tableType) ? pMeta->tableInfo.numOfTags : 0)); for (int32_t i = 0; i < nums; ++i) { + if (invisibleColumn(pCxt->pParseCxt->enableSysInfo, pMeta->tableType, pMeta->schema[i].flags)) { + continue; + } SColumnNode* pCol = (SColumnNode*)nodesMakeNode(QUERY_NODE_COLUMN); if (NULL == pCol) { return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_OUT_OF_MEMORY); @@ -826,7 +829,8 @@ static int32_t findAndSetColumn(STranslateContext* pCxt, SColumnNode** pColRef, } int32_t nums = pMeta->tableInfo.numOfTags + pMeta->tableInfo.numOfColumns; for (int32_t i = 0; i < nums; ++i) { - if (0 == strcmp(pCol->colName, pMeta->schema[i].name)) { + if (0 == strcmp(pCol->colName, pMeta->schema[i].name) && + !invisibleColumn(pCxt->pParseCxt->enableSysInfo, pMeta->tableType, pMeta->schema[i].flags)) { setColumnInfoBySchema((SRealTableNode*)pTable, pMeta->schema + i, (i - pMeta->tableInfo.numOfColumns), pCol); *pFound = true; break; @@ -2192,14 +2196,14 @@ static int32_t translateTable(STranslateContext* pCxt, SNode* pTable) { code = setTableCacheLastMode(pCxt, &name, pRealTable); } } - pRealTable->table.precision = pRealTable->pMeta->tableInfo.precision; - pRealTable->table.singleTable = isSingleTable(pRealTable); if (TSDB_CODE_SUCCESS == code) { + pRealTable->table.precision = pRealTable->pMeta->tableInfo.precision; + pRealTable->table.singleTable = isSingleTable(pRealTable); + if (TSDB_SUPER_TABLE == pRealTable->pMeta->tableType) { + pCxt->stableQuery = true; + } code = addNamespace(pCxt, pRealTable); } - if (TSDB_SUPER_TABLE == pRealTable->pMeta->tableType) { - pCxt->stableQuery = true; - } break; } case QUERY_NODE_TEMP_TABLE: { @@ -2594,8 +2598,7 @@ static int32_t getQueryTimeRange(STranslateContext* pCxt, SNode* pWhere, STimeWi return code; } -static int32_t checkFill(STranslateContext* pCxt, SFillNode* pFill, SValueNode* pInterval, - bool isInterpFill) { +static int32_t checkFill(STranslateContext* pCxt, SFillNode* pFill, SValueNode* pInterval, bool isInterpFill) { if (FILL_MODE_NONE == pFill->mode) { if (isInterpFill) { return generateSyntaxErrMsgExt(&pCxt->msgBuf, 
TSDB_CODE_PAR_WRONG_VALUE_TYPE, "Unsupported fill type"); diff --git a/source/libs/parser/test/parTestUtil.cpp b/source/libs/parser/test/parTestUtil.cpp index 98281b7bf0..360b904c17 100644 --- a/source/libs/parser/test/parTestUtil.cpp +++ b/source/libs/parser/test/parTestUtil.cpp @@ -207,6 +207,7 @@ class ParserTestBaseImpl { pCxt->db = caseEnv_.db_.c_str(); pCxt->pUser = caseEnv_.user_.c_str(); pCxt->isSuperUser = caseEnv_.user_ == "root"; + pCxt->enableSysInfo = true; pCxt->pSql = stmtEnv_.sql_.c_str(); pCxt->sqlLen = stmtEnv_.sql_.length(); pCxt->pMsg = stmtEnv_.msgBuf_.data(); diff --git a/source/libs/planner/src/planLogicCreater.c b/source/libs/planner/src/planLogicCreater.c index 71f084d412..0667c5f5b9 100644 --- a/source/libs/planner/src/planLogicCreater.c +++ b/source/libs/planner/src/planLogicCreater.c @@ -44,12 +44,15 @@ static void setColumnInfo(SFunctionNode* pFunc, SColumnNode* pCol) { pCol->colType = COLUMN_TYPE_TBNAME; break; case FUNCTION_TYPE_WSTART: + pCol->colId = PRIMARYKEY_TIMESTAMP_COL_ID; + pCol->colType = COLUMN_TYPE_WINDOW_START; + break; case FUNCTION_TYPE_WEND: pCol->colId = PRIMARYKEY_TIMESTAMP_COL_ID; - pCol->colType = COLUMN_TYPE_WINDOW_PC; + pCol->colType = COLUMN_TYPE_WINDOW_END; break; case FUNCTION_TYPE_WDURATION: - pCol->colType = COLUMN_TYPE_WINDOW_PC; + pCol->colType = COLUMN_TYPE_WINDOW_DURATION; break; case FUNCTION_TYPE_GROUP_KEY: pCol->colType = COLUMN_TYPE_GROUP_KEY; @@ -784,7 +787,10 @@ static int32_t createWindowLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSele static EDealRes needFillValueImpl(SNode* pNode, void* pContext) { if (QUERY_NODE_COLUMN == nodeType(pNode)) { SColumnNode* pCol = (SColumnNode*)pNode; - if (COLUMN_TYPE_WINDOW_PC != pCol->colType && COLUMN_TYPE_GROUP_KEY != pCol->colType) { + if (COLUMN_TYPE_WINDOW_START != pCol->colType && + COLUMN_TYPE_WINDOW_END != pCol->colType && + COLUMN_TYPE_WINDOW_DURATION != pCol->colType && + COLUMN_TYPE_GROUP_KEY != pCol->colType) { *(bool*)pContext = true; return DEAL_RES_END; } diff --git a/source/libs/planner/src/planPhysiCreater.c b/source/libs/planner/src/planPhysiCreater.c index c7eb6f7b5e..cafae18dbe 100644 --- a/source/libs/planner/src/planPhysiCreater.c +++ b/source/libs/planner/src/planPhysiCreater.c @@ -576,6 +576,7 @@ static int32_t createSystemTableScanPhysiNode(SPhysiPlanContext* pCxt, SSubplan* pScan->showRewrite = pScanLogicNode->showRewrite; pScan->accountId = pCxt->pPlanCxt->acctId; + pScan->sysInfo = pCxt->pPlanCxt->sysInfo; if (0 == strcmp(pScanLogicNode->tableName.tname, TSDB_INS_TABLE_TABLES) || 0 == strcmp(pScanLogicNode->tableName.tname, TSDB_INS_TABLE_TABLE_DISTRIBUTED) || 0 == strcmp(pScanLogicNode->tableName.tname, TSDB_INS_TABLE_TAGS)) { diff --git a/source/libs/planner/test/planTestUtil.cpp b/source/libs/planner/test/planTestUtil.cpp index 5fc8b3cf30..f904643be9 100644 --- a/source/libs/planner/test/planTestUtil.cpp +++ b/source/libs/planner/test/planTestUtil.cpp @@ -343,6 +343,7 @@ class PlannerTestBaseImpl { cxt.pMsg = stmtEnv_.msgBuf_.data(); cxt.msgLen = stmtEnv_.msgBuf_.max_size(); cxt.svrVer = "3.0.0.0"; + cxt.enableSysInfo = true; if (prepare) { SStmtCallback stmtCb = {0}; cxt.pStmtCb = &stmtCb; diff --git a/source/libs/qcom/src/queryUtil.c b/source/libs/qcom/src/queryUtil.c index 5143aa4af1..d848016e46 100644 --- a/source/libs/qcom/src/queryUtil.c +++ b/source/libs/qcom/src/queryUtil.c @@ -213,15 +213,25 @@ SSchema createSchema(int8_t type, int32_t bytes, col_id_t colId, const char* nam return s; } +void freeSTableMetaRspPointer(void *p) { + 
tFreeSTableMetaRsp(*(void**)p); + taosMemoryFreeClear(*(void**)p); +} + void destroyQueryExecRes(SExecResult* pRes) { if (NULL == pRes || NULL == pRes->res) { return; } switch (pRes->msgType) { + case TDMT_VND_CREATE_TABLE: { + taosArrayDestroyEx((SArray*)pRes->res, freeSTableMetaRspPointer); + break; + } + case TDMT_MND_CREATE_STB: case TDMT_VND_ALTER_TABLE: case TDMT_MND_ALTER_STB: { - tFreeSTableMetaRsp((STableMetaRsp*)pRes->res); + tFreeSTableMetaRsp(pRes->res); taosMemoryFreeClear(pRes->res); break; } diff --git a/source/libs/qcom/src/querymsg.c b/source/libs/qcom/src/querymsg.c index ed8786170d..e2d3ac1583 100644 --- a/source/libs/qcom/src/querymsg.c +++ b/source/libs/qcom/src/querymsg.c @@ -354,6 +354,19 @@ static int32_t queryConvertTableMetaMsg(STableMetaRsp *pMetaMsg) { return TSDB_CODE_SUCCESS; } +int32_t queryCreateCTableMetaFromMsg(STableMetaRsp *msg, SCTableMeta *pMeta) { + pMeta->vgId = msg->vgId; + pMeta->tableType = msg->tableType; + pMeta->uid = msg->tuid; + pMeta->suid = msg->suid; + + qDebug("ctable %s uid %" PRIx64 " meta returned, type %d vgId:%d db %s suid %" PRIx64 , + msg->tbName, pMeta->uid, pMeta->tableType, pMeta->vgId, msg->dbFName, pMeta->suid); + + return TSDB_CODE_SUCCESS; +} + + int32_t queryCreateTableMetaFromMsg(STableMetaRsp *msg, bool isStb, STableMeta **pMeta) { int32_t total = msg->numOfColumns + msg->numOfTags; int32_t metaSize = sizeof(STableMeta) + sizeof(SSchema) * total; diff --git a/source/libs/scheduler/src/schRemote.c b/source/libs/scheduler/src/schRemote.c index ecd9daf1bc..5a64aaaebb 100644 --- a/source/libs/scheduler/src/schRemote.c +++ b/source/libs/scheduler/src/schRemote.c @@ -102,15 +102,30 @@ int32_t schHandleResponseMsg(SSchJob *pJob, SSchTask *pTask, int32_t execId, SDa tDecoderInit(&coder, msg, msgSize); code = tDecodeSVCreateTbBatchRsp(&coder, &batchRsp); if (TSDB_CODE_SUCCESS == code && batchRsp.nRsps > 0) { + SCH_LOCK(SCH_WRITE, &pJob->resLock); + if (NULL == pJob->execRes.res) { + pJob->execRes.res = taosArrayInit(batchRsp.nRsps, POINTER_BYTES); + pJob->execRes.msgType = TDMT_VND_CREATE_TABLE; + } + for (int32_t i = 0; i < batchRsp.nRsps; ++i) { SVCreateTbRsp *rsp = batchRsp.pRsps + i; + if (rsp->pMeta) { + taosArrayPush((SArray*)pJob->execRes.res, &rsp->pMeta); + } + if (TSDB_CODE_SUCCESS != rsp->code) { code = rsp->code; - tDecoderClear(&coder); - SCH_ERR_JRET(code); } } + SCH_UNLOCK(SCH_WRITE, &pJob->resLock); + + if (taosArrayGetSize((SArray*)pJob->execRes.res) <= 0) { + taosArrayDestroy((SArray*)pJob->execRes.res); + pJob->execRes.res = NULL; + } } + tDecoderClear(&coder); SCH_ERR_JRET(code); } diff --git a/source/libs/stream/src/streamDispatch.c b/source/libs/stream/src/streamDispatch.c index c78ff0756f..9d4010f60e 100644 --- a/source/libs/stream/src/streamDispatch.c +++ b/source/libs/stream/src/streamDispatch.c @@ -358,7 +358,7 @@ int32_t streamDispatchAllBlocks(SStreamTask* pTask, const SStreamDataBlock* pDat FAIL_SHUFFLE_DISPATCH: if (pReqs) { for (int32_t i = 0; i < vgSz; i++) { - taosArrayDestroy(pReqs[i].data); + taosArrayDestroyP(pReqs[i].data, taosMemoryFree); taosArrayDestroy(pReqs[i].dataLen); } taosMemoryFree(pReqs); diff --git a/source/libs/stream/src/streamState.c b/source/libs/stream/src/streamState.c index 6ccc90fa51..dfd6f012cc 100644 --- a/source/libs/stream/src/streamState.c +++ b/source/libs/stream/src/streamState.c @@ -15,6 +15,7 @@ #include "executor.h" #include "streamInc.h" +#include "tcommon.h" #include "ttimer.h" SStreamState* streamStateOpen(char* path, SStreamTask* pTask) { @@ -23,14 +24,18 @@ 
SStreamState* streamStateOpen(char* path, SStreamTask* pTask) { terrno = TSDB_CODE_OUT_OF_MEMORY; return NULL; } - char statePath[200]; + char statePath[300]; sprintf(statePath, "%s/%d", path, pTask->taskId); - if (tdbOpen(statePath, 16 * 1024, 1, &pState->db) < 0) { + if (tdbOpen(statePath, 4096, 256, &pState->db) < 0) { goto _err; } // open state storage backend - if (tdbTbOpen("state.db", sizeof(int32_t), -1, NULL, pState->db, &pState->pStateDb) < 0) { + if (tdbTbOpen("state.db", sizeof(SWinKey), -1, SWinKeyCmpr, pState->db, &pState->pStateDb) < 0) { + goto _err; + } + + if (streamStateBegin(pState) < 0) { goto _err; } @@ -60,6 +65,7 @@ int32_t streamStateBegin(SStreamState* pState) { } if (tdbBegin(pState->db, &pState->txn) < 0) { + tdbTxnClose(&pState->txn); return -1; } return 0; @@ -95,33 +101,39 @@ int32_t streamStateAbort(SStreamState* pState) { return 0; } -int32_t streamStatePut(SStreamState* pState, const void* key, int32_t kLen, const void* value, int32_t vLen) { - return tdbTbUpsert(pState->pStateDb, key, kLen, value, vLen, &pState->txn); +int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen) { + return tdbTbUpsert(pState->pStateDb, key, sizeof(SWinKey), value, vLen, &pState->txn); } -int32_t streamStateGet(SStreamState* pState, const void* key, int32_t kLen, void** pVal, int32_t* pVLen) { - return tdbTbGet(pState->pStateDb, key, kLen, pVal, pVLen); +int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen) { + return tdbTbGet(pState->pStateDb, key, sizeof(SWinKey), pVal, pVLen); } -int32_t streamStateDel(SStreamState* pState, const void* key, int32_t kLen) { - return tdbTbDelete(pState->pStateDb, key, kLen, &pState->txn); +int32_t streamStateDel(SStreamState* pState, const SWinKey* key) { + return tdbTbDelete(pState->pStateDb, key, sizeof(SWinKey), &pState->txn); } -SStreamStateCur* streamStateGetCur(SStreamState* pState, const void* key, int32_t kLen) { +SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key) { SStreamStateCur* pCur = taosMemoryCalloc(1, sizeof(SStreamStateCur)); if (pCur == NULL) return NULL; tdbTbcOpen(pState->pStateDb, &pCur->pCur, NULL); int32_t c; - tdbTbcMoveTo(pCur->pCur, key, kLen, &c); + tdbTbcMoveTo(pCur->pCur, key, sizeof(SWinKey), &c); if (c != 0) { taosMemoryFree(pCur); return NULL; } - return 0; + return pCur; } -int32_t streamGetKVByCur(SStreamStateCur* pCur, void** pKey, int32_t* pKLen, void** pVal, int32_t* pVLen) { - return tdbTbcGet(pCur->pCur, (const void**)pKey, pKLen, (const void**)pVal, pVLen); +int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen) { + const SWinKey* pKTmp = NULL; + int32_t kLen; + if (tdbTbcGet(pCur->pCur, (const void**)&pKTmp, &kLen, pVal, pVLen) < 0) { + return -1; + } + *pKey = *pKTmp; + return 0; } int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur) { @@ -134,14 +146,14 @@ int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur) { return tdbTbcMoveToLast(pCur->pCur); } -SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const void* key, int32_t kLen) { +SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key) { SStreamStateCur* pCur = taosMemoryCalloc(1, sizeof(SStreamStateCur)); if (pCur == NULL) { return NULL; } int32_t c; - if (tdbTbcMoveTo(pCur->pCur, key, kLen, &c) < 0) { + if (tdbTbcMoveTo(pCur->pCur, key, sizeof(SWinKey), &c) < 0) { taosMemoryFree(pCur); return NULL; } @@ -155,14 
+167,14 @@ SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const void* key, i return pCur; } -SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const void* key, int32_t kLen) { +SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const SWinKey* key) { SStreamStateCur* pCur = taosMemoryCalloc(1, sizeof(SStreamStateCur)); if (pCur == NULL) { return NULL; } int32_t c; - if (tdbTbcMoveTo(pCur->pCur, key, kLen, &c) < 0) { + if (tdbTbcMoveTo(pCur->pCur, key, sizeof(SWinKey), &c) < 0) { taosMemoryFree(pCur); return NULL; } @@ -185,3 +197,9 @@ int32_t streamStateCurPrev(SStreamState* pState, SStreamStateCur* pCur) { // return tdbTbcMoveToPrev(pCur->pCur); } +void streamStateFreeCur(SStreamStateCur* pCur) { + tdbTbcClose(pCur->pCur); + taosMemoryFree(pCur); +} + +void streamFreeVal(void* val) { tdbFree(val); } diff --git a/source/libs/tdb/src/db/tdbBtree.c b/source/libs/tdb/src/db/tdbBtree.c index d3c0c7c755..2ab5fd2e9c 100644 --- a/source/libs/tdb/src/db/tdbBtree.c +++ b/source/libs/tdb/src/db/tdbBtree.c @@ -934,6 +934,8 @@ static int tdbFetchOvflPage(SPgno *pPgno, SPage **ppOfp, TXN *pTxn, SBTree *pBt) return -1; } + tdbPCacheRelease(pBt->pPager->pCache, *ppOfp, pTxn); + return ret; } @@ -1277,6 +1279,8 @@ static int tdbBtreeDecodePayload(SPage *pPage, const SCell *pCell, int nHeader, nLeft -= bytes; memcpy(&pgno, ofpCell + bytes, sizeof(pgno)); + + tdbPCacheRelease(pBt->pPager->pCache, ofp, pTxn); } } else { int nLeftKey = kLen; @@ -1336,6 +1340,8 @@ static int tdbBtreeDecodePayload(SPage *pPage, const SCell *pCell, int nHeader, memcpy(&pgno, ofpCell + bytes, sizeof(pgno)); + tdbPCacheRelease(pBt->pPager->pCache, ofp, pTxn); + nLeftKey -= bytes; nLeft -= bytes; } @@ -1374,6 +1380,8 @@ static int tdbBtreeDecodePayload(SPage *pPage, const SCell *pCell, int nHeader, memcpy(&pgno, ofpCell + vLen - nLeft + bytes, sizeof(pgno)); + tdbPCacheRelease(pBt->pPager->pCache, ofp, pTxn); + nLeft -= bytes; } } @@ -1401,7 +1409,7 @@ static int tdbBtreeDecodeCell(SPage *pPage, const SCell *pCell, SCellDecoder *pD pDecoder->pgno = 0; TDB_CELLDECODER_SET_FREE_NIL(pDecoder); - //tdbDebug("tdb btc decoder set nil: %p/0x%x ", pDecoder, pDecoder->freeKV); + // tdbTrace("tdb btc decoder set nil: %p/0x%x ", pDecoder, pDecoder->freeKV); // 1. Decode header part if (!leaf) { diff --git a/source/libs/tdb/src/db/tdbPCache.c b/source/libs/tdb/src/db/tdbPCache.c index ab9b21dc3f..76d95cbb91 100644 --- a/source/libs/tdb/src/db/tdbPCache.c +++ b/source/libs/tdb/src/db/tdbPCache.c @@ -111,6 +111,7 @@ void tdbPCacheRelease(SPCache *pCache, SPage *pPage, TXN *pTxn) { tdbPCacheLock(pCache); nRef = tdbUnrefPage(pPage); + tdbDebug("pcache/release page %p/%d/%d/%d", pPage, TDB_PAGE_PGNO(pPage), pPage->id, nRef); if (nRef == 0) { // test the nRef again to make sure // it is safe th handle the page @@ -145,7 +146,7 @@ static SPage *tdbPCacheFetchImpl(SPCache *pCache, const SPgid *pPgid, TXN *pTxn) // 1. 
Search the hash table pPage = pCache->pgHash[tdbPCachePageHash(pPgid) % pCache->nHash]; while (pPage) { - if (memcmp(pPage->pgid.fileid, pPgid->fileid, TDB_FILE_ID_LEN) == 0 && pPage->pgid.pgno == pPgid->pgno) break; + if (pPage->pgid.pgno == pPgid->pgno && memcmp(pPage->pgid.fileid, pPgid->fileid, TDB_FILE_ID_LEN) == 0) break; pPage = pPage->pHashNext; } @@ -212,7 +213,8 @@ static SPage *tdbPCacheFetchImpl(SPCache *pCache, const SPgid *pPgid, TXN *pTxn) pPage->pPager = pPageH->pPager; memcpy(pPage->pData, pPageH->pData, pPage->pageSize); - tdbDebug("pcache/pPageH: %p %d %p %p", pPageH, pPageH->pPageHdr - pPageH->pData, pPageH->xCellSize, pPage); + tdbDebug("pcache/pPageH: %p %d %p %p %d", pPageH, pPageH->pPageHdr - pPageH->pData, pPageH->xCellSize, pPage, + TDB_PAGE_PGNO(pPageH)); tdbPageInit(pPage, pPageH->pPageHdr - pPageH->pData, pPageH->xCellSize); pPage->kLen = pPageH->kLen; pPage->vLen = pPageH->vLen; @@ -243,7 +245,7 @@ static void tdbPCachePinPage(SPCache *pCache, SPage *pPage) { pCache->nRecyclable--; // printf("pin page %d pgno %d pPage %p\n", pPage->id, TDB_PAGE_PGNO(pPage), pPage); - tdbTrace("pin page %d", pPage->id); + tdbDebug("pcache/pin page %p/%d/%d", pPage, TDB_PAGE_PGNO(pPage), pPage->id); } } @@ -264,15 +266,14 @@ static void tdbPCacheUnpinPage(SPCache *pCache, SPage *pPage) { pCache->nRecyclable++; // printf("unpin page %d pgno %d pPage %p\n", pPage->id, TDB_PAGE_PGNO(pPage), pPage); - tdbTrace("unpin page %d", pPage->id); + tdbDebug("pcache/unpin page %p/%d/%d", pPage, TDB_PAGE_PGNO(pPage), pPage->id); } static void tdbPCacheRemovePageFromHash(SPCache *pCache, SPage *pPage) { - SPage **ppPage; - uint32_t h; + uint32_t h = tdbPCachePageHash(&(pPage->pgid)) % pCache->nHash; - h = tdbPCachePageHash(&(pPage->pgid)); - for (ppPage = &(pCache->pgHash[h % pCache->nHash]); (*ppPage) && *ppPage != pPage; ppPage = &((*ppPage)->pHashNext)) + SPage **ppPage = &(pCache->pgHash[h]); + for (; (*ppPage) && *ppPage != pPage; ppPage = &((*ppPage)->pHashNext)) ; if (*ppPage) { @@ -281,13 +282,11 @@ static void tdbPCacheRemovePageFromHash(SPCache *pCache, SPage *pPage) { // printf("rmv page %d to hash, pgno %d, pPage %p\n", pPage->id, TDB_PAGE_PGNO(pPage), pPage); } - tdbTrace("remove page %d to hash", pPage->id); + tdbDebug("pcache/remove page %p/%d/%d from hash %" PRIu32, pPage, TDB_PAGE_PGNO(pPage), pPage->id, h); } static void tdbPCacheAddPageToHash(SPCache *pCache, SPage *pPage) { - int h; - - h = tdbPCachePageHash(&(pPage->pgid)) % pCache->nHash; + uint32_t h = tdbPCachePageHash(&(pPage->pgid)) % pCache->nHash; pPage->pHashNext = pCache->pgHash[h]; pCache->pgHash[h] = pPage; @@ -295,7 +294,7 @@ static void tdbPCacheAddPageToHash(SPCache *pCache, SPage *pPage) { pCache->nPage++; // printf("add page %d to hash, pgno %d, pPage %p\n", pPage->id, TDB_PAGE_PGNO(pPage), pPage); - tdbTrace("add page %d to hash", pPage->id); + tdbDebug("pcache/add page %p/%d/%d to hash %" PRIu32, pPage, TDB_PAGE_PGNO(pPage), pPage->id, h); } static int tdbPCacheOpenImpl(SPCache *pCache) { diff --git a/source/libs/tdb/src/db/tdbPage.c b/source/libs/tdb/src/db/tdbPage.c index 276b06b147..a3f376b929 100644 --- a/source/libs/tdb/src/db/tdbPage.c +++ b/source/libs/tdb/src/db/tdbPage.c @@ -68,12 +68,15 @@ int tdbPageCreate(int pageSize, SPage **ppPage, void *(*xMalloc)(void *, size_t) } *ppPage = pPage; + + tdbDebug("page/create: %p %p", pPage, xMalloc); return 0; } int tdbPageDestroy(SPage *pPage, void (*xFree)(void *arg, void *ptr), void *arg) { u8 *ptr; + tdbDebug("page/destroy: %p %p", pPage, xFree); 
ASSERT(xFree); for (int iOvfl = 0; iOvfl < pPage->nOverflow; iOvfl++) { @@ -87,6 +90,7 @@ int tdbPageDestroy(SPage *pPage, void (*xFree)(void *arg, void *ptr), void *arg) } void tdbPageZero(SPage *pPage, u8 szAmHdr, int (*xCellSize)(const SPage *, SCell *, int, TXN *, SBTree *pBt)) { + tdbDebug("page/zero: %p %" PRIu8 " %p", pPage, szAmHdr, xCellSize); pPage->pPageHdr = pPage->pData + szAmHdr; TDB_PAGE_NCELLS_SET(pPage, 0); TDB_PAGE_CCELLS_SET(pPage, pPage->pageSize - sizeof(SPageFtr)); @@ -103,6 +107,7 @@ void tdbPageZero(SPage *pPage, u8 szAmHdr, int (*xCellSize)(const SPage *, SCell } void tdbPageInit(SPage *pPage, u8 szAmHdr, int (*xCellSize)(const SPage *, SCell *, int, TXN *, SBTree *pBt)) { + tdbDebug("page/init: %p %" PRIu8 " %p", pPage, szAmHdr, xCellSize); pPage->pPageHdr = pPage->pData + szAmHdr; pPage->pCellIdx = pPage->pPageHdr + TDB_PAGE_HDR_SIZE(pPage); pPage->pFreeStart = pPage->pCellIdx + TDB_PAGE_OFFSET_SIZE(pPage) * TDB_PAGE_NCELLS(pPage); diff --git a/source/libs/transport/src/thttp.c b/source/libs/transport/src/thttp.c index 935f536a90..98e0b8f9c9 100644 --- a/source/libs/transport/src/thttp.c +++ b/source/libs/transport/src/thttp.c @@ -23,6 +23,7 @@ #define HTTP_RECV_BUF_SIZE 1024 + typedef struct SHttpClient { uv_connect_t conn; uv_tcp_t tcp; @@ -143,9 +144,9 @@ static void clientAllocBuffCb(uv_handle_t *handle, size_t suggested_size, uv_buf static void clientRecvCb(uv_stream_t* handle, ssize_t nread, const uv_buf_t *buf) { SHttpClient* cli = handle->data; if (nread < 0) { - uError("http-report read error:%s", uv_err_name(nread)); + uError("http-report recv error:%s", uv_err_name(nread)); } else { - uTrace("http-report succ to read %d bytes, just ignore it", nread); + uTrace("http-report succ to recv %d bytes, just ignore it", nread); } uv_close((uv_handle_t*)&cli->tcp, clientCloseCb); } diff --git a/source/libs/transport/src/transSvr.c b/source/libs/transport/src/transSvr.c index 447db76136..207b967923 100644 --- a/source/libs/transport/src/transSvr.c +++ b/source/libs/transport/src/transSvr.c @@ -276,14 +276,16 @@ void uvOnRecvCb(uv_stream_t* cli, ssize_t nread, const uv_buf_t* buf) { while (transReadComplete(pBuf)) { tTrace("%s conn %p alread read complete packet", transLabel(pTransInst), conn); if (true == pBuf->invalid || false == uvHandleReq(conn)) { - tError("%s conn %p read invalid packet, received from %s, local info:%s", transLabel(pTransInst), conn, + conn->dst, conn->src); destroyConn(conn, true); return; } } return; } else { - tError("%s conn %p read invalid packet, exceed limit", transLabel(pTransInst), conn); + tError("%s conn %p read invalid packet, exceed limit, received from %s, local info:%s", transLabel(pTransInst), + conn, conn->dst, conn->src); destroyConn(conn, true); return; } @@ -649,7 +651,7 @@ void uvOnAcceptCb(uv_stream_t* stream, int status) { pObj->workerIdx = (pObj->workerIdx + 1) % pObj->numOfThreads; - tTrace("new conntion accepted by main server, dispatch to %dth worker-thread", pObj->workerIdx); + tTrace("new connection accepted by main server, dispatch to %dth worker-thread", pObj->workerIdx); uv_write2(wr, (uv_stream_t*)&(pObj->pipe[pObj->workerIdx][0]), &buf, 1, (uv_stream_t*)cli, uvOnPipeWriteCb); } else { diff --git a/source/os/src/osFile.c b/source/os/src/osFile.c index 2d9cfe3246..f9797f6319 100644 --- a/source/os/src/osFile.c +++ b/source/os/src/osFile.c @@ -203,10 +203,11 @@ int32_t taosRenameFile(const char *oldName, const char *newName) { } int32_t
taosStatFile(const char *path, int64_t *size, int32_t *mtime) { - struct stat fileStat; #ifdef WINDOWS - int32_t code = _stat(path, &fileStat); + struct _stati64 fileStat; + int32_t code = _stati64(path, &fileStat); #else + struct stat fileStat; int32_t code = stat(path, &fileStat); #endif if (code < 0) { diff --git a/source/os/src/osSysinfo.c b/source/os/src/osSysinfo.c index 3a75e18a7f..3aa3f4f29e 100644 --- a/source/os/src/osSysinfo.c +++ b/source/os/src/osSysinfo.c @@ -851,13 +851,12 @@ char *taosGetCmdlineByPID(int pid) { } void taosSetCoreDump(bool enable) { + if (!enable) return; #ifdef WINDOWS - // SetUnhandledExceptionFilter(exceptionHandler); - // SetUnhandledExceptionFilter(&FlCrashDump); + SetUnhandledExceptionFilter(exceptionHandler); + SetUnhandledExceptionFilter(&FlCrashDump); #elif defined(_TD_DARWIN_64) #else - if (!enable) return; - // 1. set ulimit -c unlimited struct rlimit rlim; struct rlimit rlim_new; diff --git a/source/util/src/thash.c b/source/util/src/thash.c index aee84a0d55..0029f1ab1e 100644 --- a/source/util/src/thash.c +++ b/source/util/src/thash.c @@ -21,7 +21,7 @@ // the add ref count operation may trigger the warning if the reference count is greater than the MAX_WARNING_REF_COUNT #define MAX_WARNING_REF_COUNT 10000 -#define HASH_MAX_CAPACITY (1024 * 1024 * 16) +#define HASH_MAX_CAPACITY (1024 * 1024 * 1024) #define HASH_DEFAULT_LOAD_FACTOR (0.75) #define HASH_INDEX(v, c) ((v) & ((c)-1)) @@ -67,6 +67,7 @@ struct SHashObj { bool enableUpdate; // enable update SArray *pMemBlock; // memory block allocated for SHashEntry _hash_before_fn_t callbackFp; // function invoked before return the value to caller + int64_t compTimes; }; /* @@ -146,6 +147,7 @@ static FORCE_INLINE SHashNode *doSearchInEntryList(SHashObj *pHashObj, SHashEntr uint32_t hashVal) { SHashNode *pNode = pe->next; while (pNode) { + atomic_add_fetch_64(&pHashObj->compTimes, 1); if ((pNode->keyLen == keyLen) && ((*(pHashObj->equalFp))(GET_HASH_NODE_KEY(pNode), key, keyLen) == 0) && pNode->removed == 0) { assert(pNode->hashVal == hashVal); @@ -882,3 +884,7 @@ void *taosHashAcquire(SHashObj *pHashObj, const void *key, size_t keyLen) { } void taosHashRelease(SHashObj *pHashObj, void *p) { taosHashCancelIterate(pHashObj, p); } + +int64_t taosHashGetCompTimes(SHashObj *pHashObj) { return atomic_load_64(&pHashObj->compTimes); } + + diff --git a/source/util/src/tpagedbuf.c b/source/util/src/tpagedbuf.c index 0e608d0da2..ac2128dd70 100644 --- a/source/util/src/tpagedbuf.c +++ b/source/util/src/tpagedbuf.c @@ -309,6 +309,7 @@ static SListNode* getEldestUnrefedPage(SDiskbasedBuf* pBuf) { static char* evacOneDataPage(SDiskbasedBuf* pBuf) { char* bufPage = NULL; SListNode* pn = getEldestUnrefedPage(pBuf); + terrno = 0; // all pages are referenced by user, try to allocate new space if (pn == NULL) { @@ -332,6 +333,7 @@ static char* evacOneDataPage(SDiskbasedBuf* pBuf) { bufPage = flushPageToDisk(pBuf, d); } + ASSERT((bufPage != NULL) || terrno != TSDB_CODE_SUCCESS); return bufPage; } diff --git a/source/util/test/hashTest.cpp b/source/util/test/hashTest.cpp index 99f5a761c5..97e67ea36e 100644 --- a/source/util/test/hashTest.cpp +++ b/source/util/test/hashTest.cpp @@ -197,6 +197,201 @@ void acquireRleaseTest() { taosMemoryFreeClear(data.p); } +void perfTest() { + SHashObj* hash1h = (SHashObj*) taosHashInit(100, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + SHashObj* hash1s = (SHashObj*) taosHashInit(1000, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + 
SHashObj* hash10s = (SHashObj*) taosHashInit(10000, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + SHashObj* hash100s = (SHashObj*) taosHashInit(100000, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + SHashObj* hash1m = (SHashObj*) taosHashInit(1000000, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + SHashObj* hash10m = (SHashObj*) taosHashInit(10000000, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + SHashObj* hash100m = (SHashObj*) taosHashInit(100000000, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + + char *name = (char*)taosMemoryCalloc(50000000, 9); + for (int64_t i = 0; i < 50000000; ++i) { + sprintf(name + i * 9, "t%08d", i); + } + + for (int64_t i = 0; i < 50; ++i) { + taosHashPut(hash1h, name + i * 9, 9, &i, sizeof(i)); + } + + for (int64_t i = 0; i < 500; ++i) { + taosHashPut(hash1s, name + i * 9, 9, &i, sizeof(i)); + } + + for (int64_t i = 0; i < 5000; ++i) { + taosHashPut(hash10s, name + i * 9, 9, &i, sizeof(i)); + } + + for (int64_t i = 0; i < 50000; ++i) { + taosHashPut(hash100s, name + i * 9, 9, &i, sizeof(i)); + } + + for (int64_t i = 0; i < 500000; ++i) { + taosHashPut(hash1m, name + i * 9, 9, &i, sizeof(i)); + } + + for (int64_t i = 0; i < 5000000; ++i) { + taosHashPut(hash10m, name + i * 9, 9, &i, sizeof(i)); + } + + for (int64_t i = 0; i < 50000000; ++i) { + taosHashPut(hash100m, name + i * 9, 9, &i, sizeof(i)); + } + + int64_t start1h = taosGetTimestampMs(); + int64_t start1hCt = taosHashGetCompTimes(hash1h); + for (int64_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(hash1h, name + (i % 50) * 9, 9)); + } + int64_t end1h = taosGetTimestampMs(); + int64_t end1hCt = taosHashGetCompTimes(hash1h); + + int64_t start1s = taosGetTimestampMs(); + int64_t start1sCt = taosHashGetCompTimes(hash1s); + for (int64_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(hash1s, name + (i % 500) * 9, 9)); + } + int64_t end1s = taosGetTimestampMs(); + int64_t end1sCt = taosHashGetCompTimes(hash1s); + + int64_t start10s = taosGetTimestampMs(); + int64_t start10sCt = taosHashGetCompTimes(hash10s); + for (int64_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(hash10s, name + (i % 5000) * 9, 9)); + } + int64_t end10s = taosGetTimestampMs(); + int64_t end10sCt = taosHashGetCompTimes(hash10s); + + int64_t start100s = taosGetTimestampMs(); + int64_t start100sCt = taosHashGetCompTimes(hash100s); + for (int64_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(hash100s, name + (i % 50000) * 9, 9)); + } + int64_t end100s = taosGetTimestampMs(); + int64_t end100sCt = taosHashGetCompTimes(hash100s); + + int64_t start1m = taosGetTimestampMs(); + int64_t start1mCt = taosHashGetCompTimes(hash1m); + for (int64_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(hash1m, name + (i % 500000) * 9, 9)); + } + int64_t end1m = taosGetTimestampMs(); + int64_t end1mCt = taosHashGetCompTimes(hash1m); + + int64_t start10m = taosGetTimestampMs(); + int64_t start10mCt = taosHashGetCompTimes(hash10m); + for (int64_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(hash10m, name + (i % 5000000) * 9, 9)); + } + int64_t end10m = taosGetTimestampMs(); + int64_t end10mCt = taosHashGetCompTimes(hash10m); + + int64_t start100m = taosGetTimestampMs(); + int64_t start100mCt = taosHashGetCompTimes(hash100m); + for (int64_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(hash100m, name + (i % 50000000) * 9, 9)); + } + int64_t end100m = taosGetTimestampMs(); + int64_t end100mCt = 
taosHashGetCompTimes(hash100m); + + + SArray *sArray[1000] = {0}; + for (int64_t i = 0; i < 1000; ++i) { + sArray[i] = taosArrayInit(100000, 9); + } + int64_t cap = 4; + while (cap < 100000000) cap = (cap << 1u); + + _hash_fn_t hashFp = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); + int32_t slotR = cap / 1000 + 1; + for (int64_t i = 0; i < 10000000; ++i) { + char* p = name + (i % 50000000) * 9; + uint32_t v = (*hashFp)(p, 9); + taosArrayPush(sArray[(v%cap)/slotR], p); + } + SArray *slArray = taosArrayInit(100000000, 9); + for (int64_t i = 0; i < 1000; ++i) { + int32_t num = taosArrayGetSize(sArray[i]); + SArray* pArray = sArray[i]; + for (int64_t m = 0; m < num; ++m) { + char* p = (char*)taosArrayGet(pArray, m); + ASSERT(taosArrayPush(slArray, p)); + } + } + int64_t start100mS = taosGetTimestampMs(); + int64_t start100mSCt = taosHashGetCompTimes(hash100m); + int32_t num = taosArrayGetSize(slArray); + for (int64_t i = 0; i < num; ++i) { + ASSERT(taosHashGet(hash100m, (char*)TARRAY_GET_ELEM(slArray, i), 9)); + } + int64_t end100mS = taosGetTimestampMs(); + int64_t end100mSCt = taosHashGetCompTimes(hash100m); + for (int64_t i = 0; i < 1000; ++i) { + taosArrayDestroy(sArray[i]); + } + taosArrayDestroy(slArray); + + printf("1h \t %" PRId64 "ms,%" PRId64 "\n", end1h - start1h, end1hCt - start1hCt); + printf("1s \t %" PRId64 "ms,%" PRId64 "\n", end1s - start1s, end1sCt - start1sCt); + printf("10s \t %" PRId64 "ms,%" PRId64 "\n", end10s - start10s, end10sCt - start10sCt); + printf("100s \t %" PRId64 "ms,%" PRId64 "\n", end100s - start100s, end100sCt - start100sCt); + printf("1m \t %" PRId64 "ms,%" PRId64 "\n", end1m - start1m, end1mCt - start1mCt); + printf("10m \t %" PRId64 "ms,%" PRId64 "\n", end10m - start10m, end10mCt - start10mCt); + printf("100m \t %" PRId64 "ms,%" PRId64 "\n", end100m - start100m, end100mCt - start100mCt); + printf("100mS \t %" PRId64 "ms,%" PRId64 "\n", end100mS - start100mS, end100mSCt - start100mSCt); + + taosHashCleanup(hash1h); + taosHashCleanup(hash1s); + taosHashCleanup(hash10s); + taosHashCleanup(hash100s); + taosHashCleanup(hash1m); + taosHashCleanup(hash10m); + taosHashCleanup(hash100m); + + SHashObj *mhash[1000] = {0}; + for (int64_t i = 0; i < 1000; ++i) { + mhash[i] = (SHashObj*) taosHashInit(100000, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + } + + for (int64_t i = 0; i < 50000000; ++i) { +#if 0 + taosHashPut(mhash[i%1000], name + i * 9, 9, &i, sizeof(i)); +#else + taosHashPut(mhash[i/50000], name + i * 9, 9, &i, sizeof(i)); +#endif + } + + int64_t startMhashCt = 0; + for (int64_t i = 0; i < 1000; ++i) { + startMhashCt += taosHashGetCompTimes(mhash[i]); + } + + int64_t startMhash = taosGetTimestampMs(); +#if 0 + for (int32_t i = 0; i < 10000000; ++i) { + ASSERT(taosHashGet(mhash[i%1000], name + i * 9, 9)); + } +#else +// for (int64_t i = 0; i < 10000000; ++i) { + for (int64_t i = 0; i < 50000000; i+=5) { + ASSERT(taosHashGet(mhash[i/50000], name + i * 9, 9)); + } +#endif + int64_t endMhash = taosGetTimestampMs(); + int64_t endMhashCt = 0; + for (int64_t i = 0; i < 1000; ++i) { + printf(" %" PRId64 , taosHashGetCompTimes(mhash[i])); + endMhashCt += taosHashGetCompTimes(mhash[i]); + } + printf("\n100m \t %" PRId64 "ms,%" PRId64 "\n", endMhash - startMhash, endMhashCt - startMhashCt); + + for (int64_t i = 0; i < 1000; ++i) { + taosHashCleanup(mhash[i]); + } +} + + } int main(int argc, char** argv) { @@ -210,4 +405,5 @@ TEST(testCase, hashTest) { noLockPerformanceTest(); multithreadsTest(); acquireRleaseTest(); + //perfTest(); } 
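
A minimal usage sketch, not part of the patch itself: the perfTest() case above exercises the taosHashGetCompTimes() counter that this change adds to SHashObj in thash.c, and the same counter can be read around any batch of lookups to estimate how often collision chains are walked. The header name, the demo function name, and the key format below are illustrative assumptions; the taosHash* calls mirror the ones used in hashTest.cpp.

#include <inttypes.h>
#include <stdio.h>
#include "thash.h"  /* assumed header for the taosHash* API and taosHashGetCompTimes() */

static void hashCompTimesDemo(void) {
  SHashObj *h = taosHashInit(1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
  char    key[16];
  int64_t v;
  for (v = 0; v < 10000; ++v) {
    snprintf(key, sizeof(key), "k%08" PRId64, v);
    taosHashPut(h, key, 9, &v, sizeof(v));  /* 9-byte keys, mirroring perfTest above */
  }
  int64_t before = taosHashGetCompTimes(h);
  for (v = 0; v < 10000; ++v) {
    snprintf(key, sizeof(key), "k%08" PRId64, v);
    (void)taosHashGet(h, key, 9);
  }
  /* A value close to 1.0 means lookups rarely walk a collision chain. */
  printf("comparisons per lookup: %.2f\n", (double)(taosHashGetCompTimes(h) - before) / 10000.0);
  taosHashCleanup(h);
}

The counter is maintained with the atomic helpers shown in thash.c, so it is meant for coarse profiling such as the perfTest timings above rather than exact per-lookup accounting.
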
diff --git a/tests/script/tmp/monitor.sim b/tests/script/tmp/monitor.sim index c0c1da567c..b410e1b6ad 100644 --- a/tests/script/tmp/monitor.sim +++ b/tests/script/tmp/monitor.sim @@ -4,6 +4,7 @@ system sh/cfg.sh -n dnode1 -c monitorfqdn -v localhost system sh/cfg.sh -n dnode1 -c monitorport -v 80 system sh/cfg.sh -n dnode1 -c monitorInterval -v 1 system sh/cfg.sh -n dnode1 -c monitorComp -v 1 +system sh/cfg.sh -n dnode1 -c uptimeInterval -v 3 #system sh/cfg.sh -n dnode1 -c supportVnodes -v 128 #system sh/cfg.sh -n dnode1 -c telemetryReporting -v 1 @@ -14,7 +15,7 @@ system sh/cfg.sh -n dnode1 -c monitorComp -v 1 system sh/exec.sh -n dnode1 -s start sql connect -print =============== select * from information_schema.ins_dnodes +print =============== create database sql create database db vgroups 2; sql use db; sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 binary(16)) comment "abd"; diff --git a/tests/script/tsim/db/basic2.sim b/tests/script/tsim/db/basic2.sim index b7ac0b5edd..4f0ba4a13c 100644 --- a/tests/script/tsim/db/basic2.sim +++ b/tests/script/tsim/db/basic2.sim @@ -4,7 +4,7 @@ system sh/exec.sh -n dnode1 -s start sql connect print =============== conflict stb -sql create database db vgroups 1; +sql create database db vgroups 4; sql use db; sql create table stb (ts timestamp, i int) tags (j int); sql_error create table stb using stb tags (1); @@ -16,6 +16,9 @@ sql_error create table ctb (ts timestamp, i int) tags (j int); sql create table ntb (ts timestamp, i int); sql_error create table ntb (ts timestamp, i int) tags (j int); +sql drop table ntb +sql create table ntb (ts timestamp, i int) tags (j int); + sql drop database db print =============== create database d1 diff --git a/tests/script/tsim/parser/alter1.sim b/tests/script/tsim/parser/alter1.sim index 9d0049e45e..369419dcd9 100644 --- a/tests/script/tsim/parser/alter1.sim +++ b/tests/script/tsim/parser/alter1.sim @@ -130,4 +130,4 @@ endi # return -1 #endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/binary_escapeCharacter.sim b/tests/script/tsim/parser/binary_escapeCharacter.sim index 0b437d8b04..5a9c0e7bb1 100644 --- a/tests/script/tsim/parser/binary_escapeCharacter.sim +++ b/tests/script/tsim/parser/binary_escapeCharacter.sim @@ -101,4 +101,4 @@ sql_error insert into tb values(now, '\'); #sql_error insert into tb values(now, '\\\n'); sql insert into tb values(now, '\n'); -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/col_arithmetic_operation.sim b/tests/script/tsim/parser/col_arithmetic_operation.sim index f22beefdf8..9a2ba34c85 100644 --- a/tests/script/tsim/parser/col_arithmetic_operation.sim +++ b/tests/script/tsim/parser/col_arithmetic_operation.sim @@ -132,4 +132,4 @@ sql_error select max(c1-c2) from $tb print =====================> td-1764 sql select sum(c1)/count(*), sum(c1) as b, count(*) as b from $stb interval(1y) -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/columnValue_unsign.sim b/tests/script/tsim/parser/columnValue_unsign.sim index 85ff490bf4..7ae1b20eca 100644 --- a/tests/script/tsim/parser/columnValue_unsign.sim +++ b/tests/script/tsim/parser/columnValue_unsign.sim @@ -129,4 +129,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n 
dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/fill_stb.sim b/tests/script/tsim/parser/fill_stb.sim index 656b1ac94e..6c61631aa8 100644 --- a/tests/script/tsim/parser/fill_stb.sim +++ b/tests/script/tsim/parser/fill_stb.sim @@ -279,7 +279,7 @@ endi #endi ## linear fill -sql select _wstart, max(c1), min(c2), avg(c3), sum(c4), first(c7), last(c8), first(c9) from $stb where ts >= $ts0 and ts <= $tsu partition by t1 interval(5m) fill(linear) +sql select _wstart, max(c1), min(c2), avg(c3), sum(c4), first(c7), last(c8), first(c9) from $stb where ts >= $ts0 and ts <= $tsu partition by t1 interval(5m) fill(linear) $val = $rowNum * 2 $val = $val - 1 $val = $val * $tbNum diff --git a/tests/script/tsim/parser/import_file.sim b/tests/script/tsim/parser/import_file.sim index e031e0249d..37dc0c4476 100644 --- a/tests/script/tsim/parser/import_file.sim +++ b/tests/script/tsim/parser/import_file.sim @@ -69,4 +69,4 @@ endi system rm -f $inFileName -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/repeatAlter.sim b/tests/script/tsim/parser/repeatAlter.sim index d28a03e193..b4012048cc 100644 --- a/tests/script/tsim/parser/repeatAlter.sim +++ b/tests/script/tsim/parser/repeatAlter.sim @@ -6,4 +6,4 @@ while $i <= $loops $i = $i + 1 endw -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/select_from_cache_disk.sim b/tests/script/tsim/parser/select_from_cache_disk.sim index 0983e36a3a..3c0b13c638 100644 --- a/tests/script/tsim/parser/select_from_cache_disk.sim +++ b/tests/script/tsim/parser/select_from_cache_disk.sim @@ -60,4 +60,4 @@ if $data12 != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/single_row_in_tb.sim b/tests/script/tsim/parser/single_row_in_tb.sim index 1bd53ad24e..e7b4c9a871 100644 --- a/tests/script/tsim/parser/single_row_in_tb.sim +++ b/tests/script/tsim/parser/single_row_in_tb.sim @@ -33,4 +33,4 @@ print ================== server restart completed run tsim/parser/single_row_in_tb_query.sim -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/parser/single_row_in_tb_query.sim b/tests/script/tsim/parser/single_row_in_tb_query.sim index 422756b798..37e193f9d2 100644 --- a/tests/script/tsim/parser/single_row_in_tb_query.sim +++ b/tests/script/tsim/parser/single_row_in_tb_query.sim @@ -195,4 +195,4 @@ endi print ===============>safty check TD-4927 sql select first(ts, c1) from sr_stb where ts<1 group by t1; -sql select first(ts, c1) from sr_stb where ts>0 and ts<1; \ No newline at end of file +sql select first(ts, c1) from sr_stb where ts>0 and ts<1; diff --git a/tests/script/tsim/query/complex_group.sim b/tests/script/tsim/query/complex_group.sim index 3dad8059cd..d7d14c0ee8 100644 --- a/tests/script/tsim/query/complex_group.sim +++ b/tests/script/tsim/query/complex_group.sim @@ -454,4 +454,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_having.sim b/tests/script/tsim/query/complex_having.sim index 
9e28c3803e..4c0af6d10c 100644 --- a/tests/script/tsim/query/complex_having.sim +++ b/tests/script/tsim/query/complex_having.sim @@ -365,4 +365,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_limit.sim b/tests/script/tsim/query/complex_limit.sim index 2a90e7ff1d..acb133f650 100644 --- a/tests/script/tsim/query/complex_limit.sim +++ b/tests/script/tsim/query/complex_limit.sim @@ -508,4 +508,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_select.sim b/tests/script/tsim/query/complex_select.sim index f4c9877bfd..b7697e5cab 100644 --- a/tests/script/tsim/query/complex_select.sim +++ b/tests/script/tsim/query/complex_select.sim @@ -558,4 +558,4 @@ if $data00 != 33 then endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/complex_where.sim b/tests/script/tsim/query/complex_where.sim index bda1c036f0..847f67ed34 100644 --- a/tests/script/tsim/query/complex_where.sim +++ b/tests/script/tsim/query/complex_where.sim @@ -669,4 +669,4 @@ if $rows != 1 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/crash_sql.sim b/tests/script/tsim/query/crash_sql.sim index 1d20491869..169f2e7272 100644 --- a/tests/script/tsim/query/crash_sql.sim +++ b/tests/script/tsim/query/crash_sql.sim @@ -79,4 +79,4 @@ print ================ start query ====================== print ================ SQL used to cause taosd or taos shell crash sql_error select sum(c1) ,count(c1) from ct4 group by c1 having sum(c10) between 0 and 1 ; -#system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +#system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/diff.sim b/tests/script/tsim/query/diff.sim index f0d82b01e9..badd139a9f 100644 --- a/tests/script/tsim/query/diff.sim +++ b/tests/script/tsim/query/diff.sim @@ -25,17 +25,17 @@ $i = 0 while $i < $tbNum $tb = $tbPrefix . 
$i sql create table $tb using $mt tags( $i ) - + $x = 0 while $x < $rowNum $cc = $x * 60000 $ms = 1601481600000 + $cc - sql insert into $tb values ($ms , $x ) + sql insert into $tb values ($ms , $x ) $x = $x + 1 - endw - + endw + $i = $i + 1 -endw +endw sleep 100 @@ -61,7 +61,7 @@ sql select _rowts, diff(tbcol) from $tb where ts > $ms print ===> rows: $rows print ===> $data00 $data01 $data02 $data03 $data04 $data05 print ===> $data10 $data11 $data12 $data13 $data14 $data15 -if $data11 != 1 then +if $data11 != 1 then return -1 endi @@ -72,7 +72,7 @@ sql select _rowts, diff(tbcol) from $tb where ts <= $ms print ===> rows: $rows print ===> $data00 $data01 $data02 $data03 $data04 $data05 print ===> $data10 $data11 $data12 $data13 $data14 $data15 -if $data11 != 1 then +if $data11 != 1 then return -1 endi @@ -82,7 +82,7 @@ sql select _rowts, diff(tbcol) as b from $tb print ===> rows: $rows print ===> $data00 $data01 $data02 $data03 $data04 $data05 print ===> $data10 $data11 $data12 $data13 $data14 $data15 -if $data11 != 1 then +if $data11 != 1 then return -1 endi @@ -107,4 +107,4 @@ if $rows != 2 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/explain.sim b/tests/script/tsim/query/explain.sim index 30a857815c..2871252d91 100644 --- a/tests/script/tsim/query/explain.sim +++ b/tests/script/tsim/query/explain.sim @@ -3,7 +3,7 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/exec.sh -n dnode1 -s start sql connect -print ======== step1 +print ======== step1 sql create database db1 vgroups 3; sql use db1; sql select * from information_schema.ins_databases; @@ -30,7 +30,7 @@ sql insert into tb4 values (now, 4, "Bitmap Heap Scan on tenk1 t1 (cost=5.07..2 #sql insert into tb4 values (now, 4, "Bitmap Heap Scan on tenk1 t1 (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)"); -print ======== step2 +print ======== step2 sql explain select * from st1 where -2; sql explain select ts from tb1; sql explain select * from st1; @@ -41,14 +41,14 @@ sql explain select count(*),sum(f1) from st1; sql explain select count(*),sum(f1) from st1 group by f1; #sql explain select count(f1) from tb1 interval(10s, 2s) sliding(3s) fill(prev); -print ======== step3 +print ======== step3 sql explain verbose true select * from st1 where -2; sql explain verbose true select ts from tb1 where f1 > 0; sql explain verbose true select * from st1 where f1 > 0 and ts > '2020-10-31 00:00:00' and ts < '2021-10-31 00:00:00'; sql explain verbose true select count(*) from st1 partition by tbname slimit 1 soffset 2 limit 2 offset 1; sql explain verbose true select * from information_schema.ins_stables where db_name='db2'; -print ======== step4 +print ======== step4 sql explain analyze select ts from st1 where -2; sql explain analyze select ts from tb1; sql explain analyze select ts from st1; @@ -59,7 +59,7 @@ sql explain analyze select count(*),sum(f1) from tb1; sql explain analyze select count(*),sum(f1) from st1; sql explain analyze select count(*),sum(f1) from st1 group by f1; -print ======== step5 +print ======== step5 sql explain analyze verbose true select ts from st1 where -2; sql explain analyze verbose true select ts from tb1; sql explain analyze verbose true select ts from st1; @@ -87,12 +87,12 @@ sql explain analyze verbose true select count(f1) from st1 group by tbname; #sql explain select * from tb1, tb2 where tb1.ts=tb2.ts; #sql explain select * from st1, st2 where 
tb1.ts=tb2.ts; #sql explain analyze verbose true select sum(a+b) from (select _rowts, min(f1) b,count(*) a from st1 where f1 > 0 interval(1a)) where a < 0 interval(1s); -#sql explain select min(f1) from st1 interval(1m, 2a) sliding(30s); +#sql explain select min(f1) from st1 interval(1m, 2a) sliding(30s); #sql explain verbose true select count(*),sum(f1) from st1 where f1 > 0 and ts > '2021-10-31 00:00:00' group by f1 having sum(f1) > 0; -#sql explain analyze select min(f1) from st1 interval(3m, 2a) sliding(1m); +#sql explain analyze select min(f1) from st1 interval(3m, 2a) sliding(1m); #sql explain analyze select count(f1) from tb1 interval(10s, 2s) sliding(3s) fill(prev); #sql explain analyze verbose true select count(*),sum(f1) from st1 where f1 > 0 and ts > '2021-10-31 00:00:00' group by f1 having sum(f1) > 0; -#sql explain analyze verbose true select min(f1) from st1 interval(3m, 2a) sliding(1m); +#sql explain analyze verbose true select min(f1) from st1 interval(3m, 2a) sliding(1m); system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/interval.sim b/tests/script/tsim/query/interval.sim index cc8a73daec..833da4a8ba 100644 --- a/tests/script/tsim/query/interval.sim +++ b/tests/script/tsim/query/interval.sim @@ -177,4 +177,4 @@ print =============== clear # return -1 #endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/scalarFunction.sim b/tests/script/tsim/query/scalarFunction.sim index 103e66e54e..1b8115fec6 100644 --- a/tests/script/tsim/query/scalarFunction.sim +++ b/tests/script/tsim/query/scalarFunction.sim @@ -33,7 +33,7 @@ print =============== create normal table sql create table ntb (ts timestamp, c1 int, c2 float, c3 double) sql show tables -if $rows != 101 then +if $rows != 101 then return -1 endi @@ -444,7 +444,7 @@ if $loop_test == 0 then print =============== stop and restart taosd system sh/exec.sh -n dnode1 -s stop -x SIGINT system sh/exec.sh -n dnode1 -s start - + $loop_cnt = 0 check_dnode_ready_0: $loop_cnt = $loop_cnt + 1 @@ -462,7 +462,7 @@ if $loop_test == 0 then goto check_dnode_ready_0 endi - $loop_test = 1 + $loop_test = 1 goto loop_test_pos endi diff --git a/tests/script/tsim/query/scalarNull.sim b/tests/script/tsim/query/scalarNull.sim index ec95c94f23..6abe3d62d9 100644 --- a/tests/script/tsim/query/scalarNull.sim +++ b/tests/script/tsim/query/scalarNull.sim @@ -3,7 +3,7 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/exec.sh -n dnode1 -s start sql connect -print ======== step1 +print ======== step1 sql create database db1 vgroups 3; sql use db1; sql select * from information_schema.ins_databases; diff --git a/tests/script/tsim/query/session.sim b/tests/script/tsim/query/session.sim index 158448d765..b6eb4ed3aa 100644 --- a/tests/script/tsim/query/session.sim +++ b/tests/script/tsim/query/session.sim @@ -35,8 +35,8 @@ sql INSERT INTO dev_001 VALUES('2020-05-13 13:00:00.001', 12) sql INSERT INTO dev_001 VALUES('2020-05-14 13:00:00.001', 13) sql INSERT INTO dev_001 VALUES('2020-05-15 14:00:00.000', 14) sql INSERT INTO dev_001 VALUES('2020-05-20 10:00:00.000', 15) -sql INSERT INTO dev_001 VALUES('2020-05-27 10:00:00.001', 16) - +sql INSERT INTO dev_001 VALUES('2020-05-27 10:00:00.001', 16) + sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.000', 1) sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.005', 2) sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.009', 3) @@ -46,7 +46,7 @@ sql INSERT INTO dev_002 
VALUES('2020-05-13 10:00:00.036', 6) sql INSERT INTO dev_002 VALUES('2020-05-13 10:00:00.51', 7) # vnode does not return the precision of the table -print ====> create database d1 precision 'us' +print ====> create database d1 precision 'us' sql create database d1 precision 'us' sql use d1 sql create table dev_001 (ts timestamp ,i timestamp ,j int) @@ -54,7 +54,7 @@ sql insert into dev_001 values(1623046993681000,now,1)(1623046993681001,now+1s,2 sql create table secondts(ts timestamp,t2 timestamp,i int) sql insert into secondts values(1623046993681000,now,1)(1623046993681001,now+1s,2)(1623046993681002,now+2s,3)(1623046993681004,now+5s,4) -$loop_test = 0 +$loop_test = 0 loop_test_pos: sql use $dbNamme @@ -299,7 +299,7 @@ if $loop_test == 0 then print =============== stop and restart taosd system sh/exec.sh -n dnode1 -s stop -x SIGINT system sh/exec.sh -n dnode1 -s start - + $loop_cnt = 0 check_dnode_ready_0: $loop_cnt = $loop_cnt + 1 @@ -317,7 +317,7 @@ if $loop_test == 0 then goto check_dnode_ready_0 endi - $loop_test = 1 + $loop_test = 1 goto loop_test_pos endi diff --git a/tests/script/tsim/query/stddev.sim b/tests/script/tsim/query/stddev.sim index d61c7273e1..b45c7d80a3 100644 --- a/tests/script/tsim/query/stddev.sim +++ b/tests/script/tsim/query/stddev.sim @@ -409,4 +409,4 @@ if $rows != 2 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/time_process.sim b/tests/script/tsim/query/time_process.sim index b3c0e9561f..83a6445846 100644 --- a/tests/script/tsim/query/time_process.sim +++ b/tests/script/tsim/query/time_process.sim @@ -111,4 +111,4 @@ if $rows != 2 then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/query/udf.sim b/tests/script/tsim/query/udf.sim index 7cc1403bcb..7f8b1044ef 100644 --- a/tests/script/tsim/query/udf.sim +++ b/tests/script/tsim/query/udf.sim @@ -9,7 +9,7 @@ system sh/cfg.sh -n dnode1 -c udf -v 1 system sh/exec.sh -n dnode1 -s start sql connect -print ======== step1 udf +print ======== step1 udf system sh/compile_udf.sh sql create database udf vgroups 3; sql use udf; diff --git a/tests/script/tsim/user/privilege_sysinfo.sim b/tests/script/tsim/user/privilege_sysinfo.sim index 25c1a84db6..3008599427 100644 --- a/tests/script/tsim/user/privilege_sysinfo.sim +++ b/tests/script/tsim/user/privilege_sysinfo.sim @@ -8,7 +8,20 @@ sql create user sysinfo0 pass 'taosdata' sql create user sysinfo1 pass 'taosdata' sql alter user sysinfo0 sysinfo 0 sql alter user sysinfo1 sysinfo 1 + sql create database db +sql use db +sql create table db.stb (ts timestamp, i int) tags (t int) +sql create table db.ctb using db.stb tags (1) +sql create table db.ntb (ts timestamp, i int) +sql insert into db.ctb values (now, 1); +sql insert into db.ntb values (now, 1); +sql select * from db.stb +sql select * from db.ctb +sql select * from db.ntb + +sql create database d2 +sql GRANT all ON d2.* to sysinfo0; print user sysinfo0 login sql close @@ -17,11 +30,31 @@ sql connect sysinfo0 print =============== check oper sql_error create user u1 pass 'u1' sql_error drop user sysinfo1 -sql_error alter user sysinfo1 pass '1' sql_error alter user sysinfo0 pass '1' +sql_error alter user sysinfo0 enable 0 +sql_error alter user sysinfo0 enable 1 +sql_error alter user sysinfo1 pass '1' +sql_error alter user sysinfo1 enable 1 +sql_error alter user sysinfo1 enable 
1 +sql_error GRANT read ON db.* to sysinfo0; +sql_error GRANT read ON *.* to sysinfo0; +sql_error REVOKE read ON db.* from sysinfo0; +sql_error REVOKE read ON *.* from sysinfo0; +sql_error GRANT write ON db.* to sysinfo0; +sql_error GRANT write ON *.* to sysinfo0; +sql_error REVOKE write ON db.* from sysinfo0; +sql_error REVOKE write ON *.* from sysinfo0; +sql_error REVOKE write ON *.* from sysinfo0; sql_error create dnode $hostname port 7200 sql_error drop dnode 1 +sql_error alter dnode 1 'debugFlag 135' +sql_error alter dnode 1 'dDebugFlag 131' +sql_error alter dnode 1 'resetlog' +sql_error alter dnode 1 'monitor' '1' +sql_error alter dnode 1 'monitor' '0' +sql_error alter dnode 1 'monitor 1' +sql_error alter dnode 1 'monitor 0' sql_error create qnode on dnode 1 sql_error drop qnode on dnode 1 @@ -44,20 +77,107 @@ sql_error create database d1 sql_error drop database db sql_error use db sql_error alter database db replica 1; +sql_error alter database db keep 21 sql_error show db.vgroups -sql select * from information_schema.ins_stables where db_name = 'db' -sql select * from information_schema.ins_tables where db_name = 'db' + +sql_error create table db.stb1 (ts timestamp, i int) tags (t int) +sql_error create table db.ctb1 using db.stb1 tags (1) +sql_error create table db.ntb1 (ts timestamp, i int) +sql_error insert into db.ctb values (now, 1); +sql_error insert into db.ntb values (now, 1); +sql_error select * from db.stb +sql_error select * from db.ctb +sql_error select * from db.ntb + +sql use d2 +sql create table d2.stb2 (ts timestamp, i int) tags (t int) +sql create table d2.ctb2 using d2.stb2 tags (1) +sql create table d2.ntb2 (ts timestamp, i int) +sql insert into d2.ctb2 values (now, 1); +sql insert into d2.ntb2 values (now, 1); +sql select * from d2.stb2 +sql select * from d2.ctb2 +sql select * from d2.ntb2 print =============== check show -sql select * from information_schema.ins_users +sql_error show users sql_error show cluster -sql select * from information_schema.ins_dnodes -sql select * from information_schema.ins_mnodes +sql_error select * from information_schema.ins_dnodes +sql_error select * from information_schema.ins_mnodes sql_error show snodes -sql select * from information_schema.ins_qnodes +sql_error select * from information_schema.ins_qnodes +sql_error show dnodes +sql_error show snodes +sql_error show qnodes +sql_error show mnodes sql_error show bnodes +sql_error show db.vgroups +sql_error show db.stables +sql_error show db.tables +sql_error show indexes from stb from db +sql show databases +sql_error show d2.vgroups +sql show d2.stables +sql show d2.tables +sql show indexes from stb2 from d2 +#sql_error show create database db +sql_error show create table db.stb; +sql_error show create table db.ctb; +sql_error show create table db.ntb; +sql show streams +sql show consumers +sql show topics +sql show subscriptions +sql show functions sql_error show grants +sql show queries +sql show connections +sql show apps +sql_error show transactions +#sql_error show create database d2 +sql show create table d2.stb2; +sql show create table d2.ctb2; +sql show create table d2.ntb2; +sql_error show variables; +sql show local variables; sql_error show dnode 1 variables; -sql show variables; +sql_error show variables; -system sh/exec.sh -n dnode1 -s stop -x SIGINT + +print =============== check information_schema +sql show databases +if $rows != 3 then + return -1 +endi + +sql use information_schema; +sql_error select * from information_schema.ins_dnodes +sql_error select * from 
information_schema.ins_mnodes +sql_error select * from information_schema.ins_modules +sql_error select * from information_schema.ins_qnodes +sql_error select * from information_schema.ins_cluster +sql select * from information_schema.ins_databases +sql select * from information_schema.ins_functions +sql select * from information_schema.ins_indexes +sql select * from information_schema.ins_stables +sql select * from information_schema.ins_tables +sql select * from information_schema.ins_tags +sql select * from information_schema.ins_users +sql_error select * from information_schema.ins_grants +sql_error select * from information_schema.ins_vgroups +sql_error select * from information_schema.ins_configs +sql_error select * from information_schema.ins_dnode_variables + +print =============== check performance_schema +sql use performance_schema; +sql select * from performance_schema.perf_connections +sql select * from performance_schema.perf_queries +sql select * from performance_schema.perf_topics +sql select * from performance_schema.perf_consumers +sql select * from performance_schema.perf_subscriptions +#sql_error select * from performance_schema.perf_trans +#sql_error select * from performance_schema.perf_smas +#sql_error select * from information_schema.perf_streams +#sql_error select * from information_schema.perf_apps + +#system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/system-test/2-query/interp.py b/tests/system-test/2-query/interp.py index 934ba9e161..5550519e05 100644 --- a/tests/system-test/2-query/interp.py +++ b/tests/system-test/2-query/interp.py @@ -551,7 +551,57 @@ class TDTestCase: tdSql.checkData(0, 0, 15) tdSql.checkData(1, 0, 15) - tdLog.printNoPrefix("==========step9:test error cases") + tdLog.printNoPrefix("==========step9:test multi-interp cases") + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(null)") + tdSql.checkRows(5) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, None) + tdSql.checkData(1, i, None) + tdSql.checkData(2, i, 15) + tdSql.checkData(3, i, None) + tdSql.checkData(4, i, None) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(value, 1)") + tdSql.checkRows(5) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, 1) + tdSql.checkData(1, i, 1) + tdSql.checkData(2, i, 15) + tdSql.checkData(3, i, 1) + tdSql.checkData(4, i, 1) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(prev)") + tdSql.checkRows(5) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, 5) + tdSql.checkData(1, i, 5) + tdSql.checkData(2, i, 15) + tdSql.checkData(3, i, 15) + tdSql.checkData(4, i, 15) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(next)") + tdSql.checkRows(3) + tdSql.checkCols(4) + + for i in range (tdSql.queryCols): + tdSql.checkData(0, i, 15) + tdSql.checkData(1, i, 15) + tdSql.checkData(2, i, 15) + + tdSql.query(f"select interp(c0),interp(c1),interp(c2),interp(c3) from {dbname}.{tbname} range('2020-02-09 00:00:05', '2020-02-13 00:00:05') every(1d) fill(linear)") + tdSql.checkRows(1) + tdSql.checkCols(4) + + for i in range 
(tdSql.queryCols): + tdSql.checkData(0, i, 15) + + tdLog.printNoPrefix("==========step10:test error cases") tdSql.error(f"select interp(c0) from {dbname}.{tbname}") tdSql.error(f"select interp(c0) from {dbname}.{tbname} range('2020-02-10 00:00:05', '2020-02-15 00:00:05')")