@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine creates a separate table for each data collection point, but in real applications it is often necessary to aggregate data from different collection points. To perform such aggregations efficiently, TDengine introduces the concept of the super table (STable). A super table represents a specific type of data collection point: it is a collection of tables whose schemas are identical, while each table carries its own static tags. There can be multiple tags, and they can be added, deleted, or modified at any time. By specifying filter conditions on the tags, an application can aggregate or run statistics over all or a subset of the tables under a STable, which greatly simplifies application development. The overall process is shown in the figure below:
-
+
Figure 5: Multi-table aggregate query
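As a rough sketch of this kind of tag-filtered aggregation (assuming the meters super table and its groupId tag from the smart-meter example used elsewhere in the docs), a query might look like:

```sql
-- Aggregate across every subtable of the super table "meters"
-- whose groupId tag equals 2; only matching subtables are scanned.
SELECT COUNT(*), AVG(current), MAX(voltage)
FROM meters
WHERE groupId = 2;
```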
diff --git a/docs-cn/21-tdinternal/dnode.webp b/docs-cn/21-tdinternal/dnode.webp
new file mode 100644
index 0000000000..a56c7e4594
Binary files /dev/null and b/docs-cn/21-tdinternal/dnode.webp differ
diff --git a/docs-cn/21-tdinternal/message.webp b/docs-cn/21-tdinternal/message.webp
new file mode 100644
index 0000000000..a2a42abff3
Binary files /dev/null and b/docs-cn/21-tdinternal/message.webp differ
diff --git a/docs-cn/21-tdinternal/modules.webp b/docs-cn/21-tdinternal/modules.webp
new file mode 100644
index 0000000000..718a6abccd
Binary files /dev/null and b/docs-cn/21-tdinternal/modules.webp differ
diff --git a/docs-cn/21-tdinternal/multi_tables.webp b/docs-cn/21-tdinternal/multi_tables.webp
new file mode 100644
index 0000000000..8f649e34a3
Binary files /dev/null and b/docs-cn/21-tdinternal/multi_tables.webp differ
diff --git a/docs-cn/21-tdinternal/replica-forward.webp b/docs-cn/21-tdinternal/replica-forward.webp
new file mode 100644
index 0000000000..512efd4eba
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-forward.webp differ
diff --git a/docs-cn/21-tdinternal/replica-master.webp b/docs-cn/21-tdinternal/replica-master.webp
new file mode 100644
index 0000000000..57030a11f5
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-master.webp differ
diff --git a/docs-cn/21-tdinternal/replica-restore.webp b/docs-cn/21-tdinternal/replica-restore.webp
new file mode 100644
index 0000000000..f282c2d4d2
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-restore.webp differ
diff --git a/docs-cn/21-tdinternal/structure.webp b/docs-cn/21-tdinternal/structure.webp
new file mode 100644
index 0000000000..b77a42c074
Binary files /dev/null and b/docs-cn/21-tdinternal/structure.webp differ
diff --git a/docs-cn/21-tdinternal/vnode.webp b/docs-cn/21-tdinternal/vnode.webp
new file mode 100644
index 0000000000..fae3104c89
Binary files /dev/null and b/docs-cn/21-tdinternal/vnode.webp differ
diff --git a/docs-cn/21-tdinternal/write_master.webp b/docs-cn/21-tdinternal/write_master.webp
new file mode 100644
index 0000000000..9624036ed3
Binary files /dev/null and b/docs-cn/21-tdinternal/write_master.webp differ
diff --git a/docs-cn/21-tdinternal/write_slave.webp b/docs-cn/21-tdinternal/write_slave.webp
new file mode 100644
index 0000000000..7c45dec11b
Binary files /dev/null and b/docs-cn/21-tdinternal/write_slave.webp differ
diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md
index f63a6701ee..5bfc94c534 100644
--- a/docs-cn/25-application/01-telegraf.md
+++ b/docs-cn/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ IT operations monitoring data is usually sensitive to time characteristics, for example
This article shows how to quickly build an IT operations monitoring system based on TDengine + Telegraf + Grafana, without writing a single line of code and by simply modifying a few lines in configuration files. The architecture is shown in the figure below:
-
+
## Installation Steps
@@ -75,7 +75,7 @@ sudo systemctl start telegraf
Click the gear icon on the left and select `Plugins`; you should be able to find the TDengine data source plugin icon.
Click the plus icon on the left and select `Import`, download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, and import it. You should then see a dashboard like the following:
-
+![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## Summary
diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md
index 5e6bc6577b..5966f2d654 100644
--- a/docs-cn/25-application/02-collectd.md
+++ b/docs-cn/25-application/02-collectd.md
@@ -16,7 +16,7 @@ IT operations monitoring data is usually sensitive to time characteristics, for example
This article shows how to quickly build an IT operations monitoring system based on TDengine + collectd / statsD + Grafana, without writing a single line of code and by simply modifying a few lines in configuration files. The architecture is shown in the figure below:
-
+
## Installation Steps
@@ -81,12 +81,12 @@ In the repeater section, add { host:'', port: select groupid, location from test.d0;
groupid | location |
=================================
- 0 | shanghai |
+ 0 | California.SanDiego |
Query OK, 1 row(s) in set (0.003490s)
```
diff --git a/docs-cn/eco_system.png b/docs-cn/eco_system.png
deleted file mode 100644
index bf8bf8f1e0..0000000000
Binary files a/docs-cn/eco_system.png and /dev/null differ
diff --git a/docs-cn/eco_system.webp b/docs-cn/eco_system.webp
new file mode 100644
index 0000000000..d60c38e97c
Binary files /dev/null and b/docs-cn/eco_system.webp differ
diff --git a/docs-en/02-intro/eco_system.png b/docs-en/02-intro/eco_system.png
deleted file mode 100644
index bf8bf8f1e0..0000000000
Binary files a/docs-en/02-intro/eco_system.png and /dev/null differ
diff --git a/docs-en/02-intro/eco_system.webp b/docs-en/02-intro/eco_system.webp
new file mode 100644
index 0000000000..d60c38e97c
Binary files /dev/null and b/docs-en/02-intro/eco_system.webp differ
diff --git a/docs-en/02-intro/index.md b/docs-en/02-intro/index.md
index e2309943f3..628e87dd59 100644
--- a/docs-en/02-intro/index.md
+++ b/docs-en/02-intro/index.md
@@ -5,39 +5,39 @@ toc_max_heading_level: 2
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
-This section introduces the major features, competitive advantages, suited scenarios and benchmarks to help you get a high level picture for TDengine.
+This section introduces the major features, competitive advantages, typical use cases and benchmarks to help you get a high-level overview of TDengine.
## Major Features
The major features are listed below:
-1. Besides [using SQL to insert](/develop/insert-data/sql-writing),it supports [Schemaless writing](/reference/schemaless/),and it supports [InfluxDB LINE](/develop/insert-data/influxdb-line),[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](/develop/insert-data/opentsdb-json) and other protocols.
-2. Support for seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf),[Prometheus](/third-party/prometheus),[StatsD](/third-party/statsd),[collectd](/third-party/collectd),[icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a line of code, those agents can write data points into TDengine just by configuration.
-3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation, etc.
-4. Support for [user defined functions](/develop/udf)
+1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json), among others.
+2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
+3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
+4. Support for [user defined functions](/develop/udf).
5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
6. Support for [continuous query](/develop/continuous-query).
7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
-9. Provides interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc query.
+9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
-11. Provides [monitoring](/operation/monitor) on TDengine running instances.
+11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
13. Provides a [REST API](/reference/rest-api/).
-14. Supports the seamless integration with [Grafana](/third-party/grafana) for visualization.
+14. Supports seamless integration with [Grafana](/third-party/grafana) for visualization.
15. Supports seamless integration with Google Data Studio.
-For more detail on features, please read through the whole documentation.
+For more details on features, please read through the entire documentation.
## Competitive Advantages
-TDengine makes full use of [the characteristics of time series data](https://tdengine.com/2019/07/09/86.html), such as structured, no transaction, rarely delete or update, etc., and builds its own innovative storage engine and computing engine to differentiate itself from other time series databases with the following advantages.
+Time-series data is structured, not transactional, and is rarely deleted or updated. TDengine makes full use of [these characteristics of time series data](https://tdengine.com/2019/07/09/86.html) to build its own innovative storage engine and computing engine to differentiate itself from other time series databases, with the following advantages.
-- **[High Performance](https://tdengine.com/fast)**: TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage cost and compute costs, with an innovatively designed and purpose-built storage engine.
+- **[High Performance](https://tdengine.com/fast)**: With an innovatively designed and purpose-built storage engine, TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage costs and compute costs.
- **[Scalable](https://tdengine.com/scalable)**: TDengine provides out-of-box scalability and high-availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
-- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
+- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to better handle time-series data. Keeping NoSQL developers in mind, TDengine also supports convenient and flexible schemaless data ingestion.
- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
@@ -45,24 +45,24 @@ TDengine makes full use of [the characteristics of time series data](https://tde
- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
-- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, there are zero learning costs.
+- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, and a REST API, there are zero learning costs.
-- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
+- **Interactive Console**: TDengine provides convenient console access to the database, through a CLI, to run ad hoc queries, maintain the database, or manage the cluster, without any programming.
-With TDengine, the total cost of ownership of time-series data platform can be greatly reduced. Because 1: with its superior performance, the computing and storage resources are reduced significantly; 2:with SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly; 3: with its simple architecture and zero management, the operation and maintenance costs are reduced.
+With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced: 1. with its superior performance, computing and storage resources are reduced significantly; 2. with SQL support, it can be seamlessly integrated with many third-party tools, and learning/migration costs are reduced significantly; 3. with its simple architecture and zero management, operation and maintenance costs are reduced.
## Technical Ecosystem
-In the time-series data processing platform, TDengine stands in a role like this diagram below:
+This is how TDengine would be situated in a typical time-series data processing platform:
-
+
Figure 1. TDengine Technical Ecosystem
-On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides interactive command-line interface and web interface for management and maintenance.
+On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
-## Suited Scenarios
+## Typical Use Cases
-As a high-performance, scalable and SQL supported time-series database, TDengine's typical application scenarios include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM, etc. This section makes a more detailed analysis of the applicable scenarios.
+As a high-performance, scalable and SQL supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios.
### Characteristics and Requirements of Data Sources
diff --git a/docs-en/04-concept/index.md b/docs-en/04-concept/index.md
index abc553ab6d..850f705146 100644
--- a/docs-en/04-concept/index.md
+++ b/docs-en/04-concept/index.md
@@ -2,7 +2,7 @@
title: Concepts
---
-In order to explain the basic concepts and provide some sample code, the TDengine documentation takes smart meters as a typical time series data scenario. Assuming that each smart meter collects three metrics of current, voltage, and phase, there are multiple smart meters, and each meter has static attributes like location and group ID, the collected data will be similar to the following table:
+In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time series use case. We assume the following: 1. each smart meter collects three metrics, i.e. current, voltage, and phase; 2. there are multiple smart meters; and 3. each meter has static attributes like location and group ID. Based on this, the collected data will look similar to the following table:
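A minimal SQL sketch of this data model, assuming the metric and tag names used above (this statement itself is not part of the change):

```sql
-- One super table for all smart meters: three metrics and two static tags.
-- The first column must always be the timestamp.
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);
```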
@@ -29,7 +29,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
10.3
219
0.31
-Beijing.Chaoyang
+California.SanFrancisco
2
@@ -38,7 +38,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
10.2
220
0.23
-Beijing.Chaoyang
+California.SanFrancisco
3
@@ -47,7 +47,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
11.5
221
0.35
-Beijing.Haidian
+California.LosAngeles
3
@@ -56,7 +56,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
13.4
223
0.29
-Beijing.Haidian
+California.LosAngeles
2
@@ -65,7 +65,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
12.6
218
0.33
-Beijing.Chaoyang
+California.SanFrancisco
2
@@ -74,7 +74,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
11.8
221
0.28
-Beijing.Haidian
+California.LosAngeles
2
@@ -83,7 +83,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
10.3
218
0.25
-Beijing.Chaoyang
+California.SanFrancisco
3
@@ -92,7 +92,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
12.3
221
0.31
-Beijing.Chaoyang
+California.SanFrancisco
2
@@ -112,7 +112,7 @@ Label/Tag refers to the static properties of sensors, equipment or other types o
## Data Collection Point
-Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipments, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car, so in this example the car would have three data collection points.
+Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
## Table
@@ -122,10 +122,10 @@ To make full use of time-series data characteristics, TDengine adopts a strategy
1. Since the metric data from different DCPs are fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
2. For a DCP, the metric data generated by the DCP is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed.
-3. The metric data from a DCP is continuously stored in block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
-4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, this allows for a higher compression rate.
+3. The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
+4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, which allows for a higher compression rate.
-If the metric data of multiple DCPs are traditionally written into a single table, due to the uncontrollable network delay, the timing of the data from different DCPs arriving at the server cannot be guaranteed, the writing operation must be protected by locks, and the metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest extent.**
+If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
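For illustration, the table for meter d1001 described above might be declared as in this sketch (column names follow the smart-meter example):

```sql
-- One table per data collection point, named after the DCP ID.
-- The first column is the timestamp, which TDengine uses as the index.
CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
```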
@@ -139,7 +139,7 @@ In the design of TDengine, **a table is used to represent a specific data collec
## Subtable
-When creating a table for a specific data collection point, the user can use a STable as a template and specifies the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is:
+When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The differences between a regular table and a subtable are:
1. A subtable is a table; all SQL commands that can be applied to a regular table can be applied to a subtable.
2. A subtable is a table with extensions; it has static tags (labels), and these tags can be added, deleted, and updated after it is created. A regular table does not have tags.
3. A subtable belongs to exactly one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
@@ -151,7 +151,7 @@ The relationship between a STable and the subtables created based on this STable
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
-Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
+Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform the aggregation operation. This reduces the number of data sets to be scanned, which in turn greatly improves the performance of data aggregation across multiple DCPs.
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
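As a sketch of such a tag-filtered aggregation (assuming the meters STable and its location tag from the example above):

```sql
-- Subtables whose location tag matches the filter are found first,
-- then only their time-series data is aggregated.
SELECT AVG(current), MAX(voltage)
FROM meters
WHERE location = "California.SanFrancisco";
```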
@@ -167,4 +167,4 @@ FQDN (Fully Qualified Domain Name) is the full domain name of a specific compute
Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet.
-TDengine does not recommend using an IP address to access the cluster, FQDN is recommended for cluster management.
+TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management.
diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md
index 39b2d02eca..858dd6ac56 100644
--- a/docs-en/05-get-started/index.md
+++ b/docs-en/05-get-started/index.md
@@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx";
## Quick Install
-The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to the connectors of multiple languages, [RESTful interface](/reference/rest-api) is also provided by [taosAdapter](/reference/taosadapter) in TDengine. Prior to version 2.4.0.0, however, there is no taosAdapter, the RESTful interface is provided by the built-in HTTP service of taosd.
+The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future, taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
@@ -130,7 +130,7 @@ After TDengine server is running,execute `taosBenchmark` (previously named tao
taosBenchmark
```
-This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Time stamp is starting from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set 1 to 10 randomly, and location is set to "beijing" or "shanghai".
+This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has the tags "location" and "groupId". groupId is set from 1 to 10 randomly, and location is set to "California.SanFrancisco" or "California.SanDiego".
This command will insert 100 million rows into the database quickly. Time to insert depends on the hardware configuration, it only takes a dozen seconds for a regular PC server.
@@ -152,10 +152,10 @@ query the average, maximum, minimum of 100 million rows:
taos> select avg(current), max(voltage), min(phase) from test.meters;
```
-query the total number of rows with location="beijing":
+query the total number of rows with location="California.SanFrancisco":
```sql
-taos> select count(*) from test.meters where location="beijing";
+taos> select count(*) from test.meters where location="California.SanFrancisco";
```
query the average, maximum, minimum of all rows with groupId=10:
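A sketch of what that query could look like (the actual statement lies outside this hunk):

```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId = 10;
```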
diff --git a/docs-en/07-develop/01-connect/index.md b/docs-en/07-develop/01-connect/index.md
index 2e886cb892..21b2149f44 100644
--- a/docs-en/07-develop/01-connect/index.md
+++ b/docs-en/07-develop/01-connect/index.md
@@ -1,7 +1,7 @@
---
sidebar_label: Connection
title: Connect to TDengine
-description: "This document explains how to establish connection to TDengine, and briefly introduce how to install and use TDengine connectors."
+description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
---
import Tabs from "@theme/Tabs";
@@ -19,7 +19,7 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
-Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details, please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple programming languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
+Any application programs running on any kind of platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, and Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
## Establish Connection
@@ -31,12 +31,12 @@ There are two ways for a connector to establish connections to TDengine:
Key differences:
1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
-2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newere versions.
+2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
3. The REST connection is more accessible with cross-platform support; however, it results in a 30% performance downgrade.
## Install Client Driver taosc
-If you are choosing to use native connection and the application is not on the same host as TDengine server, the TDengine client driver taosc needs to be installed on the application host. If choosing to use the REST connection or the application is on the same host as TDengine server, this step can be skipped. It's better to use same version of taosc as the server.
+If you are choosing to use the native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the application host. If choosing to use the REST connection or the application is on the same host as the TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.
### Install
diff --git a/docs-en/07-develop/02-model/index.mdx b/docs-en/07-develop/02-model/index.mdx
index 2b91dc5487..bdeca37ec1 100644
--- a/docs-en/07-develop/02-model/index.mdx
+++ b/docs-en/07-develop/02-model/index.mdx
@@ -52,10 +52,10 @@ At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable.
A specific table needs to be created for each data collection point. Similar to RDBMS, a table name and schema are required to create a table. Besides, one or more tags can be created for each table. To create a table, a STable needs to be used as a template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using the SQL statement below.
```sql
-CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);
+CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
```
-In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "Beijing.Chaoyang" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
+In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
In TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
@@ -70,10 +70,10 @@ It's suggested to use the global unique ID of a data collection point as the tab
In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
```sql
-INSERT INTO d1001 USING meters TAGS ("Beijng.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32);
+INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
```
-In the above SQL statement, a row with value `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically using STable "meters" as template with tag value `"Beijing.Chaoyang", 2`.
+In the above SQL statement, a row with value `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically using STable "meters" as template with tag value `"California.SanFrancisco", 2`.
For more details please refer to [Create Table Automatically](/taos-sql/insert#automatically-create-table-when-inserting).
diff --git a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
index 9f66992d3d..ae170a2bef 100644
--- a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
+++ b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
@@ -22,11 +22,11 @@ import CStmt from "./_c_stmt.mdx";
## Introduction
-Application program can execute `INSERT` statement through connectors to insert rows. TAOS CLI can be launched manually to insert data too.
+Application programs can execute `INSERT` statements through connectors to insert rows. The TAOS CLI can also be used to manually insert data.
### Insert Single Row
-Below SQL statement is used to insert one row into table "d1001".
+The below SQL statement is used to insert one row into table "d1001".
```sql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
@@ -34,7 +34,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
### Insert Multiple Rows
-Multiple rows can be inserted in single SQL statement. Below example inserts 2 rows into table "d1001".
+Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
```sql
INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
@@ -42,7 +42,7 @@ INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3,
### Insert into Multiple Tables
-Data can be inserted into multiple tables in same SQL statement. Below example inserts 2 rows into table "d1001" and 1 row into table "d1002".
+Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
```sql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
@@ -52,14 +52,14 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::info
-- Inserting in batch can gain better performance. Normally, the higher the batch size, the better the performance. Please be noted each single row can't exceed 16K bytes and each single SQL statement can't exceed 1M bytes.
-- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the application side and the server side, with the number of inserting threads grows to a specific point, the performance may drop instead of growing. The proper number of threads need to be tested in a specific environment to find the best number.
+- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 16K bytes and each SQL statement can't exceed 1MB.
+- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number.
:::
:::warning
-- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (also the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
+- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
- The timestamp to be inserted must be newer than the current time minus the parameter `KEEP`. If `KEEP` is set to 3650 days, then data older than 3650 days can't be inserted. The timestamp to be inserted also can't be newer than the current time plus the parameter `DAYS`. If `DAYS` is set to 2, data with timestamps more than 2 days in the future can't be inserted.
:::
@@ -95,13 +95,13 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::note
1. With either native connection or REST connection, the above samples can work well.
-2. Please be noted that `use db` can't be used with REST connection because REST connection is stateless, so in the samples `dbName.tbName` is used to specify the table name.
+2. Please note that `use db` can't be used with a REST connection because REST connections are stateless, so in the samples `dbName.tbName` is used to specify the table name.
:::
### Insert with Parameter Binding
-TDengine also provides Prepare API that support parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From version 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly to improve the insert performance by avoiding the cost of parsing SQL statements.
+TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly, improving insert performance by avoiding the cost of parsing SQL statements.
Parameter binding is available only with native connection.
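For illustration, a bind-ready insert simply replaces the values with `?` placeholders, as in this sketch:

```sql
-- Template used with the parameter binding APIs; each ? is bound to
-- (timestamp, current, voltage, phase) before execution.
INSERT INTO d1001 VALUES (?, ?, ?, ?);
```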
diff --git a/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx b/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
index 172003d203..06f6387b8a 100644
--- a/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
+++ b/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
@@ -29,7 +29,7 @@ measurement,tag_set field_set timestamp
For example:
```
-meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500
+meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500
```
:::note
diff --git a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx b/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
index 66bb67c256..b83bbdf61e 100644
--- a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
+++ b/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
@@ -15,21 +15,21 @@ import CTelnet from "./_c_opts_telnet.mdx";
## Introduction
-A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs single column data model, so one line can only contains single data column. There can be multiple tags. Each line contains 4 parts as below:
+A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single-column data model, so one line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
```
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
```
-- `metric` will be used as STable name.
-- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. second and millisecond time precision are supported.\
+- `metric` will be used as the STable name.
+- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
- `value` is a metric which must be a numeric value; the corresponding column name is "value".
-- The last part is tag sets separated by space, all tags will be converted to nchar type automatically.
+- The last part is the tag set, separated by spaces; all tags will be converted to nchar type automatically.
For example:
```txt
-meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3
+meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
```
Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
@@ -76,9 +76,9 @@ Query OK, 2 row(s) in set (0.002544s)
taos> select tbname, * from `meters.current`;
tbname | ts | value | groupid | location |
==================================================================================================================================
- t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | Beijing.Haidian |
- t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.250 | 11.300000000 | 3 | Beijing.Haidian |
- t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.249 | 10.300000000 | 2 | Beijing.Chaoyang |
- t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | Beijing.Chaoyang |
+ t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LosAngeles |
+ t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.250 | 11.300000000 | 3 | California.LosAngeles |
+ t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.249 | 10.300000000 | 2 | California.SanFrancisco |
+ t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
Query OK, 4 row(s) in set (0.005399s)
```
diff --git a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx b/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
index d4f723dcde..74267a344b 100644
--- a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
+++ b/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
@@ -93,7 +93,7 @@ Query OK, 2 row(s) in set (0.001954s)
taos> select * from `meters.current`;
ts | value | groupid | location |
===================================================================================================================
- 2022-03-28 09:56:51.249 | 10.300000000 | 2.000000000 | Beijing.Chaoyang |
- 2022-03-28 09:56:51.250 | 12.600000000 | 2.000000000 | Beijing.Chaoyang |
+ 2022-03-28 09:56:51.249 | 10.300000000 | 2.000000000 | California.SanFrancisco |
+ 2022-03-28 09:56:51.250 | 12.600000000 | 2.000000000 | California.SanFrancisco |
Query OK, 2 row(s) in set (0.004076s)
```
diff --git a/docs-en/07-develop/03-insert-data/index.md b/docs-en/07-develop/03-insert-data/index.md
index ee80d436f1..ba31a951ff 100644
--- a/docs-en/07-develop/03-insert-data/index.md
+++ b/docs-en/07-develop/03-insert-data/index.md
@@ -2,11 +2,11 @@
title: Insert
---
-TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, OpenTSDB JSON protocol. Data can be inserted row by row, or in batch. Data from one or more collecting points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, out of order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create stable and table in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted.
+TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row, or in batches. Data from one or more collection points can be inserted simultaneously. Data can be inserted with multiple threads, and out of order data and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create STables and tables in advance if using schemaless protocols, and the schemas can be adjusted automatically based on the data being inserted.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
-```
\ No newline at end of file
+```
diff --git a/docs-en/07-develop/04-query-data/index.mdx b/docs-en/07-develop/04-query-data/index.mdx
index 4016f8453b..761fe1889b 100644
--- a/docs-en/07-develop/04-query-data/index.mdx
+++ b/docs-en/07-develop/04-query-data/index.mdx
@@ -20,7 +20,7 @@ import CAsync from "./_c_async.mdx";
## Introduction
-SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc query. Here is the list of major query functionalities supported by TDengine:
+SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc queries. Here is the list of major query functionalities supported by TDengine:
- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
@@ -31,7 +31,7 @@ SQL is used by TDengine as the query language. Application programs can send SQL
- Join query with timestamp alignment
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
-For example, below SQL statement can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
+For example, the SQL statement below can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
```sql
select * from d1001 where voltage > 215 order by ts desc limit 2;
@@ -46,26 +46,26 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
Query OK, 2 row(s) in set (0.001100s)
```
-To meet the requirements in many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), `last_row` (the last row), more and more functions will be added to better perform in many use cases. Furthermore, continuous query is also supported in TDengine.
+To meet the requirements of many use cases, some special functions have been added in TDengine, for example `twa` (time weighted average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
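A rough sketch of these functions against the example table d1001 (the time range is arbitrary):

```sql
-- Time-weighted average of current and spread of voltage over a time range.
SELECT TWA(current), SPREAD(voltage)
FROM d1001
WHERE ts >= '2018-10-03 14:38:00' AND ts <= '2018-10-03 14:39:00';

-- The most recently written row of the table.
SELECT LAST_ROW(*) FROM d1001;
```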
For detailed query syntax please refer to [Select](/taos-sql/select).
## Aggregation among Tables
-In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection points, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same kind of data collection points, can be. Aggregate functions applicable for tables can be used directly on STables, syntax is exactly same.
+In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection point, and a subtable is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same kind of data collection points. Aggregate functions applicable to tables can be used directly on STables; the syntax is exactly the same.
-In summary, for a STable, its subtables can be aggregated by a simple query on STable, it's kind of join operation. But tables belong to different STables could not be aggregated.
+In summary, for a STable, its subtables can be aggregated by a simple query on the STable; it's a kind of join operation. But tables belonging to different STables cannot be aggregated.
### Example 1
-In TDengine CLI `taos`, use below SQL to get the average voltage of all the meters in BeiJing grouped by location.
+In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
```
taos> SELECT AVG(voltage) FROM meters GROUP BY location;
avg(voltage) | location |
=============================================================
- 222.000000000 | Beijing.Haidian |
- 219.200000000 | Beijing.Chaoyang |
+ 222.000000000 | California.LosAngeles |
+ 219.200000000 | California.SanFrancisco |
Query OK, 2 row(s) in set (0.002136s)
```
@@ -81,11 +81,11 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now -
Query OK, 1 row(s) in set (0.002136s)
```
-Join query is allowed between only the tables of same STable. In [Select](/taos-sql/select), all query operations are marked as whether it supports STable or not.
+Join queries are only allowed between the subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
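A sketch of such a join, assuming d1001 and d1002 are both subtables of the meters STable:

```sql
-- Join two subtables of the same STable on their timestamps.
SELECT d1001.ts, d1001.current, d1002.current
FROM d1001, d1002
WHERE d1001.ts = d1002.ts;
```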
## Down Sampling and Interpolation
-In IoT use cases, down sampling is widely used to aggregate the data by time range. `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, below SQL statement can be used to get the sum of current every 10 seconds from meters table d1001.
+In IoT use cases, down sampling is widely used to aggregate the data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
```
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
@@ -96,10 +96,10 @@ taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
Query OK, 2 row(s) in set (0.000883s)
```
-Down sampling can also be used for STable. For example, below SQL statement can be used to get the sum of current from all meters in BeiJing.
+Down sampling can also be used for STable. For example, the below SQL statement can be used to get the sum of current from all meters in California.
```
-taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s);
+taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
ts | sum(current) |
======================================================
2018-10-03 14:38:04.000 | 10.199999809 |
@@ -110,7 +110,7 @@ taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s
Query OK, 5 row(s) in set (0.001538s)
```
-Down sampling also supports time offset. For example, below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
+Down sampling also supports time offset. For example, the below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
```
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
@@ -124,7 +124,7 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
Query OK, 5 row(s) in set (0.001521s)
```
-In many use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling.
+In many use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
Interpolation can be performed in TDengine if there is no data in a time range.
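For example, a down-sampling query can use `FILL` to interpolate windows that contain no data; a sketch (time range and fill mode are illustrative):

```sql
-- Average current per 10-second window; empty windows are filled
-- with the value of the previous window.
SELECT AVG(current) FROM d1001
  WHERE ts >= '2018-10-03 14:38:00' AND ts <= '2018-10-03 14:40:00'
  INTERVAL(10s) FILL(PREV);
```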
@@ -162,16 +162,16 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
:::note
-1. With either REST connection or native connection, the above sample code work well.
-2. Please be noted that `use db` can't be used in case of REST connection because it's stateless.
+1. With either a REST connection or a native connection, the above sample code works well.
+2. Please note that `use db` can't be used with a REST connection because it's stateless; see the example after this note.
:::
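+With a REST connection, the database can instead be specified by prefixing table names with the database name. The sketch below assumes `power` is the database used by the preceding examples:
+```sql
+SELECT count(*) FROM power.meters;
+```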
### Asynchronous Query
-Besides synchronous query, asynchronous query API is also provided by TDengine to insert or query data more efficiently. With similar hardware and software environment, async API is 2~4 times faster than sync APIs. Async API works in non-blocking mode, which means an operation can be returned without finishing so that the calling thread can switch to other works to improve the performance of the whole application system. Async APIs perform especially better in case of poor network.
+Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than the sync API. The async API works in non-blocking mode, which means an operation can return without finishing so that the calling thread can switch to other tasks to improve the performance of the whole application system. Async APIs perform especially well in the case of poor network conditions.
-Please be noted that async query can only be used with native connection.
+Please note that async query can only be used with a native connection.
diff --git a/docs-en/07-develop/05-continuous-query.mdx b/docs-en/07-develop/05-continuous-query.mdx
index 97e32a17ff..f233deba31 100644
--- a/docs-en/07-develop/05-continuous-query.mdx
+++ b/docs-en/07-develop/05-continuous-query.mdx
@@ -4,15 +4,15 @@ description: "Continuous query is a query that's executed automatically accordin
title: "Continuous Query"
---
-Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to client or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
+Continuous query is a query that's executed automatically according to a predefined frequency to provide aggregate query capability by time window; it is essentially simplified, time-driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of the time window and the forward sliding time need to be specified with the parameters `INTERVAL` and `SLIDING` respectively.
-Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to time window to achieve down sampling of original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to client or written to TDengine.
+Continuous query in TDengine is time-driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
There are some differences between continuous query in TDengine and time window computation in stream computing:
- The computation is performed and the result is returned in real time in stream computing, but the computation in continuous query is only started when a time window closes. For example, if the time window is 1 day, then the result will only be generated at 23:59:59.
-- If a historical data row is written in to a time widow for which the computation has been finished, the computation will not be performed again and the result will not be pushed to client again either. If the result has been written into TDengine, there will be no update for the result.
-- In continuous query, if the result is pushed to client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server either. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.
+- If a historical data row is written into a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
+- In continuous query, if the result is pushed to a client, the client status is not cached on the server side and exactly-once semantics are not guaranteed by the server. If the client program crashes, a new time window will be generated from the time when the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed to be valid and continuous.
## Syntax
@@ -30,15 +30,15 @@ SLIDING: The time step for which the time window moves forward each time
## How to Use
-In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and sub tables have been created using below SQL statement.
+In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.
```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
-create table D1001 using meters tags ("Beijing.Chaoyang", 2);
-create table D1002 using meters tags ("Beijing.Haidian", 2);
+create table D1001 using meters tags ("California.SanFrancisco", 2);
+create table D1002 using meters tags ("California.LoSangeles", 2);
```
-The average voltage for each time window of one minute with 30 seconds as the length of moving forward can be retrieved using below SQL statement.
+The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.
```sql
select avg(voltage) from meters interval(1m) sliding(30s);
@@ -50,13 +50,13 @@ Whenever the above SQL statement is executed, all the existing data will be comp
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
```
-Another easier way for same purpose is prepend `create table {tableName} as` before the `select`.
+An easier way to achieve this is to prepend `create table {tableName} as` to the `select` statement.
```sql
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
```
-A table named as `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minutes, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:
+A table named `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data of the past 1 minute, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:
```sql
taos> select * from avg_vol;
@@ -68,16 +68,16 @@ taos> select * from avg_vol;
2020-07-29 13:39:00.000 | 223.0800000 |
```
-Please be noted that the minimum allowed time window is 10 milliseconds, and no upper limit.
+Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.
-Besides, it's allowed to specify the start and end time of continuous query. If the start time is not specified, the timestamp of the first original row will be considered as the start time; if the end time is not specified, the continuous will be performed infinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in below SQL statement will be started from now and terminated one hour later.
+It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.
```sql
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
```
-`now` in above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. Besides, to avoid the trouble caused by the delay of original data as much as possible, the actual computation in continuous query is also started with a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the result can only be available a little time later, normally within one minute, after the time window closes.
+`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To avoid the trouble caused by a delay in receiving data as much as possible, the actual computation in a continuous query is started after a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the results become available a short time, usually within one minute, after the time window closes.
## How to Manage
-`show streams` command can be used in TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query.
+The `show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and the `kill stream` command can be used to terminate a continuous query.
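+For example (the stream id below is illustrative; use the id reported in the first column of `show streams` output):
+```sql
+show streams;
+kill stream 3:1;
+```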
diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx
index 56f4ed83d8..3fa2d1280f 100644
--- a/docs-en/07-develop/06-subscribe.mdx
+++ b/docs-en/07-develop/06-subscribe.mdx
@@ -16,9 +16,9 @@ import CDemo from "./_sub_c.mdx";
## Introduction
-According to the time series nature of the data, data inserting in TDengine is similar to data publishing in message queues, they both can be considered as a new data record with timestamp is inserted into the system. Data is stored in ascending order of timestamp inside TDengine, so essentially each table in TDengine can be considered as a message queue.
+Due to the nature of time series data, data inserting in TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, so each table in TDengine can essentially be considered as a message queue.
-Lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can used `select` statement to subscribe the data from one or more tables. The subscription and and state maintenance is performed on the client side, the client programs polls the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side.
+A lightweight service for data subscription and pushing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side; the client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
There are 3 major APIs related to subscription provided in the TDengine client driver.
@@ -28,9 +28,9 @@ taos_consume
taos_unsubscribe
```
-For more details about these API please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and sub tables please refer to the previous section "continuous query". Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
+For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of the STable and subtables from the previous section [Continuous Query](/develop/continuous-query) is used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
-If we want to get notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
+If we want to get a notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
The first way is to query on each sub table and record the last timestamp matching the criteria, then after some time query on the data later than recorded timestamp and repeat this process. The SQL statements for this way are as below.
@@ -40,7 +40,7 @@ select * from D1002 where ts > {last_timestamp2} and current > 10;
...
```
-The above way works, but the problem is that the number of `select` statements increases with the number of meters grows. Finally the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number.
+The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both the client side and the server side will be unacceptable once the number of meters grows large enough.
A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below:
@@ -48,7 +48,7 @@ A better way is to query on the STable, only one `select` is enough regardless o
select * from meters where ts > {last_timestamp} and current > 10;
```
-However, how to choose `last_timestamp` becomes a new problem if using this way. Firstly, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Secondly, the time when the data from different meters may arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fasted" meters is used as `last_timestamp`, some data from other meters may be missed.
+However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database; sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
All the problems mentioned above can be resolved thoroughly using subscription provided by TDengine.
@@ -75,19 +75,19 @@ The parameter `sql` is a `select` statement in which `where` clause can be used
select * from meters where current > 10;
```
-Please be noted that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added:
+Please note that all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added:
```sql
select * from meters where ts > now - 1d and current > 10;
```
-The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on client side.
+The parameter `topic` is the name of the subscription. It must be unique within the client program, but it does not need to be globally unique because subscription is implemented in the APIs on the client side.
-If the subscription named as `topic` doesn't exist, parameter `restart` would be ignored. If the subscription named as `topic` has been created before by the client program which then exited, when the client program is restarted to use this `topic`, parameter `restart` is used to determine retrieving data from beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again.
+If the subscription named `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, the parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from the beginning; if it is **false** (i.e. zero), the data already consumed before will not be processed again.
-The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` would be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
+The last parameter of `taos_subscribe` is the polling interval in units of milliseconds. In sync mode, if the time difference between two consecutive invocations of `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.
-The last second parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode.
+The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is simply ignored in sync mode.
After a subscription is created, its data can be consumed and processed, below is the sample code of how to consume data in sync mode, in the else part if `if (async)`.
@@ -149,22 +149,22 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
taos_unsubscribe(tsub, keep);
```
-The second parameter `keep` is used to specify whether to keep the subscription progress on the client sde. If it is **false**, i.e. **0**, then subscription will be restarted from beginning regardless of the `restart` parameter's value in when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with same name as `topic` for each subscription, the subscription will be restarted from beginning if the corresponding progress file is removed.
+The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with the same name as `topic` for each subscription; the subscription will be restarted from the beginning if the corresponding progress file is removed.
Now let's see the effect of the above sample code, assuming below prerequisites have been done.
- The sample code has been downloaded to local system
- TDengine has been installed and launched properly on same system
-- The database, STable, sub tables required in the sample code have been ready
+- The database, STable, and subtables required in the sample code are ready
-It's ready to launch below command in the directory where the sample code resides to compile and start the program.
+Launch the command below in the directory where the sample code resides to compile and start the program.
```bash
make
./subscribe -sql='select * from meters where current > 10;'
```
-After the program is started, open another terminal and launch TDengine CLI `taos`, then use below SQL commands to insert a row whose current is 12A into table **D1001**.
+After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**.
```sql
use test;
@@ -187,8 +187,8 @@ taos> use power;
# create super table "meters"
taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int);
# create tabes using the schema defined by super table "meters"
-taos> create table d1001 using meters tags ("Beijing.Chaoyang", 2);
-taos> create table d1002 using meters tags ("Beijing.Haidian", 2);
+taos> create table d1001 using meters tags ("California.SanFrancisco", 2);
+taos> create table d1002 using meters tags ("California.LoSangeles", 2);
# insert some rows
taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1);
taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1);
@@ -196,11 +196,11 @@ taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08
taos> select * from meters where current > 10;
ts | current | voltage | phase | location | groupid |
===========================================================================================================
- 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | Beijing.Haidian | 2 |
- 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | Beijing.Haidian | 2 |
- 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | Beijing.Chaoyang | 2 |
- 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | Beijing.Chaoyang | 2 |
- 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | Beijing.Chaoyang | 2 |
+ 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LoSangeles | 2 |
+ 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LoSangeles | 2 |
+ 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 |
+ 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 |
+ 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 |
Query OK, 5 row(s) in set (0.004896s)
```
@@ -232,14 +232,14 @@ Query OK, 5 row(s) in set (0.004896s)
### Run the Examples
-The example programs firstly consume all historical data matching the criteria.
+The example programs first consume all historical data matching the criteria.
```bash
-ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2
-ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: Beijing.Chaoyang groupid : 2
-ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2
-ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: Beijing.Haidian groupid : 2
-ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: Beijing.Haidian groupid : 2
+ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
+ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2
+ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
+ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LoSangeles groupid : 2
+ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LoSangeles groupid : 2
```
Next, use TDengine CLI to insert a new row.
@@ -253,5 +253,5 @@ taos> insert into d1001 values(now, 12.4, 220, 1);
Because the current in inserted row exceeds 10A, it will be consumed by the example program.
```
-ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid: 2
+ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
```
diff --git a/docs-en/07-develop/07-cache.md b/docs-en/07-develop/07-cache.md
index 13db6c3638..3d42e22eb3 100644
--- a/docs-en/07-develop/07-cache.md
+++ b/docs-en/07-develop/07-cache.md
@@ -10,10 +10,10 @@ Caching the latest data provides the capability of retrieving data in millisecon
The memory space used by TDengine cache is fixed in size, according to the configuration based on application requirement and system resources. Independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine, there is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
-Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. It's better to set the size of each block to hold at least tends of rows.
+The memory pool is divided into blocks; data is stored in memory in row format and each block follows a FIFO policy. The size of each block is determined by the configuration parameter `cache`, and the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. To be efficient, a cache block should be large enough to hold at least dozens of records for each table.
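+Both parameters can also be set per database when it is created; the sketch below uses illustrative values, not recommendations:
+```sql
+CREATE DATABASE power CACHE 16 BLOCKS 8;
+```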
-`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing.
+The `last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on a monitoring screen. For example the below SQL statement retrieves the latest voltage of all meters in San Francisco, California.
```sql
-select last_row(voltage) from meters where location='Beijing.Chaoyang';
+select last_row(voltage) from meters where location='California.SanFrancisco';
```
diff --git a/docs-en/07-develop/index.md b/docs-en/07-develop/index.md
index 122dd0d870..e3f55f2907 100644
--- a/docs-en/07-develop/index.md
+++ b/docs-en/07-develop/index.md
@@ -2,15 +2,15 @@
title: Developer Guide
---
-To develop an application using TDengine to process time-series data, we recommend taking the following steps:
+To develop an application to process time-series data using TDengine, we recommend taking the following steps:
-1. Choose the way for connection to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
-2. Design the data model based on your own application scenarios. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" concept; learn about static labels, collected metrics, and subtables. According to the data characteristics, you may decide to create one or more databases, and you should design the STable schema to fit your data.
-3. Decide how to insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
-4. Based on business requirements, find out what SQL query statements need to be written.
+1. Choose the method to connect to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
+2. Design the data model based on your own use cases. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" (STable) concept; learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases, and you should design the STable schema to fit your data.
+3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
+4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL.
5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink.
6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
-7. In many scenarios (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
+7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](/taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](/reference/connector/). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](/third-party/).
diff --git a/docs-en/10-cluster/01-deploy.md b/docs-en/10-cluster/01-deploy.md
index 8c921797ec..844a026ff6 100644
--- a/docs-en/10-cluster/01-deploy.md
+++ b/docs-en/10-cluster/01-deploy.md
@@ -6,15 +6,15 @@ title: Deployment
### Step 1
-The FQDN of all hosts need to be setup properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be guaranteed that each FQDN can be accessed (by ping, for example) from any other hosts.
+The FQDN of all hosts needs to be set up properly; all the FQDNs need to be configured in the /etc/hosts of each host. It must be confirmed that each FQDN can be accessed (by ping, for example) from any other host.
-On each host command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, need to be checked and revised to make any two hosts accessible to each other.
+On each host the command `hostname -f` can be executed to get the hostname. The `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised to make any two hosts accessible to each other.
:::note
-- The host where the client program runs also needs to configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
+- The host where the client program runs also needs to be configured properly for FQDN, to make sure that all hosts, whether client or server, can be accessed from any other host. In other words, the hosts where the client is running are also considered as a part of the cluster.
-- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for port 6030~6042 need to be open if firewall is enabled.
+- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP ports 6030~6042 need to be open if a firewall is enabled.
:::
@@ -28,7 +28,7 @@ Now it's time to install TDengine on all hosts without starting `taosd`, the ver
### Step 4
-Now each physical node (referred to as `dnode` hereinafter, it's abbreviation for "data node") of TDengine need to be configured properly. Please be noted that one dnode doesn't stand for one host, multiple TDengine nodes can be started on single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
+Now each physical node (referred to as `dnode` hereinafter, an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host; multiple TDengine nodes can be started on a single host as long as they are configured properly without conflict. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.
```c
// firstEp is the end point to connect to when any dnode starts
@@ -44,9 +44,9 @@ serverPort 6030
#arbitrator ha.taosdata.com:6042
```
-`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please also make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting.
+`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must be configured to point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resource-related parameters are not conflicting.
-For all the dnodes in a TDengine cluster, below parameters must be configured as exactly same, any node whose configuration is different from dnodes already in the cluster can't join the cluster.
+For all the dnodes in a TDengine cluster, the below parameters must be configured exactly the same; any node whose configuration is different from dnodes already in the cluster can't join the cluster.
| **#** | **Parameter** | **Definition** |
| ----- | ------------------ | --------------------------------------------------------------------------------- |
@@ -61,7 +61,7 @@ For all the dnodes in a TDengine cluster, below parameters must be configured as
| 9 | maxVgroupsPerDb | Maximum number vgroups that can be used by each DB |
:::note
-Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must be configured as same too for each dnode.
+Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode.
:::
@@ -92,7 +92,7 @@ From the above output, it is shown that the end point of the started dnode is "h
There are a few steps necessary to add other dnodes in the cluster.
-Firstly, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please making sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
+First, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please make sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`; `firstEp` must be the same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
Then, on the first dnode, use TDengine CLI `taos` to execute below command to add the end point of the dnode in the cluster. In the command "fqdn:port" should be quoted using double quotes.
@@ -109,6 +109,6 @@ SHOW DNODES;
If the status of the newly added dnode is offline, please check:
- Whether the `taosd` process is running properly or not
-- In the log file `taosdlog.0` to see whether the fqdn and port are correct or not
+- Check the log file `taosdlog.0` to see whether the fqdn and port are correct
The above process can be repeated to add more dnodes in the cluster.
diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md
index 3fcd68b29c..9d717be236 100644
--- a/docs-en/10-cluster/02-cluster-mgmt.md
+++ b/docs-en/10-cluster/02-cluster-mgmt.md
@@ -3,7 +3,7 @@ sidebar_label: Operation
title: Manage DNODEs
---
-It has been introduced that how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.\
+The previous section [Deployment](/cluster/deploy) introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, a new dnode can be added to scale out the cluster, an existing dnode can be removed, and load balancing can even be performed manually.
:::note
All the commands to be introduced in this chapter need to be run through TDengine CLI, sometimes it's necessary to use root privilege.
@@ -12,7 +12,7 @@ All the commands to be introduced in this chapter need to be run through TDengin
## Show DNODEs
-below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode.
+The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode.
```sql
SHOW DNODES;
@@ -39,7 +39,7 @@ USE SOME_DATABASE;
SHOW VGROUPS;
```
-The example output is as below:
+The example output is below:
```
taos> show dnodes;
@@ -87,7 +87,7 @@ taos> show dnodes;
Query OK, 2 row(s) in set (0.001017s)
```
-It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get below example output, from which it can be seen that two dnodes are both in "ready" status.
+It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, execute the command again to get the example output below, from which it can be seen that both dnodes are in "ready" status.
```
taos> show dnodes;
@@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s)
## Drop DNODE
-Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`.
+Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`.
```sql
DROP DNODE "fqdn:port";
@@ -112,7 +112,7 @@ or
DROP DNODE dnodeId;
```
-The example output is as below:
+The example output is below:
```
taos> show dnodes;
@@ -139,7 +139,7 @@ In the above example, when `show dnodes` is executed the first time, two dnodes
- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to deployed again after cleaning up the data directory. Normally, before dropping a dnode, the data belonging to the dnode needs to be migrated to other place.
- Please be noted that `drop dnode` is different from stopping `taosd` process. `drop dnode` just removes the dnode out of TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
-- dnodeID is allocated automatically and can't be interfered manually. dnodeID is generated in ascending order without duplication.
+- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
:::
@@ -155,7 +155,7 @@ ALTER DNODE BALANCE "VNODE:-DNODE:";
In the above command, `source-dnodeId` is the original dnodeId where the vnode resides, `dest-dnodeId` specifies the target dnode. vgId (vgroup ID) can be shown by `SHOW VGROUPS `.
-Firstly `show vgroups` is executed to show the vgroup distribution.
+First `show vgroups` is executed to show the vgroup distribution.
```
taos> show vgroups;
@@ -172,7 +172,7 @@ taos> show vgroups;
Query OK, 8 row(s) in set (0.001314s)
```
-It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in node 1, now we want to move vgId 18 from dnode 3 to dnode 1. Execute below command in `taos`
+It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in dnode 1; now we want to move vgId 18 from dnode 3 to dnode 1. Execute the below command in `taos`:
```
taos> alter dnode 3 balance "vnode:18-dnode:1";
@@ -207,7 +207,7 @@ It can be seen from above output that vgId 18 has been moved from dnode 3 to dno
:::note
- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
-- Only vnode in normal state, i.e. master or slave, can be moved. vnode can't moved when its in status offline, unsynced or syncing.
+- Only a vnode in normal state, i.e. master or slave, can be moved. A vnode can't be moved when it is in offline, unsynced or syncing status.
- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
:::
diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md
index 53c95be9e9..6e0c386abe 100644
--- a/docs-en/10-cluster/03-ha-and-lb.md
+++ b/docs-en/10-cluster/03-ha-and-lb.md
@@ -7,19 +7,19 @@ title: High Availability and Load Balancing
High availability of vnode and mnode can be achieved through replicas in TDengine.
-The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. For the purpose of operation, different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, data service would be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". Below SQL statement is used to create a database named as "demo" with 3 replicas.
+The number of vnodes is associated with each DB; there can be multiple DBs in a TDengine cluster. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, and the default value is 1. With a single replica, the high availability of the system can't be guaranteed. Whenever one node is down, the data service will be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with the error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
```sql
CREATE DATABASE demo replica 3;
```
-The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each group is determined by the number of replicas set for the DB. The vnodes in each vgroups store exactly same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in online state, the vgroup is able to serve data access. Otherwise the vgroup can't handle any data access for reading or inserting data.
+The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
There may be data for multiple DBs in a dnode. Once a dnode is down, multiple DBs may be affected. However, it's hard to say the cluster is guaranteed to work properly as long as over half of dnodes are online because vnodes are introduced and there may be complex mapping between vnodes and dnodes.
## High Availability of Mnode
-Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in synchronous way.
+Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using the system parameter `numOfMNodes`; the valid range is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.
There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. Command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
@@ -32,19 +32,19 @@ The end point and role/status (master, slave, unsynced, or offline) of all mnode
For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
:::note
-If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas. How to configure for them are different and have been described.
+If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
:::
## Load Balance
-Load balance will be triggered in 3 cades without manual intervention.
+Load balancing will be triggered in 3 cases without manual intervention.
- When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically.
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
-- :::tip
- Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled.
+:::tip
+Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled.
:::
@@ -54,7 +54,7 @@ When a dnode is offline, it can be detected by the TDengine cluster. There are t
- The dnode becomes online again before the threshold configured in `offlineThreshold` is reached, it is still in the cluster and data replication is started automatically. The dnode can work properly after the data syncup is finished.
-- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. System alert will be generated and automatic load balancing will be triggered too if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not be joined in the cluster automatically, it can only be joined manually by the system operator.
+- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically; it can only be added back manually by the system operator.
:::note
If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsynced status, the master node can only be voted after all the vnodes or mnodes in the group become online and can exchange status, then the vgroup (or mnode group) is able to provide service.
@@ -63,15 +63,15 @@ If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsyn
## Arbitrator
-If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work master node can't be voted. Similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
+If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work, a master node can't be elected. A similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
-To resolve this problem, a new arbitrator component named `tarbitrator`, abbreviated for TDengine Arbitrator, was introduced. Arbitrator simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. With Arbitrator, any vgroup or mnode group can be considered as having number of member nodes and master node can be selected.
+To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation for TDengine Arbitrator, was introduced. The arbitrator simulates a vnode or mnode but is only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnodes or mnodes, including the arbitrator, are available, the vnode group or mnode group can provide data insertion or query services normally.
-Normally, it's suggested to configure replica number of each DB or system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
+Normally, it's suggested to configure the replica number of each DB or the system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus the arbitrator component can be used to achieve both lower cost of storage space and high availability.
Arbitrator component is installed with the server package. For details about how to install, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides service.
-In the configuration file `taos.cfg` of each dnode, parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number.
+In the configuration file `taos.cfg` of each dnode, parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. Arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number.
Arbitrator can be shown by executing command in TDengine CLI `taos` with its role shown as "arb".
diff --git a/docs-en/10-cluster/index.md b/docs-en/10-cluster/index.md
index a19a54e01d..5a45a2ce7b 100644
--- a/docs-en/10-cluster/index.md
+++ b/docs-en/10-cluster/index.md
@@ -3,7 +3,7 @@ title: Cluster
keywords: ["cluster", "high availability", "load balance", "scale out"]
---
-TDengine has a native distributed design and provides the ability to scale out. A few of nodes can form a TDengine cluster. If you need to get higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
+TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing.
diff --git a/docs-en/12-taos-sql/05-insert.md b/docs-en/12-taos-sql/05-insert.md
index 96e6a08ee1..d511bd5c91 100644
--- a/docs-en/12-taos-sql/05-insert.md
+++ b/docs-en/12-taos-sql/05-insert.md
@@ -69,7 +69,7 @@ INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-
If it's not sure whether the table already exists, the table can be created automatically while inserting using below SQL statement. To use this functionality, a STable must be used as template and tag values must be provided.
```sql
-INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
```
It's not necessary to provide values for all tag when creating tables automatically, the tags without values provided will be set to NULL.
@@ -81,7 +81,7 @@ INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.
Multiple rows can also be inserted into same table in single SQL statement using this way.
```sql
-INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
d21002 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:34.255', 10.15, 217, 0.33)
d21003 USING meters (groupId) TAGS (2) (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
```
@@ -110,13 +110,13 @@ INSERT INTO d1001 FILE '/tmp/csvfile.csv';
From version 2.1.5.0, tables can be created automatically from a super table template when inserting data from a CSV file, like below:
```sql
-INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile.csv';
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile.csv';
```
Multiple tables can be created automatically and data inserted into them in a single SQL statement, like below:
```sql
-INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile_21001.csv'
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
```
@@ -146,7 +146,7 @@ Query OK, 0 row(s) in set (0.000946s)
Then, try to create table d1001 automatically when inserting data into it.
```sql
-INSERT INTO d1001 USING meters TAGS('Beijing.Chaoyang', 2) VALUES('a');
+INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('a');
```
The output shows the value to be inserted is invalid. But `SHOW TABLES` proves that the table has been created automatically by the `INSERT` statement.
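As a quick check, something like the following could be run right after the failed insert (a sketch; the exact listing depends on what else exists in the database):

```sql
-- The insert above fails because 'a' is not a valid timestamp,
-- yet d1001 has already been created from the meters STable template.
SHOW TABLES LIKE 'd1001';
```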
diff --git a/docs-en/12-taos-sql/06-select.md b/docs-en/12-taos-sql/06-select.md
index 11b181f65d..0a5dc7645f 100644
--- a/docs-en/12-taos-sql/06-select.md
+++ b/docs-en/12-taos-sql/06-select.md
@@ -39,15 +39,15 @@ The result includes both data columns and tag columns for super table.
taos> SELECT * FROM meters;
ts | current | voltage | phase | location | groupid |
=====================================================================================================================================
- 2018-10-03 14:38:05.500 | 11.80000 | 221 | 0.28000 | Beijing.Haidian | 2 |
- 2018-10-03 14:38:16.600 | 13.40000 | 223 | 0.29000 | Beijing.Haidian | 2 |
- 2018-10-03 14:38:05.000 | 10.80000 | 223 | 0.29000 | Beijing.Haidian | 3 |
- 2018-10-03 14:38:06.500 | 11.50000 | 221 | 0.35000 | Beijing.Haidian | 3 |
- 2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | Beijing.Chaoyang | 3 |
- 2018-10-03 14:38:16.650 | 10.30000 | 218 | 0.25000 | Beijing.Chaoyang | 3 |
- 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 | Beijing.Chaoyang | 2 |
- 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 | Beijing.Chaoyang | 2 |
- 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | Beijing.Chaoyang | 2 |
+ 2018-10-03 14:38:05.500 | 11.80000 | 221 | 0.28000 | California.LoSangeles | 2 |
+ 2018-10-03 14:38:16.600 | 13.40000 | 223 | 0.29000 | California.LoSangeles | 2 |
+ 2018-10-03 14:38:05.000 | 10.80000 | 223 | 0.29000 | California.LoSangeles | 3 |
+ 2018-10-03 14:38:06.500 | 11.50000 | 221 | 0.35000 | California.LoSangeles | 3 |
+ 2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | California.SanFrancisco | 3 |
+ 2018-10-03 14:38:16.650 | 10.30000 | 218 | 0.25000 | California.SanFrancisco | 3 |
+ 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 | California.SanFrancisco | 2 |
+ 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 | California.SanFrancisco | 2 |
+ 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | California.SanFrancisco | 2 |
Query OK, 9 row(s) in set (0.002022s)
```
@@ -102,8 +102,8 @@ Starting from version 2.0.14, tag columns can be selected together with data col
taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
location | groupid | current |
======================================================================
- Beijing.Chaoyang | 2 | 10.30000 |
- Beijing.Chaoyang | 2 | 12.60000 |
+ California.SanFrancisco | 2 | 10.30000 |
+ California.SanFrancisco | 2 | 12.60000 |
Query OK, 2 row(s) in set (0.003112s)
```
@@ -271,10 +271,10 @@ Only filter on `TAGS` are allowed in the `where` clause for above two query stat
taos> SELECT TBNAME, location FROM meters;
tbname | location |
==================================================================
- d1004 | Beijing.Haidian |
- d1003 | Beijing.Haidian |
- d1002 | Beijing.Chaoyang |
- d1001 | Beijing.Chaoyang |
+ d1004 | California.LoSangeles |
+ d1003 | California.LoSangeles |
+ d1002 | California.SanFrancisco |
+ d1001 | California.SanFrancisco |
Query OK, 4 row(s) in set (0.000881s)
taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
@@ -323,7 +323,7 @@ Logical operations in below table can be used in `where` clause to filter the re
- For the timestamp column, only one condition can be used; for other columns or tags, the `OR` keyword can be used to combine multiple logical operators. For example, `((value > 20 AND value < 30) OR (value < 12))`.
- From version 2.3.0.0, multiple conditions can be used on the timestamp column, but the result set can only contain a single time range.
- From version 2.0.17.0, the operator `BETWEEN AND` can be used in the where clause; for example, `WHERE col2 BETWEEN 1.5 AND 3.25` means the filter condition is equivalent to "1.5 ≤ col2 ≤ 3.25".
-- From version 2.1.4.0, operator `IN` can be used in where clause. For example, `WHERE city IN ('Beijing', 'Shanghai')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating precision, only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
+- From version 2.1.4.0, the operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are affected by floating point precision; only values that match the condition within the tolerance will be selected. A non-primary-key column of timestamp type can be used with `IN`. A combined sketch is given after this list.
- From version 2.3.0.0, regular expressions are supported in the where clause with the keyword `match` or `nmatch`; the regular expression is case insensitive.
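To make the rules above concrete, here is a combined sketch on the example meters data set (the filter values are illustrative only):

```sql
-- BETWEEN AND on a numeric column combined with IN on a string tag.
SELECT * FROM meters
WHERE current BETWEEN 10.5 AND 12.5
  AND location IN ('California.SanFrancisco', 'California.SanDiego');
```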
## Regular Expression
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index 5cc3fa8cb4..2044ff4f61 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset
`INTERVAL` clause is used to generate time windows of the same time interval; `SLIDING` is used to specify the time step by which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e] and [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step by which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
-
+
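As a small illustration of a sliding window on the example meters data set (a sketch; the window and sliding lengths are arbitrary):

```sql
-- Aggregate over 10-minute windows that move forward every 5 minutes.
SELECT AVG(current), MAX(voltage) FROM meters INTERVAL(10m) SLIDING(5m);
```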
`INTERVAL` and `SLIDING` should be used with aggregate functions and selection functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
@@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is same as that specified by `INTERV
In case an integer, bool, or string is used to represent the status of a device at a moment, continuous rows with the same status belong to the same status window; once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status: [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. The status window is not applicable to STables for now.
-
+
`STATE_WINDOW` is used to specify the column based on which to define status window, for example:
@@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of the time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
-
+
If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session windows are not supported on STables for now.
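Continuing the 12-second example above, a session window query could look like this (a sketch; `temp_tb_1` is the table from the syntax example earlier in this section):

```sql
-- Rows whose timestamps are within 12 seconds of the previous row
-- fall into the same session window.
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```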
diff --git a/docs-en/12-taos-sql/index.md b/docs-en/12-taos-sql/index.md
index 611f2bf75e..32850e8c4b 100644
--- a/docs-en/12-taos-sql/index.md
+++ b/docs-en/12-taos-sql/index.md
@@ -3,9 +3,9 @@ title: TDengine SQL
description: "The syntax supported by TDengine SQL "
---
-This section explains the syntax about operating database, table, STable, inserting data, selecting data, functions and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
+This section explains the syntax for operating databases, tables and STables, for inserting and selecting data, and for functions, along with some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
-TDengine SQL is the major interface for users to write data into or query from TDengine. For users to easily use, syntax similar to standard SQL is provided. However, please be noted that TDengine SQL is not standard SQL. Besides, because TDengine doesn't provide the functionality of deleting time series data, corresponding statements are not provided in TDengine SQL.
+TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, syntax similar to standard SQL is provided. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide the functionality of deleting time series data, thus corresponding statements are not provided in TDengine SQL.
TDengine SQL doesn't support abbreviated keywords; for example, `DESCRIBE` can't be abbreviated as `DESC`.
@@ -16,7 +16,7 @@ Syntax Specifications used in this chapter:
- | means one of a few options, excluding | itself.
- … means the item prior to it can be repeated multiple times.
-To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data: current, voltage, phase. The data model is as below:
+To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters, where each meter collects 3 measurements: current, voltage and phase. The data model is shown below:
```sql
taos> DESCRIBE meters;
diff --git a/docs-en/12-taos-sql/timewindow-1.webp b/docs-en/12-taos-sql/timewindow-1.webp
new file mode 100644
index 0000000000..82747558e9
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-1.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-2.webp b/docs-en/12-taos-sql/timewindow-2.webp
new file mode 100644
index 0000000000..8f1314ae34
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-2.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-3.webp b/docs-en/12-taos-sql/timewindow-3.webp
new file mode 100644
index 0000000000..5bd16e68e7
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-3.webp differ
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx
index 6be914bdb4..38eba73d09 100644
--- a/docs-en/14-reference/03-connector/03-connector.mdx
+++ b/docs-en/14-reference/03-connector/03-connector.mdx
@@ -4,7 +4,7 @@ title: Connector
TDengine provides a rich set of APIs (application development interface). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
-
+
## Supported platforms
diff --git a/docs-en/14-reference/03-connector/connector.webp b/docs-en/14-reference/03-connector/connector.webp
new file mode 100644
index 0000000000..040cf5c26c
Binary files /dev/null and b/docs-en/14-reference/03-connector/connector.webp differ
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
index 328907c4d7..530798af11 100644
--- a/docs-en/14-reference/03-connector/java.mdx
+++ b/docs-en/14-reference/03-connector/java.mdx
@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and the bind interface. The other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The set of features implemented by REST connections differs slightly from that of native connections.
-
+
The preceding diagram shows two ways for a Java app to access TDengine via connector:
@@ -206,10 +206,10 @@ The configuration parameters in the URL are as follows.
- Unlike the native connection method, the REST interface is stateless. When using the JDBC REST connection, you need to specify the database name of the table and super table in SQL. For example:
```sql
-INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(now, 24.6);
+INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
```
-- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections will use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then the SQL can be executed: insert into t1 using weather(ts, temperature) tags('beijing') values(now, 24.6);
+- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections will use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then this SQL can be executed: `insert into t1 using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);`
:::
@@ -565,7 +565,7 @@ public class ParameterBindingDemo {
// set table name
pstmt.setTableName("t5_" + i);
// set tags
- pstmt.setTagNString(0, "Beijing-abc");
+ pstmt.setTagNString(0, "California-abc");
// set columns
ArrayList tsList = new ArrayList<>();
@@ -576,7 +576,7 @@ public class ParameterBindingDemo {
ArrayList f1List = new ArrayList<>();
for (int j = 0; j < numOfRow; j++) {
- f1List.add("Beijing-abc");
+ f1List.add("California-abc");
}
pstmt.setNString(1, f1List, BINARY_COLUMN_SIZE);
@@ -635,7 +635,7 @@ public class SchemalessInsertTest {
private static final String host = "127.0.0.1";
private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";
- private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1346846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"Beijing\", \"id\": \"d1001\"}}";
+ private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1346846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}";
public static void main(String[] args) throws SQLException {
final String url = "jdbc:TAOS://" + host + ":6030/?user=root&password=taosdata";
diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png
deleted file mode 100644
index 7541aaf98a..0000000000
Binary files a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png and /dev/null differ
diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp
new file mode 100644
index 0000000000..37cf6d90a5
Binary files /dev/null and b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp differ
diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md
index 85fd2923b0..de42e8a883 100644
--- a/docs-en/14-reference/04-taosadapter.md
+++ b/docs-en/14-reference/04-taosadapter.md
@@ -24,7 +24,7 @@ taosAdapter provides the following features.
## taosAdapter architecture diagram
-
+
## taosAdapter Deployment Method
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png
deleted file mode 100644
index 4708f836fe..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp
new file mode 100644
index 0000000000..a78e18028a
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png
deleted file mode 100644
index f2684e6eed..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp
new file mode 100644
index 0000000000..b152418d09
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png
deleted file mode 100644
index 74686691e4..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp
new file mode 100644
index 0000000000..f58f48b7f1
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png
deleted file mode 100644
index 2796421556..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp
new file mode 100644
index 0000000000..00afcce013
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png
deleted file mode 100644
index b0d3abbf21..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp
new file mode 100644
index 0000000000..567e5694f9
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png
deleted file mode 100644
index 2b54cbeb83..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp
new file mode 100644
index 0000000000..cc8a912810
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png
deleted file mode 100644
index eb3848657f..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp
new file mode 100644
index 0000000000..651b716bc5
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png
deleted file mode 100644
index d94b2e02ac..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp
new file mode 100644
index 0000000000..8666193f59
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png
deleted file mode 100644
index 654df29345..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp
new file mode 100644
index 0000000000..7f38a76a2b
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png
deleted file mode 100644
index e3afa22c03..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp
new file mode 100644
index 0000000000..3d7fe932a2
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png
deleted file mode 100644
index 198bf37141..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp
new file mode 100644
index 0000000000..517123954e
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png
deleted file mode 100644
index ace3aa3c2f..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp
new file mode 100644
index 0000000000..6666296ac1
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png
deleted file mode 100644
index 7082e49f6b..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp
new file mode 100644
index 0000000000..6f74bc3a47
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png
deleted file mode 100644
index ffd4911b53..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp
new file mode 100644
index 0000000000..acda3b24a6
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png
deleted file mode 100644
index 802c7366f9..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp
new file mode 100644
index 0000000000..903e236e2a
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png
deleted file mode 100644
index 019ec921b6..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp
new file mode 100644
index 0000000000..14fcfe9d18
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png
deleted file mode 100644
index 3963abb4ea..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp
new file mode 100644
index 0000000000..00b50cc619
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png
deleted file mode 100644
index 837100464b..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp
new file mode 100644
index 0000000000..06d0ff6ed5
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png
deleted file mode 100644
index 98223df254..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp
new file mode 100644
index 0000000000..e2ec052b91
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png
deleted file mode 100644
index 07aba348f0..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp
new file mode 100644
index 0000000000..665c035f97
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png
deleted file mode 100644
index 7e28939ead..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp
new file mode 100644
index 0000000000..7dc42eeba9
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png
deleted file mode 100644
index 981f640b14..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp
new file mode 100644
index 0000000000..7ef081900f
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png
deleted file mode 100644
index 94ef4fa5fe..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp
new file mode 100644
index 0000000000..602452fc4c
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png
deleted file mode 100644
index 670cacc377..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp
new file mode 100644
index 0000000000..35a3ebba78
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png
deleted file mode 100644
index d74cd36c96..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp
new file mode 100644
index 0000000000..fb7958f1b9
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png
deleted file mode 100644
index 0101e7430c..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp
new file mode 100644
index 0000000000..49f1d88f4a
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md
index 4850cecb33..dc337bf9ff 100644
--- a/docs-en/14-reference/07-tdinsight/index.md
+++ b/docs-en/14-reference/07-tdinsight/index.md
@@ -233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change
Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.
-
+
Search for and select **TDengine**.
-
+
Configure the TDengine datasource.
-
+
Save and test. It will report 'TDengine Data source is working' under normal circumstances.
-
+
### Importing dashboards
Point to **+** / **Create** - **import** (or the `/dashboard/import` URL).
-
+
Type the dashboard ID `15167` in the **Import via grafana.com** field and click **Load**.
-
+
Once the import is complete, the full page view of TDinsight is shown below.
-
+
## TDinsight dashboard details
@@ -269,7 +269,7 @@ Details of the metrics are as follows.
### Cluster Status
-
+
This section contains the current information and status of the cluster; the alert information is also here (from left to right, top to bottom).
@@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale
### DNodes Status
-
+
- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.
@@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale
### MNode Overview
-
+
1. **MNodes Status**: a simple table view of `show mnodes`.
2. **MNodes Number**: similar to `DNodes Number`, the change in the number of MNodes.
### Request
-
+
1. **Requests Rate(Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and their rate of change (count per second).
@@ -313,7 +313,7 @@ This section contains the current information and status of the cluster, the ale
### Database
-
+
Database usage, repeated for each value of the variable `$database` i.e. multiple rows per database.
@@ -325,7 +325,7 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl
### DNode Resource Usage
-
+
Data node resource usage, displayed with multiple rows repeated for each value of the variable `$fqdn`, i.e. for each data node. Includes:
@@ -346,13 +346,13 @@ Data node resource usage display with repeated multiple rows for the variable `$
### Login History
-
+
Currently, only the number of logins per minute is reported.
### Monitoring taosAdapter
-
+
Supports monitoring of taosAdapter request statistics and status details. Includes:
diff --git a/docs-en/14-reference/12-config/index.md b/docs-en/14-reference/12-config/index.md
index c4e7cc523c..1a84f15399 100644
--- a/docs-en/14-reference/12-config/index.md
+++ b/docs-en/14-reference/12-config/index.md
@@ -202,7 +202,7 @@ To handle the data insertion and data query from multiple timezones, Unix Timest
On Linux systems, TDengine clients automatically obtain the timezone from the host. Alternatively, the timezone can be configured explicitly in the configuration file `taos.cfg` as below.
```
-timezone UTC-8
+timezone UTC-7
timezone GMT-8
timezone Asia/Shanghai
```
diff --git a/docs-en/14-reference/taosAdapter-architecture.png b/docs-en/14-reference/taosAdapter-architecture.png
deleted file mode 100644
index 08a9018553..0000000000
Binary files a/docs-en/14-reference/taosAdapter-architecture.png and /dev/null differ
diff --git a/docs-en/14-reference/taosAdapter-architecture.webp b/docs-en/14-reference/taosAdapter-architecture.webp
new file mode 100644
index 0000000000..a4162b0a03
Binary files /dev/null and b/docs-en/14-reference/taosAdapter-architecture.webp differ
diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx
index c1bfd4a96a..7239710e0a 100644
--- a/docs-en/20-third-party/01-grafana.mdx
+++ b/docs-en/20-third-party/01-grafana.mdx
@@ -62,15 +62,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
-
+
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.
-
+
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
-
+
- Host: IP address of the server where the components of the TDengine cluster provide the REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041); by default use `http://localhost:6041`.
- User: TDengine user name.
@@ -78,13 +78,13 @@ Enter the datasource configuration page, and follow the default prompts to modif
Click `Save & Test` to test. A successful test result is shown below.
-
+
### Create Dashboard
Go back to the main interface to create the Dashboard; click Add Query to enter the panel query page:
-
+
As shown above, select the `TDengine` data source in `Query` and enter the corresponding SQL in the query box below to run the query.
@@ -94,7 +94,7 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c
Following the default prompt, query the average system memory usage for the specified interval on the server where the current TDengine deployment is located, as follows.
-
+
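For illustration, a query of this kind might look as follows (a sketch only: it assumes the `log` database and `dn` table written by taosd monitoring and a `mem_system` column, none of which are described on this page; `$from`, `$to` and `$interval` are Grafana template variables):

```sql
-- Average system memory usage over the dashboard's selected time range.
SELECT AVG(mem_system) FROM log.dn WHERE ts >= $from AND ts < $to INTERVAL($interval)
```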
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md
index 13562ba7f7..560c6463b5 100644
--- a/docs-en/20-third-party/09-emq-broker.md
+++ b/docs-en/20-third-party/09-emq-broker.md
@@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here
Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. The initial username is `admin` and the password is `public`.
-
+
### Creating Rule
Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !
-
+
### Edit SQL fields
-
+
### Add "action handler"
-
+
### Add "Resource"
-
+
Select "Data to Web Service" and click the "New Resource" button.
@@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button.
Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
-
+
### Edit "action"
Edit the resource configuration to add the key/value pair for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule engine replacement template in the message body.
-
+
## Compose program to mock data
@@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization.
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, in case the hardware is not capable of handling a larger number of concurrent clients.
-
+
## Execute tests to simulate sending MQTT data
@@ -172,19 +172,19 @@ npm install mqtt mockjs --save ---registry=https://registry.npm.taobao.org
node mock.js
```
-
+
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
-
+
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
-
+
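For example, a check in the CLI could look like this (a sketch; the database and table names depend on how the rule's SQL template was configured, and `test.sensor_data` below is purely hypothetical):

```sql
-- Count the rows written through the EMQX rule engine so far
-- (replace test.sensor_data with the table configured in your rule).
SELECT COUNT(*) FROM test.sensor_data;
```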
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md
index b9c7a3814a..2da9a86b7d 100644
--- a/docs-en/20-third-party/11-kafka.md
+++ b/docs-en/20-third-party/11-kafka.md
@@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe
Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, file systems, etc., to connect to Kafka easily. Data can flow from other software to Kafka via Kafka Connect, and from Kafka to other systems via Kafka Connect. Plugins that read data from other software are called Source Connectors, and plugins that write data to other software are called Sink Connectors. Neither Source Connectors nor Sink Connectors connect directly to the Kafka Broker: a Source Connector transfers data to Kafka Connect, and a Sink Connector receives data from Kafka Connect.
-
+
TDengine Source Connector is used to read data from TDengine in real time and send it to Kafka Connect. Users can use the TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine.
-
+
## What is Confluent?
@@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka. include:
5. GUI for managing and monitoring Kafka - Confluent Control Center
Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version.
-
+
Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
@@ -194,10 +194,10 @@ If the above command is executed successfully, the output is as follows:
Prepare a text file as test data; its content is as follows:
```txt title="test-data.txt"
-meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000000
-meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250000000
-meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249000000
-meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250000000
+meters,location=California.LoSangeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000000
+meters,location=California.LoSangeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250000000
+meters,location=California.LoSangeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249000000
+meters,location=California.LoSangeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250000000
```
Use kafka-console-producer to write test data to the topic `meters`.
@@ -221,10 +221,10 @@ Database changed.
taos> select * from meters;
ts | current | voltage | phase | groupid | location |
===============================================================================================================================================================
- 2022-03-28 09:56:51.249000000 | 11.800000000 | 221.000000000 | 0.280000000 | 2 | Beijing.Haidian |
- 2022-03-28 09:56:51.250000000 | 13.400000000 | 223.000000000 | 0.290000000 | 2 | Beijing.Haidian |
- 2022-03-28 09:56:51.249000000 | 10.800000000 | 223.000000000 | 0.290000000 | 3 | Beijing.Haidian |
- 2022-03-28 09:56:51.250000000 | 11.300000000 | 221.000000000 | 0.350000000 | 3 | Beijing.Haidian |
+ 2022-03-28 09:56:51.249000000 | 11.800000000 | 221.000000000 | 0.280000000 | 2 | California.LoSangeles |
+ 2022-03-28 09:56:51.250000000 | 13.400000000 | 223.000000000 | 0.290000000 | 2 | California.LoSangeles |
+ 2022-03-28 09:56:51.249000000 | 10.800000000 | 223.000000000 | 0.290000000 | 3 | California.LoSangeles |
+ 2022-03-28 09:56:51.250000000 | 11.300000000 | 221.000000000 | 0.350000000 | 3 | California.LoSangeles |
Query OK, 4 row(s) in set (0.004208s)
```
@@ -273,7 +273,7 @@ DROP DATABASE IF EXISTS test;
CREATE DATABASE test;
USE test;
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
-INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000) d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000) d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000) d1002 USING meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000) d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000) d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000) d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000) d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000);
+INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000) d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000) d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000) d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000) d1003 USING meters TAGS(California.LoSangeles, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000) d1003 USING meters TAGS(California.LoSangeles, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000) d1004 USING meters TAGS(California.LoSangeles, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000) d1004 USING meters TAGS(California.LoSangeles, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000);
```
Use the TDengine CLI to execute the SQL script
@@ -300,8 +300,8 @@ output:
````
......
-meters,location="beijing.chaoyang",groupid=2i32 current=10.3f32,voltage=219i32,phase=0.31f32 1538548685000000000
-meters,location="beijing.chaoyang",groupid=2i32 current=12.6f32,voltage=218i32,phase=0.33f32 1538548695000000000
+meters,location="California.SanFrancisco",groupid=2i32 current=10.3f32,voltage=219i32,phase=0.31f32 1538548685000000000
+meters,location="California.SanFrancisco",groupid=2i32 current=12.6f32,voltage=218i32,phase=0.33f32 1538548695000000000
......
````
diff --git a/docs-en/20-third-party/emqx/add-action-handler.png b/docs-en/20-third-party/emqx/add-action-handler.png
deleted file mode 100644
index 97a1f933ec..0000000000
Binary files a/docs-en/20-third-party/emqx/add-action-handler.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/add-action-handler.webp b/docs-en/20-third-party/emqx/add-action-handler.webp
new file mode 100644
index 0000000000..4a8d105f71
Binary files /dev/null and b/docs-en/20-third-party/emqx/add-action-handler.webp differ
diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.png b/docs-en/20-third-party/emqx/check-result-in-taos.png
deleted file mode 100644
index c17a5c1ea2..0000000000
Binary files a/docs-en/20-third-party/emqx/check-result-in-taos.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.webp b/docs-en/20-third-party/emqx/check-result-in-taos.webp
new file mode 100644
index 0000000000..8fa040a861
Binary files /dev/null and b/docs-en/20-third-party/emqx/check-result-in-taos.webp differ
diff --git a/docs-en/20-third-party/emqx/check-rule-matched.png b/docs-en/20-third-party/emqx/check-rule-matched.png
deleted file mode 100644
index 9e9a466946..0000000000
Binary files a/docs-en/20-third-party/emqx/check-rule-matched.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/check-rule-matched.webp b/docs-en/20-third-party/emqx/check-rule-matched.webp
new file mode 100644
index 0000000000..e5a6140357
Binary files /dev/null and b/docs-en/20-third-party/emqx/check-rule-matched.webp differ
diff --git a/docs-en/20-third-party/emqx/client-num.png b/docs-en/20-third-party/emqx/client-num.png
deleted file mode 100644
index fff48cbf3b..0000000000
Binary files a/docs-en/20-third-party/emqx/client-num.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/client-num.webp b/docs-en/20-third-party/emqx/client-num.webp
new file mode 100644
index 0000000000..a151b18484
Binary files /dev/null and b/docs-en/20-third-party/emqx/client-num.webp differ
diff --git a/docs-en/20-third-party/emqx/create-resource.png b/docs-en/20-third-party/emqx/create-resource.png
deleted file mode 100644
index 58da4c391a..0000000000
Binary files a/docs-en/20-third-party/emqx/create-resource.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/create-resource.webp b/docs-en/20-third-party/emqx/create-resource.webp
new file mode 100644
index 0000000000..bf9cccbe49
Binary files /dev/null and b/docs-en/20-third-party/emqx/create-resource.webp differ
diff --git a/docs-en/20-third-party/emqx/create-rule.png b/docs-en/20-third-party/emqx/create-rule.png
deleted file mode 100644
index 73b0b6ee3e..0000000000
Binary files a/docs-en/20-third-party/emqx/create-rule.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/create-rule.webp b/docs-en/20-third-party/emqx/create-rule.webp
new file mode 100644
index 0000000000..13e8fc83d4
Binary files /dev/null and b/docs-en/20-third-party/emqx/create-rule.webp differ
diff --git a/docs-en/20-third-party/emqx/edit-action.png b/docs-en/20-third-party/emqx/edit-action.png
deleted file mode 100644
index 2a43ee369a..0000000000
Binary files a/docs-en/20-third-party/emqx/edit-action.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/edit-action.webp b/docs-en/20-third-party/emqx/edit-action.webp
new file mode 100644
index 0000000000..7f6d2e36a8
Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-action.webp differ
diff --git a/docs-en/20-third-party/emqx/edit-resource.png b/docs-en/20-third-party/emqx/edit-resource.png
deleted file mode 100644
index 0a0b356004..0000000000
Binary files a/docs-en/20-third-party/emqx/edit-resource.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/edit-resource.webp b/docs-en/20-third-party/emqx/edit-resource.webp
new file mode 100644
index 0000000000..fd5d278fab
Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-resource.webp differ
diff --git a/docs-en/20-third-party/emqx/login-dashboard.png b/docs-en/20-third-party/emqx/login-dashboard.png
deleted file mode 100644
index d6c5035c98..0000000000
Binary files a/docs-en/20-third-party/emqx/login-dashboard.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/login-dashboard.webp b/docs-en/20-third-party/emqx/login-dashboard.webp
new file mode 100644
index 0000000000..f84cee668f
Binary files /dev/null and b/docs-en/20-third-party/emqx/login-dashboard.webp differ
diff --git a/docs-en/20-third-party/emqx/rule-engine.png b/docs-en/20-third-party/emqx/rule-engine.png
deleted file mode 100644
index db110a837b..0000000000
Binary files a/docs-en/20-third-party/emqx/rule-engine.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/rule-engine.webp b/docs-en/20-third-party/emqx/rule-engine.webp
new file mode 100644
index 0000000000..c1711c8cc7
Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-engine.webp differ
diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.png b/docs-en/20-third-party/emqx/rule-header-key-value.png
deleted file mode 100644
index b81b9a9684..0000000000
Binary files a/docs-en/20-third-party/emqx/rule-header-key-value.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.webp b/docs-en/20-third-party/emqx/rule-header-key-value.webp
new file mode 100644
index 0000000000..e645b3822d
Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-header-key-value.webp differ
diff --git a/docs-en/20-third-party/emqx/run-mock.png b/docs-en/20-third-party/emqx/run-mock.png
deleted file mode 100644
index 0da2581857..0000000000
Binary files a/docs-en/20-third-party/emqx/run-mock.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/run-mock.webp b/docs-en/20-third-party/emqx/run-mock.webp
new file mode 100644
index 0000000000..ed33f1666d
Binary files /dev/null and b/docs-en/20-third-party/emqx/run-mock.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource1.jpg b/docs-en/20-third-party/grafana/add_datasource1.jpg
deleted file mode 100644
index 1f0f5110f3..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource1.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource1.webp b/docs-en/20-third-party/grafana/add_datasource1.webp
new file mode 100644
index 0000000000..211edc4457
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource1.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource2.jpg b/docs-en/20-third-party/grafana/add_datasource2.jpg
deleted file mode 100644
index fa7a83e00e..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource2.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource2.webp b/docs-en/20-third-party/grafana/add_datasource2.webp
new file mode 100644
index 0000000000..8ab547231f
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource2.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource3.jpg b/docs-en/20-third-party/grafana/add_datasource3.jpg
deleted file mode 100644
index fc850ad08f..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource3.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource3.webp b/docs-en/20-third-party/grafana/add_datasource3.webp
new file mode 100644
index 0000000000..d8a733360a
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource3.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource4.jpg b/docs-en/20-third-party/grafana/add_datasource4.jpg
deleted file mode 100644
index 3ba73e50d4..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource4.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource4.webp b/docs-en/20-third-party/grafana/add_datasource4.webp
new file mode 100644
index 0000000000..b1e0fc6e2b
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource4.webp differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard1.jpg b/docs-en/20-third-party/grafana/create_dashboard1.jpg
deleted file mode 100644
index 3b83c3a171..0000000000
Binary files a/docs-en/20-third-party/grafana/create_dashboard1.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard1.webp b/docs-en/20-third-party/grafana/create_dashboard1.webp
new file mode 100644
index 0000000000..55eb388833
Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard1.webp differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard2.jpg b/docs-en/20-third-party/grafana/create_dashboard2.jpg
deleted file mode 100644
index fe5d768ac5..0000000000
Binary files a/docs-en/20-third-party/grafana/create_dashboard2.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard2.webp b/docs-en/20-third-party/grafana/create_dashboard2.webp
new file mode 100644
index 0000000000..bb40e40718
Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard2.webp differ
diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.png b/docs-en/20-third-party/kafka/Kafka_Connect.png
deleted file mode 100644
index f3dc02ea2a..0000000000
Binary files a/docs-en/20-third-party/kafka/Kafka_Connect.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.webp b/docs-en/20-third-party/kafka/Kafka_Connect.webp
new file mode 100644
index 0000000000..8f2000a749
Binary files /dev/null and b/docs-en/20-third-party/kafka/Kafka_Connect.webp differ
diff --git a/docs-en/20-third-party/kafka/confluentPlatform.png b/docs-en/20-third-party/kafka/confluentPlatform.png
deleted file mode 100644
index f8e69f2c7f..0000000000
Binary files a/docs-en/20-third-party/kafka/confluentPlatform.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/confluentPlatform.webp b/docs-en/20-third-party/kafka/confluentPlatform.webp
new file mode 100644
index 0000000000..ff03d4e51a
Binary files /dev/null and b/docs-en/20-third-party/kafka/confluentPlatform.webp differ
diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png
deleted file mode 100644
index 26d8a866d7..0000000000
Binary files a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp
new file mode 100644
index 0000000000..120d534ec1
Binary files /dev/null and b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp differ
diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md
index 9607c9b387..2c430908e4 100644
--- a/docs-en/21-tdinternal/01-arch.md
+++ b/docs-en/21-tdinternal/01-arch.md
@@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software
The logical structure of the TDengine distributed architecture is shown in the following diagram:
-
+
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process.
-
+
Figure 2: Typical process of TDengine
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
@@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
The master vnode writing process is as follows:
-
+
Figure 3: TDengine Master writing process
1. Master vnode receives the application data insertion request, verifies it, and moves to the next step;
@@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows:
For a slave vnode, the write process is as follows:
-
+
Figure 4: TDengine Slave Writing Process
1. Slave vnode receives a data insertion request forwarded by Master vnode;
@@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different data collection points. To perform such aggregations efficiently, TDengine introduces the concept of STable (super table). A STable represents a specific type of data collection point: it is a set of tables that all share the same schema, while each table carries its own static tags. A table can have multiple tags, and tags can be added, deleted, and modified at any time. By specifying tag filters, applications can aggregate or run statistics over all or a subset of the tables under a STable, which greatly simplifies application development. The process is shown in the following figure:
-
+
Figure 5: Diagram of multi-table aggregation query
1. Application sends a query condition to the system;
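As a concrete illustration of the tag-filtered STable aggregation described above, the sketch below issues such a query through the C client (TAOSC). The database `power`, STable `meters`, column `current`, and tag `location` are hypothetical example names, not part of this change; only the client API calls are taken from the TDengine C interface.

```c
// Illustrative sketch: aggregate over all tables of a STable that match a tag filter.
// Assumes a hypothetical database `power` with STable `meters` (tag `location`).
#include <stdio.h>
#include "taos.h"

int main(void) {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "power", 0);
  if (taos == NULL) {
    printf("failed to connect to server\n");
    return 1;
  }

  // TAOSC fans this query out to the vnodes holding the matching tables and
  // merges the partial aggregates, as described in the figure above.
  TAOS_RES *res = taos_query(taos,
      "SELECT AVG(current) FROM meters WHERE location = 'California.SanFrancisco'");
  if (taos_errno(res) != 0) {
    printf("query failed: %s\n", taos_errstr(res));
  } else {
    TAOS_FIELD *fields = taos_fetch_fields(res);
    int numFields = taos_num_fields(res);
    TAOS_ROW row;
    char buf[256] = {0};
    while ((row = taos_fetch_row(res)) != NULL) {
      taos_print_row(buf, row, fields, numFields);
      printf("%s\n", buf);
    }
  }

  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```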
diff --git a/docs-en/21-tdinternal/dnode.png b/docs-en/21-tdinternal/dnode.png
deleted file mode 100644
index cea87dcccb..0000000000
Binary files a/docs-en/21-tdinternal/dnode.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/dnode.webp b/docs-en/21-tdinternal/dnode.webp
new file mode 100644
index 0000000000..a56c7e4594
Binary files /dev/null and b/docs-en/21-tdinternal/dnode.webp differ
diff --git a/docs-en/21-tdinternal/message.png b/docs-en/21-tdinternal/message.png
deleted file mode 100644
index 715a8bd37e..0000000000
Binary files a/docs-en/21-tdinternal/message.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/message.webp b/docs-en/21-tdinternal/message.webp
new file mode 100644
index 0000000000..a2a42abff3
Binary files /dev/null and b/docs-en/21-tdinternal/message.webp differ
diff --git a/docs-en/21-tdinternal/modules.png b/docs-en/21-tdinternal/modules.png
deleted file mode 100644
index 10ae4703a6..0000000000
Binary files a/docs-en/21-tdinternal/modules.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/modules.webp b/docs-en/21-tdinternal/modules.webp
new file mode 100644
index 0000000000..718a6abccd
Binary files /dev/null and b/docs-en/21-tdinternal/modules.webp differ
diff --git a/docs-en/21-tdinternal/multi_tables.png b/docs-en/21-tdinternal/multi_tables.png
deleted file mode 100644
index 0cefaab6a9..0000000000
Binary files a/docs-en/21-tdinternal/multi_tables.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/multi_tables.webp b/docs-en/21-tdinternal/multi_tables.webp
new file mode 100644
index 0000000000..8f649e34a3
Binary files /dev/null and b/docs-en/21-tdinternal/multi_tables.webp differ
diff --git a/docs-en/21-tdinternal/replica-forward.png b/docs-en/21-tdinternal/replica-forward.png
deleted file mode 100644
index bf616e030b..0000000000
Binary files a/docs-en/21-tdinternal/replica-forward.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/replica-forward.webp b/docs-en/21-tdinternal/replica-forward.webp
new file mode 100644
index 0000000000..512efd4eba
Binary files /dev/null and b/docs-en/21-tdinternal/replica-forward.webp differ
diff --git a/docs-en/21-tdinternal/replica-master.png b/docs-en/21-tdinternal/replica-master.png
deleted file mode 100644
index cb33f1ce98..0000000000
Binary files a/docs-en/21-tdinternal/replica-master.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/replica-master.webp b/docs-en/21-tdinternal/replica-master.webp
new file mode 100644
index 0000000000..57030a11f5
Binary files /dev/null and b/docs-en/21-tdinternal/replica-master.webp differ
diff --git a/docs-en/21-tdinternal/replica-restore.png b/docs-en/21-tdinternal/replica-restore.png
deleted file mode 100644
index 1558e5ed01..0000000000
Binary files a/docs-en/21-tdinternal/replica-restore.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/replica-restore.webp b/docs-en/21-tdinternal/replica-restore.webp
new file mode 100644
index 0000000000..f282c2d4d2
Binary files /dev/null and b/docs-en/21-tdinternal/replica-restore.webp differ
diff --git a/docs-en/21-tdinternal/structure.png b/docs-en/21-tdinternal/structure.png
deleted file mode 100644
index 4fc8f47ab0..0000000000
Binary files a/docs-en/21-tdinternal/structure.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/structure.webp b/docs-en/21-tdinternal/structure.webp
new file mode 100644
index 0000000000..b77a42c074
Binary files /dev/null and b/docs-en/21-tdinternal/structure.webp differ
diff --git a/docs-en/21-tdinternal/vnode.png b/docs-en/21-tdinternal/vnode.png
deleted file mode 100644
index e6148d4907..0000000000
Binary files a/docs-en/21-tdinternal/vnode.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/vnode.webp b/docs-en/21-tdinternal/vnode.webp
new file mode 100644
index 0000000000..fae3104c89
Binary files /dev/null and b/docs-en/21-tdinternal/vnode.webp differ
diff --git a/docs-en/21-tdinternal/write_master.png b/docs-en/21-tdinternal/write_master.png
deleted file mode 100644
index ff2dfc20bf..0000000000
Binary files a/docs-en/21-tdinternal/write_master.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/write_master.webp b/docs-en/21-tdinternal/write_master.webp
new file mode 100644
index 0000000000..9624036ed3
Binary files /dev/null and b/docs-en/21-tdinternal/write_master.webp differ
diff --git a/docs-en/21-tdinternal/write_slave.png b/docs-en/21-tdinternal/write_slave.png
deleted file mode 100644
index cacb2cb6bc..0000000000
Binary files a/docs-en/21-tdinternal/write_slave.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/write_slave.webp b/docs-en/21-tdinternal/write_slave.webp
new file mode 100644
index 0000000000..7c45dec11b
Binary files /dev/null and b/docs-en/21-tdinternal/write_slave.webp differ
diff --git a/docs-en/25-application/01-telegraf.md b/docs-en/25-application/01-telegraf.md
index 718e04ecd3..07ab289ac2 100644
--- a/docs-en/25-application/01-telegraf.md
+++ b/docs-en/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ Current mainstream IT DevOps system usually include a data collection module, a
This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code and by simply modifying a few lines of configuration files. The architecture is as follows.
-
+
## Installation steps
@@ -75,7 +75,7 @@ Log in to the Grafana interface using a web browser at `IP:3000`, with the syste
Click on the gear icon on the left and select `Plugins`; you should find the TDengine data source plugin icon.
Click on the plus icon on the left and select `Import`, download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, and import it. You will then see the dashboard in the following screen.
-
+
## Wrap-up
diff --git a/docs-en/25-application/02-collectd.md b/docs-en/25-application/02-collectd.md
index 2ac37618fa..0ddea28554 100644
--- a/docs-en/25-application/02-collectd.md
+++ b/docs-en/25-application/02-collectd.md
@@ -17,7 +17,7 @@ The new version of TDengine supports multiple data protocols and can accept data
This article introduces how to quickly build an IT DevOps visualization system based on TDengine + collectd / StatsD + Grafana without writing even a single line of code but by simply modifying a few lines of configuration files. The architecture is shown in the following figure.
-
+
## Installation Steps
@@ -83,19 +83,19 @@ Click on the gear icon on the left and select `Plugins`, you should find the TDe
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left and select `Import`, and follow the instructions to import the JSON file. After that, you can see the dashboard in the following screen.
-
+
#### Importing the collectd dashboard
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left side and select `Import`, and follow the interface prompts to select the JSON file to import. After that, you can see a dashboard with the following interface.
-
+
#### Importing the StatsD dashboard
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`. Click on the plus icon on the left and select `Import`, and follow the interface prompts to import the JSON file. You will then see the dashboard in the following screen.
-
+
## Wrap-up
diff --git a/docs-en/25-application/03-immigrate.md b/docs-en/25-application/03-immigrate.md
index 4cfeb892d8..68d8a2b8cc 100644
--- a/docs-en/25-application/03-immigrate.md
+++ b/docs-en/25-application/03-immigrate.md
@@ -32,7 +32,7 @@ We will explain how to migrate OpenTSDB applications to TDengine quickly, secure
The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario.
**Figure 1. Typical architecture in a DevOps scenario**
-
+
In this application scenario, Agent tools are deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors aggregate the information collected by the agents; systems provide persistent data storage and management; and tools such as Grafana visualize the monitoring data.
@@ -75,7 +75,7 @@ After writing the data to TDengine properly, you can adapt Grafana to visualize
TDengine provides two sets of Dashboard templates by default, and users only need to import the templates from the Grafana directory into Grafana to activate their use.
**Figure 2. Importing Grafana Templates**
-
+
After the above steps, you have completed the migration from OpenTSDB to TDengine. As you can see, the whole process is straightforward: there is no need to write any code, and only a few configuration files need to be adjusted to complete the migration.
@@ -88,7 +88,7 @@ In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes
If your application is particularly complex, or the application domain is not a DevOps scenario, you can continue reading the subsequent chapters for a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine.
**Figure 3. System architecture after migration**
-
+
## Migration evaluation and strategy for other scenarios
diff --git a/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp b/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp
new file mode 100644
index 0000000000..147a65b17b
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp differ
diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp
new file mode 100644
index 0000000000..3ca99c835b
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp differ
diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp
new file mode 100644
index 0000000000..04811f61b9
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp differ
diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp
new file mode 100644
index 0000000000..3693006875
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp differ
diff --git a/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp b/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp
new file mode 100644
index 0000000000..fd5461ec9b
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp differ
diff --git a/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp
new file mode 100644
index 0000000000..879c27a1a5
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp differ
diff --git a/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp
new file mode 100644
index 0000000000..1d4c655970
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp differ
diff --git a/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp
new file mode 100644
index 0000000000..105afcdb83
Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp differ
diff --git a/docs-en/27-train-faq/03-docker.md b/docs-en/27-train-faq/03-docker.md
index ba435a9307..3f560bcfef 100644
--- a/docs-en/27-train-faq/03-docker.md
+++ b/docs-en/27-train-faq/03-docker.md
@@ -265,7 +265,7 @@ Below is an example output:
$ taos> select groupid, location from test.d0;
groupid | location |
=================================
- 0 | shanghai |
+ 0 | California.SanDieo |
Query OK, 1 row(s) in set (0.003490s)
```
diff --git a/examples/c/stream.c b/examples/c/stream.c
deleted file mode 100644
index 41365813ae..0000000000
--- a/examples/c/stream.c
+++ /dev/null
@@ -1,178 +0,0 @@
-/*
- * Copyright (c) 2019 TAOS Data, Inc.
- *
- * This program is free software: you can use, redistribute, and/or modify
- * it under the terms of the GNU Affero General Public License, version 3
- * or later ("AGPL"), as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.
- *
- * You should have received a copy of the GNU Affero General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <unistd.h>
-#include <pthread.h>
-#include "../../../include/client/taos.h" // include TDengine header file
-
-typedef struct {
- char server_ip[64];
- char db_name[64];
- char tbl_name[64];
-} param;
-
-int g_thread_exit_flag = 0;
-void* insert_rows(void *sarg);
-
-void streamCallBack(void *param, TAOS_RES *res, TAOS_ROW row)
-{
- // in this simple demo, it just print out the result
- char temp[128];
-
- TAOS_FIELD *fields = taos_fetch_fields(res);
- int numFields = taos_num_fields(res);
-
- taos_print_row(temp, row, fields, numFields);
-
- printf("\n%s\n", temp);
-}
-
-int main(int argc, char *argv[])
-{
- TAOS *taos;
- char db_name[64];
- char tbl_name[64];
- char sql[1024] = { 0 };
-
- if (argc != 4) {
- printf("usage: %s server-ip dbname tblname\n", argv[0]);
- exit(0);
- }
-
- strcpy(db_name, argv[2]);
- strcpy(tbl_name, argv[3]);
-
- // create pthread to insert into row per second for stream calc
- param *t_param = (param *)malloc(sizeof(param));
- if (NULL == t_param)
- {
- printf("failed to malloc\n");
- exit(1);
- }
- memset(t_param, 0, sizeof(param));
- strcpy(t_param->server_ip, argv[1]);
- strcpy(t_param->db_name, db_name);
- strcpy(t_param->tbl_name, tbl_name);
-
- pthread_t pid;
- pthread_create(&pid, NULL, (void * (*)(void *))insert_rows, t_param);
-
- sleep(3); // waiting for database is created.
- // open connection to database
- taos = taos_connect(argv[1], "root", "taosdata", db_name, 0);
- if (taos == NULL) {
- printf("failed to connet to server:%s\n", argv[1]);
- free(t_param);
- exit(1);
- }
-
- // starting stream calc,
- printf("please input stream SQL:[e.g., select count(*) from tblname interval(5s) sliding(2s);]\n");
- fgets(sql, sizeof(sql), stdin);
- if (sql[0] == 0) {
- printf("input NULL stream SQL, so exit!\n");
- free(t_param);
- exit(1);
- }
-
- // param is set to NULL in this demo, it shall be set to the pointer to app context
- TAOS_STREAM *pStream = taos_open_stream(taos, sql, streamCallBack, 0, NULL, NULL);
- if (NULL == pStream) {
- printf("failed to create stream\n");
- free(t_param);
- exit(1);
- }
-
- printf("presss any key to exit\n");
- getchar();
-
- taos_close_stream(pStream);
-
- g_thread_exit_flag = 1;
- pthread_join(pid, NULL);
-
- taos_close(taos);
- free(t_param);
-
- return 0;
-}
-
-
-void* insert_rows(void *sarg)
-{
- TAOS *taos;
- char command[1024] = { 0 };
- param *winfo = (param * )sarg;
-
- if (NULL == winfo){
- printf("para is null!\n");
- exit(1);
- }
-
- taos = taos_connect(winfo->server_ip, "root", "taosdata", NULL, 0);
- if (taos == NULL) {
- printf("failed to connet to server:%s\n", winfo->server_ip);
- exit(1);
- }
-
- // drop database
- sprintf(command, "drop database %s;", winfo->db_name);
- if (taos_query(taos, command) != 0) {
- printf("failed to drop database, reason:%s\n", taos_errstr(taos));
- exit(1);
- }
-
- // create database
- sprintf(command, "create database %s;", winfo->db_name);
- if (taos_query(taos, command) != 0) {
- printf("failed to create database, reason:%s\n", taos_errstr(taos));
- exit(1);
- }
-
- // use database
- sprintf(command, "use %s;", winfo->db_name);
- if (taos_query(taos, command) != 0) {
- printf("failed to use database, reason:%s\n", taos_errstr(taos));
- exit(1);
- }
-
- // create table
- sprintf(command, "create table %s (ts timestamp, speed int);", winfo->tbl_name);
- if (taos_query(taos, command) != 0) {
- printf("failed to create table, reason:%s\n", taos_errstr(taos));
- exit(1);
- }
-
- // insert data
- int64_t begin = (int64_t)time(NULL);
- int index = 0;
- while (1) {
- if (g_thread_exit_flag) break;
-
- index++;
- sprintf(command, "insert into %s values (%ld, %d)", winfo->tbl_name, (begin + index) * 1000, index);
- if (taos_query(taos, command)) {
- printf("failed to insert row [%s], reason:%s\n", command, taos_errstr(taos));
- }
- sleep(1);
- }
-
- taos_close(taos);
- return 0;
-}
-
diff --git a/include/common/tcommon.h b/include/common/tcommon.h
index 9e3ad42a82..0ff13963c0 100644
--- a/include/common/tcommon.h
+++ b/include/common/tcommon.h
@@ -219,6 +219,16 @@ typedef struct {
#define GET_FORWARD_DIRECTION_FACTOR(ord) (((ord) == TSDB_ORDER_ASC) ? QUERY_ASC_FORWARD_STEP : QUERY_DESC_FORWARD_STEP)
+#define SORT_QSORT_T 0x1
+#define SORT_SPILLED_MERGE_SORT_T 0x2
+typedef struct SSortExecInfo {
+ int32_t sortMethod;
+ int32_t sortBuffer;
+ int32_t loops; // loop count
+ int32_t writeBytes; // write io bytes
+ int32_t readBytes; // read io bytes
+} SSortExecInfo;
+
#ifdef __cplusplus
}
#endif
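
The `SSortExecInfo` struct added above carries per-sort execution statistics (method, buffer size, loop count, IO bytes). The self-contained sketch below shows how an operator might fill and report it; the helper function and the sample values are hypothetical, only the struct layout and the `SORT_*_T` constants come from this change.

```c
// Hypothetical illustration of reporting SSortExecInfo; only the struct layout
// and the SORT_*_T method constants come from the header change above.
#include <stdio.h>
#include <stdint.h>

#define SORT_QSORT_T              0x1
#define SORT_SPILLED_MERGE_SORT_T 0x2

typedef struct SSortExecInfo {
  int32_t sortMethod;
  int32_t sortBuffer;
  int32_t loops;       // loop count
  int32_t writeBytes;  // write io bytes
  int32_t readBytes;   // read io bytes
} SSortExecInfo;

static void reportSortExecInfo(const SSortExecInfo *pInfo) {
  const char *method =
      (pInfo->sortMethod == SORT_SPILLED_MERGE_SORT_T) ? "spilled merge sort" : "qsort";
  printf("sort method:%s buffer:%d loops:%d write:%d read:%d\n",
         method, pInfo->sortBuffer, pInfo->loops, pInfo->writeBytes, pInfo->readBytes);
}

int main(void) {
  // Example values only: a sort that spilled to disk once.
  SSortExecInfo info = {
      .sortMethod = SORT_SPILLED_MERGE_SORT_T,
      .sortBuffer = 1024 * 1024,
      .loops = 2,
      .writeBytes = 4096,
      .readBytes = 4096,
  };
  reportSortExecInfo(&info);
  return 0;
}
```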
diff --git a/include/common/tdatablock.h b/include/common/tdatablock.h
index db8644ecfe..e8fe47a462 100644
--- a/include/common/tdatablock.h
+++ b/include/common/tdatablock.h
@@ -198,7 +198,7 @@ void colDataTrim(SColumnInfoData* pColumnInfoData);
size_t blockDataGetNumOfCols(const SSDataBlock* pBlock);
size_t blockDataGetNumOfRows(const SSDataBlock* pBlock);
-int32_t blockDataMerge(SSDataBlock* pDest, const SSDataBlock* pSrc, SArray* pIndexMap);
+int32_t blockDataMerge(SSDataBlock* pDest, const SSDataBlock* pSrc);
int32_t blockDataSplitRows(SSDataBlock* pBlock, bool hasVarCol, int32_t startIndex, int32_t* stopIndex,
int32_t pageSize);
int32_t blockDataToBuf(char* buf, const SSDataBlock* pBlock);
diff --git a/include/common/tdataformat.h b/include/common/tdataformat.h
index f1f96bfedd..ef931ed3b1 100644
--- a/include/common/tdataformat.h
+++ b/include/common/tdataformat.h
@@ -61,9 +61,10 @@ int32_t tTSRowBuilderGetRow(STSRowBuilder *pBuilder, const STSRow2 **ppRow);
// STag
int32_t tTagNew(STagVal *pTagVals, int16_t nTag, STag **ppTag);
void tTagFree(STag *pTag);
-void tTagGet(STag *pTag, int16_t cid, int8_t type, uint8_t **ppData, int32_t *nData);
-int32_t tEncodeTag(SEncoder *pEncoder, STag *pTag);
-int32_t tDecodeTag(SDecoder *pDecoder, const STag **ppTag);
+int32_t tTagSet(STag *pTag, SSchema *pSchema, int32_t nCols, int iCol, uint8_t *pData, uint32_t nData, STag **ppTag);
+void tTagGet(STag *pTag, int16_t cid, int8_t type, uint8_t **ppData, uint32_t *nData);
+int32_t tEncodeTag(SEncoder *pEncoder, const STag *pTag);
+int32_t tDecodeTag(SDecoder *pDecoder, STag **ppTag);
// STRUCT =================
struct STColumn {
diff --git a/include/common/tmsg.h b/include/common/tmsg.h
index 5f78f2b756..0ed63e08bf 100644
--- a/include/common/tmsg.h
+++ b/include/common/tmsg.h
@@ -660,8 +660,7 @@ typedef struct {
int32_t tz; // query client timezone
char intervalUnit;
char slidingUnit;
- char
- offsetUnit; // TODO Remove it, the offset is the number of precision tickle, and it must be a immutable duration.
+ char offsetUnit;
int8_t precision;
int64_t interval;
int64_t sliding;
@@ -951,6 +950,7 @@ typedef struct {
int32_t numOfCores;
int32_t numOfSupportVnodes;
char dnodeEp[TSDB_EP_LEN];
+ SMnodeLoad mload;
SClusterCfg clusterCfg;
SArray* pVloads; // array of SVnodeLoad
} SStatusReq;
diff --git a/include/dnode/mnode/mnode.h b/include/dnode/mnode/mnode.h
index f2c8c916c8..ddd6f1c05f 100644
--- a/include/dnode/mnode/mnode.h
+++ b/include/dnode/mnode/mnode.h
@@ -29,6 +29,8 @@ extern "C" {
typedef struct SMnode SMnode;
typedef struct {
+ int32_t dnodeId;
+ bool standby;
bool deploy;
int8_t replica;
int8_t selfIndex;
@@ -53,15 +55,6 @@ SMnode *mndOpen(const char *path, const SMnodeOpt *pOption);
*/
void mndClose(SMnode *pMnode);
-/**
- * @brief Close a mnode.
- *
- * @param pMnode The mnode object to close.
- * @param pOption Options of the mnode.
- * @return int32_t 0 for success, -1 for failure.
- */
-int32_t mndAlter(SMnode *pMnode, const SMnodeOpt *pOption);
-
/**
* @brief Start mnode
*
diff --git a/include/libs/function/function.h b/include/libs/function/function.h
index 7d3e969c41..21b7309055 100644
--- a/include/libs/function/function.h
+++ b/include/libs/function/function.h
@@ -39,6 +39,7 @@ typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInf
typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx);
typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock* pBlock);
typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
+typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx);
typedef struct SScalarFuncExecFuncs {
FExecGetEnv getEnv;
@@ -50,6 +51,7 @@ typedef struct SFuncExecFuncs {
FExecInit init;
FExecProcess process;
FExecFinalize finalize;
+ FExecCombine combine;
} SFuncExecFuncs;
typedef struct SFileBlockInfo {
diff --git a/include/libs/nodes/nodes.h b/include/libs/nodes/nodes.h
index ad30bd7552..3c5278011a 100644
--- a/include/libs/nodes/nodes.h
+++ b/include/libs/nodes/nodes.h
@@ -213,6 +213,7 @@ typedef enum ENodeType {
QUERY_NODE_PHYSICAL_PLAN_STREAM_FINAL_INTERVAL,
QUERY_NODE_PHYSICAL_PLAN_FILL,
QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW,
+ QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW,
QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW,
QUERY_NODE_PHYSICAL_PLAN_PARTITION,
QUERY_NODE_PHYSICAL_PLAN_DISPATCH,
diff --git a/include/libs/nodes/plannodes.h b/include/libs/nodes/plannodes.h
index 367addbe9b..3ae2d18e5d 100644
--- a/include/libs/nodes/plannodes.h
+++ b/include/libs/nodes/plannodes.h
@@ -298,6 +298,8 @@ typedef struct SSessionWinodwPhysiNode {
int64_t gap;
} SSessionWinodwPhysiNode;
+typedef SSessionWinodwPhysiNode SStreamSessionWinodwPhysiNode;
+
typedef struct SStateWinodwPhysiNode {
SWinodwPhysiNode window;
SNode* pStateKey;
diff --git a/include/libs/stream/tstream.h b/include/libs/stream/tstream.h
index d18f609d54..55e8cf0050 100644
--- a/include/libs/stream/tstream.h
+++ b/include/libs/stream/tstream.h
@@ -107,7 +107,7 @@ static FORCE_INLINE void streamDataSubmitRefDec(SStreamDataSubmit* pDataSubmit)
if (ref == 0) {
taosMemoryFree(pDataSubmit->data);
taosMemoryFree(pDataSubmit->dataRef);
- // taosFreeQitem(pDataSubmit);
+ taosFreeQitem(pDataSubmit);
}
}
diff --git a/include/libs/sync/sync.h b/include/libs/sync/sync.h
index 2bf678fa48..b14b7667d2 100644
--- a/include/libs/sync/sync.h
+++ b/include/libs/sync/sync.h
@@ -82,14 +82,39 @@ typedef struct SFsmCbMeta {
SyncTerm currentTerm;
} SFsmCbMeta;
+typedef struct SReConfigCbMeta {
+ int32_t code;
+ SyncIndex index;
+ SyncTerm term;
+ SyncTerm currentTerm;
+} SReConfigCbMeta;
+
typedef struct SSyncFSM {
void* data;
+
void (*FpCommitCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
void (*FpPreCommitCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
void (*FpRollBackCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
- void (*FpRestoreFinish)(struct SSyncFSM* pFsm);
+
+ void (*FpRestoreFinishCb)(struct SSyncFSM* pFsm);
int32_t (*FpGetSnapshot)(struct SSyncFSM* pFsm, SSnapshot* pSnapshot);
- int32_t (*FpRestoreSnapshot)(struct SSyncFSM* pFsm, const SSnapshot* snapshot);
+
+ // if (*ppIter == NULL)
+ // *ppIter = new iter;
+ // else
+ // *ppIter.next();
+ //
+ // if success, return 0. else return error code
+ int32_t (*FpSnapshotRead)(struct SSyncFSM* pFsm, const SSnapshot* pSnapshot, void** ppIter, char** ppBuf,
+ int32_t* len);
+
+ // apply data into fsm
+ int32_t (*FpSnapshotApply)(struct SSyncFSM* pFsm, const SSnapshot* pSnapshot, char* pBuf, int32_t len);
+
+ void (*FpReConfigCb)(struct SSyncFSM* pFsm, SSyncCfg newCfg, SReConfigCbMeta cbMeta);
+
+ // int32_t (*FpRestoreSnapshot)(struct SSyncFSM* pFsm, const SSnapshot* snapshot);
+
} SSyncFSM;
// abstract definition of log store in raft
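
With `FpRestoreSnapshot` replaced by the `FpSnapshotRead`/`FpSnapshotApply` pair and the new `FpReConfigCb` above, an FSM implementation now wires its callbacks roughly as sketched below. This is a minimal illustration with stub bodies, assuming the declarations from sync.h; it is not the actual mnode or vnode FSM, and the commit/rollback callbacks are omitted for brevity.

```c
// Minimal sketch of filling the reshaped SSyncFSM; stub bodies are placeholders.
// Assumes the types (SSyncFSM, SSnapshot, SSyncCfg, SReConfigCbMeta) from sync.h above.
#include <string.h>
#include "sync.h"

static int32_t myGetSnapshot(struct SSyncFSM *pFsm, SSnapshot *pSnapshot) { return 0; }

// Called repeatedly: the first call creates *ppIter, later calls advance it; returns 0 on success.
static int32_t mySnapshotRead(struct SSyncFSM *pFsm, const SSnapshot *pSnapshot, void **ppIter,
                              char **ppBuf, int32_t *len) {
  *len = 0;  // placeholder: produce no snapshot data
  return 0;
}

// Applies one chunk of snapshot data into the state machine.
static int32_t mySnapshotApply(struct SSyncFSM *pFsm, const SSnapshot *pSnapshot, char *pBuf,
                               int32_t len) {
  return 0;
}

static void myReConfigCb(struct SSyncFSM *pFsm, SSyncCfg newCfg, SReConfigCbMeta cbMeta) {
  // placeholder: persist or act on the new membership configuration
}

static void myRestoreFinishCb(struct SSyncFSM *pFsm) {}

SSyncFSM *createMyFsm(void *appData) {
  static SSyncFSM fsm;
  memset(&fsm, 0, sizeof(fsm));
  fsm.data = appData;
  fsm.FpGetSnapshot = myGetSnapshot;
  fsm.FpSnapshotRead = mySnapshotRead;
  fsm.FpSnapshotApply = mySnapshotApply;
  fsm.FpReConfigCb = myReConfigCb;
  fsm.FpRestoreFinishCb = myRestoreFinishCb;
  // FpCommitCb / FpPreCommitCb / FpRollBackCb left NULL in this sketch.
  return &fsm;
}
```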
diff --git a/include/libs/transport/trpc.h b/include/libs/transport/trpc.h
index 754a203471..70977bba87 100644
--- a/include/libs/transport/trpc.h
+++ b/include/libs/transport/trpc.h
@@ -89,19 +89,18 @@ typedef struct SRpcInit {
typedef struct {
void *val;
int32_t (*clone)(void *src, void **dst);
- void (*freeFunc)(const void *arg);
} SRpcCtxVal;
typedef struct {
int32_t msgType;
void * val;
int32_t (*clone)(void *src, void **dst);
- void (*freeFunc)(const void *arg);
} SRpcBrokenlinkVal;
typedef struct {
SHashObj * args;
SRpcBrokenlinkVal brokenVal;
+ void (*freeFunc)(const void *arg);
} SRpcCtx;
int32_t rpcInit();
diff --git a/include/util/taoserror.h b/include/util/taoserror.h
index e318978339..0ba1d0c0f2 100644
--- a/include/util/taoserror.h
+++ b/include/util/taoserror.h
@@ -313,6 +313,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_VND_INVALID_TABLE_ACTION TAOS_DEF_ERROR_CODE(0, 0x0519)
#define TSDB_CODE_VND_COL_ALREADY_EXISTS TAOS_DEF_ERROR_CODE(0, 0x051a)
#define TSDB_CODE_VND_TABLE_COL_NOT_EXISTS TAOS_DEF_ERROR_CODE(0, 0x051b)
+#define TSDB_CODE_VND_READ_END TAOS_DEF_ERROR_CODE(0, 0x051c)
// tsdb
#define TSDB_CODE_TDB_INVALID_TABLE_ID TAOS_DEF_ERROR_CODE(0, 0x0600)
diff --git a/include/util/tdef.h b/include/util/tdef.h
index f527523433..cbbf3b8ff5 100644
--- a/include/util/tdef.h
+++ b/include/util/tdef.h
@@ -431,11 +431,11 @@ enum {
};
#define DEFAULT_HANDLE 0
-#define MNODE_HANDLE -1
-#define QNODE_HANDLE -2
-#define SNODE_HANDLE -3
-#define VNODE_HANDLE -4
-#define BNODE_HANDLE -5
+#define MNODE_HANDLE 1
+#define QNODE_HANDLE -1
+#define SNODE_HANDLE -2
+#define VNODE_HANDLE -3
+#define BNODE_HANDLE -4
#define TSDB_CONFIG_OPTION_LEN 16
#define TSDB_CONIIG_VALUE_LEN 48
diff --git a/include/util/tlog.h b/include/util/tlog.h
index be31aa8115..47ac01aacf 100644
--- a/include/util/tlog.h
+++ b/include/util/tlog.h
@@ -88,6 +88,7 @@ void taosPrintLongString(const char *flags, ELogLevel level, int32_t dflag, cons
#define uInfo(...) { if (uDebugFlag & DEBUG_INFO) { taosPrintLog("UTL ", DEBUG_INFO, tsLogEmbedded ? 255 : uDebugFlag, __VA_ARGS__); }}
#define uDebug(...) { if (uDebugFlag & DEBUG_DEBUG) { taosPrintLog("UTL ", DEBUG_DEBUG, uDebugFlag, __VA_ARGS__); }}
#define uTrace(...) { if (uDebugFlag & DEBUG_TRACE) { taosPrintLog("UTL ", DEBUG_TRACE, uDebugFlag, __VA_ARGS__); }}
+#define uDebugL(...) { if (uDebugFlag & DEBUG_DEBUG) { taosPrintLongString("UTL ", DEBUG_DEBUG, uDebugFlag, __VA_ARGS__); }}
#define pError(...) { taosPrintLog("APP ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }
#define pPrint(...) { taosPrintLog("APP ", DEBUG_INFO, 255, __VA_ARGS__); }
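
The new `uDebugL` macro routes long messages through `taosPrintLongString` instead of the length-limited `taosPrintLog`. A small usage sketch follows; the SQL-logging scenario and function name are illustrative only.

```c
// Illustrative use of the new uDebugL macro for messages that can exceed the
// normal log line limit, e.g. a generated SQL statement. Assumes tlog.h above.
#include <inttypes.h>
#include "tlog.h"

void logGeneratedSql(int64_t id, const char *sql) {
  // uDebug would truncate a very long statement; uDebugL uses taosPrintLongString.
  uDebugL("SML:0x%" PRIx64 " generated sql:%s", id, sql);
}
```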
diff --git a/source/client/src/clientHb.c b/source/client/src/clientHb.c
index d01ec501ba..c288b5d2f8 100644
--- a/source/client/src/clientHb.c
+++ b/source/client/src/clientHb.c
@@ -141,7 +141,9 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) {
if (NULL == pTscObj) {
tscDebug("tscObj rid %" PRIx64 " not exist", pRsp->connKey.tscRid);
} else {
- updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet);
+ if (pRsp->query->totalDnodes > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &pRsp->query->epSet)) {
+ updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet);
+ }
pTscObj->connId = pRsp->query->connId;
if (pRsp->query->killRid) {
@@ -580,8 +582,15 @@ void hbClearReqInfo(SAppHbMgr *pAppHbMgr) {
}
}
+void hbThreadFuncUnexpectedStopped(void) {
+ atomic_store_8(&clientHbMgr.threadStop, 2);
+}
+
static void *hbThreadFunc(void *param) {
setThreadName("hb");
+#ifdef WINDOWS
+ atexit(hbThreadFuncUnexpectedStopped);
+#endif
while (1) {
int8_t threadStop = atomic_val_compare_exchange_8(&clientHbMgr.threadStop, 1, 2);
if (1 == threadStop) {
diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c
index 68c47c2d13..7d623072d6 100644
--- a/source/client/src/clientSml.c
+++ b/source/client/src/clientSml.c
@@ -24,7 +24,6 @@
#define EQUAL '='
#define QUOTE '"'
#define SLASH '\\'
-#define tsMaxSQLStringLen (1024*1024)
#define JUMP_SPACE(sql) while (*sql != '\0'){if(*sql == SPACE) sql++;else break;}
// comma ,
@@ -63,12 +62,11 @@ for (int i = 1; i < keyLen; ++i) { \
#define TS "_ts"
#define TS_LEN 3
-#define VALUE "value"
-#define VALUE_LEN 5
+#define VALUE "_value"
+#define VALUE_LEN 6
#define BINARY_ADD_LEN 2 // "binary" 2 means " "
#define NCHAR_ADD_LEN 3 // L"nchar" 3 means L" "
-#define CHAR_SAVE_LENGTH 8
//=================================================================================================
typedef TSDB_SML_PROTOCOL_TYPE SMLProtocolType;
@@ -253,12 +251,20 @@ static int32_t smlGenerateSchemaAction(SSchema* colField, SHashObj* colHash, SSm
return 0;
}
+static int32_t smlFindNearestPowerOf2(int32_t length){
+ int32_t result = 1;
+ while(result <= length){
+ result *= 2;
+ }
+ return result;
+}
+
static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSize, int32_t* outBytes) {
uint8_t type = field->type;
char tname[TSDB_TABLE_NAME_LEN] = {0};
memcpy(tname, field->key, field->keyLen);
if (type == TSDB_DATA_TYPE_BINARY || type == TSDB_DATA_TYPE_NCHAR) {
- int32_t bytes = field->length > CHAR_SAVE_LENGTH ? (2*field->length) : CHAR_SAVE_LENGTH;
+ int32_t bytes = smlFindNearestPowerOf2(field->length);
int out = snprintf(buf, bufSize, "`%s` %s(%d)",
tname, tDataTypes[field->type].name, bytes);
*outBytes = out;
@@ -273,8 +279,8 @@ static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSi
static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
int32_t code = 0;
int32_t outBytes = 0;
- char *result = (char *)taosMemoryCalloc(1, tsMaxSQLStringLen+1);
- int32_t capacity = tsMaxSQLStringLen + 1;
+ char *result = (char *)taosMemoryCalloc(1, TSDB_MAX_ALLOWED_SQL_LEN);
+ int32_t capacity = TSDB_MAX_ALLOWED_SQL_LEN;
uDebug("SML:0x%"PRIx64" apply schema action. action: %d", info->id, action->action);
switch (action->action) {
@@ -398,7 +404,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
}
if(taosArrayGetSize(cols) == 0){
outBytes = snprintf(pos, freeBytes,"`%s` %s(%d)",
- tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, CHAR_SAVE_LENGTH);
+ tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, 1);
pos += outBytes; freeBytes -= outBytes;
*pos = ','; ++pos; --freeBytes;
}
@@ -508,6 +514,11 @@ static int32_t smlModifyDBSchemas(SSmlHandle* info) {
if (code != TSDB_CODE_SUCCESS) {
return code;
}
+
+ code = catalogRefreshTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, -1);
+ if (code != TSDB_CODE_SUCCESS) {
+ return code;
+ }
} else {
uError("SML:0x%"PRIx64" load table meta error: %s", info->id, tstrerror(code));
return code;
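
The clientSml.c change above sizes binary/nchar column definitions to the next power of two above the incoming value length, replacing the fixed `CHAR_SAVE_LENGTH` minimum. A standalone sketch of that rounding rule, with arbitrary demo values:

```c
// Standalone sketch of the power-of-two column sizing used by
// smlBuildColumnDescription above; the demo lengths are arbitrary.
#include <stdio.h>
#include <stdint.h>

static int32_t smlFindNearestPowerOf2(int32_t length) {
  int32_t result = 1;
  while (result <= length) {
    result *= 2;
  }
  return result;
}

int main(void) {
  int32_t lengths[] = {1, 7, 8, 100};
  for (int i = 0; i < 4; i++) {
    // e.g. a 7-byte value gets an 8-byte column, an 8-byte value a 16-byte column
    printf("value length %d -> column bytes %d\n", lengths[i], smlFindNearestPowerOf2(lengths[i]));
  }
  return 0;
}
```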
diff --git a/source/common/src/systable.c b/source/common/src/systable.c
index 9fe7645e2b..5e1405e0c6 100644
--- a/source/common/src/systable.c
+++ b/source/common/src/systable.c
@@ -36,7 +36,6 @@ static const SSysDbTableSchema mnodesSchema[] = {
{.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
{.name = "endpoint", .bytes = TSDB_EP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "role", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
- {.name = "role_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP},
{.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP},
};
diff --git a/source/common/src/tdatablock.c b/source/common/src/tdatablock.c
index 1864806ebf..0350510894 100644
--- a/source/common/src/tdatablock.c
+++ b/source/common/src/tdatablock.c
@@ -365,19 +365,13 @@ int32_t blockDataUpdateTsWindow(SSDataBlock* pDataBlock, int32_t tsColumnIndex)
return 0;
}
-// if pIndexMap = NULL, merger one column by on column
-int32_t blockDataMerge(SSDataBlock* pDest, const SSDataBlock* pSrc, SArray* pIndexMap) {
+int32_t blockDataMerge(SSDataBlock* pDest, const SSDataBlock* pSrc) {
assert(pSrc != NULL && pDest != NULL);
int32_t capacity = pDest->info.capacity;
for (int32_t i = 0; i < pDest->info.numOfCols; ++i) {
- int32_t mapIndex = i;
- // if (pIndexMap) {
- // mapIndex = *(int32_t*)taosArrayGet(pIndexMap, i);
- // }
-
SColumnInfoData* pCol2 = taosArrayGet(pDest->pDataBlock, i);
- SColumnInfoData* pCol1 = taosArrayGet(pSrc->pDataBlock, mapIndex);
+ SColumnInfoData* pCol1 = taosArrayGet(pSrc->pDataBlock, i);
capacity = pDest->info.capacity;
colDataMergeCol(pCol2, pDest->info.rows, &capacity, pCol1, pSrc->info.rows);
@@ -1738,8 +1732,12 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
for (int32_t k = 0; k < pTSchema->numOfCols; k++) {
const STColumn* pColumn = &pTSchema->columns[k];
SColumnInfoData* pColData = taosArrayGet(pDataBlock->pDataBlock, k);
- void* data = colDataGetData(pColData, j);
- tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, data, true, pColumn->offset, k);
+ if (colDataIsNull_s(pColData, j)) {
+ tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NONE, NULL, false, pColumn->offset, k);
+ } else {
+ void* data = colDataGetData(pColData, j);
+ tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, data, true, pColumn->offset, k);
+ }
}
int32_t rowLen = TD_ROW_LEN(rowData);
rowData = POINTER_SHIFT(rowData, rowLen);
diff --git a/source/common/src/tdataformat.c b/source/common/src/tdataformat.c
index f82df0d9bc..e8d7e3ac09 100644
--- a/source/common/src/tdataformat.c
+++ b/source/common/src/tdataformat.c
@@ -581,7 +581,52 @@ void tTagFree(STag *pTag) {
if (pTag) taosMemoryFree(pTag);
}
-void tTagGet(STag *pTag, int16_t cid, int8_t type, uint8_t **ppData, int32_t *nData) {
+int32_t tTagSet(STag *pTag, SSchema *pSchema, int32_t nCols, int iCol, uint8_t *pData, uint32_t nData, STag **ppTag) {
+ STagVal *pTagVals;
+ int16_t nTags = 0;
+ SSchema *pColumn;
+ uint8_t *p;
+ uint32_t n;
+
+ pTagVals = (STagVal *)taosMemoryMalloc(sizeof(*pTagVals) * nCols);
+ if (pTagVals == NULL) {
+ terrno = TSDB_CODE_OUT_OF_MEMORY;
+ return -1;
+ }
+
+ for (int32_t i = 0; i < nCols; i++) {
+ pColumn = &pSchema[i];
+
+ if (i == iCol) {
+ p = pData;
+ n = nData;
+ } else {
+ tTagGet(pTag, pColumn->colId, pColumn->type, &p, &n);
+ }
+
+ if (p == NULL) continue;
+
+ ASSERT(IS_VAR_DATA_TYPE(pColumn->type) || n == pColumn->bytes);
+
+ pTagVals[nTags].cid = pColumn->colId;
+ pTagVals[nTags].type = pColumn->type;
+ pTagVals[nTags].nData = n;
+ pTagVals[nTags].pData = p;
+
+ nTags++;
+ }
+
+ // create new tag
+ if (tTagNew(pTagVals, nTags, ppTag) < 0) {
+ taosMemoryFree(pTagVals);
+ return -1;
+ }
+
+ taosMemoryFree(pTagVals);
+ return 0;
+}
+
+void tTagGet(STag *pTag, int16_t cid, int8_t type, uint8_t **ppData, uint32_t *nData) {
STagIdx *pTagIdx = bsearch(&((STagIdx){.cid = cid}), pTag->idx, pTag->nTag, sizeof(STagIdx), tTagIdxCmprFn);
if (pTagIdx == NULL) {
*ppData = NULL;
@@ -597,18 +642,11 @@ void tTagGet(STag *pTag, int16_t cid, int8_t type, uint8_t **ppData, int32_t *nD
}
}
-int32_t tEncodeTag(SEncoder *pEncoder, STag *pTag) {
- // return tEncodeBinary(pEncoder, (uint8_t *)pTag, pTag->len);
- ASSERT(0);
- return 0;
+int32_t tEncodeTag(SEncoder *pEncoder, const STag *pTag) {
+ return tEncodeBinary(pEncoder, (const uint8_t *)pTag, pTag->len);
}
-int32_t tDecodeTag(SDecoder *pDecoder, const STag **ppTag) {
- // uint32_t n;
- // return tDecodeBinary(pDecoder, (const uint8_t **)ppTag, &n);
- ASSERT(0);
- return 0;
-}
+int32_t tDecodeTag(SDecoder *pDecoder, STag **ppTag) { return tDecodeBinary(pDecoder, (uint8_t **)ppTag, NULL); }
#if 1 // ===================================================================================================================
static void dataColSetNEleNull(SDataCol *pCol, int nEle);
@@ -1087,7 +1125,7 @@ SKVRow tdGetKVRowFromBuilder(SKVRowBuilder *pBuilder) {
kvRowSetNCols(row, pBuilder->nCols);
kvRowSetLen(row, tlen);
- if(pBuilder->nCols > 0){
+ if (pBuilder->nCols > 0) {
memcpy(kvRowColIdx(row), pBuilder->pColIdx, sizeof(SColIdx) * pBuilder->nCols);
memcpy(kvRowValues(row), pBuilder->buf, pBuilder->size);
}
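
The new `tTagSet` above rebuilds a tag row with one column value replaced. The wrapper below is a minimal, hypothetical illustration of how a caller might use it; the function name, include path, and error handling are illustrative only, while the `tTagSet`/`tTagFree` signatures come from the header change above.

```c
// Hypothetical helper built on tTagSet (declared in tdataformat.h above):
// replace the value of tag column `iCol` and return a newly allocated STag.
#include "tdataformat.h"

STag *updateOneTagCol(STag *pOldTag, SSchema *pTagSchema, int32_t nTagCols,
                      int iCol, uint8_t *pNewVal, uint32_t nNewVal) {
  STag *pNewTag = NULL;
  if (tTagSet(pOldTag, pTagSchema, nTagCols, iCol, pNewVal, nNewVal, &pNewTag) < 0) {
    // terrno is set by tTagSet, e.g. TSDB_CODE_OUT_OF_MEMORY
    return NULL;
  }
  return pNewTag;  // caller releases it with tTagFree()
}
```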
diff --git a/source/common/src/tmsg.c b/source/common/src/tmsg.c
index 32db8c0761..2f6dbf5389 100644
--- a/source/common/src/tmsg.c
+++ b/source/common/src/tmsg.c
@@ -891,6 +891,9 @@ int32_t tSerializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
if (tEncodeI64(&encoder, pload->pointsWritten) < 0) return -1;
}
+ // mnode loads
+ if (tEncodeI32(&encoder, pReq->mload.syncState) < 0) return -1;
+
tEndEncode(&encoder);
int32_t tlen = encoder.pos;
@@ -946,6 +949,8 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
}
}
+ if (tDecodeI32(&decoder, &pReq->mload.syncState) < 0) return -1;
+
tEndDecode(&decoder);
tDecoderClear(&decoder);
return 0;
diff --git a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c
index f7337f482f..bb2c069eaa 100644
--- a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c
+++ b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c
@@ -75,8 +75,9 @@ void dmSendStatusReq(SDnodeMgmt *pMgmt) {
(*pMgmt->getVnodeLoadsFp)(&vinfo);
req.pVloads = vinfo.pVloads;
- SMonMloadInfo minfo = {0};
+ SMonMloadInfo minfo = {0};
(*pMgmt->getMnodeLoadsFp)(&minfo);
+ req.mload = minfo.load;
int32_t contLen = tSerializeSStatusReq(NULL, 0, &req);
void *pHead = rpcMallocCont(contLen);
diff --git a/source/dnode/mgmt/mgmt_mnode/inc/mmInt.h b/source/dnode/mgmt/mgmt_mnode/inc/mmInt.h
index 030d4b309e..bd034fe7d6 100644
--- a/source/dnode/mgmt/mgmt_mnode/inc/mmInt.h
+++ b/source/dnode/mgmt/mgmt_mnode/inc/mmInt.h
@@ -36,7 +36,6 @@ typedef struct SMnodeMgmt {
SSingleWorker monitorWorker;
SReplica replicas[TSDB_MAX_REPLICA];
int8_t replica;
- int8_t selfIndex;
bool stopped;
int32_t refCount;
TdThreadRwlock lock;
@@ -47,7 +46,6 @@ int32_t mmReadFile(SMnodeMgmt *pMgmt, bool *pDeployed);
int32_t mmWriteFile(SMnodeMgmt *pMgmt, SDCreateMnodeReq *pMsg, bool deployed);
// mmInt.c
-int32_t mmAlter(SMnodeMgmt *pMgmt, SDAlterMnodeReq *pMsg);
int32_t mmAcquire(SMnodeMgmt *pMgmt);
void mmRelease(SMnodeMgmt *pMgmt);
diff --git a/source/dnode/mgmt/mgmt_mnode/src/mmFile.c b/source/dnode/mgmt/mgmt_mnode/src/mmFile.c
index 2aa1087770..478d6abd52 100644
--- a/source/dnode/mgmt/mgmt_mnode/src/mmFile.c
+++ b/source/dnode/mgmt/mgmt_mnode/src/mmFile.c
@@ -53,43 +53,45 @@ int32_t mmReadFile(SMnodeMgmt *pMgmt, bool *pDeployed) {
*pDeployed = deployed->valueint;
cJSON *mnodes = cJSON_GetObjectItem(root, "mnodes");
- if (!mnodes || mnodes->type != cJSON_Array) {
- dError("failed to read %s since nodes not found", file);
- goto _OVER;
- }
-
- pMgmt->replica = cJSON_GetArraySize(mnodes);
- if (pMgmt->replica <= 0 || pMgmt->replica > TSDB_MAX_REPLICA) {
- dError("failed to read %s since mnodes size %d invalid", file, pMgmt->replica);
- goto _OVER;
- }
-
- for (int32_t i = 0; i < pMgmt->replica; ++i) {
- cJSON *node = cJSON_GetArrayItem(mnodes, i);
- if (node == NULL) break;
-
- SReplica *pReplica = &pMgmt->replicas[i];
-
- cJSON *id = cJSON_GetObjectItem(node, "id");
- if (!id || id->type != cJSON_Number) {
- dError("failed to read %s since id not found", file);
+ if (mnodes != NULL) {
+ if (!mnodes || mnodes->type != cJSON_Array) {
+ dError("failed to read %s since nodes not found", file);
goto _OVER;
}
- pReplica->id = id->valueint;
- cJSON *fqdn = cJSON_GetObjectItem(node, "fqdn");
- if (!fqdn || fqdn->type != cJSON_String || fqdn->valuestring == NULL) {
- dError("failed to read %s since fqdn not found", file);
+ pMgmt->replica = cJSON_GetArraySize(mnodes);
+ if (pMgmt->replica <= 0 || pMgmt->replica > TSDB_MAX_REPLICA) {
+ dError("failed to read %s since mnodes size %d invalid", file, pMgmt->replica);
goto _OVER;
}
- tstrncpy(pReplica->fqdn, fqdn->valuestring, TSDB_FQDN_LEN);
- cJSON *port = cJSON_GetObjectItem(node, "port");
- if (!port || port->type != cJSON_Number) {
- dError("failed to read %s since port not found", file);
- goto _OVER;
+ for (int32_t i = 0; i < pMgmt->replica; ++i) {
+ cJSON *node = cJSON_GetArrayItem(mnodes, i);
+ if (node == NULL) break;
+
+ SReplica *pReplica = &pMgmt->replicas[i];
+
+ cJSON *id = cJSON_GetObjectItem(node, "id");
+ if (!id || id->type != cJSON_Number) {
+ dError("failed to read %s since id not found", file);
+ goto _OVER;
+ }
+ pReplica->id = id->valueint;
+
+ cJSON *fqdn = cJSON_GetObjectItem(node, "fqdn");
+ if (!fqdn || fqdn->type != cJSON_String || fqdn->valuestring == NULL) {
+ dError("failed to read %s since fqdn not found", file);
+ goto _OVER;
+ }
+ tstrncpy(pReplica->fqdn, fqdn->valuestring, TSDB_FQDN_LEN);
+
+ cJSON *port = cJSON_GetObjectItem(node, "port");
+ if (!port || port->type != cJSON_Number) {
+ dError("failed to read %s since port not found", file);
+ goto _OVER;
+ }
+ pReplica->port = port->valueint;
}
- pReplica->port = port->valueint;
}
code = 0;
@@ -122,21 +124,23 @@ int32_t mmWriteFile(SMnodeMgmt *pMgmt, SDCreateMnodeReq *pMsg, bool deployed) {
char *content = taosMemoryCalloc(1, maxLen + 1);
len += snprintf(content + len, maxLen - len, "{\n");
- len += snprintf(content + len, maxLen - len, " \"mnodes\": [{\n");
int8_t replica = (pMsg != NULL ? pMsg->replica : pMgmt->replica);
- for (int32_t i = 0; i < replica; ++i) {
- SReplica *pReplica = &pMgmt->replicas[i];
- if (pMsg != NULL) {
- pReplica = &pMsg->replicas[i];
- }
- len += snprintf(content + len, maxLen - len, " \"id\": %d,\n", pReplica->id);
- len += snprintf(content + len, maxLen - len, " \"fqdn\": \"%s\",\n", pReplica->fqdn);
- len += snprintf(content + len, maxLen - len, " \"port\": %u\n", pReplica->port);
- if (i < replica - 1) {
- len += snprintf(content + len, maxLen - len, " },{\n");
- } else {
- len += snprintf(content + len, maxLen - len, " }],\n");
+ if (replica > 0) {
+ len += snprintf(content + len, maxLen - len, " \"mnodes\": [{\n");
+ for (int32_t i = 0; i < replica; ++i) {
+ SReplica *pReplica = &pMgmt->replicas[i];
+ if (pMsg != NULL) {
+ pReplica = &pMsg->replicas[i];
+ }
+ len += snprintf(content + len, maxLen - len, " \"id\": %d,\n", pReplica->id);
+ len += snprintf(content + len, maxLen - len, " \"fqdn\": \"%s\",\n", pReplica->fqdn);
+ len += snprintf(content + len, maxLen - len, " \"port\": %u\n", pReplica->port);
+ if (i < replica - 1) {
+ len += snprintf(content + len, maxLen - len, " },{\n");
+ } else {
+ len += snprintf(content + len, maxLen - len, " }],\n");
+ }
}
}
diff --git a/source/dnode/mgmt/mgmt_mnode/src/mmHandle.c b/source/dnode/mgmt/mgmt_mnode/src/mmHandle.c
index a894a4962d..90d7b88859 100644
--- a/source/dnode/mgmt/mgmt_mnode/src/mmHandle.c
+++ b/source/dnode/mgmt/mgmt_mnode/src/mmHandle.c
@@ -124,22 +124,6 @@ int32_t mmProcessDropReq(const SMgmtInputOpt *pInput, SRpcMsg *pMsg) {
return 0;
}
-int32_t mmProcessAlterReq(SMnodeMgmt *pMgmt, SRpcMsg *pMsg) {
- SDAlterMnodeReq alterReq = {0};
- if (tDeserializeSDCreateMnodeReq(pMsg->pCont, pMsg->contLen, &alterReq) != 0) {
- terrno = TSDB_CODE_INVALID_MSG;
- return -1;
- }
-
- if (pMgmt->pData->dnodeId != 0 && alterReq.dnodeId != pMgmt->pData->dnodeId) {
- terrno = TSDB_CODE_INVALID_OPTION;
- dError("failed to alter mnode since %s, input:%d cur:%d", terrstr(), alterReq.dnodeId, pMgmt->pData->dnodeId);
- return -1;
- } else {
- return mmAlter(pMgmt, &alterReq);
- }
-}
-
SArray *mmGetMsgHandles() {
int32_t code = -1;
SArray *pArray = taosArrayInit(64, sizeof(SMgmtHandle));
diff --git a/source/dnode/mgmt/mgmt_mnode/src/mmInt.c b/source/dnode/mgmt/mgmt_mnode/src/mmInt.c
index 43113d05af..1b973f3045 100644
--- a/source/dnode/mgmt/mgmt_mnode/src/mmInt.c
+++ b/source/dnode/mgmt/mgmt_mnode/src/mmInt.c
@@ -39,71 +39,38 @@ static int32_t mmRequire(const SMgmtInputOpt *pInput, bool *required) {
}
static void mmBuildOptionForDeploy(SMnodeMgmt *pMgmt, const SMgmtInputOpt *pInput, SMnodeOpt *pOption) {
+ pOption->standby = false;
+ pOption->deploy = true;
pOption->msgCb = pMgmt->msgCb;
+ pOption->dnodeId = pMgmt->pData->dnodeId;
+
pOption->replica = 1;
pOption->selfIndex = 0;
+
SReplica *pReplica = &pOption->replicas[0];
pReplica->id = 1;
pReplica->port = tsServerPort;
tstrncpy(pReplica->fqdn, tsLocalFqdn, TSDB_FQDN_LEN);
- pOption->deploy = true;
-
- pMgmt->selfIndex = pOption->selfIndex;
- pMgmt->replica = pOption->replica;
- memcpy(&pMgmt->replicas, pOption->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
}
static void mmBuildOptionForOpen(SMnodeMgmt *pMgmt, SMnodeOpt *pOption) {
- pOption->msgCb = pMgmt->msgCb;
- pOption->selfIndex = pMgmt->selfIndex;
- pOption->replica = pMgmt->replica;
- memcpy(&pOption->replicas, pMgmt->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
pOption->deploy = false;
-}
-
-static int32_t mmBuildOptionFromReq(SMnodeMgmt *pMgmt, SMnodeOpt *pOption, SDCreateMnodeReq *pCreate) {
+ pOption->standby = false;
pOption->msgCb = pMgmt->msgCb;
- pOption->replica = pCreate->replica;
- pOption->selfIndex = -1;
- for (int32_t i = 0; i < pCreate->replica; ++i) {
- SReplica *pReplica = &pOption->replicas[i];
- pReplica->id = pCreate->replicas[i].id;
- pReplica->port = pCreate->replicas[i].port;
- memcpy(pReplica->fqdn, pCreate->replicas[i].fqdn, TSDB_FQDN_LEN);
- if (pReplica->id == pMgmt->pData->dnodeId) {
- pOption->selfIndex = i;
+ pOption->dnodeId = pMgmt->pData->dnodeId;
+
+ if (pMgmt->replica > 0) {
+ pOption->standby = true;
+ pOption->replica = 1;
+ pOption->selfIndex = 0;
+ SReplica *pReplica = &pOption->replicas[0];
+ for (int32_t i = 0; i < pMgmt->replica; ++i) {
+ if (pMgmt->replicas[i].id != pMgmt->pData->dnodeId) continue;
+ pReplica->id = pMgmt->replicas[i].id;
+ pReplica->port = pMgmt->replicas[i].port;
+ memcpy(pReplica->fqdn, pMgmt->replicas[i].fqdn, TSDB_FQDN_LEN);
}
}
-
- if (pOption->selfIndex == -1) {
- dError("failed to build mnode options since %s", terrstr());
- return -1;
- }
- pOption->deploy = true;
-
- pMgmt->selfIndex = pOption->selfIndex;
- pMgmt->replica = pOption->replica;
- memcpy(&pMgmt->replicas, pOption->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
- return 0;
-}
-
-int32_t mmAlter(SMnodeMgmt *pMgmt, SDAlterMnodeReq *pMsg) {
- SMnodeOpt option = {0};
- if (mmBuildOptionFromReq(pMgmt, &option, pMsg) != 0) {
- return -1;
- }
-
- if (mndAlter(pMgmt->pMnode, &option) != 0) {
- return -1;
- }
-
- bool deployed = true;
- if (mmWriteFile(pMgmt, pMsg, deployed) != 0) {
- dError("failed to write mnode file since %s", terrstr());
- return -1;
- }
-
- return 0;
}
static void mmClose(SMnodeMgmt *pMgmt) {
@@ -177,7 +144,8 @@ static int32_t mmOpen(SMgmtInputOpt *pInput, SMgmtOutputOpt *pOutput) {
}
tmsgReportStartup("mnode-worker", "initialized");
- if (!deployed) {
+ if (!deployed || pMgmt->replica > 0) {
+ pMgmt->replica = 0;
deployed = true;
if (mmWriteFile(pMgmt, NULL, deployed) != 0) {
dError("failed to write mnode file since %s", terrstr());
diff --git a/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c b/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c
index 59d0c491a1..85120102bc 100644
--- a/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c
+++ b/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c
@@ -32,9 +32,6 @@ static void mmProcessQueue(SQueueInfo *pInfo, SRpcMsg *pMsg) {
dTrace("msg:%p, get from mnode queue", pMsg);
switch (pMsg->msgType) {
- case TDMT_DND_ALTER_MNODE:
- code = mmProcessAlterReq(pMgmt, pMsg);
- break;
case TDMT_MON_MM_INFO:
code = mmProcessGetMonitorInfoReq(pMgmt, pMsg);
break;
@@ -61,6 +58,11 @@ static void mmProcessSyncQueue(SQueueInfo *pInfo, SRpcMsg *pMsg) {
dTrace("msg:%p, get from mnode-sync queue", pMsg);
pMsg->info.node = pMgmt->pMnode;
+
+ SMsgHead *pHead = pMsg->pCont;
+ pHead->contLen = ntohl(pHead->contLen);
+ pHead->vgId = ntohl(pHead->vgId);
+
int32_t code = mndProcessSyncMsg(pMsg);
dTrace("msg:%p, is freed, code:0x%x", pMsg, code);
diff --git a/source/dnode/mgmt/node_util/inc/dmUtil.h b/source/dnode/mgmt/node_util/inc/dmUtil.h
index 4946669678..0d921c2e8b 100644
--- a/source/dnode/mgmt/node_util/inc/dmUtil.h
+++ b/source/dnode/mgmt/node_util/inc/dmUtil.h
@@ -90,8 +90,8 @@ typedef enum {
typedef int32_t (*ProcessCreateNodeFp)(EDndNodeType ntype, SRpcMsg *pMsg);
typedef int32_t (*ProcessDropNodeFp)(EDndNodeType ntype, SRpcMsg *pMsg);
typedef void (*SendMonitorReportFp)();
-typedef void (*GetVnodeLoadsFp)();
-typedef void (*GetMnodeLoadsFp)();
+typedef void (*GetVnodeLoadsFp)(SMonVloadInfo *pInfo);
+typedef void (*GetMnodeLoadsFp)(SMonMloadInfo *pInfo);
typedef struct {
int32_t dnodeId;
diff --git a/source/dnode/mnode/impl/inc/mndDef.h b/source/dnode/mnode/impl/inc/mndDef.h
index 81f4c5ed1e..26cfaa62ff 100644
--- a/source/dnode/mnode/impl/inc/mndDef.h
+++ b/source/dnode/mnode/impl/inc/mndDef.h
@@ -67,30 +67,33 @@ typedef enum {
typedef enum {
TRN_TYPE_BASIC_SCOPE = 1000,
- TRN_TYPE_CREATE_USER = 1001,
- TRN_TYPE_ALTER_USER = 1002,
- TRN_TYPE_DROP_USER = 1003,
- TRN_TYPE_CREATE_FUNC = 1004,
- TRN_TYPE_DROP_FUNC = 1005,
+ TRN_TYPE_CREATE_ACCT = 1001,
+ TRN_TYPE_CREATE_CLUSTER = 1002,
+ TRN_TYPE_CREATE_USER = 1003,
+ TRN_TYPE_ALTER_USER = 1004,
+ TRN_TYPE_DROP_USER = 1005,
+ TRN_TYPE_CREATE_FUNC = 1006,
+ TRN_TYPE_DROP_FUNC = 1007,
- TRN_TYPE_CREATE_SNODE = 1006,
- TRN_TYPE_DROP_SNODE = 1007,
- TRN_TYPE_CREATE_QNODE = 1008,
- TRN_TYPE_DROP_QNODE = 1009,
- TRN_TYPE_CREATE_BNODE = 1010,
- TRN_TYPE_DROP_BNODE = 1011,
- TRN_TYPE_CREATE_MNODE = 1012,
- TRN_TYPE_DROP_MNODE = 1013,
- TRN_TYPE_CREATE_TOPIC = 1014,
- TRN_TYPE_DROP_TOPIC = 1015,
- TRN_TYPE_SUBSCRIBE = 1016,
- TRN_TYPE_REBALANCE = 1017,
- TRN_TYPE_COMMIT_OFFSET = 1018,
- TRN_TYPE_CREATE_STREAM = 1019,
- TRN_TYPE_DROP_STREAM = 1020,
- TRN_TYPE_ALTER_STREAM = 1021,
- TRN_TYPE_CONSUMER_LOST = 1022,
- TRN_TYPE_CONSUMER_RECOVER = 1023,
+ TRN_TYPE_CREATE_SNODE = 1010,
+ TRN_TYPE_DROP_SNODE = 1011,
+ TRN_TYPE_CREATE_QNODE = 1012,
+  TRN_TYPE_DROP_QNODE = 1013,
+ TRN_TYPE_CREATE_BNODE = 1014,
+ TRN_TYPE_DROP_BNODE = 1015,
+ TRN_TYPE_CREATE_MNODE = 1016,
+ TRN_TYPE_DROP_MNODE = 1017,
+
+ TRN_TYPE_CREATE_TOPIC = 1020,
+ TRN_TYPE_DROP_TOPIC = 1021,
+ TRN_TYPE_SUBSCRIBE = 1022,
+ TRN_TYPE_REBALANCE = 1023,
+ TRN_TYPE_COMMIT_OFFSET = 1024,
+ TRN_TYPE_CREATE_STREAM = 1025,
+ TRN_TYPE_DROP_STREAM = 1026,
+ TRN_TYPE_ALTER_STREAM = 1027,
+ TRN_TYPE_CONSUMER_LOST = 1028,
+ TRN_TYPE_CONSUMER_RECOVER = 1029,
TRN_TYPE_BASIC_SCOPE_END,
TRN_TYPE_GLOBAL_SCOPE = 2000,
@@ -196,9 +199,8 @@ typedef struct {
int32_t id;
int64_t createdTime;
int64_t updateTime;
- ESyncState role;
- int32_t roleTerm;
- int64_t roleTime;
+ ESyncState state;
+ int64_t stateStartTime;
SDnodeObj* pDnode;
} SMnodeObj;
diff --git a/source/dnode/mnode/impl/inc/mndInt.h b/source/dnode/mnode/impl/inc/mndInt.h
index 5a1653b937..189ea82bfc 100644
--- a/source/dnode/mnode/impl/inc/mndInt.h
+++ b/source/dnode/mnode/impl/inc/mndInt.h
@@ -76,11 +76,11 @@ typedef struct {
typedef struct {
SWal *pWal;
- int32_t errCode;
- bool restored;
sem_t syncSem;
int64_t sync;
- ESyncState state;
+ bool standby;
+ bool restored;
+ int32_t errCode;
} SSyncMgmt;
typedef struct {
@@ -89,9 +89,10 @@ typedef struct {
} SGrantInfo;
typedef struct SMnode {
- int32_t selfId;
+ int32_t selfDnodeId;
int64_t clusterId;
TdThread thread;
+ bool deploy;
bool stopped;
int8_t replica;
int8_t selfIndex;
diff --git a/source/dnode/mnode/impl/inc/mndMnode.h b/source/dnode/mnode/impl/inc/mndMnode.h
index a5cdfa1061..fd62b3ce75 100644
--- a/source/dnode/mnode/impl/inc/mndMnode.h
+++ b/source/dnode/mnode/impl/inc/mndMnode.h
@@ -28,7 +28,6 @@ SMnodeObj *mndAcquireMnode(SMnode *pMnode, int32_t mnodeId);
void mndReleaseMnode(SMnode *pMnode, SMnodeObj *pObj);
bool mndIsMnode(SMnode *pMnode, int32_t dnodeId);
void mndGetMnodeEpSet(SMnode *pMnode, SEpSet *pEpSet);
-void mndUpdateMnodeRole(SMnode *pMnode);
#ifdef __cplusplus
}
diff --git a/source/dnode/mnode/impl/src/mndAcct.c b/source/dnode/mnode/impl/src/mndAcct.c
index 52b9ac62e6..a4fde4b706 100644
--- a/source/dnode/mnode/impl/src/mndAcct.c
+++ b/source/dnode/mnode/impl/src/mndAcct.c
@@ -16,6 +16,7 @@
#define _DEFAULT_SOURCE
#include "mndAcct.h"
#include "mndShow.h"
+#include "mndTrans.h"
#define ACCT_VER_NUMBER 1
#define ACCT_RESERVE_SIZE 128
@@ -31,14 +32,16 @@ static int32_t mndProcessAlterAcctReq(SRpcMsg *pReq);
static int32_t mndProcessDropAcctReq(SRpcMsg *pReq);
int32_t mndInitAcct(SMnode *pMnode) {
- SSdbTable table = {.sdbType = SDB_ACCT,
- .keyType = SDB_KEY_BINARY,
- .deployFp = mndCreateDefaultAcct,
- .encodeFp = (SdbEncodeFp)mndAcctActionEncode,
- .decodeFp = (SdbDecodeFp)mndAcctActionDecode,
- .insertFp = (SdbInsertFp)mndAcctActionInsert,
- .updateFp = (SdbUpdateFp)mndAcctActionUpdate,
- .deleteFp = (SdbDeleteFp)mndAcctActionDelete};
+ SSdbTable table = {
+ .sdbType = SDB_ACCT,
+ .keyType = SDB_KEY_BINARY,
+ .deployFp = mndCreateDefaultAcct,
+ .encodeFp = (SdbEncodeFp)mndAcctActionEncode,
+ .decodeFp = (SdbDecodeFp)mndAcctActionDecode,
+ .insertFp = (SdbInsertFp)mndAcctActionInsert,
+ .updateFp = (SdbUpdateFp)mndAcctActionUpdate,
+ .deleteFp = (SdbDeleteFp)mndAcctActionDelete,
+ };
mndSetMsgHandle(pMnode, TDMT_MND_CREATE_ACCT, mndProcessCreateAcctReq);
mndSetMsgHandle(pMnode, TDMT_MND_ALTER_ACCT, mndProcessAlterAcctReq);
@@ -56,25 +59,52 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) {
acctObj.updateTime = acctObj.createdTime;
acctObj.acctId = 1;
acctObj.status = 0;
- acctObj.cfg = (SAcctCfg){.maxUsers = INT32_MAX,
- .maxDbs = INT32_MAX,
- .maxStbs = INT32_MAX,
- .maxTbs = INT32_MAX,
- .maxTimeSeries = INT32_MAX,
- .maxStreams = INT32_MAX,
- .maxFuncs = INT32_MAX,
- .maxConsumers = INT32_MAX,
- .maxConns = INT32_MAX,
- .maxTopics = INT32_MAX,
- .maxStorage = INT64_MAX,
- .accessState = TSDB_VN_ALL_ACCCESS};
+ acctObj.cfg = (SAcctCfg){
+ .maxUsers = INT32_MAX,
+ .maxDbs = INT32_MAX,
+ .maxStbs = INT32_MAX,
+ .maxTbs = INT32_MAX,
+ .maxTimeSeries = INT32_MAX,
+ .maxStreams = INT32_MAX,
+ .maxFuncs = INT32_MAX,
+ .maxConsumers = INT32_MAX,
+ .maxConns = INT32_MAX,
+ .maxTopics = INT32_MAX,
+ .maxStorage = INT64_MAX,
+ .accessState = TSDB_VN_ALL_ACCCESS,
+ };
SSdbRaw *pRaw = mndAcctActionEncode(&acctObj);
if (pRaw == NULL) return -1;
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
mDebug("acct:%s, will be created while deploy sdb, raw:%p", acctObj.acct, pRaw);
+#if 0
return sdbWrite(pMnode->pSdb, pRaw);
+#else
+ STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_ACCT, NULL);
+ if (pTrans == NULL) {
+ mError("acct:%s, failed to create since %s", acctObj.acct, terrstr());
+ return -1;
+ }
+ mDebug("trans:%d, used to create acct:%s", pTrans->id, acctObj.acct);
+
+ if (mndTransAppendCommitlog(pTrans, pRaw) != 0) {
+ mError("trans:%d, failed to commit redo log since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+ sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+
+ if (mndTransPrepare(pMnode, pTrans) != 0) {
+ mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+
+ mndTransDrop(pTrans);
+ return 0;
+#endif
}
static SSdbRaw *mndAcctActionEncode(SAcctObj *pAcct) {
diff --git a/source/dnode/mnode/impl/src/mndCluster.c b/source/dnode/mnode/impl/src/mndCluster.c
index f6f6813b97..6266f22f39 100644
--- a/source/dnode/mnode/impl/src/mndCluster.c
+++ b/source/dnode/mnode/impl/src/mndCluster.c
@@ -16,6 +16,7 @@
#define _DEFAULT_SOURCE
#include "mndCluster.h"
#include "mndShow.h"
+#include "mndTrans.h"
#define CLUSTER_VER_NUMBE 1
#define CLUSTER_RESERVE_SIZE 64
@@ -177,7 +178,32 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) {
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
mDebug("cluster:%" PRId64 ", will be created while deploy sdb, raw:%p", clusterObj.id, pRaw);
+#if 0
return sdbWrite(pMnode->pSdb, pRaw);
+#else
+ STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_CLUSTER, NULL);
+ if (pTrans == NULL) {
+ mError("cluster:%" PRId64 ", failed to create since %s", clusterObj.id, terrstr());
+ return -1;
+ }
+ mDebug("trans:%d, used to create cluster:%" PRId64, pTrans->id, clusterObj.id);
+
+ if (mndTransAppendCommitlog(pTrans, pRaw) != 0) {
+ mError("trans:%d, failed to commit redo log since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+ sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+
+ if (mndTransPrepare(pMnode, pTrans) != 0) {
+ mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+
+ mndTransDrop(pTrans);
+ return 0;
+#endif
}
static int32_t mndRetrieveClusters(SRpcMsg *pMsg, SShowObj *pShow, SSDataBlock *pBlock, int32_t rows) {
diff --git a/source/dnode/mnode/impl/src/mndDnode.c b/source/dnode/mnode/impl/src/mndDnode.c
index 0cac7fd86b..047562ec02 100644
--- a/source/dnode/mnode/impl/src/mndDnode.c
+++ b/source/dnode/mnode/impl/src/mndDnode.c
@@ -58,14 +58,16 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
static void mndCancelGetNextDnode(SMnode *pMnode, void *pIter);
int32_t mndInitDnode(SMnode *pMnode) {
- SSdbTable table = {.sdbType = SDB_DNODE,
- .keyType = SDB_KEY_INT32,
- .deployFp = (SdbDeployFp)mndCreateDefaultDnode,
- .encodeFp = (SdbEncodeFp)mndDnodeActionEncode,
- .decodeFp = (SdbDecodeFp)mndDnodeActionDecode,
- .insertFp = (SdbInsertFp)mndDnodeActionInsert,
- .updateFp = (SdbUpdateFp)mndDnodeActionUpdate,
- .deleteFp = (SdbDeleteFp)mndDnodeActionDelete};
+ SSdbTable table = {
+ .sdbType = SDB_DNODE,
+ .keyType = SDB_KEY_INT32,
+ .deployFp = (SdbDeployFp)mndCreateDefaultDnode,
+ .encodeFp = (SdbEncodeFp)mndDnodeActionEncode,
+ .decodeFp = (SdbDecodeFp)mndDnodeActionDecode,
+ .insertFp = (SdbInsertFp)mndDnodeActionInsert,
+ .updateFp = (SdbUpdateFp)mndDnodeActionUpdate,
+ .deleteFp = (SdbDeleteFp)mndDnodeActionDelete,
+ };
mndSetMsgHandle(pMnode, TDMT_MND_CREATE_DNODE, mndProcessCreateDnodeReq);
mndSetMsgHandle(pMnode, TDMT_MND_DROP_DNODE, mndProcessDropDnodeReq);
@@ -90,13 +92,40 @@ static int32_t mndCreateDefaultDnode(SMnode *pMnode) {
dnodeObj.updateTime = dnodeObj.createdTime;
dnodeObj.port = pMnode->replicas[0].port;
memcpy(&dnodeObj.fqdn, pMnode->replicas[0].fqdn, TSDB_FQDN_LEN);
+ snprintf(dnodeObj.ep, TSDB_EP_LEN, "%s:%u", dnodeObj.fqdn, dnodeObj.port);
SSdbRaw *pRaw = mndDnodeActionEncode(&dnodeObj);
if (pRaw == NULL) return -1;
if (sdbSetRawStatus(pRaw, SDB_STATUS_READY) != 0) return -1;
mDebug("dnode:%d, will be created while deploy sdb, raw:%p", dnodeObj.id, pRaw);
+
+#if 0
return sdbWrite(pMnode->pSdb, pRaw);
+#else
+ STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_DNODE, NULL);
+ if (pTrans == NULL) {
+ mError("dnode:%s, failed to create since %s", dnodeObj.ep, terrstr());
+ return -1;
+ }
+ mDebug("trans:%d, used to create dnode:%s", pTrans->id, dnodeObj.ep);
+
+ if (mndTransAppendCommitlog(pTrans, pRaw) != 0) {
+ mError("trans:%d, failed to append commit log since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+ sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+
+ if (mndTransPrepare(pMnode, pTrans) != 0) {
+ mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+
+ mndTransDrop(pTrans);
+ return 0;
+#endif
}
static SSdbRaw *mndDnodeActionEncode(SDnodeObj *pDnode) {
@@ -350,6 +379,15 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
mndReleaseVgroup(pMnode, pVgroup);
}
+ SMnodeObj *pObj = mndAcquireMnode(pMnode, pDnode->id);
+ if (pObj != NULL) {
+ if (pObj->state != statusReq.mload.syncState) {
+ pObj->state = statusReq.mload.syncState;
+ pObj->stateStartTime = taosGetTimestampMs();
+ }
+ mndReleaseMnode(pMnode, pObj);
+ }
+
int64_t curMs = taosGetTimestampMs();
bool online = mndIsDnodeOnline(pMnode, pDnode, curMs);
bool dnodeChanged = (statusReq.dnodeVer != sdbGetTableVer(pMnode->pSdb, SDB_DNODE));
@@ -701,7 +739,7 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
colDataAppend(pColInfo, numOfRows, (const char *)&pDnode->id, false);
char buf[tListLen(pDnode->ep) + VARSTR_HEADER_SIZE] = {0};
- STR_WITH_MAXSIZE_TO_VARSTR(buf, pDnode->ep, pShow->pMeta->pSchemas[cols].bytes);
+ STR_WITH_MAXSIZE_TO_VARSTR(buf, pDnode->ep, pShow->pMeta->pSchemas[cols].bytes);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, buf, false);
diff --git a/source/dnode/mnode/impl/src/mndMnode.c b/source/dnode/mnode/impl/src/mndMnode.c
index 7f86eb8b32..a47221ea39 100644
--- a/source/dnode/mnode/impl/src/mndMnode.c
+++ b/source/dnode/mnode/impl/src/mndMnode.c
@@ -31,6 +31,7 @@ static int32_t mndMnodeActionInsert(SSdb *pSdb, SMnodeObj *pObj);
static int32_t mndMnodeActionDelete(SSdb *pSdb, SMnodeObj *pObj);
static int32_t mndMnodeActionUpdate(SSdb *pSdb, SMnodeObj *pOld, SMnodeObj *pNew);
static int32_t mndProcessCreateMnodeReq(SRpcMsg *pReq);
+static int32_t mndProcessAlterMnodeReq(SRpcMsg *pReq);
static int32_t mndProcessDropMnodeReq(SRpcMsg *pReq);
static int32_t mndProcessCreateMnodeRsp(SRpcMsg *pRsp);
static int32_t mndProcessAlterMnodeRsp(SRpcMsg *pRsp);
@@ -39,16 +40,19 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *p
static void mndCancelGetNextMnode(SMnode *pMnode, void *pIter);
int32_t mndInitMnode(SMnode *pMnode) {
- SSdbTable table = {.sdbType = SDB_MNODE,
- .keyType = SDB_KEY_INT32,
- .deployFp = (SdbDeployFp)mndCreateDefaultMnode,
- .encodeFp = (SdbEncodeFp)mndMnodeActionEncode,
- .decodeFp = (SdbDecodeFp)mndMnodeActionDecode,
- .insertFp = (SdbInsertFp)mndMnodeActionInsert,
- .updateFp = (SdbUpdateFp)mndMnodeActionUpdate,
- .deleteFp = (SdbDeleteFp)mndMnodeActionDelete};
+ SSdbTable table = {
+ .sdbType = SDB_MNODE,
+ .keyType = SDB_KEY_INT32,
+ .deployFp = (SdbDeployFp)mndCreateDefaultMnode,
+ .encodeFp = (SdbEncodeFp)mndMnodeActionEncode,
+ .decodeFp = (SdbDecodeFp)mndMnodeActionDecode,
+ .insertFp = (SdbInsertFp)mndMnodeActionInsert,
+ .updateFp = (SdbUpdateFp)mndMnodeActionUpdate,
+ .deleteFp = (SdbDeleteFp)mndMnodeActionDelete,
+ };
mndSetMsgHandle(pMnode, TDMT_MND_CREATE_MNODE, mndProcessCreateMnodeReq);
+ mndSetMsgHandle(pMnode, TDMT_DND_ALTER_MNODE, mndProcessAlterMnodeReq);
mndSetMsgHandle(pMnode, TDMT_MND_DROP_MNODE, mndProcessDropMnodeReq);
mndSetMsgHandle(pMnode, TDMT_DND_CREATE_MNODE_RSP, mndProcessCreateMnodeRsp);
mndSetMsgHandle(pMnode, TDMT_DND_ALTER_MNODE_RSP, mndProcessAlterMnodeRsp);
@@ -75,28 +79,6 @@ void mndReleaseMnode(SMnode *pMnode, SMnodeObj *pObj) {
sdbRelease(pMnode->pSdb, pObj);
}
-void mndUpdateMnodeRole(SMnode *pMnode) {
- SSdb *pSdb = pMnode->pSdb;
- void *pIter = NULL;
- while (1) {
- SMnodeObj *pObj = NULL;
- pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pObj);
- if (pIter == NULL) break;
-
- ESyncState lastRole = pObj->role;
- if (pObj->id == 1) {
- pObj->role = TAOS_SYNC_STATE_LEADER;
- } else {
- pObj->role = TAOS_SYNC_STATE_CANDIDATE;
- }
- if (pObj->role != lastRole) {
- pObj->roleTime = taosGetTimestampMs();
- }
-
- sdbRelease(pSdb, pObj);
- }
-}
-
static int32_t mndCreateDefaultMnode(SMnode *pMnode) {
SMnodeObj mnodeObj = {0};
mnodeObj.id = 1;
@@ -108,7 +90,33 @@ static int32_t mndCreateDefaultMnode(SMnode *pMnode) {
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
mDebug("mnode:%d, will be created while deploy sdb, raw:%p", mnodeObj.id, pRaw);
+
+#if 0
return sdbWrite(pMnode->pSdb, pRaw);
+#else
+  STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_MNODE, NULL);
+ if (pTrans == NULL) {
+ mError("mnode:%d, failed to create since %s", mnodeObj.id, terrstr());
+ return -1;
+ }
+ mDebug("trans:%d, used to create mnode:%d", pTrans->id, mnodeObj.id);
+
+ if (mndTransAppendCommitlog(pTrans, pRaw) != 0) {
+ mError("trans:%d, failed to append commit log since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+ sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+
+ if (mndTransPrepare(pMnode, pTrans) != 0) {
+ mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+
+ mndTransDrop(pTrans);
+ return 0;
+#endif
}
static SSdbRaw *mndMnodeActionEncode(SMnodeObj *pObj) {
@@ -181,7 +189,7 @@ static int32_t mndMnodeActionInsert(SSdb *pSdb, SMnodeObj *pObj) {
return -1;
}
- pObj->role = TAOS_SYNC_STATE_FOLLOWER;
+ pObj->state = TAOS_SYNC_STATE_ERROR;
return 0;
}
@@ -225,7 +233,7 @@ void mndGetMnodeEpSet(SMnode *pMnode, SEpSet *pEpSet) {
if (pObj->pDnode == NULL) {
mError("mnode:%d, no corresponding dnode exists", pObj->id);
} else {
- if (pObj->role == TAOS_SYNC_STATE_LEADER) {
+ if (pObj->state == TAOS_SYNC_STATE_LEADER) {
pEpSet->inUse = pEpSet->numOfEps;
}
addEpIntoEpSet(pEpSet, pObj->pDnode->fqdn, pObj->pDnode->port);
@@ -553,7 +561,7 @@ static int32_t mndProcessDropMnodeReq(SRpcMsg *pReq) {
goto _OVER;
}
- if (pMnode->selfId == dropReq.dnodeId) {
+ if (pMnode->selfDnodeId == dropReq.dnodeId) {
terrno = TSDB_CODE_MND_CANT_DROP_MASTER;
goto _OVER;
}
@@ -624,16 +632,18 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, b1, false);
- const char *roles = syncStr(pObj->role);
- char *b2 = taosMemoryCalloc(1, 12 + VARSTR_HEADER_SIZE);
+ const char *roles = NULL;
+ if (pObj->id == pMnode->selfDnodeId) {
+ roles = syncStr(TAOS_SYNC_STATE_LEADER);
+ } else {
+ roles = syncStr(pObj->state);
+ }
+ char *b2 = taosMemoryCalloc(1, 12 + VARSTR_HEADER_SIZE);
STR_WITH_MAXSIZE_TO_VARSTR(b2, roles, pShow->pMeta->pSchemas[cols].bytes);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, (const char *)b2, false);
- pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- colDataAppend(pColInfo, numOfRows, (const char *)&pObj->roleTime, false);
-
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, (const char *)&pObj->createdTime, false);
@@ -650,3 +660,52 @@ static void mndCancelGetNextMnode(SMnode *pMnode, void *pIter) {
SSdb *pSdb = pMnode->pSdb;
sdbCancelFetch(pSdb, pIter);
}
+
+static int32_t mndProcessAlterMnodeReq(SRpcMsg *pReq) {
+ SMnode *pMnode = pReq->info.node;
+ SDAlterMnodeReq alterReq = {0};
+
+ if (tDeserializeSDCreateMnodeReq(pReq->pCont, pReq->contLen, &alterReq) != 0) {
+ terrno = TSDB_CODE_INVALID_MSG;
+ return -1;
+ }
+
+ if (alterReq.dnodeId != pMnode->selfDnodeId) {
+ terrno = TSDB_CODE_INVALID_OPTION;
+ mError("failed to alter mnode since %s, input:%d cur:%d", terrstr(), alterReq.dnodeId, pMnode->selfDnodeId);
+ return -1;
+ }
+
+ SSyncCfg cfg = {.replicaNum = alterReq.replica, .myIndex = -1};
+ for (int32_t i = 0; i < alterReq.replica; ++i) {
+ SNodeInfo *pNode = &cfg.nodeInfo[i];
+ tstrncpy(pNode->nodeFqdn, alterReq.replicas[i].fqdn, sizeof(pNode->nodeFqdn));
+ pNode->nodePort = alterReq.replicas[i].port;
+ if (alterReq.replicas[i].id == pMnode->selfDnodeId) cfg.myIndex = i;
+ }
+
+ if (cfg.myIndex == -1) {
+ mError("failed to alter mnode since myindex is -1");
+ return -1;
+ } else {
+ mInfo("start to alter mnode sync, replica:%d myindex:%d", cfg.replicaNum, cfg.myIndex);
+ for (int32_t i = 0; i < alterReq.replica; ++i) {
+ SNodeInfo *pNode = &cfg.nodeInfo[i];
+ mInfo("index:%d, fqdn:%s port:%d", i, pNode->nodeFqdn, pNode->nodePort);
+ }
+ }
+
+ SSyncMgmt *pMgmt = &pMnode->syncMgmt;
+ pMgmt->standby = 0;
+ int32_t code = syncReconfig(pMgmt->sync, &cfg);
+ if (code != 0) {
+ mError("failed to alter mnode sync since %s", terrstr());
+ return code;
+ } else {
+ pMgmt->errCode = 0;
+ tsem_wait(&pMgmt->syncSem);
+ mInfo("alter mnode sync result:%s", tstrerror(pMgmt->errCode));
+ terrno = pMgmt->errCode;
+ return pMgmt->errCode;
+ }
+}
diff --git a/source/dnode/mnode/impl/src/mndProfile.c b/source/dnode/mnode/impl/src/mndProfile.c
index b9ac82d890..c9c52af0fe 100644
--- a/source/dnode/mnode/impl/src/mndProfile.c
+++ b/source/dnode/mnode/impl/src/mndProfile.c
@@ -379,7 +379,7 @@ static int32_t mndProcessQueryHeartBeat(SMnode *pMnode, SRpcMsg *pMsg, SClientHb
}
rspBasic->connId = pConn->id;
- rspBasic->totalDnodes = 1; // TODO
+ rspBasic->totalDnodes = mndGetDnodeSize(pMnode);
rspBasic->onlineDnodes = 1; // TODO
mndGetMnodeEpSet(pMnode, &rspBasic->epSet);
mndReleaseConn(pMnode, pConn);
diff --git a/source/dnode/mnode/impl/src/mndSync.c b/source/dnode/mnode/impl/src/mndSync.c
index a4e6cfd5ca..ca25133c96 100644
--- a/source/dnode/mnode/impl/src/mndSync.c
+++ b/source/dnode/mnode/impl/src/mndSync.c
@@ -17,22 +17,26 @@
#include "mndSync.h"
#include "mndTrans.h"
-int32_t mndSyncEqMsg(const SMsgCb *msgcb, SRpcMsg *pMsg) { return tmsgPutToQueue(msgcb, SYNC_QUEUE, pMsg); }
+int32_t mndSyncEqMsg(const SMsgCb *msgcb, SRpcMsg *pMsg) {
+ SMsgHead *pHead = pMsg->pCont;
+ pHead->contLen = htonl(pHead->contLen);
+ pHead->vgId = htonl(pHead->vgId);
+
+ return tmsgPutToQueue(msgcb, SYNC_QUEUE, pMsg);
+}
int32_t mndSyncSendMsg(const SEpSet *pEpSet, SRpcMsg *pMsg) { return tmsgSendReq(pEpSet, pMsg); }
void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
- SMnode *pMnode = pFsm->data;
- SSdb *pSdb = pMnode->pSdb;
- SSyncMgmt *pMgmt = &pMnode->syncMgmt;
- SSdbRaw *pRaw = pMsg->pCont;
+ SMnode *pMnode = pFsm->data;
+ SSdbRaw *pRaw = pMsg->pCont;
mTrace("raw:%p, apply to sdb, ver:%" PRId64 " role:%s", pRaw, cbMeta.index, syncStr(cbMeta.state));
- sdbWriteWithoutFree(pSdb, pRaw);
- sdbSetApplyIndex(pSdb, cbMeta.index);
- sdbSetApplyTerm(pSdb, cbMeta.term);
+ sdbWriteWithoutFree(pMnode->pSdb, pRaw);
+ sdbSetApplyIndex(pMnode->pSdb, cbMeta.index);
+ sdbSetApplyTerm(pMnode->pSdb, cbMeta.term);
if (cbMeta.state == TAOS_SYNC_STATE_LEADER) {
- tsem_post(&pMgmt->syncSem);
+ tsem_post(&pMnode->syncMgmt.syncSem);
}
}
@@ -45,19 +49,54 @@ int32_t mndSyncGetSnapshot(struct SSyncFSM *pFsm, SSnapshot *pSnapshot) {
void mndRestoreFinish(struct SSyncFSM *pFsm) {
SMnode *pMnode = pFsm->data;
- mndTransPullup(pMnode);
- pMnode->syncMgmt.restored = true;
+ if (!pMnode->deploy) {
+ mndTransPullup(pMnode);
+ pMnode->syncMgmt.restored = true;
+ }
+}
+
+int32_t mndSnapshotRead(struct SSyncFSM* pFsm, const SSnapshot* pSnapshot, void** ppIter, char** ppBuf, int32_t* len) {
+ /*
+ SMnode *pMnode = pFsm->data;
+ SSdbIter *pIter;
+ if (iter == NULL) {
+ pIter = sdbIterInit(pMnode->sdb)
+ } else {
+ pIter = iter;
+ }
+ */
+
+ return 0;
+}
+
+int32_t mndSnapshotApply(struct SSyncFSM* pFsm, const SSnapshot* pSnapshot, char* pBuf, int32_t len) {
+ SMnode *pMnode = pFsm->data;
+ sdbWrite(pMnode->pSdb, (SSdbRaw*)pBuf);
+ return 0;
+}
+
+void mndReConfig(struct SSyncFSM *pFsm, SSyncCfg newCfg, SReConfigCbMeta cbMeta) {
+ mInfo("mndReConfig cbMeta.code:%d, cbMeta.currentTerm:%" PRId64 ", cbMeta.term:%" PRId64 ", cbMeta.index:%" PRId64,
+ cbMeta.code, cbMeta.currentTerm, cbMeta.term, cbMeta.index);
+ SMnode *pMnode = pFsm->data;
+ pMnode->syncMgmt.errCode = cbMeta.code;
+ tsem_post(&pMnode->syncMgmt.syncSem);
}
SSyncFSM *mndSyncMakeFsm(SMnode *pMnode) {
SSyncFSM *pFsm = taosMemoryCalloc(1, sizeof(SSyncFSM));
pFsm->data = pMnode;
+
pFsm->FpCommitCb = mndSyncCommitMsg;
pFsm->FpPreCommitCb = NULL;
pFsm->FpRollBackCb = NULL;
+
pFsm->FpGetSnapshot = mndSyncGetSnapshot;
- pFsm->FpRestoreFinish = mndRestoreFinish;
- pFsm->FpRestoreSnapshot = NULL;
+ pFsm->FpRestoreFinishCb = mndRestoreFinish;
+ pFsm->FpSnapshotRead = mndSnapshotRead;
+ pFsm->FpSnapshotApply = mndSnapshotApply;
+ pFsm->FpReConfigCb = mndReConfig;
+
return pFsm;
}
@@ -90,10 +129,13 @@ int32_t mndInitSync(SMnode *pMnode) {
SSyncCfg *pCfg = &syncInfo.syncCfg;
pCfg->replicaNum = pMnode->replica;
pCfg->myIndex = pMnode->selfIndex;
+ mInfo("start to open mnode sync, replica:%d myindex:%d standby:%d", pCfg->replicaNum, pCfg->myIndex,
+ pMgmt->standby);
for (int32_t i = 0; i < pMnode->replica; ++i) {
SNodeInfo *pNode = &pCfg->nodeInfo[i];
tstrncpy(pNode->nodeFqdn, pMnode->replicas[i].fqdn, sizeof(pNode->nodeFqdn));
pNode->nodePort = pMnode->replicas[i].port;
+ mInfo("index:%d, fqdn:%s port:%d", i, pNode->nodeFqdn, pNode->nodePort);
}
tsem_init(&pMgmt->syncSem, 0, 0);
@@ -149,7 +191,11 @@ int32_t mndSyncPropose(SMnode *pMnode, SSdbRaw *pRaw) {
void mndSyncStart(SMnode *pMnode) {
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
syncSetMsgCb(pMgmt->sync, &pMnode->msgCb);
- syncStart(pMgmt->sync);
+ if (pMgmt->standby) {
+ syncStartStandBy(pMgmt->sync);
+ } else {
+ syncStart(pMgmt->sync);
+ }
mDebug("sync:%" PRId64 " is started", pMgmt->sync);
}
@@ -157,7 +203,6 @@ void mndSyncStop(SMnode *pMnode) {}
bool mndIsMaster(SMnode *pMnode) {
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
- pMgmt->state = syncGetMyRole(pMgmt->sync);
-
- return (pMgmt->state == TAOS_SYNC_STATE_LEADER) && (pMnode->syncMgmt.restored);
+ ESyncState state = syncGetMyRole(pMgmt->sync);
+ return (state == TAOS_SYNC_STATE_LEADER) && (pMnode->syncMgmt.restored);
}
diff --git a/source/dnode/mnode/impl/src/mndTrans.c b/source/dnode/mnode/impl/src/mndTrans.c
index c6fcc7903f..444c4bb619 100644
--- a/source/dnode/mnode/impl/src/mndTrans.c
+++ b/source/dnode/mnode/impl/src/mndTrans.c
@@ -563,7 +563,7 @@ STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnType type, const S
pTrans->policy = policy;
pTrans->type = type;
pTrans->createdTime = taosGetTimestampMs();
- pTrans->rpcInfo = pReq->info;
+ if (pReq != NULL) pTrans->rpcInfo = pReq->info;
pTrans->redoLogs = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(void *));
pTrans->undoLogs = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(void *));
pTrans->commitLogs = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(void *));
@@ -1080,7 +1080,7 @@ static bool mndTransPerformRedoLogStage(SMnode *pMnode, STrans *pTrans) {
}
static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
- if (!mndIsMaster(pMnode)) return false;
+ if (!pMnode->deploy && !mndIsMaster(pMnode)) return false;
bool continueExec = true;
int32_t code = mndTransExecuteRedoActions(pMnode, pTrans);
@@ -1171,7 +1171,7 @@ static bool mndTransPerformUndoLogStage(SMnode *pMnode, STrans *pTrans) {
}
static bool mndTransPerformUndoActionStage(SMnode *pMnode, STrans *pTrans) {
- if (!mndIsMaster(pMnode)) return false;
+ if (!pMnode->deploy && !mndIsMaster(pMnode)) return false;
bool continueExec = true;
int32_t code = mndTransExecuteUndoActions(pMnode, pTrans);
diff --git a/source/dnode/mnode/impl/src/mndUser.c b/source/dnode/mnode/impl/src/mndUser.c
index 5f2147a5fe..cc6364c457 100644
--- a/source/dnode/mnode/impl/src/mndUser.c
+++ b/source/dnode/mnode/impl/src/mndUser.c
@@ -78,7 +78,33 @@ static int32_t mndCreateDefaultUser(SMnode *pMnode, char *acct, char *user, char
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
mDebug("user:%s, will be created while deploy sdb, raw:%p", userObj.user, pRaw);
+
+#if 0
return sdbWrite(pMnode->pSdb, pRaw);
+#else
+ STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_USER, NULL);
+ if (pTrans == NULL) {
+ mError("user:%s, failed to create since %s", userObj.user, terrstr());
+ return -1;
+ }
+ mDebug("trans:%d, used to create user:%s", pTrans->id, userObj.user);
+
+ if (mndTransAppendCommitlog(pTrans, pRaw) != 0) {
+ mError("trans:%d, failed to commit redo log since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+ sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+
+ if (mndTransPrepare(pMnode, pTrans) != 0) {
+ mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
+ mndTransDrop(pTrans);
+ return -1;
+ }
+
+ mndTransDrop(pTrans);
+ return 0;
+#endif
}
static int32_t mndCreateDefaultUsers(SMnode *pMnode) {
diff --git a/source/dnode/mnode/impl/src/mnode.c b/source/dnode/mnode/impl/src/mnode.c
index 775c64ceab..4e4f69e01d 100644
--- a/source/dnode/mnode/impl/src/mnode.c
+++ b/source/dnode/mnode/impl/src/mnode.c
@@ -153,8 +153,14 @@ static int32_t mndInitSdb(SMnode *pMnode) {
return 0;
}
-static int32_t mndDeploySdb(SMnode *pMnode) { return sdbDeploy(pMnode->pSdb); }
-static int32_t mndReadSdb(SMnode *pMnode) { return sdbReadFile(pMnode->pSdb); }
+static int32_t mndOpenSdb(SMnode *pMnode) {
+ if (!pMnode->deploy) {
+ return sdbReadFile(pMnode->pSdb);
+ } else {
+    // return sdbDeploy(pMnode->pSdb);
+ return 0;
+ }
+}
static void mndCleanupSdb(SMnode *pMnode) {
if (pMnode->pSdb) {
@@ -176,7 +182,7 @@ static int32_t mndAllocStep(SMnode *pMnode, char *name, MndInitFp initFp, MndCle
return 0;
}
-static int32_t mndInitSteps(SMnode *pMnode, bool deploy) {
+static int32_t mndInitSteps(SMnode *pMnode) {
if (mndAllocStep(pMnode, "mnode-sdb", mndInitSdb, mndCleanupSdb) != 0) return -1;
if (mndAllocStep(pMnode, "mnode-trans", mndInitTrans, mndCleanupTrans) != 0) return -1;
if (mndAllocStep(pMnode, "mnode-cluster", mndInitCluster, mndCleanupCluster) != 0) return -1;
@@ -201,11 +207,7 @@ static int32_t mndInitSteps(SMnode *pMnode, bool deploy) {
if (mndAllocStep(pMnode, "mnode-perfs", mndInitPerfs, mndCleanupPerfs) != 0) return -1;
if (mndAllocStep(pMnode, "mnode-db", mndInitDb, mndCleanupDb) != 0) return -1;
if (mndAllocStep(pMnode, "mnode-func", mndInitFunc, mndCleanupFunc) != 0) return -1;
- if (deploy) {
- if (mndAllocStep(pMnode, "mnode-sdb-deploy", mndDeploySdb, NULL) != 0) return -1;
- } else {
- if (mndAllocStep(pMnode, "mnode-sdb-read", mndReadSdb, NULL) != 0) return -1;
- }
+ if (mndAllocStep(pMnode, "mnode-sdb", mndOpenSdb, NULL) != 0) return -1;
if (mndAllocStep(pMnode, "mnode-profile", mndInitProfile, mndCleanupProfile) != 0) return -1;
if (mndAllocStep(pMnode, "mnode-show", mndInitShow, mndCleanupShow) != 0) return -1;
if (mndAllocStep(pMnode, "mnode-query", mndInitQuery, mndCleanupQuery) != 0) return -1;
@@ -262,7 +264,8 @@ static void mndSetOptions(SMnode *pMnode, const SMnodeOpt *pOption) {
pMnode->selfIndex = pOption->selfIndex;
memcpy(&pMnode->replicas, pOption->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
pMnode->msgCb = pOption->msgCb;
- pMnode->selfId = pOption->replicas[pOption->selfIndex].id;
+ pMnode->selfDnodeId = pOption->dnodeId;
+ pMnode->syncMgmt.standby = pOption->standby;
}
SMnode *mndOpen(const char *path, const SMnodeOpt *pOption) {
@@ -279,6 +282,7 @@ SMnode *mndOpen(const char *path, const SMnodeOpt *pOption) {
(void)taosParseTime(timestr, &pMnode->checkTime, (int32_t)strlen(timestr), TSDB_TIME_PRECISION_MILLI, 0);
mndSetOptions(pMnode, pOption);
+ pMnode->deploy = pOption->deploy;
pMnode->pSteps = taosArrayInit(24, sizeof(SMnodeStep));
if (pMnode->pSteps == NULL) {
taosMemoryFree(pMnode);
@@ -296,7 +300,7 @@ SMnode *mndOpen(const char *path, const SMnodeOpt *pOption) {
return NULL;
}
- code = mndInitSteps(pMnode, pOption->deploy);
+ code = mndInitSteps(pMnode);
if (code != 0) {
code = terrno;
mError("failed to open mnode since %s", terrstr());
@@ -314,7 +318,6 @@ SMnode *mndOpen(const char *path, const SMnodeOpt *pOption) {
return NULL;
}
- mndUpdateMnodeRole(pMnode);
mDebug("mnode open successfully ");
return pMnode;
}
@@ -329,14 +332,12 @@ void mndClose(SMnode *pMnode) {
}
}
-int32_t mndAlter(SMnode *pMnode, const SMnodeOpt *pOption) {
- mDebug("start to alter mnode");
- mDebug("mnode is altered");
- return 0;
-}
-
int32_t mndStart(SMnode *pMnode) {
mndSyncStart(pMnode);
+ if (pMnode->deploy) {
+ if (sdbDeploy(pMnode->pSdb) != 0) return -1;
+ pMnode->syncMgmt.restored = true;
+ }
return mndInitTimer(pMnode);
}
@@ -413,8 +414,7 @@ int32_t mndProcessMsg(SRpcMsg *pMsg) {
mTrace("msg:%p, will be processed, type:%s app:%p", pMsg, TMSG_INFO(pMsg->msgType), ahandle);
if (IsReq(pMsg)) {
- if (!mndIsMaster(pMnode) && pMsg->msgType != TDMT_MND_TRANS_TIMER && pMsg->msgType != TDMT_MND_MQ_TIMER &&
- pMsg->msgType != TDMT_MND_TELEM_TIMER) {
+ if (!mndIsMaster(pMnode)) {
terrno = TSDB_CODE_APP_NOT_READY;
mDebug("msg:%p, failed to process since %s, app:%p", pMsg, terrstr(), ahandle);
return -1;
@@ -518,15 +518,17 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
SMonMnodeDesc desc = {0};
desc.mnode_id = pObj->id;
tstrncpy(desc.mnode_ep, pObj->pDnode->ep, sizeof(desc.mnode_ep));
- tstrncpy(desc.role, syncStr(pObj->role), sizeof(desc.role));
- taosArrayPush(pClusterInfo->mnodes, &desc);
- sdbRelease(pSdb, pObj);
- if (pObj->role == TAOS_SYNC_STATE_LEADER) {
+ if (pObj->id == pMnode->selfDnodeId) {
pClusterInfo->first_ep_dnode_id = pObj->id;
tstrncpy(pClusterInfo->first_ep, pObj->pDnode->ep, sizeof(pClusterInfo->first_ep));
- pClusterInfo->master_uptime = (ms - pObj->roleTime) / (86400000.0f);
+ pClusterInfo->master_uptime = (ms - pObj->stateStartTime) / (86400000.0f);
+ tstrncpy(desc.role, syncStr(TAOS_SYNC_STATE_LEADER), sizeof(desc.role));
+ } else {
+ tstrncpy(desc.role, syncStr(pObj->state), sizeof(desc.role));
}
+ taosArrayPush(pClusterInfo->mnodes, &desc);
+ sdbRelease(pSdb, pObj);
}
// vgroup info
@@ -579,6 +581,6 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
}
int32_t mndGetLoad(SMnode *pMnode, SMnodeLoad *pLoad) {
- pLoad->syncState = pMnode->syncMgmt.state;
+ pLoad->syncState = syncGetMyRole(pMnode->syncMgmt.sync);
return 0;
}
diff --git a/source/dnode/mnode/sdb/CMakeLists.txt b/source/dnode/mnode/sdb/CMakeLists.txt
index e2ebed7a78..2001a70da2 100644
--- a/source/dnode/mnode/sdb/CMakeLists.txt
+++ b/source/dnode/mnode/sdb/CMakeLists.txt
@@ -2,8 +2,7 @@ aux_source_directory(src MNODE_SRC)
add_library(sdb STATIC ${MNODE_SRC})
target_include_directories(
sdb
- PUBLIC "${TD_SOURCE_DIR}/include/dnode/mnode/sdb"
- PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/inc"
+ PUBLIC "${CMAKE_CURRENT_SOURCE_DIR}/inc"
)
target_link_libraries(
sdb os common util wal
diff --git a/include/dnode/mnode/sdb/sdb.h b/source/dnode/mnode/sdb/inc/sdb.h
similarity index 86%
rename from include/dnode/mnode/sdb/sdb.h
rename to source/dnode/mnode/sdb/inc/sdb.h
index 94d41a7416..3d9148360a 100644
--- a/include/dnode/mnode/sdb/sdb.h
+++ b/source/dnode/mnode/sdb/inc/sdb.h
@@ -27,6 +27,15 @@
extern "C" {
#endif
+// clang-format off
+#define mFatal(...) { if (mDebugFlag & DEBUG_FATAL) { taosPrintLog("MND FATAL ", DEBUG_FATAL, 255, __VA_ARGS__); }}
+#define mError(...) { if (mDebugFlag & DEBUG_ERROR) { taosPrintLog("MND ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }}
+#define mWarn(...) { if (mDebugFlag & DEBUG_WARN) { taosPrintLog("MND WARN ", DEBUG_WARN, 255, __VA_ARGS__); }}
+#define mInfo(...) { if (mDebugFlag & DEBUG_INFO) { taosPrintLog("MND ", DEBUG_INFO, 255, __VA_ARGS__); }}
+#define mDebug(...) { if (mDebugFlag & DEBUG_DEBUG) { taosPrintLog("MND ", DEBUG_DEBUG, mDebugFlag, __VA_ARGS__); }}
+#define mTrace(...) { if (mDebugFlag & DEBUG_TRACE) { taosPrintLog("MND ", DEBUG_TRACE, mDebugFlag, __VA_ARGS__); }}
+// clang-format on
+
#define SDB_GET_VAL(pData, dataPos, val, pos, func, type) \
{ \
if (func(pRaw, dataPos, val) != 0) { \
@@ -44,12 +53,9 @@ extern "C" {
}
#define SDB_GET_INT64(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt64, int64_t)
-
#define SDB_GET_INT32(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt32, int32_t)
-
#define SDB_GET_INT16(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt16, int16_t)
-
-#define SDB_GET_INT8(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt8, int8_t)
+#define SDB_GET_INT8(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt8, int8_t)
#define SDB_GET_RESERVE(pRaw, dataPos, valLen, pos) \
{ \
@@ -66,12 +72,9 @@ extern "C" {
}
#define SDB_SET_INT64(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt64, int64_t)
-
#define SDB_SET_INT32(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt32, int32_t)
-
#define SDB_SET_INT16(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt16, int16_t)
-
-#define SDB_SET_INT8(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt8, int8_t)
+#define SDB_SET_INT8(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt8, int8_t)
#define SDB_SET_BINARY(pRaw, dataPos, val, valLen, pos) \
{ \
@@ -95,8 +98,16 @@ extern "C" {
}
typedef struct SMnode SMnode;
+typedef struct SSdb SSdb;
typedef struct SSdbRaw SSdbRaw;
typedef struct SSdbRow SSdbRow;
+typedef int32_t (*SdbInsertFp)(SSdb *pSdb, void *pObj);
+typedef int32_t (*SdbUpdateFp)(SSdb *pSdb, void *pSrcObj, void *pDstObj);
+typedef int32_t (*SdbDeleteFp)(SSdb *pSdb, void *pObj, bool callFunc);
+typedef int32_t (*SdbDeployFp)(SMnode *pMnode);
+typedef SSdbRow *(*SdbDecodeFp)(SSdbRaw *pRaw);
+typedef SSdbRaw *(*SdbEncodeFp)(void *pObj);
+typedef bool (*sdbTraverseFp)(SMnode *pMnode, void *pObj, void *p1, void *p2, void *p3);
typedef enum {
SDB_KEY_BINARY = 1,
@@ -136,14 +147,47 @@ typedef enum {
SDB_MAX = 20
} ESdbType;
-typedef struct SSdb SSdb;
-typedef int32_t (*SdbInsertFp)(SSdb *pSdb, void *pObj);
-typedef int32_t (*SdbUpdateFp)(SSdb *pSdb, void *pSrcObj, void *pDstObj);
-typedef int32_t (*SdbDeleteFp)(SSdb *pSdb, void *pObj, bool callFunc);
-typedef int32_t (*SdbDeployFp)(SMnode *pMnode);
-typedef SSdbRow *(*SdbDecodeFp)(SSdbRaw *pRaw);
-typedef SSdbRaw *(*SdbEncodeFp)(void *pObj);
-typedef bool (*sdbTraverseFp)(SMnode *pMnode, void *pObj, void *p1, void *p2, void *p3);
+typedef struct SSdbRaw {
+ int8_t type;
+ int8_t status;
+ int8_t sver;
+ int8_t reserved;
+ int32_t dataLen;
+ char pData[];
+} SSdbRaw;
+
+typedef struct SSdbRow {
+ ESdbType type;
+ ESdbStatus status;
+ int32_t refCount;
+ char pObj[];
+} SSdbRow;
+
+typedef struct SSdb {
+ SMnode *pMnode;
+ char *currDir;
+ char *syncDir;
+ char *tmpDir;
+ int64_t lastCommitVer;
+ int64_t curVer;
+ int64_t curTerm;
+ int64_t tableVer[SDB_MAX];
+ int64_t maxId[SDB_MAX];
+ EKeyType keyTypes[SDB_MAX];
+ SHashObj *hashObjs[SDB_MAX];
+ TdThreadRwlock locks[SDB_MAX];
+ SdbInsertFp insertFps[SDB_MAX];
+ SdbUpdateFp updateFps[SDB_MAX];
+ SdbDeleteFp deleteFps[SDB_MAX];
+ SdbDeployFp deployFps[SDB_MAX];
+ SdbEncodeFp encodeFps[SDB_MAX];
+ SdbDecodeFp decodeFps[SDB_MAX];
+} SSdb;
+
+typedef struct SSdbIter {
+ TdFilePtr file;
+ int64_t readlen;
+} SSdbIter;
typedef struct {
ESdbType sdbType;
@@ -334,27 +378,13 @@ int32_t sdbGetRawTotalSize(SSdbRaw *pRaw);
SSdbRow *sdbAllocRow(int32_t objSize);
void *sdbGetRowObj(SSdbRow *pRow);
+void sdbFreeRow(SSdb *pSdb, SSdbRow *pRow, bool callFunc);
-typedef struct SSdb {
- SMnode *pMnode;
- char *currDir;
- char *syncDir;
- char *tmpDir;
- int64_t lastCommitVer;
- int64_t curVer;
- int64_t curTerm;
- int64_t tableVer[SDB_MAX];
- int64_t maxId[SDB_MAX];
- EKeyType keyTypes[SDB_MAX];
- SHashObj *hashObjs[SDB_MAX];
- TdThreadRwlock locks[SDB_MAX];
- SdbInsertFp insertFps[SDB_MAX];
- SdbUpdateFp updateFps[SDB_MAX];
- SdbDeleteFp deleteFps[SDB_MAX];
- SdbDeployFp deployFps[SDB_MAX];
- SdbEncodeFp encodeFps[SDB_MAX];
- SdbDecodeFp decodeFps[SDB_MAX];
-} SSdb;
+SSdbIter *sdbIterInit(SSdb *pSdb);
+SSdbIter *sdbIterRead(SSdb *pSdb, SSdbIter *iter, char **ppBuf, int32_t *len);
+
+const char *sdbTableName(ESdbType type);
+void sdbPrintOper(SSdb *pSdb, SSdbRow *pRow, const char *oper);
#ifdef __cplusplus
}
diff --git a/source/dnode/mnode/sdb/src/sdb.c b/source/dnode/mnode/sdb/src/sdb.c
index 7b90d8acb5..d289e30d7b 100644
--- a/source/dnode/mnode/sdb/src/sdb.c
+++ b/source/dnode/mnode/sdb/src/sdb.c
@@ -14,7 +14,7 @@
*/
#define _DEFAULT_SOURCE
-#include "sdbInt.h"
+#include "sdb.h"
static int32_t sdbCreateDir(SSdb *pSdb);
diff --git a/source/dnode/mnode/sdb/src/sdbFile.c b/source/dnode/mnode/sdb/src/sdbFile.c
index b000c208c8..25cda19956 100644
--- a/source/dnode/mnode/sdb/src/sdbFile.c
+++ b/source/dnode/mnode/sdb/src/sdbFile.c
@@ -14,7 +14,7 @@
*/
#define _DEFAULT_SOURCE
-#include "sdbInt.h"
+#include "sdb.h"
#include "tchecksum.h"
#include "wal.h"
@@ -392,3 +392,66 @@ int32_t sdbDeploy(SSdb *pSdb) {
return 0;
}
+
+SSdbIter *sdbIterInit(SSdb *pSdb) {
+ char datafile[PATH_MAX] = {0};
+ char tmpfile[PATH_MAX] = {0};
+ snprintf(datafile, sizeof(datafile), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
+  snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
+
+ if (taosCopyFile(datafile, tmpfile) != 0) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
+ mError("failed to copy file %s to %s since %s", datafile, tmpfile, terrstr());
+ return NULL;
+ }
+
+ SSdbIter *pIter = taosMemoryCalloc(1, sizeof(SSdbIter));
+ if (pIter == NULL) {
+ terrno = TSDB_CODE_OUT_OF_MEMORY;
+ return NULL;
+ }
+
+ pIter->file = taosOpenFile(tmpfile, TD_FILE_READ);
+ if (pIter->file == NULL) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
+ mError("failed to read snapshot file:%s since %s", tmpfile, terrstr());
+ taosMemoryFree(pIter);
+ return NULL;
+ }
+
+ mDebug("start to read snapshot file:%s, iter:%p", tmpfile, pIter);
+ return pIter;
+}
+
+SSdbIter *sdbIterRead(SSdb *pSdb, SSdbIter *pIter, char **ppBuf, int32_t *buflen) {
+ const int32_t maxlen = 100;
+
+ char *pBuf = taosMemoryCalloc(1, maxlen);
+ if (pBuf == NULL) {
+ terrno = TSDB_CODE_OUT_OF_MEMORY;
+ return NULL;
+ }
+
+ int32_t readlen = taosReadFile(pIter->file, pBuf, maxlen);
+ if (readlen == 0) {
+ mTrace("read snapshot to the end, readlen:%" PRId64, pIter->readlen);
+ taosMemoryFree(pBuf);
+ taosCloseFile(&pIter->file);
+ taosMemoryFree(pIter);
+ pIter = NULL;
+ } else if (readlen < 0) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
+ mError("failed to read snapshot since %s, readlen:%" PRId64, terrstr(), pIter->readlen);
+ taosMemoryFree(pBuf);
+ taosCloseFile(&pIter->file);
+ taosMemoryFree(pIter);
+ pIter = NULL;
+ } else {
+ pIter->readlen += readlen;
+ mTrace("read snapshot, readlen:%" PRId64, pIter->readlen);
+ *ppBuf = pBuf;
+ *buflen = readlen;
+ }
+
+ return pIter;
+}
diff --git a/source/dnode/mnode/sdb/src/sdbHash.c b/source/dnode/mnode/sdb/src/sdbHash.c
index a25c7a5233..abf35b71a9 100644
--- a/source/dnode/mnode/sdb/src/sdbHash.c
+++ b/source/dnode/mnode/sdb/src/sdbHash.c
@@ -14,7 +14,7 @@
*/
#define _DEFAULT_SOURCE
-#include "sdbInt.h"
+#include "sdb.h"
static void sdbCheckRow(SSdb *pSdb, SSdbRow *pRow);
diff --git a/source/dnode/mnode/sdb/src/sdbRaw.c b/source/dnode/mnode/sdb/src/sdbRaw.c
index fd2f20c242..ba3b00c12d 100644
--- a/source/dnode/mnode/sdb/src/sdbRaw.c
+++ b/source/dnode/mnode/sdb/src/sdbRaw.c
@@ -14,7 +14,7 @@
*/
#define _DEFAULT_SOURCE
-#include "sdbInt.h"
+#include "sdb.h"
SSdbRaw *sdbAllocRaw(ESdbType type, int8_t sver, int32_t dataLen) {
SSdbRaw *pRaw = taosMemoryCalloc(1, dataLen + sizeof(SSdbRaw));
diff --git a/source/dnode/mnode/sdb/src/sdbRow.c b/source/dnode/mnode/sdb/src/sdbRow.c
index 43f70cb245..e57a6b028b 100644
--- a/source/dnode/mnode/sdb/src/sdbRow.c
+++ b/source/dnode/mnode/sdb/src/sdbRow.c
@@ -14,7 +14,7 @@
*/
#define _DEFAULT_SOURCE
-#include "sdbInt.h"
+#include "sdb.h"
SSdbRow *sdbAllocRow(int32_t objSize) {
SSdbRow *pRow = taosMemoryCalloc(1, objSize + sizeof(SSdbRow));
diff --git a/source/dnode/vnode/CMakeLists.txt b/source/dnode/vnode/CMakeLists.txt
index 4141485d28..d988f97188 100644
--- a/source/dnode/vnode/CMakeLists.txt
+++ b/source/dnode/vnode/CMakeLists.txt
@@ -13,6 +13,8 @@ target_sources(
"src/vnd/vnodeModule.c"
"src/vnd/vnodeSvr.c"
"src/vnd/vnodeSync.c"
+ "src/vnd/vnodeSnapshot.c"
+ "src/vnd/vnodeUtil.c"
# meta
"src/meta/metaOpen.c"
@@ -22,6 +24,7 @@ target_sources(
"src/meta/metaQuery.c"
"src/meta/metaCommit.c"
"src/meta/metaEntry.c"
+ "src/meta/metaSnapshot.c"
# sma
"src/sma/sma.c"
@@ -44,6 +47,7 @@ target_sources(
"src/tsdb/tsdbReadImpl.c"
# "src/tsdb/tsdbSma.c"
"src/tsdb/tsdbWrite.c"
+ "src/tsdb/tsdbSnapshot.c"
# tq
"src/tq/tq.c"
diff --git a/source/dnode/vnode/inc/vnode.h b/source/dnode/vnode/inc/vnode.h
index 9e33973c05..6026245174 100644
--- a/source/dnode/vnode/inc/vnode.h
+++ b/source/dnode/vnode/inc/vnode.h
@@ -39,9 +39,10 @@ extern "C" {
#endif
// vnode
-typedef struct SVnode SVnode;
-typedef struct STsdbCfg STsdbCfg; // todo: remove
-typedef struct SVnodeCfg SVnodeCfg;
+typedef struct SVnode SVnode;
+typedef struct STsdbCfg STsdbCfg; // todo: remove
+typedef struct SVnodeCfg SVnodeCfg;
+typedef struct SVSnapshotReader SVSnapshotReader;
extern const SVnodeCfg vnodeCfgDefault;
@@ -59,13 +60,14 @@ int32_t vnodeProcessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg);
int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo);
int32_t vnodeGetLoad(SVnode *pVnode, SVnodeLoad *pLoad);
int32_t vnodeValidateTableHash(SVnode *pVnode, char *tableFName);
-
int32_t vnodeStart(SVnode *pVnode);
void vnodeStop(SVnode *pVnode);
-
int64_t vnodeGetSyncHandle(SVnode *pVnode);
void vnodeGetSnapshot(SVnode *pVnode, SSnapshot *pSnapshot);
void vnodeGetInfo(SVnode *pVnode, const char **dbname, int32_t *vgId);
+int32_t vnodeSnapshotReaderOpen(SVnode *pVnode, SVSnapshotReader **ppReader, int64_t sver, int64_t ever);
+int32_t vnodeSnapshotReaderClose(SVSnapshotReader *pReader);
+int32_t vnodeSnapshotRead(SVSnapshotReader *pReader, const void **ppData, uint32_t *nData);
// meta
typedef struct SMeta SMeta; // todo: remove
diff --git a/source/dnode/vnode/src/inc/tq.h b/source/dnode/vnode/src/inc/tq.h
index ad3f8cc869..06ff6329e0 100644
--- a/source/dnode/vnode/src/inc/tq.h
+++ b/source/dnode/vnode/src/inc/tq.h
@@ -96,7 +96,8 @@ struct STQ {
SHashObj* pStreamTasks;
SVnode* pVnode;
SWal* pWal;
- TDB* pTdb;
+ TDB* pMetaStore;
+ TTB* pExecStore;
};
typedef struct {
diff --git a/source/dnode/vnode/src/inc/vnodeInt.h b/source/dnode/vnode/src/inc/vnodeInt.h
index 24b3f458b1..faf0ddcd4a 100644
--- a/source/dnode/vnode/src/inc/vnodeInt.h
+++ b/source/dnode/vnode/src/inc/vnodeInt.h
@@ -47,15 +47,17 @@
extern "C" {
#endif
-typedef struct SVnodeInfo SVnodeInfo;
-typedef struct SMeta SMeta;
-typedef struct SSma SSma;
-typedef struct STsdb STsdb;
-typedef struct STQ STQ;
-typedef struct SVState SVState;
-typedef struct SVBufPool SVBufPool;
-typedef struct SQWorker SQHandle;
-typedef struct STsdbKeepCfg STsdbKeepCfg;
+typedef struct SVnodeInfo SVnodeInfo;
+typedef struct SMeta SMeta;
+typedef struct SSma SSma;
+typedef struct STsdb STsdb;
+typedef struct STQ STQ;
+typedef struct SVState SVState;
+typedef struct SVBufPool SVBufPool;
+typedef struct SQWorker SQHandle;
+typedef struct STsdbKeepCfg STsdbKeepCfg;
+typedef struct SMetaSnapshotReader SMetaSnapshotReader;
+typedef struct STsdbSnapshotReader STsdbSnapshotReader;
#define VNODE_META_DIR "meta"
#define VNODE_TSDB_DIR "tsdb"
@@ -67,8 +69,10 @@ typedef struct STsdbKeepCfg STsdbKeepCfg;
#define VNODE_RSMA2_DIR "rsma2"
// vnd.h
-void* vnodeBufPoolMalloc(SVBufPool* pPool, int size);
-void vnodeBufPoolFree(SVBufPool* pPool, void* p);
+void* vnodeBufPoolMalloc(SVBufPool* pPool, int size);
+void vnodeBufPoolFree(SVBufPool* pPool, void* p);
+int32_t vnodeRealloc(void** pp, int32_t size);
+void vnodeFree(void* p);
// meta
typedef struct SMCtbCursor SMCtbCursor;
@@ -95,6 +99,9 @@ STSma* metaGetSmaInfoByIndex(SMeta* pMeta, int64_t indexUid);
STSmaWrapper* metaGetSmaInfoByTable(SMeta* pMeta, tb_uid_t uid, bool deepCopy);
SArray* metaGetSmaIdsByTable(SMeta* pMeta, tb_uid_t uid);
SArray* metaGetSmaTbUids(SMeta* pMeta);
+int32_t metaSnapshotReaderOpen(SMeta* pMeta, SMetaSnapshotReader** ppReader, int64_t sver, int64_t ever);
+int32_t metaSnapshotReaderClose(SMetaSnapshotReader* pReader);
+int32_t metaSnapshotRead(SMetaSnapshotReader* pReader, void** ppData, uint32_t* nData);
int32_t metaCreateTSma(SMeta* pMeta, int64_t version, SSmaCfg* pCfg);
int32_t metaDropTSma(SMeta* pMeta, int64_t indexUid);
@@ -112,6 +119,9 @@ tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableG
tsdbReaderT tsdbQueryCacheLastT(STsdb* tsdb, SQueryTableDataCond* pCond, STableGroupInfo* groupList, uint64_t qId,
void* pMemRef);
int32_t tsdbGetTableGroupFromIdListT(STsdb* tsdb, SArray* pTableIdList, STableGroupInfo* pGroupInfo);
+int32_t tsdbSnapshotReaderOpen(STsdb* pTsdb, STsdbSnapshotReader** ppReader, int64_t sver, int64_t ever);
+int32_t tsdbSnapshotReaderClose(STsdbSnapshotReader* pReader);
+int32_t tsdbSnapshotRead(STsdbSnapshotReader* pReader, void** ppData, uint32_t* nData);
// tq
STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal);
diff --git a/source/dnode/vnode/src/meta/metaSnapshot.c b/source/dnode/vnode/src/meta/metaSnapshot.c
new file mode 100644
index 0000000000..5757039d55
--- /dev/null
+++ b/source/dnode/vnode/src/meta/metaSnapshot.c
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "meta.h"
+
+struct SMetaSnapshotReader {
+ SMeta* pMeta;
+ TBC* pTbc;
+ int64_t sver;
+ int64_t ever;
+};
+
+int32_t metaSnapshotReaderOpen(SMeta* pMeta, SMetaSnapshotReader** ppReader, int64_t sver, int64_t ever) {
+ int32_t code = 0;
+ int32_t c = 0;
+ SMetaSnapshotReader* pMetaReader = NULL;
+
+ pMetaReader = (SMetaSnapshotReader*)taosMemoryCalloc(1, sizeof(*pMetaReader));
+ if (pMetaReader == NULL) {
+ code = TSDB_CODE_OUT_OF_MEMORY;
+ goto _err;
+ }
+ pMetaReader->pMeta = pMeta;
+ pMetaReader->sver = sver;
+ pMetaReader->ever = ever;
+ code = tdbTbcOpen(pMeta->pTbDb, &pMetaReader->pTbc, NULL);
+ if (code) {
+ goto _err;
+ }
+
+ code = tdbTbcMoveTo(pMetaReader->pTbc, &(STbDbKey){.version = sver, .uid = INT64_MIN}, sizeof(STbDbKey), &c);
+ if (code) {
+ goto _err;
+ }
+
+ *ppReader = pMetaReader;
+ return code;
+
+_err:
+ *ppReader = NULL;
+ return code;
+}
+
+int32_t metaSnapshotReaderClose(SMetaSnapshotReader* pReader) {
+ if (pReader) {
+ tdbTbcClose(pReader->pTbc);
+ taosMemoryFree(pReader);
+ }
+ return 0;
+}
+
+int32_t metaSnapshotRead(SMetaSnapshotReader* pReader, void** ppData, uint32_t* nDatap) {
+ const void* pKey = NULL;
+ const void* pData = NULL;
+ int32_t nKey = 0;
+ int32_t nData = 0;
+ int32_t code = 0;
+
+ for (;;) {
+ code = tdbTbcGet(pReader->pTbc, &pKey, &nKey, &pData, &nData);
+ if (code || ((STbDbKey*)pData)->version > pReader->ever) {
+ return TSDB_CODE_VND_READ_END;
+ }
+
+ if (((STbDbKey*)pData)->version < pReader->sver) {
+ continue;
+ }
+
+ break;
+ }
+
+ // copy the data
+ if (vnodeRealloc(ppData, nData) < 0) {
+ code = TSDB_CODE_OUT_OF_MEMORY;
+ return code;
+ }
+
+ memcpy(*ppData, pData, nData);
+ *nDatap = nData;
+ return code;
+}
\ No newline at end of file
diff --git a/source/dnode/vnode/src/meta/metaTable.c b/source/dnode/vnode/src/meta/metaTable.c
index a792343380..462d461a8a 100644
--- a/source/dnode/vnode/src/meta/metaTable.c
+++ b/source/dnode/vnode/src/meta/metaTable.c
@@ -23,6 +23,7 @@ static int metaUpdateTtlIdx(SMeta *pMeta, const SMetaEntry *pME);
static int metaSaveToSkmDb(SMeta *pMeta, const SMetaEntry *pME);
static int metaUpdateCtbIdx(SMeta *pMeta, const SMetaEntry *pME);
static int metaUpdateTagIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry);
+static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type);
int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
SMetaEntry me = {0};
@@ -71,64 +72,71 @@ _err:
}
int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq) {
- TBC *pNameIdxc = NULL;
- TBC *pUidIdxc = NULL;
- TBC *pCtbIdxc = NULL;
- SCtbIdxKey *pCtbIdxKey;
- const void *pKey = NULL;
- int nKey;
- const void *pData = NULL;
- int nData;
- int c, ret;
+ void *pKey = NULL;
+ int nKey = 0;
+ void *pData = NULL;
+ int nData = 0;
+ int c = 0;
+ int rc = 0;
- // prepare uid idx cursor
- tdbTbcOpen(pMeta->pUidIdx, &pUidIdxc, &pMeta->txn);
- ret = tdbTbcMoveTo(pUidIdxc, &pReq->suid, sizeof(tb_uid_t), &c);
- if (ret < 0 || c != 0) {
- terrno = TSDB_CODE_VND_TB_NOT_EXIST;
- tdbTbcClose(pUidIdxc);
- goto _err;
+ // check if super table exists
+ rc = tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pData, &nData);
+ if (rc < 0 || *(tb_uid_t *)pData != pReq->suid) {
+ terrno = TSDB_CODE_VND_TABLE_NOT_EXIST;
+ return -1;
}
- // prepare name idx cursor
- tdbTbcOpen(pMeta->pNameIdx, &pNameIdxc, &pMeta->txn);
- ret = tdbTbcMoveTo(pNameIdxc, pReq->name, strlen(pReq->name) + 1, &c);
- if (ret < 0 || c != 0) {
- ASSERT(0);
- }
+ // drop all child tables
+ TBC *pCtbIdxc = NULL;
+ SArray *pArray = taosArrayInit(8, sizeof(tb_uid_t));
- tdbTbcDelete(pUidIdxc);
- tdbTbcDelete(pNameIdxc);
- tdbTbcClose(pUidIdxc);
- tdbTbcClose(pNameIdxc);
-
- // loop to drop each child table
tdbTbcOpen(pMeta->pCtbIdx, &pCtbIdxc, &pMeta->txn);
- ret = tdbTbcMoveTo(pCtbIdxc, &(SCtbIdxKey){.suid = pReq->suid, .uid = INT64_MIN}, sizeof(SCtbIdxKey), &c);
- if (ret < 0 || (c < 0 && tdbTbcMoveToNext(pCtbIdxc) < 0)) {
+ rc = tdbTbcMoveTo(pCtbIdxc, &(SCtbIdxKey){.suid = pReq->suid, .uid = INT64_MIN}, sizeof(SCtbIdxKey), &c);
+ if (rc < 0) {
tdbTbcClose(pCtbIdxc);
- goto _exit;
+ metaWLock(pMeta);
+ goto _drop_super_table;
}
for (;;) {
- tdbTbcGet(pCtbIdxc, &pKey, &nKey, NULL, NULL);
- pCtbIdxKey = (SCtbIdxKey *)pKey;
+ rc = tdbTbcNext(pCtbIdxc, &pKey, &nKey, NULL, NULL);
+ if (rc < 0) break;
- if (pCtbIdxKey->suid > pReq->suid) break;
+ if (((SCtbIdxKey *)pKey)->suid < pReq->suid) {
+ continue;
+ } else if (((SCtbIdxKey *)pKey)->suid > pReq->suid) {
+ break;
+ }
- // drop the child table (TODO)
-
- if (tdbTbcMoveToNext(pCtbIdxc) < 0) break;
+ taosArrayPush(pArray, &(((SCtbIdxKey *)pKey)->uid));
}
+ tdbTbcClose(pCtbIdxc);
+
+ metaWLock(pMeta);
+
+ for (int32_t iChild = 0; iChild < taosArrayGetSize(pArray); iChild++) {
+ tb_uid_t uid = *(tb_uid_t *)taosArrayGet(pArray, iChild);
+ metaDropTableByUid(pMeta, uid, NULL);
+ }
+
+ taosArrayDestroy(pArray);
+
+ // drop super table
+_drop_super_table:
+ tdbTbGet(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pData, &nData);
+ tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = *(int64_t *)pData, .uid = pReq->suid}, sizeof(STbDbKey),
+ &pMeta->txn);
+ tdbTbDelete(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pMeta->txn);
+ tdbTbDelete(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pMeta->txn);
+
+ metaULock(pMeta);
+
_exit:
+ tdbFree(pKey);
+ tdbFree(pData);
metaDebug("vgId:%d super table %s uid:%" PRId64 " is dropped", TD_VID(pMeta->pVnode), pReq->name, pReq->suid);
return 0;
-
-_err:
- metaError("vgId:%d failed to drop super table %s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
- pReq->suid, tstrerror(terrno));
- return -1;
}
int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
@@ -256,122 +264,63 @@ _err:
}
int metaDropTable(SMeta *pMeta, int64_t version, SVDropTbReq *pReq, SArray *tbUids) {
- TBC *pTbDbc = NULL;
- TBC *pUidIdxc = NULL;
- TBC *pNameIdxc = NULL;
- const void *pData;
- int nData;
- tb_uid_t uid;
- int64_t tver;
- SMetaEntry me = {0};
- SDecoder coder = {0};
- int8_t type;
- int64_t ctime;
- tb_uid_t suid;
- int c = 0, ret;
+ void *pData = NULL;
+ int nData = 0;
+ int rc = 0;
+ tb_uid_t uid;
+ int type;
- // search & delete the name idx
- tdbTbcOpen(pMeta->pNameIdx, &pNameIdxc, &pMeta->txn);
- ret = tdbTbcMoveTo(pNameIdxc, pReq->name, strlen(pReq->name) + 1, &c);
- if (ret < 0 || !tdbTbcIsValid(pNameIdxc) || c) {
- tdbTbcClose(pNameIdxc);
+ rc = tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pData, &nData);
+ if (rc < 0) {
terrno = TSDB_CODE_VND_TABLE_NOT_EXIST;
return -1;
}
-
- ret = tdbTbcGet(pNameIdxc, NULL, NULL, &pData, &nData);
- if (ret < 0) {
- ASSERT(0);
- return -1;
- }
-
uid = *(tb_uid_t *)pData;
- tdbTbcDelete(pNameIdxc);
- tdbTbcClose(pNameIdxc);
+ metaWLock(pMeta);
+ metaDropTableByUid(pMeta, uid, &type);
+ metaULock(pMeta);
- // search & delete uid idx
- tdbTbcOpen(pMeta->pUidIdx, &pUidIdxc, &pMeta->txn);
- ret = tdbTbcMoveTo(pUidIdxc, &uid, sizeof(uid), &c);
- if (ret < 0 || c != 0) {
- ASSERT(0);
- return -1;
+ if (type == TSDB_CHILD_TABLE && tbUids) {
+ taosArrayPush(tbUids, &uid);
}
- ret = tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData);
- if (ret < 0) {
- ASSERT(0);
- return -1;
+ tdbFree(pData);
+ return 0;
+}
+
+static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) {
+ void *pData = NULL;
+ int nData = 0;
+ int rc = 0;
+ int64_t version;
+ SMetaEntry e = {0};
+ SDecoder dc = {0};
+
+ rc = tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData);
+ version = *(int64_t *)pData;
+
+ tdbTbGet(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), &pData, &nData);
+
+ tDecoderInit(&dc, pData, nData);
+ metaDecodeEntry(&dc, &e);
+
+ if (type) *type = e.type;
+
+ tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), &pMeta->txn);
+ tdbTbDelete(pMeta->pNameIdx, e.name, strlen(e.name) + 1, &pMeta->txn);
+ tdbTbDelete(pMeta->pUidIdx, &uid, sizeof(uid), &pMeta->txn);
+ if (e.type == TSDB_CHILD_TABLE) {
+ tdbTbDelete(pMeta->pCtbIdx, &(SCtbIdxKey){.suid = e.ctbEntry.suid, .uid = uid}, sizeof(SCtbIdxKey), &pMeta->txn);
+ } else if (e.type == TSDB_NORMAL_TABLE) {
+ // drop schema.db (todo)
+ // drop ttl.idx (todo)
+ } else if (e.type == TSDB_SUPER_TABLE) {
+ // drop schema.db (todo)
}
- tver = *(int64_t *)pData;
- tdbTbcDelete(pUidIdxc);
- tdbTbcClose(pUidIdxc);
-
- // search and get meta entry
- tdbTbcOpen(pMeta->pTbDb, &pTbDbc, &pMeta->txn);
- ret = tdbTbcMoveTo(pTbDbc, &(STbDbKey){.uid = uid, .version = tver}, sizeof(STbDbKey), &c);
- if (ret < 0 || c != 0) {
- ASSERT(0);
- return -1;
- }
-
- ret = tdbTbcGet(pTbDbc, NULL, NULL, &pData, &nData);
- if (ret < 0) {
- ASSERT(0);
- return -1;
- }
-
- // decode entry
- void *pDataCopy = taosMemoryMalloc(nData); // remove the copy (todo)
- memcpy(pDataCopy, pData, nData);
- tDecoderInit(&coder, pDataCopy, nData);
- ret = metaDecodeEntry(&coder, &me);
- if (ret < 0) {
- ASSERT(0);
- return -1;
- }
-
- type = me.type;
- if (type == TSDB_CHILD_TABLE) {
- ctime = me.ctbEntry.ctime;
- suid = me.ctbEntry.suid;
- taosArrayPush(tbUids, &me.uid);
- } else if (type == TSDB_NORMAL_TABLE) {
- ctime = me.ntbEntry.ctime;
- suid = 0;
- } else {
- ASSERT(0);
- }
-
- taosMemoryFree(pDataCopy);
- tDecoderClear(&coder);
- tdbTbcClose(pTbDbc);
-
- if (type == TSDB_CHILD_TABLE) {
- // remove the pCtbIdx
- TBC *pCtbIdxc = NULL;
- tdbTbcOpen(pMeta->pCtbIdx, &pCtbIdxc, &pMeta->txn);
-
- ret = tdbTbcMoveTo(pCtbIdxc, &(SCtbIdxKey){.suid = suid, .uid = uid}, sizeof(SCtbIdxKey), &c);
- if (ret < 0 || c != 0) {
- ASSERT(0);
- return -1;
- }
-
- tdbTbcDelete(pCtbIdxc);
- tdbTbcClose(pCtbIdxc);
-
- // remove tags from pTagIdx (todo)
- } else if (type == TSDB_NORMAL_TABLE) {
- // remove from pSkmDb
- } else {
- ASSERT(0);
- }
-
- // remove from ttl (todo)
- if (ctime > 0) {
- }
+ tDecoderClear(&dc);
+ tdbFree(pData);
return 0;
}
@@ -608,14 +557,14 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA
// TODO : need to update tag index
}
ctbEntry.version = version;
- if(pTagSchema->nCols == 1 && pTagSchema->pSchema[0].type == TSDB_DATA_TYPE_JSON){
+ if (pTagSchema->nCols == 1 && pTagSchema->pSchema[0].type == TSDB_DATA_TYPE_JSON) {
ctbEntry.ctbEntry.pTags = taosMemoryMalloc(pAlterTbReq->nTagVal);
- if(ctbEntry.ctbEntry.pTags == NULL){
+ if (ctbEntry.ctbEntry.pTags == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
goto _err;
}
- memcpy((void*)ctbEntry.ctbEntry.pTags, pAlterTbReq->pTagVal, pAlterTbReq->nTagVal);
- }else{
+ memcpy((void *)ctbEntry.ctbEntry.pTags, pAlterTbReq->pTagVal, pAlterTbReq->nTagVal);
+ } else {
SKVRowBuilder kvrb = {0};
const SKVRow pOldTag = (const SKVRow)ctbEntry.ctbEntry.pTags;
SKVRow pNewTag = NULL;
@@ -649,7 +598,7 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA
tDecoderClear(&dc1);
tDecoderClear(&dc2);
- if (ctbEntry.ctbEntry.pTags) taosMemoryFree((void*)ctbEntry.ctbEntry.pTags);
+ if (ctbEntry.ctbEntry.pTags) taosMemoryFree((void *)ctbEntry.ctbEntry.pTags);
if (ctbEntry.pBuf) taosMemoryFree(ctbEntry.pBuf);
if (stbEntry.pBuf) tdbFree(stbEntry.pBuf);
tdbTbcClose(pTbDbc);
diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c
index bd48ed9b4c..0e8835357a 100644
--- a/source/dnode/vnode/src/tq/tq.c
+++ b/source/dnode/vnode/src/tq/tq.c
@@ -14,6 +14,7 @@
*/
#include "tq.h"
+#include "tdbInt.h"
int32_t tqInit() {
int8_t old;
@@ -46,6 +47,51 @@ void tqCleanUp() {
}
}
+int tqExecKeyCompare(const void* pKey1, int32_t kLen1, const void* pKey2, int32_t kLen2) {
+ return strcmp(pKey1, pKey2);
+}
+
+int32_t tqStoreExec(STQ* pTq, const char* key, const STqExec* pExec) {
+ int32_t code;
+ int32_t vlen;
+ tEncodeSize(tEncodeSTqExec, pExec, vlen, code);
+ ASSERT(code == 0);
+
+ void* buf = taosMemoryCalloc(1, vlen);
+ if (buf == NULL) {
+ ASSERT(0);
+ }
+
+ SEncoder encoder;
+ tEncoderInit(&encoder, buf, vlen);
+
+ if (tEncodeSTqExec(&encoder, pExec) < 0) {
+ ASSERT(0);
+ }
+
+ TXN txn;
+
+ if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) {
+ ASSERT(0);
+ }
+
+ if (tdbBegin(pTq->pMetaStore, &txn) < 0) {
+ ASSERT(0);
+ }
+
+ if (tdbTbUpsert(pTq->pExecStore, key, (int)strlen(key), buf, vlen, &txn) < 0) {
+ ASSERT(0);
+ }
+
+ if (tdbCommit(pTq->pMetaStore, &txn) < 0) {
+ ASSERT(0);
+ }
+
+ tEncoderClear(&encoder);
+ taosMemoryFree(buf);
+ return 0;
+}
+
STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
STQ* pTq = taosMemoryMalloc(sizeof(STQ));
if (pTq == NULL) {
@@ -55,9 +101,6 @@ STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
pTq->path = strdup(path);
pTq->pVnode = pVnode;
pTq->pWal = pWal;
- if (tdbOpen(path, 4096, 1, &pTq->pTdb) < 0) {
- ASSERT(0);
- }
pTq->execs = taosHashInit(64, MurmurHash3_32, true, HASH_ENTRY_LOCK);
@@ -65,6 +108,66 @@ STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
pTq->pushMgr = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK);
+ if (tdbOpen(path, 16 * 1024, 1, &pTq->pMetaStore) < 0) {
+ ASSERT(0);
+ }
+
+ if (tdbTbOpen("exec", -1, -1, tqExecKeyCompare, pTq->pMetaStore, &pTq->pExecStore) < 0) {
+ ASSERT(0);
+ }
+
+ TXN txn;
+
+ if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, 0) < 0) {
+ ASSERT(0);
+ }
+
+ /*if (tdbBegin(pTq->pMetaStore, &txn) < 0) {*/
+ /*ASSERT(0);*/
+ /*}*/
+
+ TBC* pCur;
+ if (tdbTbcOpen(pTq->pExecStore, &pCur, &txn) < 0) {
+ ASSERT(0);
+ }
+
+ void* pKey;
+ int kLen;
+ void* pVal;
+ int vLen;
+
+ tdbTbcMoveToFirst(pCur);
+ SDecoder decoder;
+ while (tdbTbcNext(pCur, &pKey, &kLen, &pVal, &vLen) == 0) {
+ STqExec exec;
+ tDecoderInit(&decoder, (uint8_t*)pVal, vLen);
+ tDecodeSTqExec(&decoder, &exec);
+ exec.pWalReader = walOpenReadHandle(pTq->pVnode->pWal);
+ if (exec.subType == TOPIC_SUB_TYPE__TABLE) {
+ for (int32_t i = 0; i < 5; i++) {
+ exec.pExecReader[i] = tqInitSubmitMsgScanner(pTq->pVnode->pMeta);
+
+ SReadHandle handle = {
+ .reader = exec.pExecReader[i],
+ .meta = pTq->pVnode->pMeta,
+ .pMsgCb = &pTq->pVnode->msgCb,
+ };
+ exec.task[i] = qCreateStreamExecTaskInfo(exec.qmsg, &handle);
+ ASSERT(exec.task[i]);
+ }
+ } else {
+ for (int32_t i = 0; i < 5; i++) {
+ exec.pExecReader[i] = tqInitSubmitMsgScanner(pTq->pVnode->pMeta);
+ }
+ exec.pDropTbUid = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
+ }
+ taosHashPut(pTq->execs, pKey, kLen, &exec, sizeof(STqExec));
+ }
+
+ if (tdbTxnClose(&txn) < 0) {
+ ASSERT(0);
+ }
+
return pTq;
}
@@ -74,7 +177,7 @@ void tqClose(STQ* pTq) {
taosHashCleanup(pTq->execs);
taosHashCleanup(pTq->pStreamTasks);
taosHashCleanup(pTq->pushMgr);
- tdbClose(pTq->pTdb);
+ tdbClose(pTq->pMetaStore);
taosMemoryFree(pTq);
}
// TODO
@@ -91,7 +194,6 @@ int32_t tEncodeSTqExec(SEncoder* pEncoder, const STqExec* pExec) {
if (tEncodeI8(pEncoder, pExec->withTag) < 0) return -1;
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
if (tEncodeCStr(pEncoder, pExec->qmsg) < 0) return -1;
- // TODO encode modified exec
}
tEndEncode(pEncoder);
return pEncoder->pos;
@@ -108,7 +210,6 @@ int32_t tDecodeSTqExec(SDecoder* pDecoder, STqExec* pExec) {
if (tDecodeI8(pDecoder, &pExec->withTag) < 0) return -1;
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
if (tDecodeCStrAlloc(pDecoder, &pExec->qmsg) < 0) return -1;
- // TODO decode modified exec
}
tEndDecode(pDecoder);
return 0;
@@ -556,6 +657,25 @@ int32_t tqProcessVgDeleteReq(STQ* pTq, char* msg, int32_t msgLen) {
int32_t code = taosHashRemove(pTq->execs, pReq->subKey, strlen(pReq->subKey));
ASSERT(code == 0);
+
+ TXN txn;
+
+ if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) {
+ ASSERT(0);
+ }
+
+ if (tdbBegin(pTq->pMetaStore, &txn) < 0) {
+ ASSERT(0);
+ }
+
+ if (tdbTbDelete(pTq->pExecStore, pReq->subKey, (int)strlen(pReq->subKey), &txn) < 0) {
+ /*ASSERT(0);*/
+ }
+
+ if (tdbCommit(pTq->pMetaStore, &txn) < 0) {
+ ASSERT(0);
+ }
+
return 0;
}
@@ -604,22 +724,22 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
pExec->pDropTbUid = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
}
taosHashPut(pTq->execs, req.subKey, strlen(req.subKey), pExec, sizeof(STqExec));
+
+ if (tqStoreExec(pTq, req.subKey, pExec) < 0) {
+ // TODO
+ }
return 0;
} else {
- /*if (req.newConsumerId != -1) {*/
- /*taosWLockLatch(&pExec->lock);*/
- ASSERT(pExec->consumerId == req.oldConsumerId);
+ /*ASSERT(pExec->consumerId == req.oldConsumerId);*/
// TODO handle qmsg and exec modification
atomic_store_32(&pExec->epoch, -1);
atomic_store_64(&pExec->consumerId, req.newConsumerId);
atomic_add_fetch_32(&pExec->epoch, 1);
- /*taosWUnLockLatch(&pExec->lock);*/
+
+ if (tqStoreExec(pTq, req.subKey, pExec) < 0) {
+ // TODO
+ }
return 0;
- /*} else {*/
- // TODO
- /*taosHashRemove(pTq->tqMetaNew, req.subKey, strlen(req.subKey));*/
- /*return 0;*/
- /*}*/
}
}
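
The exec store introduced above writes each STqExec with tEncodeSTqExec under its subscription key, and tqOpen rebuilds the in-memory executors by decoding the same bytes with tDecodeSTqExec, so the pair has to stay symmetric. A minimal round-trip sketch, using only the encode/decode helpers that appear in this patch (tqExecRoundTrip is an illustrative name, not part of the change):

// Illustration only: serialize an STqExec the way tqStoreExec() does and decode it back
// the way tqOpen() does, to show the expected encode/decode symmetry.
static int32_t tqExecRoundTrip(const STqExec* pIn, STqExec* pOut) {
  int32_t vlen = 0, code = 0;
  tEncodeSize(tEncodeSTqExec, pIn, vlen, code);
  if (code != 0) return -1;

  void* buf = taosMemoryCalloc(1, vlen);
  if (buf == NULL) return -1;

  SEncoder encoder;
  tEncoderInit(&encoder, buf, vlen);
  code = tEncodeSTqExec(&encoder, pIn);

  SDecoder decoder;
  tDecoderInit(&decoder, (uint8_t*)buf, vlen);
  if (code >= 0) code = tDecodeSTqExec(&decoder, pOut);  // for table subscriptions the decoder allocates pOut->qmsg

  tDecoderClear(&decoder);
  tEncoderClear(&encoder);
  taosMemoryFree(buf);
  return code < 0 ? -1 : 0;
}
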
diff --git a/source/dnode/vnode/src/tq/tqRead.c b/source/dnode/vnode/src/tq/tqRead.c
index be8d786de2..9f4c5fc81e 100644
--- a/source/dnode/vnode/src/tq/tqRead.c
+++ b/source/dnode/vnode/src/tq/tqRead.c
@@ -83,11 +83,11 @@ bool tqNextDataBlockFilterOut(STqReadHandle* pHandle, SHashObj* filterOutUids) {
int32_t tqRetrieveDataBlock(SArray** ppCols, STqReadHandle* pHandle, uint64_t* pGroupId, uint64_t* pUid,
int32_t* pNumOfRows, int16_t* pNumOfCols) {
- /*int32_t sversion = pHandle->pBlock->sversion;*/
- // TODO set to real sversion
*pUid = 0;
- int32_t sversion = 1;
+ // TODO set to real sversion
+ /*int32_t sversion = 1;*/
+ int32_t sversion = htonl(pHandle->pBlock->sversion);
if (pHandle->sver != sversion || pHandle->cachedSchemaUid != pHandle->msgIter.suid) {
pHandle->pSchema = metaGetTbTSchema(pHandle->pVnodeMeta, pHandle->msgIter.uid, sversion);
if (pHandle->pSchema == NULL) {
diff --git a/source/dnode/vnode/src/tsdb/tsdbSnapshot.c b/source/dnode/vnode/src/tsdb/tsdbSnapshot.c
new file mode 100644
index 0000000000..79989a5560
--- /dev/null
+++ b/source/dnode/vnode/src/tsdb/tsdbSnapshot.c
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "tsdb.h"
+
+struct STsdbSnapshotReader {
+ STsdb* pTsdb;
+ // TODO
+};
+
+int32_t tsdbSnapshotReaderOpen(STsdb* pTsdb, STsdbSnapshotReader** ppReader, int64_t sver, int64_t ever) {
+ // TODO
+ return 0;
+}
+
+int32_t tsdbSnapshotReaderClose(STsdbSnapshotReader* pReader) {
+ // TODO
+ return 0;
+}
+
+int32_t tsdbSnapshotRead(STsdbSnapshotReader* pReader, void** ppData, uint32_t* nData) {
+ // TODO
+ return 0;
+}
diff --git a/source/dnode/vnode/src/vnd/vnodeSnapshot.c b/source/dnode/vnode/src/vnd/vnodeSnapshot.c
new file mode 100644
index 0000000000..baa8422307
--- /dev/null
+++ b/source/dnode/vnode/src/vnd/vnodeSnapshot.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "vnodeInt.h"
+
+struct SVSnapshotReader {
+ SVnode *pVnode;
+ int64_t sver;
+ int64_t ever;
+ int8_t isMetaEnd;
+ int8_t isTsdbEnd;
+ SMetaSnapshotReader *pMetaReader;
+ STsdbSnapshotReader *pTsdbReader;
+ void *pData;
+ int32_t nData;
+};
+
+int32_t vnodeSnapshotReaderOpen(SVnode *pVnode, SVSnapshotReader **ppReader, int64_t sver, int64_t ever) {
+ SVSnapshotReader *pReader = NULL;
+
+ pReader = (SVSnapshotReader *)taosMemoryCalloc(1, sizeof(*pReader));
+ if (pReader == NULL) {
+ terrno = TSDB_CODE_OUT_OF_MEMORY;
+ goto _err;
+ }
+ pReader->pVnode = pVnode;
+ pReader->sver = sver;
+ pReader->ever = ever;
+ pReader->isMetaEnd = 0;
+ pReader->isTsdbEnd = 0;
+
+ if (metaSnapshotReaderOpen(pVnode->pMeta, &pReader->pMetaReader, sver, ever) < 0) {
+ taosMemoryFree(pReader);
+ goto _err;
+ }
+
+ if (tsdbSnapshotReaderOpen(pVnode->pTsdb, &pReader->pTsdbReader, sver, ever) < 0) {
+ metaSnapshotReaderClose(pReader->pMetaReader);
+ taosMemoryFree(pReader);
+ goto _err;
+ }
+
+_exit:
+ *ppReader = pReader;
+ return 0;
+
+_err:
+ *ppReader = NULL;
+ return -1;
+}
+
+int32_t vnodeSnapshotReaderClose(SVSnapshotReader *pReader) {
+ if (pReader) {
+ vnodeFree(pReader->pData);
+ tsdbSnapshotReaderClose(pReader->pTsdbReader);
+ metaSnapshotReaderClose(pReader->pMetaReader);
+ taosMemoryFree(pReader);
+ }
+ return 0;
+}
+
+int32_t vnodeSnapshotRead(SVSnapshotReader *pReader, const void **ppData, uint32_t *nData) {
+ int32_t code = 0;
+
+ if (!pReader->isMetaEnd) {
+ code = metaSnapshotRead(pReader->pMetaReader, &pReader->pData, &pReader->nData);
+ if (code) {
+ if (code == TSDB_CODE_VND_READ_END) {
+ pReader->isMetaEnd = 1;
+ } else {
+ return code;
+ }
+ } else {
+ *ppData = pReader->pData;
+ *nData = pReader->nData;
+ return code;
+ }
+ }
+
+ if (!pReader->isTsdbEnd) {
+ code = tsdbSnapshotRead(pReader->pTsdbReader, &pReader->pData, &pReader->nData);
+ if (code) {
+ if (code == TSDB_CODE_VND_READ_END) {
+ pReader->isTsdbEnd = 1;
+ } else {
+ return code;
+ }
+ } else {
+ *ppData = pReader->pData;
+ *nData = pReader->nData;
+ return code;
+ }
+ }
+
+ code = TSDB_CODE_VND_READ_END;
+ return code;
+}
\ No newline at end of file
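
A plausible consumer of the new reader drains meta blocks first and then tsdb blocks until TSDB_CODE_VND_READ_END comes back, which is how vnodeSnapshotRead above signals exhaustion. A hedged usage sketch; vnodeSnapshotSendAll and the transport hook are illustrative only, not part of this patch:

// Illustration only: drive SVSnapshotReader end to end for a given version range.
static int32_t vnodeSnapshotSendAll(SVnode* pVnode, int64_t sver, int64_t ever) {
  SVSnapshotReader* pReader = NULL;
  const void*       pData = NULL;
  uint32_t          nData = 0;
  int32_t           code = 0;

  if (vnodeSnapshotReaderOpen(pVnode, &pReader, sver, ever) < 0) {
    return -1;
  }

  // meta blocks come first, then tsdb blocks; READ_END marks exhaustion
  while ((code = vnodeSnapshotRead(pReader, &pData, &nData)) == 0) {
    // shipSnapshotBlock(pData, nData);  // placeholder for the real transport
  }

  vnodeSnapshotReaderClose(pReader);
  return (code == TSDB_CODE_VND_READ_END) ? 0 : code;
}
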
diff --git a/source/dnode/vnode/src/vnd/vnodeSvr.c b/source/dnode/vnode/src/vnd/vnodeSvr.c
index 5e50a1b796..ae7ec5a950 100644
--- a/source/dnode/vnode/src/vnd/vnodeSvr.c
+++ b/source/dnode/vnode/src/vnd/vnodeSvr.c
@@ -617,16 +617,18 @@ static int vnodeDebugPrintSingleSubmitMsg(SMeta *pMeta, SSubmitBlk *pBlock, SSub
STSchema *pSchema = NULL;
tb_uid_t suid = 0;
STSRow *row = NULL;
+ int32_t rv = -1;
tInitSubmitBlkIter(msgIter, pBlock, &blkIter);
if (blkIter.row == NULL) return 0;
- if (!pSchema || (suid != msgIter->suid)) {
+ if (!pSchema || (suid != msgIter->suid) || rv != TD_ROW_SVER(blkIter.row)) {
if (pSchema) {
taosMemoryFreeClear(pSchema);
}
- pSchema = metaGetTbTSchema(pMeta, msgIter->suid, 1); // TODO: use the real schema
+ pSchema = metaGetTbTSchema(pMeta, msgIter->suid, TD_ROW_SVER(blkIter.row)); // TODO: use the real schema
if (pSchema) {
suid = msgIter->suid;
+ rv = TD_ROW_SVER(blkIter.row);
}
}
if (!pSchema) {
diff --git a/source/dnode/vnode/src/vnd/vnodeSync.c b/source/dnode/vnode/src/vnd/vnodeSync.c
index 882ee912cd..d8f3110a16 100644
--- a/source/dnode/vnode/src/vnd/vnodeSync.c
+++ b/source/dnode/vnode/src/vnd/vnodeSync.c
@@ -147,6 +147,10 @@ SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
pFsm->FpPreCommitCb = vnodeSyncPreCommitMsg;
pFsm->FpRollBackCb = vnodeSyncRollBackMsg;
pFsm->FpGetSnapshot = vnodeSyncGetSnapshot;
- pFsm->FpRestoreFinish = NULL;
+ pFsm->FpRestoreFinishCb = NULL;
+ pFsm->FpSnapshotRead = NULL;
+ pFsm->FpSnapshotApply = NULL;
+ pFsm->FpReConfigCb = NULL;
+
return pFsm;
}
\ No newline at end of file
diff --git a/source/dnode/vnode/src/vnd/vnodeUtil.c b/source/dnode/vnode/src/vnd/vnodeUtil.c
new file mode 100644
index 0000000000..cd942099bc
--- /dev/null
+++ b/source/dnode/vnode/src/vnd/vnodeUtil.c
@@ -0,0 +1,45 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "vnd.h"
+
+int32_t vnodeRealloc(void** pp, int32_t size) {
+ uint8_t* p = NULL;
+ int32_t csize = 0;
+
+ if (*pp) {
+ p = (uint8_t*)(*pp) - sizeof(int32_t);
+ csize = *(int32_t*)p;
+ }
+
+ if (csize >= size) {
+ return 0;
+ }
+
+ p = (uint8_t*)taosMemoryRealloc(p, size);
+ if (p == NULL) {
+ return TSDB_CODE_OUT_OF_MEMORY;
+ }
+ *(int32_t*)p = size;
+ *pp = p + sizeof(int32_t);
+
+ return 0;
+}
+
+void vnodeFree(void* p) {
+ if (p) {
+ taosMemoryFree(((uint8_t*)p) - sizeof(int32_t));
+ }
+}
\ No newline at end of file
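
A short usage sketch of the size-prefixed helpers above; it assumes callers only pass pointers produced by vnodeRealloc back into vnodeRealloc/vnodeFree, and vnodeBufExample is an illustrative name, not part of this patch:

// Illustration only: the pointer handed back sits just past a hidden int32_t capacity header.
static void vnodeBufExample(void) {
  void* buf = NULL;
  if (vnodeRealloc(&buf, 128) != 0) return;
  vnodeRealloc(&buf, 64);    // no-op: the recorded capacity (128) already covers the request
  vnodeRealloc(&buf, 256);   // reallocates and updates the hidden capacity header
  vnodeFree(buf);            // frees from buf - sizeof(int32_t), the real allocation start
}
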
diff --git a/source/libs/catalog/src/ctgCache.c b/source/libs/catalog/src/ctgCache.c
index 6335a056b9..19b3e5f172 100644
--- a/source/libs/catalog/src/ctgCache.c
+++ b/source/libs/catalog/src/ctgCache.c
@@ -1457,10 +1457,15 @@ _return:
CTG_RET(code);
}
+void ctgUpdateThreadFuncUnexpectedStopped(void) {
+ if (CTG_IS_LOCKED(&gCtgMgmt.lock) == TD_RWLATCH_WRITE_FLAG_COPY) CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
+}
void* ctgUpdateThreadFunc(void* param) {
setThreadName("catalog");
-
+#ifdef WINDOWS
+ atexit(ctgUpdateThreadFuncUnexpectedStopped);
+#endif
qInfo("catalog update thread started");
CTG_LOCK(CTG_READ, &gCtgMgmt.lock);
@@ -1494,7 +1499,7 @@ void* ctgUpdateThreadFunc(void* param) {
ctgdShowClusterCache(pCtg);
}
- CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
+ if (CTG_IS_LOCKED(&gCtgMgmt.lock)) CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
qInfo("catalog update thread stopped");
diff --git a/source/libs/command/src/explain.c b/source/libs/command/src/explain.c
index 1acc9368c8..26a0f3bf6c 100644
--- a/source/libs/command/src/explain.c
+++ b/source/libs/command/src/explain.c
@@ -16,6 +16,7 @@
#include "commandInt.h"
#include "plannodes.h"
#include "query.h"
+#include "tcommon.h"
int32_t qExplainGenerateResNode(SPhysiNode *pNode, SExplainGroup *group, SExplainResNode **pRes);
int32_t qExplainAppendGroupResRows(void *pCtx, int32_t groupId, int32_t level);
@@ -637,13 +638,48 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
- EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT, pSortNode->pSortKeys->length);
+
+ SDataBlockDescNode* pDescNode = pSortNode->node.pOutputDataBlockDesc;
+ EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT, nodesGetOutputNumFromSlotList(pDescNode->pSlots));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
- EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pSortNode->node.pOutputDataBlockDesc->totalRowSize);
+ EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pDescNode->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
+ if (EXPLAIN_MODE_ANALYZE == ctx->mode) {
+ // sort key
+ EXPLAIN_ROW_NEW(level, "Sort Key: ");
+ if (pResNode->pExecInfo) {
+ for (int32_t i = 0; i < LIST_LENGTH(pSortNode->pSortKeys); ++i) {
+ SOrderByExprNode *ptn = nodesListGetNode(pSortNode->pSortKeys, i);
+ EXPLAIN_ROW_APPEND("%s ", nodesGetNameFromColumnNode(ptn->pExpr));
+ }
+ }
+
+ EXPLAIN_ROW_END();
+ QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
+
+ // sort method
+ EXPLAIN_ROW_NEW(level, "Sort Method: ");
+
+ int32_t nodeNum = taosArrayGetSize(pResNode->pExecInfo);
+ SExplainExecInfo *execInfo = taosArrayGet(pResNode->pExecInfo, 0);
+ SSortExecInfo * pExecInfo = (SSortExecInfo *)execInfo->verboseInfo;
+ EXPLAIN_ROW_APPEND("%s", pExecInfo->sortMethod == SORT_QSORT_T ? "quicksort" : "merge sort");
+ if (pExecInfo->sortBuffer > 1024 * 1024) {
+ EXPLAIN_ROW_APPEND(" Buffers:%.2f Mb", pExecInfo->sortBuffer / (1024 * 1024.0));
+ } else if (pExecInfo->sortBuffer > 1024) {
+ EXPLAIN_ROW_APPEND(" Buffers:%.2f Kb", pExecInfo->sortBuffer / (1024.0));
+ } else {
+ EXPLAIN_ROW_APPEND(" Buffers:%d b", pExecInfo->sortBuffer);
+ }
+
+ EXPLAIN_ROW_APPEND(" loops:%d", pExecInfo->loops);
+ EXPLAIN_ROW_END();
+ QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
+ }
+
if (verbose) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT,
@@ -792,13 +828,8 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
-// EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pPartNode->length);
-// EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pPartNode->node.pOutputDataBlockDesc->totalRowSize);
-// if (pPartNode->pGroupKeys) {
-// EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
-// EXPLAIN_ROW_APPEND(EXPLAIN_GROUPS_FORMAT, pPartNode->pGroupKeys->length);
-// }
+
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h
index 8ac320b9aa..53dddd9c22 100644
--- a/source/libs/executor/inc/executorimpl.h
+++ b/source/libs/executor/inc/executorimpl.h
@@ -361,6 +361,18 @@ typedef struct SCatchSupporter {
int64_t* pKeyBuf;
} SCatchSupporter;
+typedef struct SStreamAggSupporter {
+ SArray* pResultRows; // SResultWindowInfo
+ int32_t keySize;
+ char* pKeyBuf; // window key buffer
+ SDiskbasedBuf* pResultBuf; // query result buffer based on blocked-wised disk file
+ int32_t resultRowSize; // the result buffer size for each result row, with the meta data size for each row
+} SStreamAggSupporter;
+
+typedef struct SessionWindowSupporter {
+ SStreamAggSupporter* pStreamAggSup;
+ int64_t gap;
+} SessionWindowSupporter;
typedef struct SStreamBlockScanInfo {
SArray* pBlockLists; // multiple SSDatablock.
SSDataBlock* pRes; // result SSDataBlock
@@ -385,6 +397,7 @@ typedef struct SStreamBlockScanInfo {
SInterval interval; // if the upstream is an interval operator, the interval info is also kept here.
SCatchSupporter childAggSup;
SArray* childIds;
+ SessionWindowSupporter sessionSup;
} SStreamBlockScanInfo;
typedef struct SSysTableScanInfo {
@@ -550,6 +563,27 @@ typedef struct SSessionAggOperatorInfo {
STimeWindowAggSupp twAggSup;
} SSessionAggOperatorInfo;
+typedef struct SResultWindowInfo {
+ SResultRowPosition pos;
+ STimeWindow win;
+ bool isOutput;
+} SResultWindowInfo;
+
+typedef struct SStreamSessionAggOperatorInfo {
+ SOptrBasicInfo binfo;
+ SStreamAggSupporter streamAggSup;
+ SGroupResInfo groupResInfo;
+ int64_t gap; // session window gap
+ int32_t primaryTsIndex; // primary timestamp slot id
+ int32_t order; // current SSDataBlock scan order
+ STimeWindowAggSupp twAggSup;
+ SSDataBlock* pWinBlock; // window result
+ SqlFunctionCtx* pDummyCtx; // for combine
+ SSDataBlock* pDelRes;
+ SHashObj* pStDeleted;
+ void* pDelIterator;
+} SStreamSessionAggOperatorInfo;
+
typedef struct STimeSliceOperatorInfo {
SOptrBasicInfo binfo;
SInterval interval;
@@ -588,18 +622,14 @@ typedef struct SSortedMergeOperatorInfo {
typedef struct SSortOperatorInfo {
SOptrBasicInfo binfo;
- uint32_t sortBufSize; // max buffer size for in-memory sort
+ uint32_t sortBufSize; // max buffer size for in-memory sort
SArray* pSortInfo;
SSortHandle* pSortHandle;
SArray* pColMatchInfo; // for index map from table scan output
int32_t bufPageSize;
- // TODO extact struct
- int64_t startTs; // sort start time
- uint64_t sortElapsed; // sort elapsed time, time to flush to disk not included.
- uint64_t totalSize; // total load bytes from remote
- uint64_t totalRows; // total number of rows
- uint64_t totalElapsed; // total elapsed time
+ int64_t startTs; // sort start time
+ uint64_t sortElapsed; // sort elapsed time, time to flush to disk not included.
} SSortOperatorInfo;
typedef struct STagFilterOperatorInfo {
@@ -727,6 +757,9 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SExprInfo*
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, SNode* pOnCondition, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, SExprInfo* pExpr, int32_t numOfOutput, SSDataBlock* pResBlock, SArray* pColMatchInfo, STableGroupInfo* pTableGroupInfo, SExecTaskInfo* pTaskInfo);
+SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream,
+ SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap,
+ int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo);
#if 0
SOperatorInfo* createTableSeqScanOperatorInfo(void* pTsdbReadHandle, STaskRuntimeEnv* pRuntimeEnv);
#endif
@@ -761,13 +794,19 @@ void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi
int32_t* length);
STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts,
SInterval* pInterval, int32_t precision, STimeWindow* win);
-int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, int32_t startPos,
- TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item,
- int32_t order);
+int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn,
+ int32_t startPos, TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item,
+ int32_t order);
int32_t binarySearchForKey(char* pValue, int num, TSKEY key, int order);
-int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize,
- const char* pKey, const char* pDir);
-
+int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey,
+ const char* pDir);
+int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey);
+SResultRow* getNewResultRow_rv(SDiskbasedBuf* pResultBuf, int64_t tableGroupId, int32_t interBufSize);
+SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap,
+ int32_t* pIndex);
+int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows,
+ int32_t start, int64_t gap, SHashObj* pStDeleted);
+bool functionNeedToExecute(SqlFunctionCtx* pCtx);
#ifdef __cplusplus
}
#endif
diff --git a/source/libs/executor/inc/tsort.h b/source/libs/executor/inc/tsort.h
index d74628a72f..c8b1b3ee51 100644
--- a/source/libs/executor/inc/tsort.h
+++ b/source/libs/executor/inc/tsort.h
@@ -137,6 +137,14 @@ void* tsortGetValue(STupleHandle* pVHandle, int32_t colId);
*/
SSDataBlock* tsortGetSortedDataBlock(const SSortHandle* pSortHandle);
+/**
+ * return the sort execution information.
+ *
+ * @param pHandle
+ * @return
+ */
+SSortExecInfo tsortGetSortExecInfo(SSortHandle* pHandle);
+
#ifdef __cplusplus
}
#endif
diff --git a/source/libs/executor/src/executorimpl.c b/source/libs/executor/src/executorimpl.c
index e16b60e58b..593b79ecc8 100644
--- a/source/libs/executor/src/executorimpl.c
+++ b/source/libs/executor/src/executorimpl.c
@@ -98,7 +98,6 @@ static int32_t getExprFunctionId(SExprInfo* pExprInfo) {
}
static void doSetTagValueToResultBuf(char* output, const char* val, int16_t type, int16_t bytes);
-static bool functionNeedToExecute(SqlFunctionCtx* pCtx);
static void setBlockStatisInfo(SqlFunctionCtx* pCtx, SExprInfo* pExpr, SSDataBlock* pSDataBlock);
@@ -937,7 +936,7 @@ int32_t setGroupResultOutputBuf(SOptrBasicInfo* binfo, int32_t numOfCols, char*
return TSDB_CODE_SUCCESS;
}
-static bool functionNeedToExecute(SqlFunctionCtx* pCtx) {
+bool functionNeedToExecute(SqlFunctionCtx* pCtx) {
struct SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
// in case of timestamp column, always generated results.
@@ -1748,8 +1747,7 @@ void setFunctionResultOutput(SOptrBasicInfo* pInfo, SAggSupporter* pSup, int32_t
SResultRow* pRow = doSetResultOutBufByKey(pSup->pResultBuf, pResultRowInfo, (char*)&tid, sizeof(tid), true, groupId,
pTaskInfo, false, pSup);
- ASSERT(pDataBlock->info.numOfCols == numOfExprs);
- for (int32_t i = 0; i < pDataBlock->info.numOfCols; ++i) {
+ for (int32_t i = 0; i < numOfExprs; ++i) {
struct SResultRowEntryInfo* pEntry = getResultCell(pRow, i, rowCellInfoOffset);
cleanupResultRowEntry(pEntry);
@@ -1757,7 +1755,7 @@ void setFunctionResultOutput(SOptrBasicInfo* pInfo, SAggSupporter* pSup, int32_t
pCtx[i].scanFlag = stage;
}
- initCtxOutputBuffer(pCtx, pDataBlock->info.numOfCols);
+ initCtxOutputBuffer(pCtx, numOfExprs);
}
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t* bufCapacity, int32_t numOfInputRows) {
@@ -4660,6 +4658,19 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
pOptr =
createSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo);
+ } else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW == type) {
+ SSessionWinodwPhysiNode* pSessionNode = (SSessionWinodwPhysiNode*)pPhyNode;
+
+ STimeWindowAggSupp as = {.waterMark = pSessionNode->window.watermark,
+ .calTrigger = pSessionNode->window.triggerType};
+
+ SExprInfo* pExprInfo = createExprInfo(pSessionNode->window.pFuncs, NULL, &num);
+ SSDataBlock* pResBlock = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
+ int32_t tsSlotId = ((SColumnNode*)pSessionNode->window.pTspk)->slotId;
+
+ pOptr =
+ createStreamSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo);
+
} else if (QUERY_NODE_PHYSICAL_PLAN_PARTITION == type) {
SPartitionPhysiNode* pPartNode = (SPartitionPhysiNode*)pPhyNode;
SArray* pColList = extractPartitionColInfo(pPartNode->pPartitionKeys);
@@ -5151,15 +5162,37 @@ int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo
return TSDB_CODE_SUCCESS;
}
-int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize, const char* pKey,
- const char* pDir) {
+int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey,
+ const char* pDir) {
pCatchSup->keySize = sizeof(int64_t) + sizeof(int64_t) + sizeof(TSKEY);
pCatchSup->pKeyBuf = taosMemoryCalloc(1, pCatchSup->keySize);
- int32_t pageSize = rowSize * 32;
- int32_t bufSize = pageSize * 4096;
- createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir);
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
pCatchSup->pWindowHashTable = taosHashInit(10000, hashFn, true, HASH_NO_LOCK);
- ;
- return TSDB_CODE_SUCCESS;
+ if (pCatchSup->pKeyBuf == NULL || pCatchSup->pWindowHashTable == NULL) {
+ return TSDB_CODE_OUT_OF_MEMORY;
+ }
+
+ int32_t pageSize = rowSize * 32;
+ int32_t bufSize = pageSize * 4096;
+ return createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir);
+}
+
+int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey) {
+ pSup->keySize = sizeof(int64_t) + sizeof(TSKEY);
+ pSup->pKeyBuf = taosMemoryCalloc(1, pSup->keySize);
+ pSup->pResultRows = taosArrayInit(1024, sizeof(SResultWindowInfo));
+ if (pSup->pKeyBuf == NULL || pSup->pResultRows == NULL) {
+ return TSDB_CODE_OUT_OF_MEMORY;
+ }
+
+ int32_t pageSize = 4096;
+ while (pageSize < pSup->resultRowSize * 4) {
+ pageSize <<= 1u;
+ }
+ // at least four pages need to be in buffer
+ int32_t bufSize = 4096 * 256;
+ if (bufSize <= pageSize) {
+ bufSize = pageSize * 4;
+ }
+ return createDiskbasedBuf(&pSup->pResultBuf, pageSize, bufSize, pKey, "/tmp/");
}
diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c
index f77b80c533..17238bbd9b 100644
--- a/source/libs/executor/src/scanoperator.c
+++ b/source/libs/executor/src/scanoperator.c
@@ -645,6 +645,10 @@ static void doClearBufferedBlocks(SStreamBlockScanInfo* pInfo) {
taosArrayClear(pInfo->pBlockLists);
}
+static bool isSessionWindow(SStreamBlockScanInfo* pInfo) {
+ return pInfo->sessionSup.pStreamAggSup != NULL;
+}
+
static bool prepareDataScan(SStreamBlockScanInfo* pInfo) {
SSDataBlock* pSDB = pInfo->pUpdateRes;
if (pInfo->updateResIndex < pSDB->info.rows) {
@@ -652,13 +656,25 @@ static bool prepareDataScan(SStreamBlockScanInfo* pInfo) {
TSKEY *tsCols = (TSKEY*)pColDataInfo->pData;
SResultRowInfo dumyInfo;
dumyInfo.cur.pageId = -1;
- STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex], &pInfo->interval,
- pInfo->interval.precision, NULL);
+ STimeWindow win;
+ if (isSessionWindow(pInfo)) {
+ SStreamAggSupporter* pAggSup = pInfo->sessionSup.pStreamAggSup;
+ int64_t gap = pInfo->sessionSup.gap;
+ int32_t winIndex = 0;
+ SResultWindowInfo* pCurWin = getSessionTimeWindow(pAggSup->pResultRows,
+ tsCols[pInfo->updateResIndex], gap, &winIndex);
+ win = pCurWin->win;
+ pInfo->updateResIndex += updateSessionWindowInfo(pCurWin, tsCols, pSDB->info.rows,
+ pInfo->updateResIndex, gap, NULL);
+ } else {
+ win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex],
+ &pInfo->interval, pInfo->interval.precision, NULL);
+ pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex,
+ win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC);
+ }
STableScanInfo* pTableScanInfo = pInfo->pOperatorDumy->info;
pTableScanInfo->cond.twindow = win;
tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond);
- pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex,
- win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC);
pTableScanInfo->scanTimes = 0;
return true;
} else {
@@ -790,7 +806,7 @@ static SSDataBlock* getDataFromCatch(SStreamBlockScanInfo* pInfo) {
SSDataBlock* pDB = createOneDataBlock(pInfo->pRes, false);
blockDataFromBuf(pDB, buf);
SSDataBlock* pSub = blockDataExtractBlock(pDB, pos->rowId, 1);
- blockDataMerge(pInfo->pRes, pSub, NULL);
+ blockDataMerge(pInfo->pRes, pSub);
blockDataDestroy(pDB);
blockDataDestroy(pSub);
}
@@ -848,6 +864,7 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
} else if (pInfo->scanMode == STREAM_SCAN_FROM_UPDATERES) {
blockDataCleanup(pInfo->pRes);
pInfo->scanMode = STREAM_SCAN_FROM_DATAREADER;
+ prepareDataScan(pInfo);
return pInfo->pUpdateRes;
} else if (pInfo->scanMode == STREAM_SCAN_FROM_DATAREADER) {
SSDataBlock* pSDB = doDataScan(pInfo);
@@ -924,13 +941,12 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
if (rows == 0) {
pOperator->status = OP_EXEC_DONE;
- } else if (pInfo->interval.interval > 0) {
+ } else if (pInfo->pUpdateInfo) {
SSDataBlock* upRes = getUpdateDataBlock(pInfo, true); //TODO(liuyao) get invertible from plan
if (upRes) {
pInfo->pUpdateRes = upRes;
if (upRes->info.type == STREAM_REPROCESS) {
pInfo->updateResIndex = 0;
- prepareDataScan(pInfo);
pInfo->scanMode = STREAM_SCAN_FROM_UPDATERES;
} else if (upRes->info.type == STREAM_INVERT) {
pInfo->scanMode = STREAM_SCAN_FROM_RES;
@@ -1001,10 +1017,9 @@ SOperatorInfo* createStreamScanOperatorInfo(void* streamReadHandle, void* pDataR
pInfo->scanMode = STREAM_SCAN_FROM_READERHANDLE;
pInfo->pOperatorDumy = pOperatorDumy;
pInfo->interval = pSTInfo->interval;
+ pInfo->sessionSup = (SessionWindowSupporter){.pStreamAggSup = NULL, .gap = -1};
- size_t childKeyBufSize = sizeof(int64_t) + sizeof(int64_t) + sizeof(TSKEY);
- initCatchSupporter(&pInfo->childAggSup, 1024, childKeyBufSize,
- "StreamFinalInterval", TD_TMP_DIR_PATH); // TODO(liuyao) get row size from phy plan
+ initCatchSupporter(&pInfo->childAggSup, 1024, "StreamFinalInterval", "/tmp/"); // TODO(liuyao) get row size from phy plan
pOperator->name = "StreamBlockScanOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN;
@@ -1031,8 +1046,9 @@ static void destroySysScanOperator(void* param, int32_t numOfOutput) {
blockDataDestroy(pInfo->pRes);
const char* name = tNameGetTableName(&pInfo->name);
- if (strncasecmp(name, TSDB_INS_TABLE_USER_TABLES, TSDB_TABLE_FNAME_LEN) == 0) {
+ if (strncasecmp(name, TSDB_INS_TABLE_USER_TABLES, TSDB_TABLE_FNAME_LEN) == 0 || pInfo->pCur != NULL) {
metaCloseTbCursor(pInfo->pCur);
+ pInfo->pCur = NULL;
}
taosArrayDestroy(pInfo->scanCols);
diff --git a/source/libs/executor/src/sortoperator.c b/source/libs/executor/src/sortoperator.c
index 990dc0f200..8f5fa88070 100644
--- a/source/libs/executor/src/sortoperator.c
+++ b/source/libs/executor/src/sortoperator.c
@@ -2,6 +2,9 @@
#include "executorimpl.h"
static SSDataBlock* doSort(SOperatorInfo* pOperator);
+static int32_t doOpenSortOperator(SOperatorInfo* pOperator);
+static int32_t getExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len);
+
static void destroyOrderOperatorInfo(void* param, int32_t numOfOutput);
SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSDataBlock* pResBlock, SArray* pSortInfo, SExprInfo* pExprInfo, int32_t numOfCols,
@@ -35,7 +38,7 @@ SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSDataBlock* pR
pOperator->pTaskInfo = pTaskInfo;
pOperator->fpSet =
- createOperatorFpSet(operatorDummyOpenFn, doSort, NULL, NULL, destroyOrderOperatorInfo, NULL, NULL, NULL);
+ createOperatorFpSet(doOpenSortOperator, doSort, NULL, NULL, destroyOrderOperatorInfo, NULL, NULL, getExplainExecInfo);
int32_t code = appendDownstream(pOperator, &downstream, 1);
return pOperator;
@@ -121,20 +124,17 @@ void applyScalarFunction(SSDataBlock* pBlock, void* param) {
}
}
-SSDataBlock* doSort(SOperatorInfo* pOperator) {
- if (pOperator->status == OP_EXEC_DONE) {
- return NULL;
- }
-
- SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
+int32_t doOpenSortOperator(SOperatorInfo* pOperator) {
SSortOperatorInfo* pInfo = pOperator->info;
+ SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
- if (pOperator->status == OP_RES_TO_RETURN) {
- return getSortedBlockData(pInfo->pSortHandle, pInfo->binfo.pRes, pOperator->resultInfo.capacity, pInfo->pColMatchInfo);
+ if (OPTR_IS_OPENED(pOperator)) {
+ return TSDB_CODE_SUCCESS;
}
-// pInfo->binfo.pRes is not equalled to the input datablock.
-// int32_t numOfBufPage = pInfo->sortBufSize / pInfo->bufPageSize;
+ pInfo->startTs = taosGetTimestampUs();
+
+  // pInfo->binfo.pRes is not the same block as the input datablock.
pInfo->pSortHandle = tsortCreateSortHandle(pInfo->pSortInfo, pInfo->pColMatchInfo, SORT_SINGLESOURCE_SORT,
-1, -1, NULL, pTaskInfo->id.str);
@@ -146,12 +146,39 @@ SSDataBlock* doSort(SOperatorInfo* pOperator) {
int32_t code = tsortOpen(pInfo->pSortHandle);
taosMemoryFreeClear(ps);
+
if (code != TSDB_CODE_SUCCESS) {
longjmp(pTaskInfo->env, terrno);
}
+ pOperator->cost.openCost = (taosGetTimestampUs() - pInfo->startTs)/1000.0;
pOperator->status = OP_RES_TO_RETURN;
- return getSortedBlockData(pInfo->pSortHandle, pInfo->binfo.pRes, pOperator->resultInfo.capacity, pInfo->pColMatchInfo);
+
+ OPTR_SET_OPENED(pOperator);
+ return TSDB_CODE_SUCCESS;
+}
+
+SSDataBlock* doSort(SOperatorInfo* pOperator) {
+ if (pOperator->status == OP_EXEC_DONE) {
+ return NULL;
+ }
+
+ SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
+ SSortOperatorInfo* pInfo = pOperator->info;
+
+ int32_t code = pOperator->fpSet._openFn(pOperator);
+ if (code != TSDB_CODE_SUCCESS) {
+ longjmp(pTaskInfo->env, code);
+ }
+
+ SSDataBlock* pBlock = getSortedBlockData(pInfo->pSortHandle, pInfo->binfo.pRes, pOperator->resultInfo.capacity, pInfo->pColMatchInfo);
+
+ if (pBlock != NULL) {
+ pOperator->resultInfo.totalRows += pBlock->info.rows;
+ } else {
+ doSetOperatorCompleted(pOperator);
+ }
+ return pBlock;
}
void destroyOrderOperatorInfo(void* param, int32_t numOfOutput) {
@@ -161,3 +188,15 @@ void destroyOrderOperatorInfo(void* param, int32_t numOfOutput) {
taosArrayDestroy(pInfo->pSortInfo);
taosArrayDestroy(pInfo->pColMatchInfo);
}
+
+int32_t getExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len) {
+ ASSERT(pOptr != NULL);
+ SSortExecInfo* pInfo = taosMemoryCalloc(1, sizeof(SSortExecInfo));
+
+ SSortOperatorInfo *pOperatorInfo = (SSortOperatorInfo*)pOptr->info;
+
+ *pInfo = tsortGetSortExecInfo(pOperatorInfo->pSortHandle);
+ *pOptrExplain = pInfo;
+ *len = sizeof(SSortExecInfo);
+ return TSDB_CODE_SUCCESS;
+}
diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c
index deca2f3804..9346dbf54a 100644
--- a/source/libs/executor/src/timewindowoperator.c
+++ b/source/libs/executor/src/timewindowoperator.c
@@ -9,6 +9,7 @@ typedef enum SResultTsInterpType {
} SResultTsInterpType;
static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator);
+static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator);
/*
* There are two cases to handle:
@@ -1039,13 +1040,9 @@ static void setInverFunction(SqlFunctionCtx* pCtx, int32_t num, EStreamType type
}
}
-void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData,
- int16_t bytes, uint64_t groupId, int32_t numOfOutput) {
- SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId);
- SResultRowPosition* p1 =
- (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf,
- GET_RES_WINDOW_KEY_LEN(bytes));
- SResultRow* pResult = getResultRowByPos(pSup->pResultBuf, p1);
+void doClearWindowImpl(SResultRowPosition* p1, SDiskbasedBuf* pResultBuf,
+ SOptrBasicInfo* pBinfo, int32_t numOfOutput) {
+ SResultRow* pResult = getResultRowByPos(pResultBuf, p1);
SqlFunctionCtx* pCtx = pBinfo->pCtx;
for (int32_t i = 0; i < numOfOutput; ++i) {
pCtx[i].resultInfo = getResultCell(pResult, i, pBinfo->rowCellInfoOffset);
@@ -1060,6 +1057,15 @@ void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData,
}
}
+void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData,
+ int16_t bytes, uint64_t groupId, int32_t numOfOutput) {
+ SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId);
+ SResultRowPosition* p1 =
+ (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf,
+ GET_RES_WINDOW_KEY_LEN(bytes));
+ doClearWindowImpl(p1, pSup->pResultBuf, pBinfo, numOfOutput);
+}
+
static void doClearWindows(SAggSupporter* pSup, SOptrBasicInfo* pBinfo,
SInterval* pIntrerval, int32_t tsIndex, int32_t numOfOutput, SSDataBlock* pBlock) {
SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex);
@@ -1112,8 +1118,8 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
}
if (pBlock->info.type == STREAM_REPROCESS) {
- doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval,
- pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock);
+ doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval, 0,
+ pOperator->numOfExprs, pBlock);
qDebug("%s clear existed time window results for updates checked", GET_TASKID(pTaskInfo));
continue;
}
@@ -1644,9 +1650,10 @@ _error:
return NULL;
}
-static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo, SSDataBlock* pSDataBlock,
+static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBlock,
int32_t tableGroupId) {
SStreamFinalIntervalOperatorInfo* pInfo = (SStreamFinalIntervalOperatorInfo*)pOperatorInfo->info;
+ SResultRowInfo* pResultRowInfo = &(pInfo->binfo.resultRowInfo);
SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo;
int32_t numOfOutput = pOperatorInfo->numOfExprs;
SArray* pUpdated = taosArrayInit(4, POINTER_BYTES);
@@ -1659,7 +1666,10 @@ static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pRes
if (pSDataBlock->pDataBlock != NULL) {
SColumnInfoData* pColDataInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
tsCols = (int64_t*)pColDataInfo->pData;
+ } else {
+ return pUpdated;
}
+
int32_t startPos = ascScan ? 0 : (pSDataBlock->info.rows - 1);
TSKEY ts = getStartTsKey(&pSDataBlock->info.window, tsCols, pSDataBlock->info.rows, ascScan);
STimeWindow nextWin = getActiveTimeWindow(pInfo->aggSup.pResultBuf, pResultRowInfo, ts,
@@ -1720,7 +1730,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock);
continue;
}
- pUpdated = doHashInterval(pOperator, &pInfo->binfo.resultRowInfo, pBlock, 0);
+ pUpdated = doHashInterval(pOperator, pBlock, 0);
}
finalizeUpdatedResult(pOperator->numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pInfo->binfo.rowCellInfoOffset);
@@ -1730,3 +1740,534 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
pOperator->status = OP_RES_TO_RETURN;
return pInfo->binfo.pRes->info.rows == 0 ? NULL : pInfo->binfo.pRes;
}
+
+void destroyStreamAggSupporter(SStreamAggSupporter* pSup) {
+ taosArrayDestroy(pSup->pResultRows);
+ taosMemoryFreeClear(pSup->pKeyBuf);
+ destroyDiskbasedBuf(pSup->pResultBuf);
+}
+
+void destroyStreamSessionAggOperatorInfo(void* param, int32_t numOfOutput) {
+ SStreamSessionAggOperatorInfo* pInfo = (SStreamSessionAggOperatorInfo*)param;
+ doDestroyBasicInfo(&pInfo->binfo, numOfOutput);
+ destroyStreamAggSupporter(&pInfo->streamAggSup);
+ cleanupGroupResInfo(&pInfo->groupResInfo);
+}
+
+int32_t initBiasicInfo(SOptrBasicInfo* pBasicInfo, SExprInfo* pExprInfo,
+ int32_t numOfCols, SSDataBlock* pResultBlock, SDiskbasedBuf* pResultBuf) {
+ pBasicInfo->pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pBasicInfo->rowCellInfoOffset);
+ pBasicInfo->pRes = pResultBlock;
+ for (int32_t i = 0; i < numOfCols; ++i) {
+ pBasicInfo->pCtx[i].pBuf = pResultBuf;
+ }
+ return TSDB_CODE_SUCCESS;
+}
+
+void initDummyFunction(SqlFunctionCtx* pDummy, SqlFunctionCtx* pCtx, int32_t nums) {
+ for (int i = 0; i < nums; i++) {
+ pDummy[i].functionId = pCtx[i].functionId;
+ }
+}
+void initDownStream(SOperatorInfo* downstream, SStreamSessionAggOperatorInfo* pInfo) {
+ ASSERT(downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN);
+ SStreamBlockScanInfo* pScanInfo = downstream->info;
+ pScanInfo->sessionSup =
+ (SessionWindowSupporter){.pStreamAggSup = &pInfo->streamAggSup, .gap = pInfo->gap};
+ pScanInfo->pUpdateInfo = updateInfoInit(60000, TSDB_TIME_PRECISION_MILLI, 60000 * 60 * 6);
+}
+
+SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream,
+ SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap,
+ int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo) {
+ SStreamSessionAggOperatorInfo* pInfo =
+ taosMemoryCalloc(1, sizeof(SStreamSessionAggOperatorInfo));
+ SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
+ if (pInfo == NULL || pOperator == NULL) {
+ goto _error;
+ }
+
+ initResultSizeInfo(pOperator, 4096);
+
+ int32_t code = initStreamAggSupporter(&pInfo->streamAggSup, "StreamSessionAggOperatorInfo");
+ if (code != TSDB_CODE_SUCCESS) {
+ goto _error;
+ }
+
+ code = initBiasicInfo(&pInfo->binfo, pExprInfo, numOfCols, pResBlock,
+ pInfo->streamAggSup.pResultBuf);
+ if (code != TSDB_CODE_SUCCESS) {
+ goto _error;
+ }
+ pInfo->streamAggSup.resultRowSize = getResultRowSize(pInfo->binfo.pCtx, numOfCols);
+
+ pInfo->pDummyCtx = (SqlFunctionCtx*)taosMemoryCalloc(numOfCols, sizeof(SqlFunctionCtx));
+ if (pInfo->pDummyCtx == NULL) {
+ goto _error;
+ }
+ initDummyFunction(pInfo->pDummyCtx, pInfo->binfo.pCtx, numOfCols);
+
+ pInfo->twAggSup = *pTwAggSupp;
+ initResultRowInfo(&pInfo->binfo.resultRowInfo, 8);
+ initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window);
+
+ pInfo->primaryTsIndex = tsSlotId;
+ pInfo->gap = gap;
+ pInfo->binfo.pRes = pResBlock;
+ pInfo->order = TSDB_ORDER_ASC;
+ _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
+ pInfo->pStDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
+ pInfo->pDelIterator = NULL;
+ pInfo->pDelRes = createOneDataBlock(pResBlock, false);
+ blockDataEnsureCapacity(pInfo->pDelRes, 64);
+
+ pOperator->name = "StreamSessionWindowAggOperator";
+ pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW;
+ pOperator->blocking = true;
+ pOperator->status = OP_NOT_OPENED;
+ pOperator->pExpr = pExprInfo;
+ pOperator->numOfExprs = numOfCols;
+ pOperator->info = pInfo;
+ pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doStreamSessionWindowAgg,
+ NULL, NULL, destroyStreamSessionAggOperatorInfo, aggEncodeResultRow,
+ aggDecodeResultRow, NULL);
+ pOperator->pTaskInfo = pTaskInfo;
+ initDownStream(downstream, pInfo);
+ code = appendDownstream(pOperator, &downstream, 1);
+ return pOperator;
+
+_error:
+ if (pInfo != NULL) {
+ destroyStreamSessionAggOperatorInfo(pInfo, numOfCols);
+ }
+
+ taosMemoryFreeClear(pInfo);
+ taosMemoryFreeClear(pOperator);
+ pTaskInfo->code = code;
+ return NULL;
+}
+
+typedef int64_t (*__get_value_fn_t)(void* data, int32_t index);
+
+int32_t binarySearch(void* keyList, int num, TSKEY key, int order,
+ __get_value_fn_t getValuefn) {
+ int firstPos = 0, lastPos = num - 1, midPos = -1;
+ int numOfRows = 0;
+
+ if (num <= 0) return -1;
+ if (order == TSDB_ORDER_DESC) {
+ // find the first position which is smaller than the key
+ while (1) {
+ if (key >= getValuefn(keyList, lastPos)) return lastPos;
+ if (key == getValuefn(keyList, firstPos)) return firstPos;
+ if (key < getValuefn(keyList, firstPos)) return firstPos - 1;
+
+ numOfRows = lastPos - firstPos + 1;
+ midPos = (numOfRows >> 1) + firstPos;
+
+ if (key < getValuefn(keyList, midPos)) {
+ lastPos = midPos - 1;
+ } else if (key > getValuefn(keyList, midPos)) {
+ firstPos = midPos + 1;
+ } else {
+ break;
+ }
+ }
+
+ } else {
+ // find the first position which is bigger than the key
+ while (1) {
+ if (key <= getValuefn(keyList, firstPos)) return firstPos;
+ if (key == getValuefn(keyList, lastPos)) return lastPos;
+
+ if (key > getValuefn(keyList, lastPos)) {
+ lastPos = lastPos + 1;
+ if (lastPos >= num)
+ return -1;
+ else
+ return lastPos;
+ }
+
+ numOfRows = lastPos - firstPos + 1;
+ midPos = (numOfRows >> 1) + firstPos;
+
+ if (key < getValuefn(keyList, midPos)) {
+ lastPos = midPos - 1;
+ } else if (key > getValuefn(keyList, midPos)) {
+ firstPos = midPos + 1;
+ } else {
+ break;
+ }
+ }
+ }
+
+ return midPos;
+}
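
/* Illustration (not part of the patch): the DESC branch of binarySearch above is the one
 * getSessionTimeWindow relies on below; it returns the last index whose value is <= key,
 * or -1 when every value is larger. getInt64 and binarySearchExample are assumed helpers. */
static int64_t getInt64(void* data, int32_t index) { return ((int64_t*)data)[index]; }

static void binarySearchExample(void) {
  int64_t keys[] = {10, 20, 30};
  int32_t idx;
  idx = binarySearch(keys, 3, 25, TSDB_ORDER_DESC, getInt64);  // 1: keys[1] == 20 is the last value <= 25
  idx = binarySearch(keys, 3, 30, TSDB_ORDER_DESC, getInt64);  // 2: exact match on the last element
  idx = binarySearch(keys, 3, 5, TSDB_ORDER_DESC, getInt64);   // -1: every value is larger than the key
  (void)idx;
}
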
+
+int64_t getSessionWindowEndkey(void* data, int32_t index) {
+ SArray* pWinInfos = (SArray*) data;
+ SResultWindowInfo* pWin = taosArrayGet(pWinInfos, index);
+ return pWin->win.ekey;
+}
+static bool isInWindow(SResultWindowInfo* pWin, TSKEY ts, int64_t gap) {
+ int64_t sGap = ts - pWin->win.skey;
+ int64_t eGap = pWin->win.ekey - ts;
+  if ((sGap < 0 && sGap >= -gap) || (eGap < 0 && eGap >= -gap) || (sGap >= 0 && eGap >= 0)) {
+ return true;
+ }
+ return false;
+}
+
+static SResultWindowInfo* insertNewSessionWindow(SArray* pWinInfos, TSKEY ts,
+ int32_t index) {
+ SResultWindowInfo win =
+ {.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false};
+ return taosArrayInsert(pWinInfos, index, &win);
+}
+
+static SResultWindowInfo* addNewSessionWindow(SArray* pWinInfos, TSKEY ts) {
+ SResultWindowInfo win =
+ {.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false};
+ return taosArrayPush(pWinInfos, &win);
+}
+
+SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap,
+ int32_t* pIndex) {
+ int32_t size = taosArrayGetSize(pWinInfos);
+ if (size == 0) {
+ return addNewSessionWindow(pWinInfos, ts);
+ }
+ // find the first position which is smaller than the key
+ int32_t index = binarySearch(pWinInfos, size, ts, TSDB_ORDER_DESC,
+ getSessionWindowEndkey);
+ SResultWindowInfo* pWin = NULL;
+ if (index >= 0) {
+ pWin = taosArrayGet(pWinInfos, index);
+ if (isInWindow(pWin, ts, gap)) {
+ *pIndex = index;
+ return pWin;
+ }
+ }
+
+ if (index + 1 < size) {
+ pWin = taosArrayGet(pWinInfos, index + 1);
+ if (isInWindow(pWin, ts, gap)) {
+ *pIndex = index + 1;
+ return pWin;
+ }
+ }
+
+ if (index == size - 1) {
+ *pIndex = taosArrayGetSize(pWinInfos);
+ return addNewSessionWindow(pWinInfos, ts);
+ }
+ *pIndex = index;
+ return insertNewSessionWindow(pWinInfos, ts, index);
+}
+
+int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows,
+ int32_t start, int64_t gap, SHashObj* pStDeleted) {
+ for (int32_t i = start; i < rows; ++i) {
+ if (!isInWindow(pWinInfo, pTs[i], gap)) {
+ return i - start;
+ }
+ if (pWinInfo->win.skey > pTs[i]) {
+ if (pStDeleted && pWinInfo->isOutput) {
+ taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY));
+ pWinInfo->isOutput = false;
+ }
+ pWinInfo->win.skey = pTs[i];
+ }
+ pWinInfo->win.ekey = TMAX(pWinInfo->win.ekey, pTs[i]);
+ }
+ return rows - start;
+}
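
/* Illustration (not part of the patch): a concrete walk-through of the gap semantics above.
 * With gap = 5, timestamps 1 and 3 merge into one session window while 9 opens a new one;
 * sessionGapExample is an assumed helper that uses only APIs introduced in this patch. */
static void sessionGapExample(void) {
  SArray* pWins = taosArrayInit(4, sizeof(SResultWindowInfo));
  TSKEY   ts[3] = {1, 3, 9};
  int32_t idx = 0;

  SResultWindowInfo* pWin = getSessionTimeWindow(pWins, ts[0], 5, &idx);
  int32_t rows = updateSessionWindowInfo(pWin, ts, 3, 0, 5, NULL);
  // rows == 2: ts=1 and ts=3 land in the same window, now [1, 3], because 3 - ekey <= gap

  pWin = getSessionTimeWindow(pWins, ts[rows], 5, &idx);
  updateSessionWindowInfo(pWin, ts, 3, rows, 5, NULL);
  // ts=9 is more than gap past ekey=3, so it starts a second window [9, 9]

  taosArrayDestroy(pWins);
}
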
+
+static int32_t setWindowOutputBuf(SResultWindowInfo* pWinInfo, SResultRow** pResult,
+ SqlFunctionCtx* pCtx, int32_t groupId, int32_t numOfOutput,
+ int32_t* rowCellInfoOffset, SStreamAggSupporter* pAggSup, SExecTaskInfo* pTaskInfo) {
+ assert(pWinInfo->win.skey <= pWinInfo->win.ekey);
+  // too many time windows in the query
+ int32_t size = taosArrayGetSize(pAggSup->pResultRows);
+ if (size > MAX_INTERVAL_TIME_WINDOW) {
+ longjmp(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW);
+ }
+
+ if (pWinInfo->pos.pageId == -1) {
+ *pResult = getNewResultRow_rv(pAggSup->pResultBuf, groupId, pAggSup->resultRowSize);
+ if (*pResult == NULL) {
+ return TSDB_CODE_OUT_OF_MEMORY;
+ }
+ initResultRow(*pResult);
+
+ // add a new result set for a new group
+ pWinInfo->pos.pageId = (*pResult)->pageId;
+ pWinInfo->pos.offset = (*pResult)->offset;
+ } else {
+ *pResult = getResultRowByPos(pAggSup->pResultBuf, &pWinInfo->pos);
+ if (!(*pResult)) {
+ qError("getResultRowByPos return NULL, TID:%s", GET_TASKID(pTaskInfo));
+ return TSDB_CODE_FAILED;
+ }
+ }
+
+ // set time window for current result
+ (*pResult)->win = pWinInfo->win;
+ setResultRowInitCtx(*pResult, pCtx, numOfOutput, rowCellInfoOffset);
+ return TSDB_CODE_SUCCESS;
+}
+
+static int32_t doOneWindowAgg(SStreamSessionAggOperatorInfo* pInfo,
+ SSDataBlock* pSDataBlock, SResultWindowInfo* pCurWin, SResultRow** pResult,
+ int32_t startIndex, int32_t winRows, int32_t numOutput, SExecTaskInfo* pTaskInfo ) {
+ SColumnInfoData* pColDataInfo =
+ taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
+ TSKEY* tsCols = (int64_t*)pColDataInfo->pData;
+ int32_t code = setWindowOutputBuf(pCurWin, pResult, pInfo->binfo.pCtx, pSDataBlock->info.groupId,
+ numOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo);
+ if (code != TSDB_CODE_SUCCESS || (*pResult) == NULL) {
+ return TSDB_CODE_QRY_OUT_OF_MEMORY;
+ }
+ updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pCurWin->win, true);
+ doApplyFunctions(pTaskInfo, pInfo->binfo.pCtx, &pCurWin->win,
+ &pInfo->twAggSup.timeWindowData, startIndex, winRows, tsCols, pSDataBlock->info.rows,
+ numOutput, TSDB_ORDER_ASC);
+ return TSDB_CODE_SUCCESS;
+}
+
+int32_t copyWinInfoToDataBlock(SSDataBlock* pBlock, SStreamAggSupporter* pAggSup,
+ int32_t start, int32_t num, int32_t numOfExprs, SOptrBasicInfo* pBinfo) {
+ for (int32_t i = start; i < num; i += 1) {
+    SResultWindowInfo* pWinInfo = taosArrayGet(pAggSup->pResultRows, i);
+ SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, pWinInfo->pos.pageId);
+ SResultRow* pRow = (SResultRow*)((char*)bufPage + pWinInfo->pos.offset);
+ for (int32_t j = 0; j < numOfExprs; ++j) {
+ SResultRowEntryInfo* pResultInfo = getResultCell(pRow, j, pBinfo->rowCellInfoOffset);
+ SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, j);
+ char* in = GET_ROWCELL_INTERBUF(pBinfo->pCtx[j].resultInfo);
+ colDataAppend(pColInfoData, pBlock->info.rows, in, pResultInfo->isNullRes);
+ }
+ pBlock->info.rows += pRow->numOfRows;
+ releaseBufPage(pAggSup->pResultBuf, bufPage);
+ }
+ blockDataUpdateTsWindow(pBlock, -1);
+ return TSDB_CODE_SUCCESS;
+}
+
+int32_t getNumCompactWindow(SArray* pWinInfos, int32_t startIndex, int64_t gap) {
+ SResultWindowInfo* pCurWin = taosArrayGet(pWinInfos, startIndex);
+ int32_t size = taosArrayGetSize(pWinInfos);
+ // Just look for the window behind StartIndex
+ for (int32_t i = startIndex + 1; i < size; i++) {
+ SResultWindowInfo* pWinInfo = taosArrayGet(pWinInfos, i);
+ if (!isInWindow(pCurWin, pWinInfo->win.skey, gap)) {
+ return i - startIndex - 1;
+ }
+ }
+
+ return size - startIndex - 1;
+}
+
+void compactFunctions(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx,
+ int32_t numOfOutput, SExecTaskInfo* pTaskInfo) {
+ for (int32_t k = 0; k < numOfOutput; ++k) {
+ if (fmIsWindowPseudoColumnFunc(pDestCtx[k].functionId)) {
+ continue;
+ }
+ int32_t code = TSDB_CODE_SUCCESS;
+ if (functionNeedToExecute(&pDestCtx[k]) && pDestCtx[k].fpSet.combine != NULL) {
+ code = pDestCtx[k].fpSet.combine(&pDestCtx[k], &pSourceCtx[k]);
+ if (code != TSDB_CODE_SUCCESS) {
+ qError("%s apply functions error, code: %s", GET_TASKID(pTaskInfo), tstrerror(code));
+ pTaskInfo->code = code;
+ longjmp(pTaskInfo->env, code);
+ }
+ }
+ }
+}
+
+void compactTimeWindow(SStreamSessionAggOperatorInfo* pInfo, int32_t startIndex, int32_t num,
+ int32_t groupId, int32_t numOfOutput, SExecTaskInfo* pTaskInfo, SHashObj* pStUpdated, SHashObj* pStDeleted) {
+ SResultWindowInfo* pCurWin = taosArrayGet(pInfo->streamAggSup.pResultRows, startIndex);
+ SResultRow* pCurResult = NULL;
+ setWindowOutputBuf(pCurWin, &pCurResult, pInfo->binfo.pCtx, groupId,
+ numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo);
+ num += startIndex + 1;
+ ASSERT(num <= taosArrayGetSize(pInfo->streamAggSup.pResultRows));
+ // Just look for the window behind StartIndex
+ for (int32_t i = startIndex + 1; i < num; i++) {
+ SResultWindowInfo* pWinInfo = taosArrayGet(pInfo->streamAggSup.pResultRows, i);
+ SResultRow* pWinResult = NULL;
+ setWindowOutputBuf(pWinInfo, &pWinResult, pInfo->pDummyCtx, groupId,
+ numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo);
+ pCurWin->win.ekey = TMAX(pCurWin->win.ekey, pWinInfo->win.ekey);
+ compactFunctions(pInfo->binfo.pCtx, pInfo->pDummyCtx, numOfOutput, pTaskInfo);
+ taosHashRemove(pStUpdated, &pWinInfo->pos, sizeof(SResultRowPosition));
+ if (pWinInfo->isOutput) {
+ taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY));
+ pWinInfo->isOutput = false;
+ }
+ taosArrayRemove(pInfo->streamAggSup.pResultRows, i);
+ }
+}
+
+static void doStreamSessionWindowAggImpl(SOperatorInfo* pOperator,
+ SSDataBlock* pSDataBlock, SHashObj* pStUpdated, SHashObj* pStDeleted) {
+ SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
+ SStreamSessionAggOperatorInfo* pInfo = pOperator->info;
+ bool masterScan = true;
+ int32_t numOfOutput = pOperator->numOfExprs;
+ int64_t groupId = pSDataBlock->info.groupId;
+ int64_t gap = pInfo->gap;
+ int64_t code = TSDB_CODE_SUCCESS;
+
+ int32_t step = 1;
+ bool ascScan = true;
+ TSKEY* tsCols = NULL;
+ SResultRow* pResult = NULL;
+ int32_t winRows = 0;
+
+ if (pSDataBlock->pDataBlock != NULL) {
+ SColumnInfoData* pColDataInfo =
+ taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
+ tsCols = (int64_t*)pColDataInfo->pData;
+ } else {
+    return;
+ }
+
+ SStreamAggSupporter* pAggSup = &pInfo->streamAggSup;
+ for(int32_t i = 0; i < pSDataBlock->info.rows; ) {
+ int32_t winIndex = 0;
+ SResultWindowInfo* pCurWin =
+ getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex);
+ winRows =
+ updateSessionWindowInfo(pCurWin, tsCols, pSDataBlock->info.rows, i, pInfo->gap, pStDeleted);
+ code = doOneWindowAgg(pInfo, pSDataBlock, pCurWin, &pResult, i, winRows, numOfOutput, pTaskInfo);
+ if (code != TSDB_CODE_SUCCESS || pResult == NULL) {
+ longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
+ }
+ // window start(end) key interpolation
+ // doWindowBorderInterpolation(pOperatorInfo, pSDataBlock, pInfo->binfo.pCtx, pResult, &nextWin, startPos, forwardStep,
+ // pInfo->order, false);
+ int32_t winNum = getNumCompactWindow(pAggSup->pResultRows, winIndex, gap);
+ if (winNum > 0) {
+ compactTimeWindow(pInfo, winIndex, winNum, groupId, numOfOutput, pTaskInfo, pStUpdated, pStDeleted);
+ }
+
+ code = taosHashPut(pStUpdated, &pCurWin->pos, sizeof(SResultRowPosition), &(pCurWin->win.skey), sizeof(TSKEY));
+ if (code != TSDB_CODE_SUCCESS) {
+ longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
+ }
+ pCurWin->isOutput = true;
+ i += winRows;
+ }
+}
+
+static void doClearSessionWindows(SStreamAggSupporter* pAggSup, SOptrBasicInfo* pBinfo,
+ SSDataBlock* pBlock, int32_t tsIndex, int32_t numOfOutput, int64_t gap) {
+ SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex);
+ TSKEY *tsCols = (TSKEY*)pColDataInfo->pData;
+ int32_t step = 0;
+ for (int32_t i = 0; i < pBlock->info.rows; i += step) {
+ int32_t winIndex = 0;
+ SResultWindowInfo* pCurWin =
+ getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex);
+ step = updateSessionWindowInfo(pCurWin, tsCols, pBlock->info.rows, i, gap, NULL);
+ doClearWindowImpl(&pCurWin->pos, pAggSup->pResultBuf, pBinfo, numOfOutput);
+ }
+}
+
+static int32_t copyUpdateResult(SHashObj* pStUpdated, SArray* pUpdated, int32_t groupId) {
+ void* pData = NULL;
+ size_t keyLen = 0;
+ while((pData = taosHashIterate(pStUpdated, pData)) != NULL) {
+ void* key = taosHashGetKey(pData, &keyLen);
+ ASSERT(keyLen == sizeof(SResultRowPosition));
+ SResKeyPos* pos = taosMemoryMalloc(sizeof(SResKeyPos) + sizeof(uint64_t));
+ if (pos == NULL) {
+ return TSDB_CODE_QRY_OUT_OF_MEMORY;
+ }
+ pos->groupId = groupId;
+ pos->pos = *(SResultRowPosition*)key;
+ *(int64_t*)pos->key = *(uint64_t*)pData;
+ taosArrayPush(pUpdated, &pos);
+ }
+ return TSDB_CODE_SUCCESS;
+}
+
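+// Fill pBlock with the start keys of deleted session windows; *Ite keeps the hash iterator
+// position so the remaining entries can be emitted on the next call.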
+void doBuildDeleteDataBlock(SHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite) {
+ blockDataCleanup(pBlock);
+ size_t keyLen = 0;
+ while (((*Ite) = taosHashIterate(pStDeleted, *Ite)) != NULL) {
+ SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0);
+ colDataAppend(pColInfoData, pBlock->info.rows, *Ite, false);
+ for (int32_t i = 1; i < pBlock->info.numOfCols; i++) {
+ pColInfoData = taosArrayGet(pBlock->pDataBlock, i);
+ colDataAppendNULL(pColInfoData, pBlock->info.rows);
+ }
+ pBlock->info.rows += 1;
+ if (pBlock->info.rows + 1 >= pBlock->info.capacity) {
+ break;
+ }
+ }
+ if ((*Ite) == NULL) {
+ taosHashClear(pStDeleted);
+ }
+}
+
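+// Exec function of the stream session window operator: drain the child operator, handle
+// STREAM_REPROCESS blocks, then emit deleted-window rows followed by the updated results.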
+static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator) {
+ if (pOperator->status == OP_EXEC_DONE) {
+ return NULL;
+ }
+
+ SStreamSessionAggOperatorInfo* pInfo = pOperator->info;
+ SOptrBasicInfo* pBInfo = &pInfo->binfo;
+ if (pOperator->status == OP_RES_TO_RETURN) {
+ doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
+ if (pInfo->pDelRes->info.rows > 0) {
+ return pInfo->pDelRes;
+ }
+ doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo,
+ pInfo->streamAggSup.pResultBuf);
+ if (pBInfo->pRes->info.rows == 0 ||
+ !hashRemainDataInGroupInfo(&pInfo->groupResInfo)) {
+ doSetOperatorCompleted(pOperator);
+ }
+ return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
+ }
+
+ _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
+ SHashObj* pStUpdated = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
+ SOperatorInfo* downstream = pOperator->pDownstream[0];
+ while (1) {
+ SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
+ if (pBlock == NULL) {
+ break;
+ }
+ // the pDataBlock is always the same one, so there is no need to call this again
+ setInputDataBlock(pOperator, pBInfo->pCtx, pBlock, TSDB_ORDER_ASC, MAIN_SCAN, true);
+ if (pBlock->info.type == STREAM_REPROCESS) {
+ doClearSessionWindows(&pInfo->streamAggSup, &pInfo->binfo, pBlock, 0,
+ pOperator->numOfExprs, pInfo->gap);
+ continue;
+ }
+ doStreamSessionWindowAggImpl(pOperator, pBlock, pStUpdated, pInfo->pStDeleted);
+ }
+
+ // restore the value
+ pOperator->status = OP_RES_TO_RETURN;
+ SArray* pUpdated = taosArrayInit(16, POINTER_BYTES);
+ copyUpdateResult(pStUpdated, pUpdated, pBInfo->pRes->info.groupId);
+ taosHashCleanup(pStUpdated);
+ finalizeUpdatedResult(pOperator->numOfExprs, pInfo->streamAggSup.pResultBuf, pUpdated,
+ pInfo->binfo.rowCellInfoOffset);
+ initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated);
+ blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
+ doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
+ if (pInfo->pDelRes->info.rows > 0) {
+ return pInfo->pDelRes;
+ }
+ doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo,
+ pInfo->streamAggSup.pResultBuf);
+ return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
+}
diff --git a/source/libs/executor/src/tsort.c b/source/libs/executor/src/tsort.c
index c826cb68bf..7581836d59 100644
--- a/source/libs/executor/src/tsort.c
+++ b/source/libs/executor/src/tsort.c
@@ -31,20 +31,16 @@ struct STupleHandle {
struct SSortHandle {
int32_t type;
-
int32_t pageSize;
int32_t numOfPages;
SDiskbasedBuf *pBuf;
SArray *pSortInfo;
- SArray *pIndexMap;
SArray *pOrderedSource;
- _sort_fetch_block_fn_t fetchfp;
- _sort_merge_compar_fn_t comparFn;
- SMultiwayMergeTreeInfo *pMergeTree;
- int64_t startTs;
+ int32_t loops;
uint64_t sortElapsed;
+ int64_t startTs;
uint64_t totalElapsed;
int32_t sourceId;
@@ -53,13 +49,15 @@ struct SSortHandle {
int32_t numOfCompletedSources;
bool opened;
const char *idStr;
-
bool inMemSort;
bool needAdjust;
STupleHandle tupleHandle;
-
void *param;
void (*beforeFp)(SSDataBlock* pBlock, void* param);
+
+ _sort_fetch_block_fn_t fetchfp;
+ _sort_merge_compar_fn_t comparFn;
+ SMultiwayMergeTreeInfo *pMergeTree;
};
static int32_t msortComparFn(const void *pLeft, const void *pRight, void *param);
@@ -80,7 +78,7 @@ SSortHandle* tsortCreateSortHandle(SArray* pSortInfo, SArray* pIndexMap, int32_t
pSortHandle->pageSize = pageSize;
pSortHandle->numOfPages = numOfPages;
pSortHandle->pSortInfo = pSortInfo;
- pSortHandle->pIndexMap = pIndexMap;
+ pSortHandle->loops = 0;
if (pBlock != NULL) {
pSortHandle->pDataBlock = createOneDataBlock(pBlock, false);
@@ -415,6 +413,9 @@ static int32_t doInternalMergeSort(SSortHandle* pHandle) {
int32_t numOfRows = blockDataGetCapacityInRow(pHandle->pDataBlock, pHandle->pageSize);
blockDataEnsureCapacity(pHandle->pDataBlock, numOfRows);
+ // total loops = the initial pass + sortPass intermediate merge passes + the final merge pass
+ pHandle->loops = sortPass + 2;
+
size_t numOfSorted = taosArrayGetSize(pHandle->pOrderedSource);
for(int32_t t = 0; t < sortPass; ++t) {
int64_t st = taosGetTimestampUs();
@@ -502,12 +503,13 @@ static int32_t doInternalMergeSort(SSortHandle* pHandle) {
return 0;
}
-static int32_t createInitialSortedMultiSources(SSortHandle* pHandle) {
+static int32_t createInitialSources(SSortHandle* pHandle) {
size_t sortBufSize = pHandle->numOfPages * pHandle->pageSize;
if (pHandle->type == SORT_SINGLESOURCE_SORT) {
SSortSource* source = taosArrayGetP(pHandle->pOrderedSource, 0);
taosArrayClear(pHandle->pOrderedSource);
+
while (1) {
SSDataBlock* pBlock = pHandle->fetchfp(source->param);
if (pBlock == NULL) {
@@ -524,6 +526,7 @@ static int32_t createInitialSortedMultiSources(SSortHandle* pHandle) {
} else {
pHandle->pageSize = 4096;
}
+
// todo!!
pHandle->numOfPages = 1024;
sortBufSize = pHandle->numOfPages * pHandle->pageSize;
@@ -535,7 +538,7 @@ static int32_t createInitialSortedMultiSources(SSortHandle* pHandle) {
}
// todo relocate the columns
- int32_t code = blockDataMerge(pHandle->pDataBlock, pBlock, pHandle->pIndexMap);
+ int32_t code = blockDataMerge(pHandle->pDataBlock, pBlock);
if (code != 0) {
return code;
}
@@ -569,6 +572,7 @@ static int32_t createInitialSortedMultiSources(SSortHandle* pHandle) {
pHandle->cmpParam.numOfSources = 1;
pHandle->inMemSort = true;
+ pHandle->loops = 1;
pHandle->tupleHandle.rowIndex = -1;
pHandle->tupleHandle.pBlock = pHandle->pDataBlock;
return 0;
@@ -592,7 +596,7 @@ int32_t tsortOpen(SSortHandle* pHandle) {
pHandle->opened = true;
- int32_t code = createInitialSortedMultiSources(pHandle);
+ int32_t code = createInitialSources(pHandle);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@@ -692,3 +696,20 @@ void* tsortGetValue(STupleHandle* pVHandle, int32_t colIndex) {
SColumnInfoData* pColInfo = TARRAY_GET_ELEM(pVHandle->pBlock->pDataBlock, colIndex);
return colDataGetData(pColInfo, pVHandle->rowIndex);
}
+
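+// Collect runtime statistics of a sort: buffer size, in-memory vs. spilled merge method,
+// number of passes, and disk read/write volume when a spill buffer was used.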
+SSortExecInfo tsortGetSortExecInfo(SSortHandle* pHandle) {
+ SSortExecInfo info = {0};
+
+ info.sortBuffer = pHandle->pageSize * pHandle->numOfPages;
+ info.sortMethod = pHandle->inMemSort ? SORT_QSORT_T : SORT_SPILLED_MERGE_SORT_T;
+ info.loops = pHandle->loops;
+
+ if (pHandle->pBuf != NULL) {
+ SDiskbasedBufStatis st = getDBufStatis(pHandle->pBuf);
+ info.writeBytes = st.flushBytes;
+ info.readBytes = st.loadBytes;
+ }
+
+ return info;
+}
+
diff --git a/source/libs/function/inc/builtins.h b/source/libs/function/inc/builtins.h
index 3a753325bd..3bd0f35bf5 100644
--- a/source/libs/function/inc/builtins.h
+++ b/source/libs/function/inc/builtins.h
@@ -37,6 +37,7 @@ typedef struct SBuiltinFuncDefinition {
FScalarExecProcess sprocessFunc;
FExecFinalize finalizeFunc;
FExecProcess invertFunc;
+ FExecCombine combineFunc;
} SBuiltinFuncDefinition;
extern const SBuiltinFuncDefinition funcMgtBuiltins[];
diff --git a/source/libs/function/inc/builtinsimpl.h b/source/libs/function/inc/builtinsimpl.h
index 3e2ccbc6b8..d041e08d35 100644
--- a/source/libs/function/inc/builtinsimpl.h
+++ b/source/libs/function/inc/builtinsimpl.h
@@ -27,6 +27,7 @@ bool functionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx));
int32_t functionFinalizeWithResultBuf(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, char* finalResult);
+int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
EFuncDataRequired countDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWindow);
bool getCountFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
@@ -37,24 +38,29 @@ EFuncDataRequired statisDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWin
bool getSumFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
int32_t sumFunction(SqlFunctionCtx *pCtx);
int32_t sumInvertFunction(SqlFunctionCtx *pCtx);
+int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool minmaxFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
bool getMinmaxFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
int32_t minFunction(SqlFunctionCtx* pCtx);
int32_t maxFunction(SqlFunctionCtx *pCtx);
int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
+int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
+int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getAvgFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool avgFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
int32_t avgFunction(SqlFunctionCtx* pCtx);
int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t avgInvertFunction(SqlFunctionCtx* pCtx);
+int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getStddevFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool stddevFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
int32_t stddevFunction(SqlFunctionCtx* pCtx);
int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t stddevInvertFunction(SqlFunctionCtx* pCtx);
+int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getLeastSQRFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool leastSQRFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
@@ -73,8 +79,10 @@ int32_t diffFunction(SqlFunctionCtx *pCtx);
bool getFirstLastFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
int32_t firstFunction(SqlFunctionCtx *pCtx);
+int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
int32_t lastFunction(SqlFunctionCtx *pCtx);
int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
+int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getTopBotFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv);
int32_t topFunction(SqlFunctionCtx *pCtx);
diff --git a/source/libs/function/src/builtins.c b/source/libs/function/src/builtins.c
index a1accf2e73..593df0e97b 100644
--- a/source/libs/function/src/builtins.c
+++ b/source/libs/function/src/builtins.c
@@ -156,6 +156,14 @@ static int32_t translatePercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
+ SValueNode* pValue = (SValueNode*)nodesListGetNode(pFunc->pParameterList, 1);
+
+ if (pValue->datum.i < 0 || pValue->datum.i > 100) {
+ return invaildFuncParaValueErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ pValue->notReserved = true;
+
uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
if (!IS_NUMERIC_TYPE(para1Type) || (!IS_SIGNED_NUMERIC_TYPE(para2Type) && !IS_UNSIGNED_NUMERIC_TYPE(para2Type))) {
@@ -175,8 +183,8 @@ static bool validAperventileAlgo(const SValueNode* pVal) {
}
static int32_t translateApercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- int32_t paraNum = LIST_LENGTH(pFunc->pParameterList);
- if (2 != paraNum && 3 != paraNum) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (2 != numOfParams && 3 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -198,7 +206,7 @@ static int32_t translateApercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
pValue->notReserved = true;
- if (3 == paraNum) {
+ if (3 == numOfParams) {
SNode* pPara3 = nodesListGetNode(pFunc->pParameterList, 2);
if (QUERY_NODE_VALUE != nodeType(pPara3) || !validAperventileAlgo((SValueNode*)pPara3)) {
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
@@ -218,8 +226,8 @@ static int32_t translateTbnameColumn(SFunctionNode* pFunc, char* pErrBuf, int32_
}
static int32_t translateTop(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- int32_t paraNum = LIST_LENGTH(pFunc->pParameterList);
- if (2 != paraNum) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (2 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -263,15 +271,16 @@ static int32_t translateSpread(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
}
static int32_t translateElapsed(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- int32_t paraNum = LIST_LENGTH(pFunc->pParameterList);
- if (1 != paraNum && 2 != paraNum) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (1 != numOfParams && 2 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
- SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
- if (QUERY_NODE_COLUMN != nodeType(pPara)) {
+ // param0
+ SNode* pParaNode0 = nodesListGetNode(pFunc->pParameterList, 0);
+ if (QUERY_NODE_COLUMN != nodeType(pParaNode0)) {
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
- "The input parameter of ELAPSED function can only be column");
+ "The first parameter of ELAPSED function can only be column");
}
uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
@@ -279,6 +288,23 @@ static int32_t translateElapsed(SFunctionNode* pFunc, char* pErrBuf, int32_t len
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param1
+ if (2 == numOfParams) {
+ SNode* pParamNode1 = nodesListGetNode(pFunc->pParameterList, 1);
+ if (QUERY_NODE_VALUE != nodeType(pParamNode1)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode1;
+
+ pValue->notReserved = true;
+
+ uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
+ if (!IS_INTEGER_TYPE(paraType)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+ }
+
pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_DOUBLE].bytes, .type = TSDB_DATA_TYPE_DOUBLE};
return TSDB_CODE_SUCCESS;
}
@@ -290,6 +316,17 @@ static int32_t translateLeastSQR(SFunctionNode* pFunc, char* pErrBuf, int32_t le
}
for (int32_t i = 0; i < numOfParams; ++i) {
+ SNode* pParamNode = nodesListGetNode(pFunc->pParameterList, i);
+ if (i > 0) { // param1 & param2
+ if (QUERY_NODE_VALUE != nodeType(pParamNode)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode;
+
+ pValue->notReserved = true;
+ }
+
uint8_t colType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, i))->resType.type;
if (!IS_NUMERIC_TYPE(colType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
@@ -301,15 +338,35 @@ static int32_t translateLeastSQR(SFunctionNode* pFunc, char* pErrBuf, int32_t le
}
static int32_t translateHistogram(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- if (4 != LIST_LENGTH(pFunc->pParameterList)) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (4 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param0
+ SNode* pParaNode0 = nodesListGetNode(pFunc->pParameterList, 0);
+ if (QUERY_NODE_COLUMN != nodeType(pParaNode0)) {
+ return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
+ "The first parameter of HISTOGRAM function can only be column");
+ }
+
uint8_t colType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
if (!IS_NUMERIC_TYPE(colType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param1 ~ param3
+ for (int32_t i = 1; i < numOfParams; ++i) {
+ SNode* pParamNode = nodesListGetNode(pFunc->pParameterList, i);
+ if (QUERY_NODE_VALUE != nodeType(pParamNode)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode;
+
+ pValue->notReserved = true;
+ }
+
if (((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type != TSDB_DATA_TYPE_BINARY ||
((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type != TSDB_DATA_TYPE_BINARY ||
((SExprNode*)nodesListGetNode(pFunc->pParameterList, 3))->resType.type != TSDB_DATA_TYPE_BIGINT) {
@@ -336,46 +393,76 @@ static int32_t translateHLL(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
}
static int32_t translateStateCount(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- if (3 != LIST_LENGTH(pFunc->pParameterList)) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (3 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param0
uint8_t colType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
if (!IS_NUMERIC_TYPE(colType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param1 & param2
+ for (int32_t i = 1; i < numOfParams; ++i) {
+ SNode* pParamNode = nodesListGetNode(pFunc->pParameterList, i);
+ if (QUERY_NODE_VALUE != nodeType(pParamNode)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode;
+
+ pValue->notReserved = true;
+ }
+
if (((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type != TSDB_DATA_TYPE_BINARY ||
(((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type != TSDB_DATA_TYPE_BIGINT &&
((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type != TSDB_DATA_TYPE_DOUBLE)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // set result type
pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes, .type = TSDB_DATA_TYPE_BIGINT};
return TSDB_CODE_SUCCESS;
}
static int32_t translateStateDuration(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- int32_t paraNum = LIST_LENGTH(pFunc->pParameterList);
- if (3 != paraNum && 4 != paraNum) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (3 != numOfParams && 4 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param0
uint8_t colType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
if (!IS_NUMERIC_TYPE(colType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param1, param2 & param3
+ for (int32_t i = 1; i < numOfParams; ++i) {
+ SNode* pParamNode = nodesListGetNode(pFunc->pParameterList, i);
+ if (QUERY_NODE_VALUE != nodeType(pParamNode)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode;
+
+ pValue->notReserved = true;
+ }
+
if (((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type != TSDB_DATA_TYPE_BINARY ||
(((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type != TSDB_DATA_TYPE_BIGINT &&
((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type != TSDB_DATA_TYPE_DOUBLE)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
- if (paraNum == 4 && ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 3))->resType.type != TSDB_DATA_TYPE_BIGINT) {
+ if (numOfParams == 4 &&
+ ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 3))->resType.type != TSDB_DATA_TYPE_BIGINT) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // set result type
pFunc->node.resType = (SDataType){.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes, .type = TSDB_DATA_TYPE_BIGINT};
return TSDB_CODE_SUCCESS;
}
@@ -416,13 +503,28 @@ static int32_t translateMavg(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
- SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
- if (QUERY_NODE_COLUMN != nodeType(pPara)) {
+ // param0
+ SNode* pParaNode0 = nodesListGetNode(pFunc->pParameterList, 0);
+ if (QUERY_NODE_COLUMN != nodeType(pParaNode0)) {
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
- "The input parameter of MAVG function can only be column");
+ "The first parameter of MAVG function can only be column");
}
uint8_t colType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
+
+ // param1
+ SNode* pParamNode1 = nodesListGetNode(pFunc->pParameterList, 1);
+ if (QUERY_NODE_VALUE != nodeType(pParamNode1)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode1;
+ if (pValue->datum.i < 1 || pValue->datum.i > 1000) {
+ return invaildFuncParaValueErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ pValue->notReserved = true;
+
uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
if (!IS_NUMERIC_TYPE(colType) || !IS_INTEGER_TYPE(paraType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
@@ -437,24 +539,41 @@ static int32_t translateSample(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
- SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
- if (QUERY_NODE_COLUMN != nodeType(pPara)) {
+ // param0
+ SNode* pParamNode0 = nodesListGetNode(pFunc->pParameterList, 0);
+ if (QUERY_NODE_COLUMN != nodeType(pParamNode0)) {
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
- "The input parameter of SAMPLE function can only be column");
+ "The first parameter of SAMPLE function can only be column");
}
+ SExprNode* pCol = (SExprNode*)nodesListGetNode(pFunc->pParameterList, 0);
+ uint8_t colType = pCol->resType.type;
+
+ // param1
+ SNode* pParamNode1 = nodesListGetNode(pFunc->pParameterList, 1);
+ if (QUERY_NODE_VALUE != nodeType(pParamNode1)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode1;
+ if (pValue->datum.i < 1 || pValue->datum.i > 1000) {
+ return invaildFuncParaValueErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ pValue->notReserved = true;
+
uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
if (!IS_INTEGER_TYPE(paraType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
- SExprNode* pCol = (SExprNode*)nodesListGetNode(pFunc->pParameterList, 0);
- uint8_t colType = pCol->resType.type;
+ // set result type
if (IS_VAR_DATA_TYPE(colType)) {
pFunc->node.resType = (SDataType){.bytes = pCol->resType.bytes, .type = colType};
} else {
pFunc->node.resType = (SDataType){.bytes = tDataTypes[colType].bytes, .type = colType};
}
+
return TSDB_CODE_SUCCESS;
}
@@ -464,21 +583,37 @@ static int32_t translateTail(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
+ // param0
SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
if (QUERY_NODE_COLUMN != nodeType(pPara)) {
return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
- "The input parameter of TAIL function can only be column");
+ "The first parameter of TAIL function can only be column");
}
+ SExprNode* pCol = (SExprNode*)nodesListGetNode(pFunc->pParameterList, 0);
+ uint8_t colType = pCol->resType.type;
+ // param1 & param2
for (int32_t i = 1; i < numOfParams; ++i) {
+ SNode* pParamNode = nodesListGetNode(pFunc->pParameterList, i);
+ if (QUERY_NODE_VALUE != nodeType(pParamNode)) {
+ return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ SValueNode* pValue = (SValueNode*)pParamNode;
+
+ if (pValue->datum.i < ((i > 1) ? 0 : 1) || pValue->datum.i > 1000) {
+ return invaildFuncParaValueErrMsg(pErrBuf, len, pFunc->functionName);
+ }
+
+ pValue->notReserved = true;
+
uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, i))->resType.type;
if (!IS_INTEGER_TYPE(paraType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
}
- SExprNode* pCol = (SExprNode*)nodesListGetNode(pFunc->pParameterList, 0);
- uint8_t colType = pCol->resType.type;
+ // set result type
if (IS_VAR_DATA_TYPE(colType)) {
pFunc->node.resType = (SDataType){.bytes = pCol->resType.bytes, .type = colType};
} else {
@@ -552,8 +687,8 @@ static int32_t translateLength(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
static int32_t translateConcatImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t len, int32_t minParaNum,
int32_t maxParaNum, bool hasSep) {
- int32_t paraNum = LIST_LENGTH(pFunc->pParameterList);
- if (paraNum < minParaNum || paraNum > maxParaNum) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (numOfParams < minParaNum || numOfParams > maxParaNum) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -562,7 +697,7 @@ static int32_t translateConcatImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t
int32_t sepBytes = 0;
/* For concat/concat_ws function, if params have NCHAR type, promote the final result to NCHAR */
- for (int32_t i = 0; i < paraNum; ++i) {
+ for (int32_t i = 0; i < numOfParams; ++i) {
SNode* pPara = nodesListGetNode(pFunc->pParameterList, i);
uint8_t paraType = ((SExprNode*)pPara)->resType.type;
if (!IS_VAR_DATA_TYPE(paraType)) {
@@ -573,7 +708,7 @@ static int32_t translateConcatImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t
}
}
- for (int32_t i = 0; i < paraNum; ++i) {
+ for (int32_t i = 0; i < numOfParams; ++i) {
SNode* pPara = nodesListGetNode(pFunc->pParameterList, i);
uint8_t paraType = ((SExprNode*)pPara)->resType.type;
int32_t paraBytes = ((SExprNode*)pPara)->resType.bytes;
@@ -589,7 +724,7 @@ static int32_t translateConcatImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t
}
if (hasSep) {
- resultBytes += sepBytes * (paraNum - 3);
+ resultBytes += sepBytes * (numOfParams - 3);
}
pFunc->node.resType = (SDataType){.bytes = resultBytes, .type = resultType};
@@ -605,8 +740,8 @@ static int32_t translateConcatWs(SFunctionNode* pFunc, char* pErrBuf, int32_t le
}
static int32_t translateSubstr(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- int32_t paraNum = LIST_LENGTH(pFunc->pParameterList);
- if (2 != paraNum && 3 != paraNum) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (2 != numOfParams && 3 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -615,7 +750,7 @@ static int32_t translateSubstr(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
if (!IS_VAR_DATA_TYPE(pPara1->resType.type) || !IS_INTEGER_TYPE(para2Type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
- if (3 == paraNum) {
+ if (3 == numOfParams) {
uint8_t para3Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
if (!IS_INTEGER_TYPE(para3Type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
@@ -692,8 +827,8 @@ static int32_t translateTimeTruncate(SFunctionNode* pFunc, char* pErrBuf, int32_
}
static int32_t translateTimeDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
- int32_t paraNum = LIST_LENGTH(pFunc->pParameterList);
- if (2 != paraNum && 3 != paraNum) {
+ int32_t numOfParams = LIST_LENGTH(pFunc->pParameterList);
+ if (2 != numOfParams && 3 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -704,7 +839,7 @@ static int32_t translateTimeDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t le
}
}
- if (3 == paraNum) {
+ if (3 == numOfParams) {
if (!IS_INTEGER_TYPE(((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -745,7 +880,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = functionSetup,
.processFunc = countFunction,
.finalizeFunc = functionFinalize,
- .invertFunc = countInvertFunction
+ .invertFunc = countInvertFunction,
+ .combineFunc = combineFunction,
},
{
.name = "sum",
@@ -757,7 +893,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = functionSetup,
.processFunc = sumFunction,
.finalizeFunc = functionFinalize,
- .invertFunc = sumInvertFunction
+ .invertFunc = sumInvertFunction,
+ .combineFunc = sumCombine,
},
{
.name = "min",
@@ -768,7 +905,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getMinmaxFuncEnv,
.initFunc = minmaxFunctionSetup,
.processFunc = minFunction,
- .finalizeFunc = minmaxFunctionFinalize
+ .finalizeFunc = minmaxFunctionFinalize,
+ .combineFunc = minCombine,
},
{
.name = "max",
@@ -779,7 +917,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getMinmaxFuncEnv,
.initFunc = minmaxFunctionSetup,
.processFunc = maxFunction,
- .finalizeFunc = minmaxFunctionFinalize
+ .finalizeFunc = minmaxFunctionFinalize,
+ .combineFunc = maxCombine,
},
{
.name = "stddev",
@@ -790,7 +929,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = stddevFunctionSetup,
.processFunc = stddevFunction,
.finalizeFunc = stddevFinalize,
- .invertFunc = stddevInvertFunction
+ .invertFunc = stddevInvertFunction,
+ .combineFunc = stddevCombine,
},
{
.name = "leastsquares",
@@ -801,7 +941,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = leastSQRFunctionSetup,
.processFunc = leastSQRFunction,
.finalizeFunc = leastSQRFinalize,
- .invertFunc = leastSQRInvertFunction
+ .invertFunc = leastSQRInvertFunction,
},
{
.name = "avg",
@@ -812,7 +952,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = avgFunctionSetup,
.processFunc = avgFunction,
.finalizeFunc = avgFinalize,
- .invertFunc = avgInvertFunction
+ .invertFunc = avgInvertFunction,
+ .combineFunc = avgCombine,
},
{
.name = "percentile",
@@ -894,7 +1035,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getFirstLastFuncEnv,
.initFunc = functionSetup,
.processFunc = firstFunction,
- .finalizeFunc = functionFinalize
+ .finalizeFunc = functionFinalize,
+ .combineFunc = firstCombine,
},
{
.name = "last",
@@ -904,7 +1046,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getFirstLastFuncEnv,
.initFunc = functionSetup,
.processFunc = lastFunction,
- .finalizeFunc = lastFinalize
+ .finalizeFunc = lastFinalize,
+ .combineFunc = lastCombine,
},
{
.name = "histogram",
diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c
index ad92d095d5..da842877dc 100644
--- a/source/libs/function/src/builtinsimpl.c
+++ b/source/libs/function/src/builtinsimpl.c
@@ -292,6 +292,24 @@ int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return pResInfo->numOfRes;
}
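+// Merge two partial first() states: adopt the source row when its timestamp (stored right
+// after the value bytes) is earlier than the destination's.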
+int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
+ char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
+ int32_t type = pDestCtx->input.pData[0]->info.type;
+ int32_t bytes = pDestCtx->input.pData[0]->info.bytes;
+
+ SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
+ char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
+
+ if (pSResInfo->numOfRes != 0 &&
+ (pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) > *(TSKEY*)(pSBuf + bytes)) ) {
+ memcpy(pDBuf, pSBuf, bytes);
+ *(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes);
+ pDResInfo->numOfRes = 1;
+ }
+ return TSDB_CODE_SUCCESS;
+}
+
int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx)) {
return 0;
}
@@ -388,6 +406,18 @@ int32_t countInvertFunction(SqlFunctionCtx* pCtx) {
return TSDB_CODE_SUCCESS;
}
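+// Generic combine used by count(): add the source context's partial counter into the
+// destination context.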
+int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
+ char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
+
+ SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
+ char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
+ *((int64_t*)pDBuf) += *((int64_t*)pSBuf);
+
+ SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1);
+ return TSDB_CODE_SUCCESS;
+}
+
#define LIST_ADD_N(_res, _col, _start, _rows, _t, numOfElem) \
do { \
_t* d = (_t*)(_col->pData); \
@@ -537,6 +567,26 @@ int32_t sumInvertFunction(SqlFunctionCtx* pCtx) {
return TSDB_CODE_SUCCESS;
}
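+// Combine two partial sum() states, accumulating into the signed, unsigned or double
+// field according to the input column type.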
+int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
+ SSumRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
+ int32_t type = pDestCtx->input.pData[0]->info.type;
+
+ SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
+ SSumRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
+
+ if (IS_SIGNED_NUMERIC_TYPE(type) || type == TSDB_DATA_TYPE_BOOL) {
+ pDBuf->isum += pSBuf->isum;
+ } else if (IS_UNSIGNED_NUMERIC_TYPE(type)) {
+ pDBuf->usum += pSBuf->usum;
+ } else if (type == TSDB_DATA_TYPE_DOUBLE || type == TSDB_DATA_TYPE_FLOAT) {
+ pDBuf->dsum += pSBuf->dsum;
+ }
+
+ SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1);
+ return TSDB_CODE_SUCCESS;
+}
+
bool getSumFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SSumRes);
return true;
@@ -738,6 +788,24 @@ int32_t avgInvertFunction(SqlFunctionCtx* pCtx) {
return TSDB_CODE_SUCCESS;
}
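+// Combine two partial avg() states by adding both the running sums and the row counts.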
+int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
+ SAvgRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
+ int32_t type = pDestCtx->input.pData[0]->info.type;
+
+ SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
+ SAvgRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
+
+ if (IS_INTEGER_TYPE(type)) {
+ pDBuf->sum.isum += pSBuf->sum.isum;
+ } else {
+ pDBuf->sum.dsum += pSBuf->sum.dsum;
+ }
+ pDBuf->count += pSBuf->count;
+
+ return TSDB_CODE_SUCCESS;
+}
+
int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SInputColumnInfoData* pInput = &pCtx->input;
int32_t type = pInput->pData[0]->info.type;
@@ -1273,6 +1341,34 @@ void setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuple
}
}
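+// Shared combine for min()/max(): keep the smaller (isMinFunc != 0) or larger value of the
+// two partial states, handling float and integer payloads separately.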
+int32_t minMaxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int32_t isMinFunc) {
+ SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
+ SMinmaxResInfo* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
+ int32_t type = pDestCtx->input.pData[0]->info.type;
+
+ SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
+ SMinmaxResInfo* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
+ if (IS_FLOAT_TYPE(type)) {
+ if (pSBuf->assign &&
+ ( (((*(double*)&pDBuf->v) < (*(double*)&pSBuf->v)) ^ isMinFunc) || !pDBuf->assign ) ) {
+ *(double*) &pDBuf->v = *(double*) &pSBuf->v;
+ }
+ } else {
+ if ( pSBuf->assign && ( ((pDBuf->v < pSBuf->v) ^ isMinFunc) || !pDBuf->assign ) ) {
+ pDBuf->v = pSBuf->v;
+ }
+ }
+ SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1);
+ return TSDB_CODE_SUCCESS;
+}
+
+int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ return minMaxCombine(pDestCtx, pSourceCtx, 1);
+}
+int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ return minMaxCombine(pDestCtx, pSourceCtx, 0);
+}
+
bool getStddevFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SStddevRes);
return true;
@@ -1491,6 +1587,25 @@ int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return functionFinalize(pCtx, pBlock);
}
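+// Combine two partial stddev() states by adding the sums, the sums of squares and the
+// row counts.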
+int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
+ SStddevRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
+ int32_t type = pDestCtx->input.pData[0]->info.type;
+
+ SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
+ SStddevRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
+
+ if (IS_INTEGER_TYPE(type)) {
+ pDBuf->isum += pSBuf->isum;
+ pDBuf->quadraticISum += pSBuf->quadraticISum;
+ } else {
+ pDBuf->dsum += pSBuf->dsum;
+ pDBuf->quadraticDSum += pSBuf->quadraticDSum;
+ }
+ pDBuf->count += pSBuf->count;
+ return TSDB_CODE_SUCCESS;
+}
+
bool getLeastSQRFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SLeastSQRInfo);
return true;
@@ -1979,6 +2094,24 @@ int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return pResInfo->numOfRes;
}
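+// Merge two partial last() states: adopt the source row when its timestamp is later than
+// the destination's.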
+int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
+ SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
+ char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
+ int32_t type = pDestCtx->input.pData[0]->info.type;
+ int32_t bytes = pDestCtx->input.pData[0]->info.bytes;
+
+ SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
+ char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
+
+ if (pSResInfo->numOfRes != 0 &&
+ (pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) < *(TSKEY*)(pSBuf + bytes)) ) {
+ memcpy(pDBuf, pSBuf, bytes);
+ *(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes);
+ pDResInfo->numOfRes = 1;
+ }
+ return TSDB_CODE_SUCCESS;
+}
+
bool getDiffFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SDiffInfo);
return true;
diff --git a/source/libs/function/src/functionMgt.c b/source/libs/function/src/functionMgt.c
index 49b20ebc85..506b0eb8da 100644
--- a/source/libs/function/src/functionMgt.c
+++ b/source/libs/function/src/functionMgt.c
@@ -118,6 +118,7 @@ int32_t fmGetFuncExecFuncs(int32_t funcId, SFuncExecFuncs* pFpSet) {
pFpSet->init = funcMgtBuiltins[funcId].initFunc;
pFpSet->process = funcMgtBuiltins[funcId].processFunc;
pFpSet->finalize = funcMgtBuiltins[funcId].finalizeFunc;
+ pFpSet->combine = funcMgtBuiltins[funcId].combineFunc;
return TSDB_CODE_SUCCESS;
}
diff --git a/source/libs/monitor/src/monMain.c b/source/libs/monitor/src/monMain.c
index 3ece089a28..bf857ad718 100644
--- a/source/libs/monitor/src/monMain.c
+++ b/source/libs/monitor/src/monMain.c
@@ -530,7 +530,8 @@ void monSendReport() {
monGenLogJson(pMonitor);
char *pCont = tjsonToString(pMonitor->pJson);
- if (pCont != NULL) {
+ // uDebugL("report cont:%s\n", pCont);
+ if (pCont != NULL) {
EHttpCompFlag flag = tsMonitor.cfg.comp ? HTTP_GZIP : HTTP_FLAT;
if (taosSendHttpReport(tsMonitor.cfg.server, tsMonitor.cfg.port, pCont, strlen(pCont), flag) != 0) {
uError("failed to send monitor msg");
diff --git a/source/libs/nodes/src/nodesCodeFuncs.c b/source/libs/nodes/src/nodesCodeFuncs.c
index f28885aad5..8887b9841a 100644
--- a/source/libs/nodes/src/nodesCodeFuncs.c
+++ b/source/libs/nodes/src/nodesCodeFuncs.c
@@ -230,6 +230,8 @@ const char* nodesNodeName(ENodeType type) {
return "PhysiFill";
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
return "PhysiSessionWindow";
+ case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
+ return "PhysiStreamSessionWindow";
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return "PhysiStateWindow";
case QUERY_NODE_PHYSICAL_PLAN_PARTITION:
@@ -2528,6 +2530,29 @@ static int32_t jsonToOrderByExprNode(const SJson* pJson, void* pObj) {
return code;
}
+static const char* jkSessionWindowTsPrimaryKey = "TsPrimaryKey";
+static const char* jkSessionWindowGap = "Gap";
+
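+// Serialize/deserialize an SSessionWindowNode (timestamp primary key column and gap value).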
+static int32_t sessionWindowNodeToJson(const void* pObj, SJson* pJson) {
+ const SSessionWindowNode * pNode = (const SSessionWindowNode*)pObj;
+
+ int32_t code = tjsonAddObject(pJson, jkSessionWindowTsPrimaryKey, nodeToJson, pNode->pCol);
+ if (TSDB_CODE_SUCCESS == code) {
+ code = tjsonAddObject(pJson, jkSessionWindowGap, nodeToJson, pNode->pGap);
+ }
+ return code;
+}
+
+static int32_t jsonToSessionWindowNode(const SJson* pJson, void* pObj) {
+ SSessionWindowNode* pNode = (SSessionWindowNode*)pObj;
+
+ int32_t code = jsonToNodeObject(pJson, jkSessionWindowTsPrimaryKey, (SNode **)&pNode->pCol);
+ if (TSDB_CODE_SUCCESS == code) {
+ code = jsonToNodeObject(pJson, jkSessionWindowGap, (SNode **)&pNode->pGap);
+ }
+ return code;
+}
+
static const char* jkIntervalWindowInterval = "Interval";
static const char* jkIntervalWindowOffset = "Offset";
static const char* jkIntervalWindowSliding = "Sliding";
@@ -3015,8 +3040,9 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) {
return orderByExprNodeToJson(pObj, pJson);
case QUERY_NODE_LIMIT:
case QUERY_NODE_STATE_WINDOW:
- case QUERY_NODE_SESSION_WINDOW:
break;
+ case QUERY_NODE_SESSION_WINDOW:
+ return sessionWindowNodeToJson(pObj, pJson);
case QUERY_NODE_INTERVAL_WINDOW:
return intervalWindowNodeToJson(pObj, pJson);
case QUERY_NODE_NODE_LIST:
@@ -3096,6 +3122,7 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) {
case QUERY_NODE_PHYSICAL_PLAN_FILL:
return physiFillNodeToJson(pObj, pJson);
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
+ case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
return physiSessionWindowNodeToJson(pObj, pJson);
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return physiStateWindowNodeToJson(pObj, pJson);
@@ -3134,6 +3161,8 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) {
return jsonToTempTableNode(pJson, pObj);
case QUERY_NODE_ORDER_BY_EXPR:
return jsonToOrderByExprNode(pJson, pObj);
+ case QUERY_NODE_SESSION_WINDOW:
+ return jsonToSessionWindowNode(pJson, pObj);
case QUERY_NODE_INTERVAL_WINDOW:
return jsonToIntervalWindowNode(pJson, pObj);
case QUERY_NODE_NODE_LIST:
@@ -3196,6 +3225,7 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) {
case QUERY_NODE_PHYSICAL_PLAN_FILL:
return jsonToPhysiFillNode(pJson, pObj);
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
+ case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
return jsonToPhysiSessionWindowNode(pJson, pObj);
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return jsonToPhysiStateWindowNode(pJson, pObj);
diff --git a/source/libs/nodes/src/nodesTraverseFuncs.c b/source/libs/nodes/src/nodesTraverseFuncs.c
index e8274c3c8e..ae1ff5744b 100644
--- a/source/libs/nodes/src/nodesTraverseFuncs.c
+++ b/source/libs/nodes/src/nodesTraverseFuncs.c
@@ -517,6 +517,7 @@ static EDealRes dispatchPhysiPlan(SNode* pNode, ETraversalOrder order, FNodeWalk
res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext);
break;
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
+ case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext);
break;
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: {
diff --git a/source/libs/nodes/src/nodesUtilFuncs.c b/source/libs/nodes/src/nodesUtilFuncs.c
index 6dbfdab67a..e28844f2e1 100644
--- a/source/libs/nodes/src/nodesUtilFuncs.c
+++ b/source/libs/nodes/src/nodesUtilFuncs.c
@@ -262,6 +262,8 @@ SNodeptr nodesMakeNode(ENodeType type) {
return makeNode(type, sizeof(SFillPhysiNode));
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
return makeNode(type, sizeof(SSessionWinodwPhysiNode));
+ case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
+ return makeNode(type, sizeof(SStreamSessionWinodwPhysiNode));
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return makeNode(type, sizeof(SStateWinodwPhysiNode));
case QUERY_NODE_PHYSICAL_PLAN_PARTITION:
@@ -666,6 +668,7 @@ void nodesDestroyNode(SNodeptr pNode) {
destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode);
break;
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
+ case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode);
break;
case QUERY_NODE_PHYSICAL_PLAN_DISPATCH:
diff --git a/source/libs/parser/src/parInsert.c b/source/libs/parser/src/parInsert.c
index 678c97fb05..fd79767cc9 100644
--- a/source/libs/parser/src/parInsert.c
+++ b/source/libs/parser/src/parInsert.c
@@ -189,6 +189,7 @@ static int32_t createSName(SName* pName, SToken* pTableName, int32_t acctId, con
const char* msg1 = "name too long";
const char* msg2 = "invalid database name";
const char* msg3 = "db is not specified";
+ const char* msg4 = "invalid table name";
int32_t code = TSDB_CODE_SUCCESS;
char* p = strnchr(pTableName->z, TS_PATH_DELIMITER[0], pTableName->n, true);
@@ -207,6 +208,10 @@ static int32_t createSName(SName* pName, SToken* pTableName, int32_t acctId, con
}
int32_t tbLen = pTableName->n - dbLen - 1;
+ if (tbLen <= 0) {
+ return buildInvalidOperationMsg(pMsgBuf, msg4);
+ }
+
char tbname[TSDB_TABLE_FNAME_LEN] = {0};
strncpy(tbname, p + 1, tbLen);
/*tbLen = */ strdequote(tbname);
diff --git a/source/libs/planner/src/planPhysiCreater.c b/source/libs/planner/src/planPhysiCreater.c
index fcba2aa2d3..0f88a54e91 100644
--- a/source/libs/planner/src/planPhysiCreater.c
+++ b/source/libs/planner/src/planPhysiCreater.c
@@ -945,7 +945,8 @@ static int32_t createIntervalPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChil
static int32_t createSessionWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
SWindowLogicNode* pWindowLogicNode, SPhysiNode** pPhyNode) {
SSessionWinodwPhysiNode* pSession = (SSessionWinodwPhysiNode*)makePhysiNode(
- pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode, QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW);
+ pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode,
+ (pCxt->pPlanCxt->streamQuery ? QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW : QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW));
if (NULL == pSession) {
return TSDB_CODE_OUT_OF_MEMORY;
}
diff --git a/source/libs/scheduler/src/schRemote.c b/source/libs/scheduler/src/schRemote.c
index 6d9f6b435f..3996771443 100644
--- a/source/libs/scheduler/src/schRemote.c
+++ b/source/libs/scheduler/src/schRemote.c
@@ -565,8 +565,9 @@ int32_t schMakeHbCallbackParam(SSchJob *pJob, SSchTask *pTask, void **pParam) {
int32_t schCloneHbRpcCtx(SRpcCtx *pSrc, SRpcCtx *pDst) {
int32_t code = 0;
- memcpy(&pDst->brokenVal, &pSrc->brokenVal, sizeof(pSrc->brokenVal));
+ memcpy(pDst, pSrc, sizeof(SRpcCtx));
pDst->brokenVal.val = NULL;
+ pDst->args = NULL;
SCH_ERR_RET(schCloneSMsgSendInfo(pSrc->brokenVal.val, &pDst->brokenVal.val));
@@ -589,7 +590,7 @@ int32_t schCloneHbRpcCtx(SRpcCtx *pSrc, SRpcCtx *pDst) {
if (taosHashPut(pDst->args, msgType, sizeof(*msgType), &dst, sizeof(dst))) {
qError("taosHashPut msg %d to rpcCtx failed", *msgType);
- (*dst.freeFunc)(dst.val);
+ (*pSrc->freeFunc)(dst.val);
SCH_ERR_JRET(TSDB_CODE_QRY_OUT_OF_MEMORY);
}
@@ -643,13 +644,14 @@ int32_t schMakeHbRpcCtx(SSchJob *pJob, SSchTask *pTask, SRpcCtx *pCtx) {
pMsgSendInfo->param = param;
pMsgSendInfo->fp = fp;
- SRpcCtxVal ctxVal = {.val = pMsgSendInfo, .clone = schCloneSMsgSendInfo, .freeFunc = schFreeRpcCtxVal};
+ SRpcCtxVal ctxVal = {.val = pMsgSendInfo, .clone = schCloneSMsgSendInfo};
if (taosHashPut(pCtx->args, &msgType, sizeof(msgType), &ctxVal, sizeof(ctxVal))) {
SCH_TASK_ELOG("taosHashPut msg %d to rpcCtx failed", msgType);
SCH_ERR_JRET(TSDB_CODE_QRY_OUT_OF_MEMORY);
}
SCH_ERR_JRET(schMakeBrokenLinkVal(pJob, pTask, &pCtx->brokenVal, true));
+ pCtx->freeFunc = schFreeRpcCtxVal;
return TSDB_CODE_SUCCESS;
@@ -911,7 +913,6 @@ int32_t schMakeBrokenLinkVal(SSchJob *pJob, SSchTask *pTask, SRpcBrokenlinkVal *
brokenVal->msgType = msgType;
brokenVal->val = pMsgSendInfo;
brokenVal->clone = schCloneSMsgSendInfo;
- brokenVal->freeFunc = schFreeRpcCtxVal;
return TSDB_CODE_SUCCESS;
@@ -938,7 +939,7 @@ int32_t schMakeQueryRpcCtx(SSchJob *pJob, SSchTask *pTask, SRpcCtx *pCtx) {
SCH_ERR_JRET(schGenerateCallBackInfo(pJob, pTask, TDMT_VND_EXPLAIN, &pExplainMsgSendInfo));
int32_t msgType = TDMT_VND_RES_READY_RSP;
- SRpcCtxVal ctxVal = {.val = pReadyMsgSendInfo, .clone = schCloneSMsgSendInfo, .freeFunc = schFreeRpcCtxVal};
+ SRpcCtxVal ctxVal = {.val = pReadyMsgSendInfo, .clone = schCloneSMsgSendInfo};
if (taosHashPut(pCtx->args, &msgType, sizeof(msgType), &ctxVal, sizeof(ctxVal))) {
SCH_TASK_ELOG("taosHashPut msg %d to rpcCtx failed", msgType);
SCH_ERR_JRET(TSDB_CODE_QRY_OUT_OF_MEMORY);
@@ -952,6 +953,7 @@ int32_t schMakeQueryRpcCtx(SSchJob *pJob, SSchTask *pTask, SRpcCtx *pCtx) {
}
SCH_ERR_JRET(schMakeBrokenLinkVal(pJob, pTask, &pCtx->brokenVal, false));
+ pCtx->freeFunc = schFreeRpcCtxVal;
return TSDB_CODE_SUCCESS;
diff --git a/source/libs/scheduler/src/schUtil.c b/source/libs/scheduler/src/schUtil.c
index 57a86ba125..3862ba76f6 100644
--- a/source/libs/scheduler/src/schUtil.c
+++ b/source/libs/scheduler/src/schUtil.c
@@ -77,16 +77,14 @@ void schFreeRpcCtx(SRpcCtx *pCtx) {
while (pIter) {
SRpcCtxVal *ctxVal = (SRpcCtxVal *)pIter;
- (*ctxVal->freeFunc)(ctxVal->val);
+ (*pCtx->freeFunc)(ctxVal->val);
pIter = taosHashIterate(pCtx->args, pIter);
}
taosHashCleanup(pCtx->args);
- if (pCtx->brokenVal.freeFunc) {
- (*pCtx->brokenVal.freeFunc)(pCtx->brokenVal.val);
- }
+ (*pCtx->freeFunc)(pCtx->brokenVal.val);
}
diff --git a/source/libs/stream/src/tstream.c b/source/libs/stream/src/tstream.c
index 0acec0e4e6..933b37825f 100644
--- a/source/libs/stream/src/tstream.c
+++ b/source/libs/stream/src/tstream.c
@@ -158,7 +158,9 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
ASSERT(false);
}
if (output == NULL) break;
- taosArrayPush(pRes, output);
+ // TODO: do we need to free memory here?
+ SSDataBlock* outputCopy = createOneDataBlock(output, true);
+ taosArrayPush(pRes, outputCopy);
}
// destroy
@@ -166,6 +168,7 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
streamDataSubmitRefDec((SStreamDataSubmit*)data);
} else {
taosArrayDestroyEx(((SStreamDataBlock*)data)->blocks, (FDelete)tDeleteSSDataBlock);
+ taosFreeQitem(data);
}
return 0;
}
@@ -186,7 +189,7 @@ int32_t streamExec(SStreamTask* pTask, SMsgCb* pMsgCb) {
streamTaskExecImpl(pTask, data, pRes);
- taosFreeQitem(data);
+ /*taosFreeQitem(data);*/
if (taosArrayGetSize(pRes) != 0) {
SStreamDataBlock* resQ = taosAllocateQitem(sizeof(SStreamDataBlock), DEF_QITEM);
@@ -206,7 +209,7 @@ int32_t streamExec(SStreamTask* pTask, SMsgCb* pMsgCb) {
streamTaskExecImpl(pTask, data, pRes);
- taosFreeQitem(data);
+ /*taosFreeQitem(data);*/
if (taosArrayGetSize(pRes) != 0) {
SStreamDataBlock* resQ = taosAllocateQitem(sizeof(SStreamDataBlock), DEF_QITEM);
@@ -228,7 +231,7 @@ int32_t streamExec(SStreamTask* pTask, SMsgCb* pMsgCb) {
streamTaskExecImpl(pTask, data, pRes);
- taosFreeQitem(data);
+ /*taosFreeQitem(data);*/
if (taosArrayGetSize(pRes) != 0) {
SStreamDataBlock* resQ = taosAllocateQitem(sizeof(SStreamDataBlock), DEF_QITEM);
diff --git a/source/libs/stream/src/tstreamUpdate.c b/source/libs/stream/src/tstreamUpdate.c
index d21dadfe55..75319a2354 100644
--- a/source/libs/stream/src/tstreamUpdate.c
+++ b/source/libs/stream/src/tstreamUpdate.c
@@ -127,7 +127,10 @@ static SScalableBf *getSBf(SUpdateInfo *pInfo, TSKEY ts) {
if (pInfo->minTS < 0) {
pInfo->minTS = (TSKEY)(ts / pInfo->interval * pInfo->interval);
}
- uint64_t index = (uint64_t)((ts - pInfo->minTS) / pInfo->interval);
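+ // ts may be older than the recorded minTS; use a signed index so that case can be rejected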
+ int64_t index = (int64_t)((ts - pInfo->minTS) / pInfo->interval);
+ if (index < 0) {
+ return NULL;
+ }
if (index >= pInfo->numSBFs) {
uint64_t count = index + 1 - pInfo->numSBFs;
windowSBfDelete(pInfo, count);
diff --git a/source/libs/sync/inc/syncInt.h b/source/libs/sync/inc/syncInt.h
index 9246041b81..69549d2a7e 100644
--- a/source/libs/sync/inc/syncInt.h
+++ b/source/libs/sync/inc/syncInt.h
@@ -148,8 +148,8 @@ typedef struct SSyncNode {
SSyncRespMgr* pSyncRespMgr;
// restore state
- bool restoreFinish;
- //sem_t restoreSem;
+ bool restoreFinish;
+ // sem_t restoreSem;
SSnapshot* pSnapshot;
} SSyncNode;
diff --git a/source/libs/sync/inc/syncVoteMgr.h b/source/libs/sync/inc/syncVoteMgr.h
index 5bc240e921..716d2f620c 100644
--- a/source/libs/sync/inc/syncVoteMgr.h
+++ b/source/libs/sync/inc/syncVoteMgr.h
@@ -42,6 +42,7 @@ typedef struct SVotesGranted {
SVotesGranted *voteGrantedCreate(SSyncNode *pSyncNode);
void voteGrantedDestroy(SVotesGranted *pVotesGranted);
+void voteGrantedUpdate(SVotesGranted *pVotesGranted, SSyncNode *pSyncNode);
bool voteGrantedMajority(SVotesGranted *pVotesGranted);
void voteGrantedVote(SVotesGranted *pVotesGranted, SyncRequestVoteReply *pMsg);
void voteGrantedReset(SVotesGranted *pVotesGranted, SyncTerm term);
@@ -65,6 +66,7 @@ typedef struct SVotesRespond {
SVotesRespond *votesRespondCreate(SSyncNode *pSyncNode);
void votesRespondDestory(SVotesRespond *pVotesRespond);
+void votesRespondUpdate(SVotesRespond *pVotesRespond, SSyncNode *pSyncNode);
bool votesResponded(SVotesRespond *pVotesRespond, const SRaftId *pRaftId);
void votesRespondAdd(SVotesRespond *pVotesRespond, const SyncRequestVoteReply *pMsg);
void votesRespondReset(SVotesRespond *pVotesRespond, SyncTerm term);
diff --git a/source/libs/sync/src/syncAppendEntries.c b/source/libs/sync/src/syncAppendEntries.c
index fa735e71c0..c9e16c53c8 100644
--- a/source/libs/sync/src/syncAppendEntries.c
+++ b/source/libs/sync/src/syncAppendEntries.c
@@ -357,13 +357,23 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
} else {
syncNodeBecomeFollower(ths);
}
+
+ // newSyncCfg.myIndex may have been updated in syncNodeUpdateConfig
+ if (ths->pFsm->FpReConfigCb != NULL) {
+ SReConfigCbMeta cbMeta = {0};
+ cbMeta.code = 0;
+ cbMeta.currentTerm = ths->pRaftStore->currentTerm;
+ cbMeta.index = pEntry->index;
+ cbMeta.term = pEntry->term;
+ ths->pFsm->FpReConfigCb(ths->pFsm, newSyncCfg, cbMeta);
+ }
}
// restore finish
if (pEntry->index == ths->pLogStore->getLastIndex(ths->pLogStore)) {
if (ths->restoreFinish == false) {
- if (ths->pFsm->FpRestoreFinish != NULL) {
- ths->pFsm->FpRestoreFinish(ths->pFsm);
+ if (ths->pFsm->FpRestoreFinishCb != NULL) {
+ ths->pFsm->FpRestoreFinishCb(ths->pFsm);
}
ths->restoreFinish = true;
sInfo("==syncNodeOnAppendEntriesCb== restoreFinish set true %p vgId:%d", ths, ths->vgId);
diff --git a/source/libs/sync/src/syncCommit.c b/source/libs/sync/src/syncCommit.c
index 18c6f8930a..a3d480956e 100644
--- a/source/libs/sync/src/syncCommit.c
+++ b/source/libs/sync/src/syncCommit.c
@@ -134,13 +134,23 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
} else {
syncNodeBecomeFollower(pSyncNode);
}
+
+ // newSyncCfg.myIndex may have been updated in syncNodeUpdateConfig
+ if (pSyncNode->pFsm->FpReConfigCb != NULL) {
+ SReConfigCbMeta cbMeta = {0};
+ cbMeta.code = 0;
+ cbMeta.currentTerm = pSyncNode->pRaftStore->currentTerm;
+ cbMeta.index = pEntry->index;
+ cbMeta.term = pEntry->term;
+ pSyncNode->pFsm->FpReConfigCb(pSyncNode->pFsm, newSyncCfg, cbMeta);
+ }
}
// restore finish
if (pEntry->index == pSyncNode->pLogStore->getLastIndex(pSyncNode->pLogStore)) {
if (pSyncNode->restoreFinish == false) {
- if (pSyncNode->pFsm->FpRestoreFinish != NULL) {
- pSyncNode->pFsm->FpRestoreFinish(pSyncNode->pFsm);
+ if (pSyncNode->pFsm->FpRestoreFinishCb != NULL) {
+ pSyncNode->pFsm->FpRestoreFinishCb(pSyncNode->pFsm);
}
pSyncNode->restoreFinish = true;
sInfo("==syncMaybeAdvanceCommitIndex== restoreFinish set true %p vgId:%d", pSyncNode, pSyncNode->vgId);
diff --git a/source/libs/sync/src/syncMain.c b/source/libs/sync/src/syncMain.c
index a69a94831d..e4b6fc215f 100644
--- a/source/libs/sync/src/syncMain.c
+++ b/source/libs/sync/src/syncMain.c
@@ -349,7 +349,9 @@ int32_t syncPropose(int64_t rid, const SRpcMsg* pMsg, bool isWeak) {
}
// open/close --------------
-SSyncNode* syncNodeOpen(const SSyncInfo* pSyncInfo) {
+SSyncNode* syncNodeOpen(const SSyncInfo* pOldSyncInfo) {
+ SSyncInfo* pSyncInfo = (SSyncInfo*)pOldSyncInfo;
+
SSyncNode* pSyncNode = (SSyncNode*)taosMemoryMalloc(sizeof(SSyncNode));
assert(pSyncNode != NULL);
memset(pSyncNode, 0, sizeof(SSyncNode));
@@ -361,11 +363,25 @@ SSyncNode* syncNodeOpen(const SSyncInfo* pSyncInfo) {
sError("failed to create dir:%s since %s", pSyncInfo->path, terrstr());
return NULL;
}
+ }
+ snprintf(pSyncNode->configPath, sizeof(pSyncNode->configPath), "%s/raft_config.json", pSyncInfo->path);
+ if (!taosCheckExistFile(pSyncNode->configPath)) {
// create raft config file
- snprintf(pSyncNode->configPath, sizeof(pSyncNode->configPath), "%s/raft_config.json", pSyncInfo->path);
ret = syncCfgCreateFile((SSyncCfg*)&(pSyncInfo->syncCfg), pSyncNode->configPath);
assert(ret == 0);
+
+ } else {
+ // update syncCfg by raft_config.json
+ pSyncNode->pRaftCfg = raftCfgOpen(pSyncNode->configPath);
+ assert(pSyncNode->pRaftCfg != NULL);
+ pSyncInfo->syncCfg = pSyncNode->pRaftCfg->cfg;
+
+ char* serialized = raftCfg2Str(pSyncNode->pRaftCfg);
+ sInfo("syncNodeOpen update config: %s", serialized);
+ taosMemoryFree(serialized);
+
+ raftCfgClose(pSyncNode->pRaftCfg);
}
// init by SSyncInfo
@@ -509,7 +525,7 @@ SSyncNode* syncNodeOpen(const SSyncInfo* pSyncInfo) {
pSyncNode->pSnapshot = taosMemoryMalloc(sizeof(SSnapshot));
pSyncNode->pFsm->FpGetSnapshot(pSyncNode->pFsm, pSyncNode->pSnapshot);
}
- //tsem_init(&(pSyncNode->restoreSem), 0, 0);
+ // tsem_init(&(pSyncNode->restoreSem), 0, 0);
// start in syncNodeStart
// start raft
@@ -606,7 +622,7 @@ void syncNodeClose(SSyncNode* pSyncNode) {
taosMemoryFree(pSyncNode->pSnapshot);
}
- //tsem_destroy(&pSyncNode->restoreSem);
+ // tsem_destroy(&pSyncNode->restoreSem);
// free memory in syncFreeNode
// taosMemoryFree(pSyncNode);
@@ -920,6 +936,17 @@ char* syncNode2SimpleStr(const SSyncNode* pSyncNode) {
}
void syncNodeUpdateConfig(SSyncNode* pSyncNode, SSyncCfg* newConfig) {
+ bool hit = false;
+ for (int i = 0; i < newConfig->replicaNum; ++i) {
+ if (strcmp(pSyncNode->myNodeInfo.nodeFqdn, (newConfig->nodeInfo)[i].nodeFqdn) == 0 &&
+ pSyncNode->myNodeInfo.nodePort == (newConfig->nodeInfo)[i].nodePort) {
+ newConfig->myIndex = i;
+ hit = true;
+ break;
+ }
+ }
+ ASSERT(hit == true);
+
pSyncNode->pRaftCfg->cfg = *newConfig;
int32_t ret = raftCfgPersist(pSyncNode->pRaftCfg);
ASSERT(ret == 0);
@@ -949,6 +976,8 @@ void syncNodeUpdateConfig(SSyncNode* pSyncNode, SSyncCfg* newConfig) {
syncIndexMgrUpdate(pSyncNode->pNextIndex, pSyncNode);
syncIndexMgrUpdate(pSyncNode->pMatchIndex, pSyncNode);
+ voteGrantedUpdate(pSyncNode->pVotesGranted, pSyncNode);
+ votesRespondUpdate(pSyncNode->pVotesRespond, pSyncNode);
syncNodeLog2("==syncNodeUpdateConfig==", pSyncNode);
}
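The syncMain.c changes make syncNodeOpen prefer an existing raft_config.json over the caller-supplied config, and syncNodeUpdateConfig now re-locates the node's own slot (myIndex) in the new replica list by matching FQDN and port. A self-contained sketch of that lookup is shown below; the host names and types are illustrative stand-ins.

```c
/* Sketch of the myIndex recalculation: after a membership change the node finds
 * itself in the new replica list by FQDN + port. Stand-in types and data only. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef struct { char fqdn[64]; unsigned short port; } NodeInfo;

static int findMyIndex(const NodeInfo *me, const NodeInfo *nodes, int n) {
  for (int i = 0; i < n; ++i) {
    if (strcmp(me->fqdn, nodes[i].fqdn) == 0 && me->port == nodes[i].port) return i;
  }
  return -1;  /* the real code asserts that a match is always found */
}

int main(void) {
  NodeInfo me = { "node2.example.com", 7100 };
  NodeInfo replicas[3] = { { "node1.example.com", 7100 },
                           { "node2.example.com", 7100 },
                           { "node3.example.com", 7100 } };
  int myIndex = findMyIndex(&me, replicas, 3);
  assert(myIndex == 1);
  printf("myIndex = %d\n", myIndex);
  return 0;
}
```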
diff --git a/source/libs/sync/src/syncVoteMgr.c b/source/libs/sync/src/syncVoteMgr.c
index 733dfd05b6..1c1f0809bd 100644
--- a/source/libs/sync/src/syncVoteMgr.c
+++ b/source/libs/sync/src/syncVoteMgr.c
@@ -45,6 +45,17 @@ void voteGrantedDestroy(SVotesGranted *pVotesGranted) {
}
}
+void voteGrantedUpdate(SVotesGranted *pVotesGranted, SSyncNode *pSyncNode) {
+ pVotesGranted->replicas = &(pSyncNode->replicasId);
+ pVotesGranted->replicaNum = pSyncNode->replicaNum;
+ voteGrantedClearVotes(pVotesGranted);
+
+ pVotesGranted->term = 0;
+ pVotesGranted->quorum = pSyncNode->quorum;
+ pVotesGranted->toLeader = false;
+ pVotesGranted->pSyncNode = pSyncNode;
+}
+
bool voteGrantedMajority(SVotesGranted *pVotesGranted) {
bool ret = pVotesGranted->votes >= pVotesGranted->quorum;
return ret;
@@ -168,6 +179,13 @@ void votesRespondDestory(SVotesRespond *pVotesRespond) {
}
}
+void votesRespondUpdate(SVotesRespond *pVotesRespond, SSyncNode *pSyncNode) {
+ pVotesRespond->replicas = &(pSyncNode->replicasId);
+ pVotesRespond->replicaNum = pSyncNode->replicaNum;
+ pVotesRespond->term = 0;
+ pVotesRespond->pSyncNode = pSyncNode;
+}
+
bool votesResponded(SVotesRespond *pVotesRespond, const SRaftId *pRaftId) {
bool ret = false;
for (int i = 0; i < pVotesRespond->replicaNum; ++i) {
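voteGrantedUpdate and votesRespondUpdate rebind the vote trackers to the new replica set and clear any state collected under the old membership. The sketch below illustrates why that matters for the majority check; it derives the quorum as replicaNum/2+1 for illustration, whereas the real code takes it from pSyncNode->quorum.

```c
/* Sketch: stale vote counters could grant leadership with too few votes after
 * the cluster grows, so the tracker is rebuilt. Simplified stand-in types. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { int replicaNum; int quorum; int votes; } VotesGranted;

static void votesGrantedUpdate(VotesGranted *vg, int newReplicaNum) {
  vg->replicaNum = newReplicaNum;
  vg->quorum = newReplicaNum / 2 + 1;  /* majority quorum (illustrative derivation) */
  vg->votes = 0;                       /* drop votes collected under the old membership */
}

static bool votesGrantedMajority(const VotesGranted *vg) { return vg->votes >= vg->quorum; }

int main(void) {
  VotesGranted vg = { .replicaNum = 3, .quorum = 2, .votes = 2 };
  votesGrantedUpdate(&vg, 5);          /* cluster grows from 3 to 5 replicas */
  printf("majority after update: %d (quorum=%d)\n", votesGrantedMajority(&vg), vg.quorum);
  return 0;
}
```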
diff --git a/source/libs/sync/test/syncConfigChangeTest.cpp b/source/libs/sync/test/syncConfigChangeTest.cpp
index 0850ef6343..f52fef0019 100644
--- a/source/libs/sync/test/syncConfigChangeTest.cpp
+++ b/source/libs/sync/test/syncConfigChangeTest.cpp
@@ -73,9 +73,7 @@ int32_t GetSnapshotCb(struct SSyncFSM* pFsm, SSnapshot* pSnapshot) {
return 0;
}
-void FpRestoreFinishCb(struct SSyncFSM* pFsm) {
- sTrace("==callback== ==FpRestoreFinishCb==");
-}
+void RestoreFinishCb(struct SSyncFSM* pFsm) { sTrace("==callback== ==RestoreFinishCb=="); }
SSyncFSM* createFsm() {
SSyncFSM* pFsm = (SSyncFSM*)taosMemoryMalloc(sizeof(SSyncFSM));
@@ -83,7 +81,7 @@ SSyncFSM* createFsm() {
pFsm->FpPreCommitCb = PreCommitCb;
pFsm->FpRollBackCb = RollBackCb;
pFsm->FpGetSnapshot = GetSnapshotCb;
- pFsm->FpRestoreFinish = FpRestoreFinishCb;
+ pFsm->FpRestoreFinishCb = RestoreFinishCb;
return pFsm;
}
diff --git a/source/libs/sync/test/syncTest.cpp b/source/libs/sync/test/syncTest.cpp
index 76024e061e..ffe8b81571 100644
--- a/source/libs/sync/test/syncTest.cpp
+++ b/source/libs/sync/test/syncTest.cpp
@@ -49,7 +49,7 @@ void test4() {
logTest((char*)__FUNCTION__);
}
-int main() {
+int main(int argc, char** argv) {
// taosInitLog("tmp/syncTest.log", 100);
tsAsyncLog = 0;
@@ -58,6 +58,14 @@ int main() {
test3();
test4();
+ if (argc == 2) {
+ bool bTaosDirExist = taosDirExist(argv[1]);
+ printf("%s bTaosDirExist:%d \n", argv[1], bTaosDirExist);
+
+ bool bTaosCheckExistFile = taosCheckExistFile(argv[1]);
+ printf("%s bTaosCheckExistFile:%d \n", argv[1], bTaosCheckExistFile);
+ }
+
// taosCloseLog();
return 0;
}
diff --git a/source/libs/transport/inc/transComm.h b/source/libs/transport/inc/transComm.h
index 30f799f39e..683f6c88c6 100644
--- a/source/libs/transport/inc/transComm.h
+++ b/source/libs/transport/inc/transComm.h
@@ -95,8 +95,8 @@ typedef void* queue[2];
#define QUEUE_DATA(e, type, field) ((type*)((void*)((char*)(e)-offsetof(type, field))))
#define TRANS_RETRY_COUNT_LIMIT 100 // retry count limit
-#define TRANS_RETRY_INTERVAL 15 // ms retry interval
-#define TRANS_CONN_TIMEOUT 3 // connect timeout
+#define TRANS_RETRY_INTERVAL 15 // ms retry interval
+#define TRANS_CONN_TIMEOUT 3 // connect timeout
typedef SRpcMsg STransMsg;
typedef SRpcCtx STransCtx;
diff --git a/source/libs/transport/src/transCli.c b/source/libs/transport/src/transCli.c
index 92c5e9faf7..159b0cdd07 100644
--- a/source/libs/transport/src/transCli.c
+++ b/source/libs/transport/src/transCli.c
@@ -131,6 +131,19 @@ static void destroyThrdObj(SCliThrdObj* pThrd);
static void cliWalkCb(uv_handle_t* handle, void* arg);
+static void cliReleaseUnfinishedMsg(SCliConn* conn) {
+ SCliMsg* pMsg = NULL;
+ for (int i = 0; i < transQueueSize(&conn->cliMsgs); i++) {
+ pMsg = transQueueGet(&conn->cliMsgs, i);
+ if (pMsg != NULL && pMsg->ctx != NULL) {
+ if (conn->ctx.freeFunc != NULL) {
+ conn->ctx.freeFunc(pMsg->ctx->ahandle);
+ }
+ }
+ destroyCmsg(pMsg);
+ }
+}
+
#define CLI_RELEASE_UV(loop) \
do { \
uv_walk(loop, cliWalkCb, NULL); \
@@ -161,6 +174,7 @@ static void cliWalkCb(uv_handle_t* handle, void* arg);
transUnrefCliHandle(conn); \
} \
destroyCmsg(pMsg); \
+ cliReleaseUnfinishedMsg(conn); \
addConnToPool(((SCliThrdObj*)conn->hostThrd)->pool, conn); \
return; \
} \
@@ -465,8 +479,8 @@ static void addConnToPool(void* pool, SCliConn* conn) {
STrans* pTransInst = ((SCliThrdObj*)conn->hostThrd)->pTransInst;
conn->expireTime = taosGetTimestampMs() + CONN_PERSIST_TIME(pTransInst->idleTime);
- transCtxCleanup(&conn->ctx);
transQueueClear(&conn->cliMsgs);
+ transCtxCleanup(&conn->ctx);
conn->status = ConnInPool;
char key[128] = {0};
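cliReleaseUnfinishedMsg walks the connection's message queue and releases each pending user handle through the context's freeFunc before the connection is returned to the pool (note that addConnToPool now clears the message queue before cleaning the context). A simplified, self-contained sketch of that cleanup follows; the types are stand-ins for SCliConn, SCliMsg, and STransCtx.

```c
/* Sketch: release caller-owned handles of queued-but-unfinished messages before
 * recycling a client connection. Stand-in types only. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { void *ahandle; } MsgCtx;
typedef struct { MsgCtx *ctx; } CliMsg;
typedef struct {
  CliMsg *msgs[8];
  int     count;
  void  (*freeFunc)(void *arg);   /* one free function for all handles */
} CliConn;

static void releaseUnfinishedMsgs(CliConn *conn) {
  for (int i = 0; i < conn->count; ++i) {
    CliMsg *msg = conn->msgs[i];
    if (msg != NULL && msg->ctx != NULL && conn->freeFunc != NULL) {
      conn->freeFunc(msg->ctx->ahandle);   /* release the user handle, as in the diff */
    }
    if (msg != NULL) free(msg->ctx);
    free(msg);
    conn->msgs[i] = NULL;
  }
  conn->count = 0;
}

int main(void) {
  CliConn conn = { .count = 1, .freeFunc = free };
  conn.msgs[0] = malloc(sizeof(CliMsg));
  conn.msgs[0]->ctx = malloc(sizeof(MsgCtx));
  conn.msgs[0]->ctx->ahandle = malloc(16);
  releaseUnfinishedMsgs(&conn);            /* no leaks when the connection is pooled */
  printf("queued messages released\n");
  return 0;
}
```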
diff --git a/source/libs/transport/src/transComm.c b/source/libs/transport/src/transComm.c
index 7014cc481f..1ea03083b2 100644
--- a/source/libs/transport/src/transComm.c
+++ b/source/libs/transport/src/transComm.c
@@ -233,7 +233,7 @@ void transCtxCleanup(STransCtx* ctx) {
STransCtxVal* iter = taosHashIterate(ctx->args, NULL);
while (iter) {
- iter->freeFunc(iter->val);
+ ctx->freeFunc(iter->val);
iter = taosHashIterate(ctx->args, iter);
}
@@ -245,6 +245,7 @@ void transCtxMerge(STransCtx* dst, STransCtx* src) {
if (dst->args == NULL) {
dst->args = src->args;
dst->brokenVal = src->brokenVal;
+ dst->freeFunc = src->freeFunc;
src->args = NULL;
return;
}
@@ -257,7 +258,7 @@ void transCtxMerge(STransCtx* dst, STransCtx* src) {
STransCtxVal* dVal = taosHashGet(dst->args, key, klen);
if (dVal) {
- dVal->freeFunc(dVal->val);
+ dst->freeFunc(dVal->val);
}
taosHashPut(dst->args, key, klen, sVal, sizeof(*sVal));
iter = taosHashIterate(src->args, iter);
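In transComm.c the free function is now taken from the context itself (ctx->freeFunc) rather than from each stored value, and transCtxMerge copies it from the source context so cleanup keeps working after a merge. A rough sketch of that ownership model, with simplified stand-in types, is below.

```c
/* Sketch: one context-level free function applied to every stored value,
 * propagated on merge. Stand-in types only. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { void *val; } CtxVal;
typedef struct {
  CtxVal vals[4];
  int    count;
  void (*freeFunc)(void *arg);
} Ctx;

static void ctxMerge(Ctx *dst, const Ctx *src) {
  if (dst->count == 0) {
    *dst = *src;                    /* also carries over src->freeFunc, as in the diff */
    return;
  }
  for (int i = 0; i < src->count && dst->count < 4; ++i) dst->vals[dst->count++] = src->vals[i];
}

static void ctxCleanup(Ctx *ctx) {
  for (int i = 0; i < ctx->count; ++i) {
    if (ctx->freeFunc != NULL) ctx->freeFunc(ctx->vals[i].val);  /* context-level free */
  }
  ctx->count = 0;
}

int main(void) {
  Ctx src = { .count = 1, .freeFunc = free };
  src.vals[0].val = malloc(8);
  Ctx dst = { 0 };
  ctxMerge(&dst, &src);
  ctxCleanup(&dst);
  printf("context cleaned with shared freeFunc\n");
  return 0;
}
```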
diff --git a/source/libs/transport/test/transportTests.cpp b/source/libs/transport/test/transportTests.cpp
index a84bd94a00..6c8b30b6e4 100644
--- a/source/libs/transport/test/transportTests.cpp
+++ b/source/libs/transport/test/transportTests.cpp
@@ -156,80 +156,80 @@ int32_t cloneVal(void *src, void **dst) {
memcpy(*dst, src, sz);
return 0;
}
-TEST_F(TransCtxEnv, mergeTest) {
- int key = 1;
- {
- STransCtx *src = (STransCtx *)taosMemoryCalloc(1, sizeof(STransCtx));
- transCtxInit(src);
- {
- STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
- val1.val = taosMemoryMalloc(12);
-
- taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
- key++;
- }
- {
- STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
- val1.val = taosMemoryMalloc(12);
- taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
- key++;
- }
- transCtxMerge(ctx, src);
- taosMemoryFree(src);
- }
- EXPECT_EQ(2, taosHashGetSize(ctx->args));
- {
- STransCtx *src = (STransCtx *)taosMemoryCalloc(1, sizeof(STransCtx));
- transCtxInit(src);
- {
- STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
- val1.val = taosMemoryMalloc(12);
-
- taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
- key++;
- }
- {
- STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
- val1.val = taosMemoryMalloc(12);
- taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
- key++;
- }
- transCtxMerge(ctx, src);
- taosMemoryFree(src);
- }
- std::string val("Hello");
- EXPECT_EQ(4, taosHashGetSize(ctx->args));
- {
- key = 1;
- STransCtx *src = (STransCtx *)taosMemoryCalloc(1, sizeof(STransCtx));
- transCtxInit(src);
- {
- STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
- val1.val = taosMemoryCalloc(1, 11);
- val1.clone = cloneVal;
- memcpy(val1.val, val.c_str(), val.size());
-
- taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
- key++;
- }
- {
- STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
- val1.val = taosMemoryCalloc(1, 11);
- val1.clone = cloneVal;
- memcpy(val1.val, val.c_str(), val.size());
- taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
- key++;
- }
- transCtxMerge(ctx, src);
- taosMemoryFree(src);
- }
- EXPECT_EQ(4, taosHashGetSize(ctx->args));
-
- char *skey = (char *)transCtxDumpVal(ctx, 1);
- EXPECT_EQ(0, strcmp(skey, val.c_str()));
- taosMemoryFree(skey);
-
- skey = (char *)transCtxDumpVal(ctx, 2);
- EXPECT_EQ(0, strcmp(skey, val.c_str()));
-}
+// TEST_F(TransCtxEnv, mergeTest) {
+// int key = 1;
+// {
+// STransCtx *src = (STransCtx *)taosMemoryCalloc(1, sizeof(STransCtx));
+// transCtxInit(src);
+// {
+// STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
+// val1.val = taosMemoryMalloc(12);
+//
+// taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
+// key++;
+// }
+// {
+// STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
+// val1.val = taosMemoryMalloc(12);
+// taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
+// key++;
+// }
+// transCtxMerge(ctx, src);
+// taosMemoryFree(src);
+// }
+// EXPECT_EQ(2, taosHashGetSize(ctx->args));
+// {
+// STransCtx *src = (STransCtx *)taosMemoryCalloc(1, sizeof(STransCtx));
+// transCtxInit(src);
+// {
+// STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
+// val1.val = taosMemoryMalloc(12);
+//
+// taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
+// key++;
+// }
+// {
+// STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
+// val1.val = taosMemoryMalloc(12);
+// taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
+// key++;
+// }
+// transCtxMerge(ctx, src);
+// taosMemoryFree(src);
+// }
+// std::string val("Hello");
+// EXPECT_EQ(4, taosHashGetSize(ctx->args));
+// {
+// key = 1;
+// STransCtx *src = (STransCtx *)taosMemoryCalloc(1, sizeof(STransCtx));
+// transCtxInit(src);
+// {
+// STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
+// val1.val = taosMemoryCalloc(1, 11);
+// val1.clone = cloneVal;
+// memcpy(val1.val, val.c_str(), val.size());
+//
+// taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
+// key++;
+// }
+// {
+// STransCtxVal val1 = {NULL, NULL, (void (*)(const void *))taosMemoryFree};
+// val1.val = taosMemoryCalloc(1, 11);
+// val1.clone = cloneVal;
+// memcpy(val1.val, val.c_str(), val.size());
+// taosHashPut(src->args, &key, sizeof(key), &val1, sizeof(val1));
+// key++;
+// }
+// transCtxMerge(ctx, src);
+// taosMemoryFree(src);
+// }
+// EXPECT_EQ(4, taosHashGetSize(ctx->args));
+//
+// char *skey = (char *)transCtxDumpVal(ctx, 1);
+// EXPECT_EQ(0, strcmp(skey, val.c_str()));
+// taosMemoryFree(skey);
+//
+// skey = (char *)transCtxDumpVal(ctx, 2);
+// EXPECT_EQ(0, strcmp(skey, val.c_str()));
+//}
#endif
diff --git a/source/os/src/osDir.c b/source/os/src/osDir.c
index c4b7c9386e..75797048ca 100644
--- a/source/os/src/osDir.c
+++ b/source/os/src/osDir.c
@@ -107,13 +107,14 @@ int32_t taosMkDir(const char *dirname) {
int32_t taosMulMkDir(const char *dirname) {
if (dirname == NULL) return -1;
char temp[1024];
+ char * pos = temp;
+ int32_t code = 0;
#ifdef WINDOWS
taosRealPath(dirname, temp, sizeof(temp));
+ if (temp[1] == ':') pos += 3;
#else
strcpy(temp, dirname);
#endif
- char * pos = temp;
- int32_t code = 0;
if (taosDirExist(temp)) return code;
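The osDir.c fix starts the component-wise mkdir loop after the Windows drive prefix (for example "C:\"), since trying to create the drive root itself would fail. A small sketch of the prefix handling, with made-up paths, follows.

```c
/* Sketch of the drive-prefix skip added to taosMulMkDir for Windows paths. */
#include <stdio.h>
#include <string.h>

static const char *firstComponent(const char *path) {
  const char *pos = path;
  if (strlen(path) >= 3 && path[1] == ':' && (path[2] == '\\' || path[2] == '/')) {
    pos += 3;  /* skip "C:\" so mkdir starts at the first real directory */
  }
  return pos;
}

int main(void) {
  printf("%s\n", firstComponent("C:\\taos\\log"));  /* prints "taos\log" */
  printf("%s\n", firstComponent("/var/taos/log"));  /* unchanged for POSIX-style paths */
  return 0;
}
```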
diff --git a/source/os/src/osFile.c b/source/os/src/osFile.c
index e08b668163..c75cca79f6 100644
--- a/source/os/src/osFile.c
+++ b/source/os/src/osFile.c
@@ -69,7 +69,6 @@ void taosGetTmpfilePath(const char *inputTmpDir, const char *fileNamePrefix, cha
}
strcpy(tmpPath + len, tdengineTmpFileNamePrefix);
- strcat(tmpPath, tdengineTmpFileNamePrefix);
if (strlen(tmpPath) + strlen(fileNamePrefix) + strlen("-%d-%s") < PATH_MAX) {
strcat(tmpPath, fileNamePrefix);
strcat(tmpPath, "-%d-%s");
diff --git a/source/os/src/osSemaphore.c b/source/os/src/osSemaphore.c
index d4cfe4fc39..3b68073c7e 100644
--- a/source/os/src/osSemaphore.c
+++ b/source/os/src/osSemaphore.c
@@ -50,10 +50,15 @@ int32_t taosGetAppName(char* name, int32_t* len) {
if (sub != NULL) {
*sub = '\0';
}
- strcpy(name, filepath);
+ char* end = strrchr(filepath, TD_DIRSEP[0]);
+ if (end == NULL) {
+ end = filepath;
+ }
+
+ strcpy(name, end);
if (len != NULL) {
- *len = (int32_t)strlen(filepath);
+ *len = (int32_t)strlen(end);
}
return 0;
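taosGetAppName now reports only the executable's base name, taken from the last directory separator onward, and sets *len to the length of that suffix. A self-contained sketch of the extraction follows; as in the diff, the returned string still begins with the separator when one is present, and '/' stands in for TD_DIRSEP here.

```c
/* Sketch of the base-name extraction added to taosGetAppName. */
#include <stdio.h>
#include <string.h>

static const char *appBaseName(const char *filepath) {
  const char *end = strrchr(filepath, '/');
  return (end == NULL) ? filepath : end;  /* as in the diff, the leading separator is kept */
}

int main(void) {
  printf("%s\n", appBaseName("/usr/local/taos/bin/taosd"));  /* "/taosd" */
  printf("%s\n", appBaseName("taosd"));                      /* "taosd" */
  return 0;
}
```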
diff --git a/source/os/src/osSocket.c b/source/os/src/osSocket.c
index 572e2db6fd..4a0d9e2866 100644
--- a/source/os/src/osSocket.c
+++ b/source/os/src/osSocket.c
@@ -889,11 +889,11 @@ uint32_t taosGetIpv4FromFqdn(const char *fqdn) {
#ifdef WINDOWS
// Initialize Winsock
WSADATA wsaData;
- int iResult;
+ int iResult;
iResult = WSAStartup(MAKEWORD(2, 2), &wsaData);
if (iResult != 0) {
- printf("WSAStartup failed: %d\n", iResult);
- return 1;
+ // printf("WSAStartup failed: %d\n", iResult);
+ return 1;
}
#endif
struct addrinfo hints = {0};
@@ -913,12 +913,12 @@ uint32_t taosGetIpv4FromFqdn(const char *fqdn) {
} else {
#ifdef EAI_SYSTEM
if (ret == EAI_SYSTEM) {
- printf("failed to get the ip address, fqdn:%s, errno:%d, since:%s", fqdn, errno, strerror(errno));
+ // printf("failed to get the ip address, fqdn:%s, errno:%d, since:%s", fqdn, errno, strerror(errno));
} else {
- printf("failed to get the ip address, fqdn:%s, ret:%d, since:%s", fqdn, ret, gai_strerror(ret));
+ // printf("failed to get the ip address, fqdn:%s, ret:%d, since:%s", fqdn, ret, gai_strerror(ret));
}
#else
- printf("failed to get the ip address, fqdn:%s, ret:%d, since:%s", fqdn, ret, gai_strerror(ret));
+ // printf("failed to get the ip address, fqdn:%s, ret:%d, since:%s", fqdn, ret, gai_strerror(ret));
#endif
return 0xFFFFFFFF;
}
@@ -928,7 +928,7 @@ int32_t taosGetFqdn(char *fqdn) {
char hostname[1024];
hostname[1023] = '\0';
if (gethostname(hostname, 1023) == -1) {
- printf("failed to get hostname, reason:%s", strerror(errno));
+ // printf("failed to get hostname, reason:%s", strerror(errno));
assert(0);
return -1;
}
@@ -946,7 +946,7 @@ int32_t taosGetFqdn(char *fqdn) {
#endif // __APPLE__
int32_t ret = getaddrinfo(hostname, NULL, &hints, &result);
if (!result) {
- printf("failed to get fqdn, code:%d, reason:%s", ret, gai_strerror(ret));
+ // printf("failed to get fqdn, code:%d, reason:%s", ret, gai_strerror(ret));
assert(0);
return -1;
}
@@ -993,9 +993,7 @@ void tinet_ntoa(char *ipstr, uint32_t ip) {
sprintf(ipstr, "%d.%d.%d.%d", ip & 0xFF, (ip >> 8) & 0xFF, (ip >> 16) & 0xFF, ip >> 24);
}
-void taosIgnSIGPIPE() {
- signal(SIGPIPE, SIG_IGN);
-}
+void taosIgnSIGPIPE() { signal(SIGPIPE, SIG_IGN); }
void taosSetMaskSIGPIPE() {
#ifdef WINDOWS
diff --git a/source/util/src/terror.c b/source/util/src/terror.c
index 7c4f0fa2dd..6eb4f9310b 100644
--- a/source/util/src/terror.c
+++ b/source/util/src/terror.c
@@ -272,6 +272,10 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_EXIST, "Consumer not exist")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_READY, "Consumer waiting for rebalance")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOPIC_SUBSCRIBED, "Topic subscribed cannot be dropped")
+TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_ALREADY_EXIST, "Stream already exists")
+TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_NOT_EXIST, "Stream not exist")
+TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_STREAM_OPTION, "Invalid stream option")
+
// mnode-sma
TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_ALREADY_EXIST, "SMA already exists")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_NOT_EXIST, "SMA does not exist")
@@ -311,6 +315,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_VND_TABLE_NOT_EXIST, "Table does not exists
TAOS_DEFINE_ERROR(TSDB_CODE_VND_INVALID_TABLE_ACTION, "Invalid table action")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_COL_ALREADY_EXISTS, "Table column already exists")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_TABLE_COL_NOT_EXISTS, "Table column not exists")
+TAOS_DEFINE_ERROR(TSDB_CODE_VND_READ_END, "Read end")
// tsdb
diff --git a/source/util/src/thash.c b/source/util/src/thash.c
index 551c3b67c8..f564ae45b6 100644
--- a/source/util/src/thash.c
+++ b/source/util/src/thash.c
@@ -708,7 +708,7 @@ SHashNode *doCreateHashNode(const void *key, size_t keyLen, const void *pData, s
pNewNode->removed = 0;
pNewNode->next = NULL;
- memcpy(GET_HASH_NODE_DATA(pNewNode), pData, dsize);
+ if (pData) memcpy(GET_HASH_NODE_DATA(pNewNode), pData, dsize);
memcpy(GET_HASH_NODE_KEY(pNewNode), key, keyLen);
return pNewNode;
@@ -774,7 +774,7 @@ static void *taosHashReleaseNode(SHashObj *pHashObj, void *p, int *slot) {
ASSERT(prevNode->next != prevNode);
} else {
pe->next = pOld->next;
- SHashNode* x = pe->next;
+ SHashNode *x = pe->next;
if (x != NULL) {
ASSERT(x->next != x);
}
diff --git a/source/util/src/tlog.c b/source/util/src/tlog.c
index c1fc2c48c0..e8a1ceb18b 100644
--- a/source/util/src/tlog.c
+++ b/source/util/src/tlog.c
@@ -226,7 +226,7 @@ static void *taosThreadToOpenNewFile(void *param) {
tsLogObj.logHandle->pFile = pFile;
tsLogObj.lines = 0;
tsLogObj.openInProgress = 0;
- taosSsleep(10);
+ taosSsleep(20);
taosCloseLogByFd(pOldFile);
uInfo(" new log file:%d is opened", tsLogObj.flag);
diff --git a/source/util/src/tpagedbuf.c b/source/util/src/tpagedbuf.c
index 00f1233707..101ac78e18 100644
--- a/source/util/src/tpagedbuf.c
+++ b/source/util/src/tpagedbuf.c
@@ -549,11 +549,16 @@ void destroyDiskbasedBuf(SDiskbasedBuf* pBuf) {
// print the statistics information
{
SDiskbasedBufStatis* ps = &pBuf->statis;
- uDebug(
- "Get/Release pages:%d/%d, flushToDisk:%.2f Kb (%d Pages), loadFromDisk:%.2f Kb (%d Pages), avgPageSize:%.2f "
- "Kb\n",
- ps->getPages, ps->releasePages, ps->flushBytes / 1024.0f, ps->flushPages, ps->loadBytes / 1024.0f,
- ps->loadPages, ps->loadBytes / (1024.0 * ps->loadPages));
+ if (ps->loadPages == 0) {
+ uDebug(
+ "Get/Release pages:%d/%d, flushToDisk:%.2f Kb (%d Pages), loadFromDisk:%.2f Kb (%d Pages)",
+ ps->getPages, ps->releasePages, ps->flushBytes / 1024.0f, ps->flushPages, ps->loadBytes / 1024.0f, ps->loadPages);
+ } else {
+ uDebug(
+ "Get/Release pages:%d/%d, flushToDisk:%.2f Kb (%d Pages), loadFromDisk:%.2f Kb (%d Pages), avgPageSize:%.2f Kb",
+ ps->getPages, ps->releasePages, ps->flushBytes / 1024.0f, ps->flushPages, ps->loadBytes / 1024.0f,
+ ps->loadPages, ps->loadBytes / (1024.0 * ps->loadPages));
+ }
}
taosRemoveFile(pBuf->path);
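destroyDiskbasedBuf now skips the average-page-size term when no pages were loaded from disk, avoiding a division by zero in the debug log. A minimal sketch of the guarded computation, with made-up numbers, is below.

```c
/* Sketch of the guarded statistics log: only divide by loadPages when it is non-zero. */
#include <stdio.h>

static void printStats(int loadPages, double loadBytes) {
  if (loadPages == 0) {
    printf("loadFromDisk: %.2f Kb (%d pages)\n", loadBytes / 1024.0, loadPages);
  } else {
    printf("loadFromDisk: %.2f Kb (%d pages), avgPageSize: %.2f Kb\n",
           loadBytes / 1024.0, loadPages, loadBytes / (1024.0 * loadPages));
  }
}

int main(void) {
  printStats(0, 0.0);      /* no division performed */
  printStats(4, 16384.0);  /* avgPageSize = 4.00 Kb */
  return 0;
}
```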
diff --git a/tests/pytest/fulltest.bat b/tests/pytest/fulltest.bat
new file mode 100644
index 0000000000..5758691c88
--- /dev/null
+++ b/tests/pytest/fulltest.bat
@@ -0,0 +1,2 @@
+
+python .\test.py -f insert\basic.py
\ No newline at end of file
diff --git a/tests/pytest/stream/cqSupportBefore1970.py b/tests/pytest/stream/cqSupportBefore1970.py
deleted file mode 100644
index 01ba5234fc..0000000000
--- a/tests/pytest/stream/cqSupportBefore1970.py
+++ /dev/null
@@ -1,93 +0,0 @@
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-from util.log import *
-from util.cases import *
-from util.sql import *
-from util.dnodes import *
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug(f"start to execute {__file__}")
- tdSql.init(conn.cursor(), logSql)
-
- def insertnow(self):
-
- # timestamp list:
- # 0 -> "1970-01-01 08:00:00" | -28800000 -> "1970-01-01 00:00:00" | -946800000000 -> "1940-01-01 00:00:00"
- # -631180800000 -> "1950-01-01 00:00:00"
-
- tsp1 = 0
- tsp2 = -28800000
- tsp3 = -946800000000
- tsp4 = "1969-01-01 00:00:00.000"
-
- tdSql.execute("insert into tcq1 values (now-11d, 5)")
- tdSql.execute(f"insert into tcq1 values ({tsp1}, 4)")
- tdSql.execute(f"insert into tcq1 values ({tsp2}, 3)")
- tdSql.execute(f"insert into tcq1 values ('{tsp4}', 2)")
- tdSql.execute(f"insert into tcq1 values ({tsp3}, 1)")
-
- def waitedQuery(self, sql, expectRows, timeout):
- tdLog.info(f"sql: {sql}, try to retrieve {expectRows} rows in {timeout} seconds")
- try:
- for i in range(timeout):
- tdSql.cursor.execute(sql)
- self.queryResult = tdSql.cursor.fetchall()
- self.queryRows = len(self.queryResult)
- self.queryCols = len(tdSql.cursor.description)
- # tdLog.info("sql: %s, try to retrieve %d rows,get %d rows" % (sql, expectRows, self.queryRows))
- if self.queryRows >= expectRows:
- return (self.queryRows, i)
- time.sleep(1)
- except Exception as e:
- caller = inspect.getframeinfo(inspect.stack()[1][0])
- tdLog.notice(f"{caller.filename}({caller.lineno}) failed: sql:{sql}, {repr(e)}")
- raise Exception(repr(e))
- return (self.queryRows, timeout)
-
- def cq(self):
- tdSql.execute(
- "create table cq1 as select avg(c1) from tcq1 where ts > -946800000000 interval(10d) sliding(1d)"
- )
- self.waitedQuery("select * from cq1", 1, 120)
-
- def querycq(self):
- tdSql.query("select * from cq1")
- tdSql.checkData(0, 1, 1.0)
- tdSql.checkData(10, 1, 2.0)
-
- def run(self):
- tdSql.execute("drop database if exists dbcq")
- tdSql.execute("create database if not exists dbcq keep 36500")
- tdSql.execute("use dbcq")
-
- tdSql.execute("create table stbcq (ts timestamp, c1 int ) TAGS(t1 int)")
- tdSql.execute("create table tcq1 using stbcq tags(1)")
-
- self.insertnow()
- self.cq()
- self.querycq()
-
- # after wal and sync, check again
- tdSql.query("show dnodes")
- index = tdSql.getData(0, 0)
- tdDnodes.stop(index)
- tdDnodes.start(index)
-
- self.querycq()
-
- def stop(self):
- tdSql.close()
- tdLog.success(f"{__file__} successfully executed")
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
diff --git a/tests/pytest/stream/history.py b/tests/pytest/stream/history.py
deleted file mode 100644
index cb8a4d5986..0000000000
--- a/tests/pytest/stream/history.py
+++ /dev/null
@@ -1,67 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def run(self):
- tdSql.prepare()
-
- tdSql.execute("create table cars(ts timestamp, s int) tags(id int)")
- tdSql.execute("create table car0 using cars tags(0)")
- tdSql.execute("create table car1 using cars tags(1)")
- tdSql.execute("create table car2 using cars tags(2)")
- tdSql.execute("create table car3 using cars tags(3)")
- tdSql.execute("create table car4 using cars tags(4)")
-
- tdSql.execute("insert into car0 values('2019-01-01 00:00:00.103', 1)")
- tdSql.execute("insert into car1 values('2019-01-01 00:00:00.234', 1)")
- tdSql.execute("insert into car0 values('2019-01-01 00:00:01.012', 1)")
- tdSql.execute("insert into car0 values('2019-01-01 00:00:02.003', 1)")
- tdSql.execute("insert into car2 values('2019-01-01 00:00:02.328', 1)")
- tdSql.execute("insert into car0 values('2019-01-01 00:00:03.139', 1)")
- tdSql.execute("insert into car0 values('2019-01-01 00:00:04.348', 1)")
- tdSql.execute("insert into car0 values('2019-01-01 00:00:05.783', 1)")
- tdSql.execute("insert into car1 values('2019-01-01 00:00:01.893', 1)")
- tdSql.execute("insert into car1 values('2019-01-01 00:00:02.712', 1)")
- tdSql.execute("insert into car1 values('2019-01-01 00:00:03.982', 1)")
- tdSql.execute("insert into car3 values('2019-01-01 00:00:01.389', 1)")
- tdSql.execute("insert into car4 values('2019-01-01 00:00:01.829', 1)")
-
- tdSql.error("create table strm as select count(*) from cars")
-
- tdSql.execute("create table strm as select count(*) from cars interval(4s)")
- tdSql.waitedQuery("select * from strm", 2, 100)
- tdSql.checkData(0, 1, 11)
- tdSql.checkData(1, 1, 2)
-
-
-
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/metric_1.py b/tests/pytest/stream/metric_1.py
deleted file mode 100644
index b4cccac69c..0000000000
--- a/tests/pytest/stream/metric_1.py
+++ /dev/null
@@ -1,104 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def createFuncStream(self, expr, suffix, value):
- tbname = "strm_" + suffix
- tdLog.info("create stream table %s" % tbname)
- tdSql.query("select %s from stb interval(1d)" % expr)
- tdSql.checkData(0, 1, value)
- tdSql.execute("create table %s as select %s from stb interval(1d)" % (tbname, expr))
-
- def checkStreamData(self, suffix, value):
- sql = "select * from strm_" + suffix
- tdSql.waitedQuery(sql, 1, 120)
- tdSql.checkData(0, 1, value)
-
- def run(self):
- tbNum = 10
- rowNum = 20
-
- tdSql.prepare()
-
- tdLog.info("===== preparing data =====")
- tdSql.execute(
- "create table stb(ts timestamp, tbcol int, tbcol2 float) tags(tgcol int)")
- for i in range(tbNum):
- tdSql.execute("create table tb%d using stb tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute(
- "insert into tb%d values (now - %dm, %d, %d)" %
- (i, 1440 - j, j, j))
- time.sleep(0.1)
-
- self.createFuncStream("count(*)", "c1", 200)
- self.createFuncStream("count(tbcol)", "c2", 200)
- self.createFuncStream("count(tbcol2)", "c3", 200)
- self.createFuncStream("avg(tbcol)", "av", 9.5)
- self.createFuncStream("sum(tbcol)", "su", 1900)
- self.createFuncStream("min(tbcol)", "mi", 0)
- self.createFuncStream("max(tbcol)", "ma", 19)
- self.createFuncStream("first(tbcol)", "fi", 0)
- self.createFuncStream("last(tbcol)", "la", 19)
- #tdSql.query("select stddev(tbcol) from stb interval(1d)")
- #tdSql.query("select leastsquares(tbcol, 1, 1) from stb interval(1d)")
- tdSql.query("select top(tbcol, 1) from stb interval(1d)")
- tdSql.query("select bottom(tbcol, 1) from stb interval(1d)")
- #tdSql.query("select percentile(tbcol, 1) from stb interval(1d)")
- #tdSql.query("select diff(tbcol) from stb interval(1d)")
-
- tdSql.query("select count(tbcol) from stb where ts < now + 4m interval(1d)")
- tdSql.checkData(0, 1, 200)
- #tdSql.execute("create table strm_wh as select count(tbcol) from stb where ts < now + 4m interval(1d)")
-
- self.createFuncStream("count(tbcol)", "as", 200)
-
- tdSql.query("select count(tbcol) from stb interval(1d) group by tgcol")
- tdSql.checkData(0, 1, 20)
-
- tdSql.query("select count(tbcol) from stb where ts < now + 4m interval(1d) group by tgcol")
- tdSql.checkData(0, 1, 20)
-
- self.checkStreamData("c1", 200)
- self.checkStreamData("c2", 200)
- self.checkStreamData("c3", 200)
- self.checkStreamData("av", 9.5)
- self.checkStreamData("su", 1900)
- self.checkStreamData("mi", 0)
- self.checkStreamData("ma", 19)
- self.checkStreamData("fi", 0)
- self.checkStreamData("la", 19)
- #self.checkStreamData("wh", 200)
- self.checkStreamData("as", 200)
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
-
-
diff --git a/tests/pytest/stream/metric_n.py b/tests/pytest/stream/metric_n.py
deleted file mode 100644
index d223fe81fc..0000000000
--- a/tests/pytest/stream/metric_n.py
+++ /dev/null
@@ -1,123 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def run(self):
- tbNum = 10
- rowNum = 20
- totalNum = tbNum * rowNum
-
- tdSql.prepare()
-
- tdLog.info("===== preparing data =====")
- tdSql.execute(
- "create table stb(ts timestamp, tbcol int, tbcol2 float) tags(tgcol int)")
- for i in range(tbNum):
- tdSql.execute("create table tb%d using stb tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute(
- "insert into tb%d values (now - %dm, %d, %d)" %
- (i, 1440 - j, j, j))
- time.sleep(0.1)
-
- tdLog.info("===== step 1 =====")
- tdSql.query("select count(*), count(tbcol), count(tbcol2) from stb interval(1d)")
- tdSql.checkData(0, 1, totalNum)
- tdSql.checkData(0, 2, totalNum)
- tdSql.checkData(0, 3, totalNum)
-
- tdLog.info("===== step 2 =====")
- tdSql.execute("create table strm_c3 as select count(*), count(tbcol), count(tbcol2) from stb interval(1d)")
-
- tdLog.info("===== step 3 =====")
- tdSql.execute("create table strm_c32 as select count(*), count(tbcol) as c1, count(tbcol2) as c2, count(tbcol) as c3, count(tbcol) as c4, count(tbcol) as c5, count(tbcol) as c6, count(tbcol) as c7, count(tbcol) as c8, count(tbcol) as c9, count(tbcol) as c10, count(tbcol) as c11, count(tbcol) as c12, count(tbcol) as c13, count(tbcol) as c14, count(tbcol) as c15, count(tbcol) as c16, count(tbcol) as c17, count(tbcol) as c18, count(tbcol) as c19, count(tbcol) as c20, count(tbcol) as c21, count(tbcol) as c22, count(tbcol) as c23, count(tbcol) as c24, count(tbcol) as c25, count(tbcol) as c26, count(tbcol) as c27, count(tbcol) as c28, count(tbcol) as c29, count(tbcol) as c30 from stb interval(1d)")
-
- tdLog.info("===== step 4 =====")
- tdSql.query("select count(*), count(tbcol) as c1, count(tbcol2) as c2, count(tbcol) as c3, count(tbcol) as c4, count(tbcol) as c5, count(tbcol) as c6, count(tbcol) as c7, count(tbcol) as c8, count(tbcol) as c9, count(tbcol) as c10, count(tbcol) as c11, count(tbcol) as c12, count(tbcol) as c13, count(tbcol) as c14, count(tbcol) as c15, count(tbcol) as c16, count(tbcol) as c17, count(tbcol) as c18, count(tbcol) as c19, count(tbcol) as c20, count(tbcol) as c21, count(tbcol) as c22, count(tbcol) as c23, count(tbcol) as c24, count(tbcol) as c25, count(tbcol) as c26, count(tbcol) as c27, count(tbcol) as c28, count(tbcol) as c29, count(tbcol) as c30 from stb interval(1d)")
- tdSql.checkData(0, 1, totalNum)
- tdSql.checkData(0, 2, totalNum)
- tdSql.checkData(0, 3, totalNum)
-
- tdLog.info("===== step 5 =====")
- tdSql.execute("create table strm_c31 as select count(*), count(tbcol) as c1, count(tbcol2) as c2, count(tbcol) as c3, count(tbcol) as c4, count(tbcol) as c5, count(tbcol) as c6, count(tbcol) as c7, count(tbcol) as c8, count(tbcol) as c9, count(tbcol) as c10, count(tbcol) as c11, count(tbcol) as c12, count(tbcol) as c13, count(tbcol) as c14, count(tbcol) as c15, count(tbcol) as c16, count(tbcol) as c17, count(tbcol) as c18, count(tbcol) as c19, count(tbcol) as c20, count(tbcol) as c21, count(tbcol) as c22, count(tbcol) as c23, count(tbcol) as c24, count(tbcol) as c25, count(tbcol) as c26, count(tbcol) as c27, count(tbcol) as c28, count(tbcol) as c29, count(tbcol) as c30 from stb interval(1d)")
-
- tdLog.info("===== step 6 =====")
- tdSql.query("select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol) from stb interval(1d)")
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 1900)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
- tdSql.execute("create table strm_avg as select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol) from stb interval(1d)")
-
- tdLog.info("===== step 7 =====")
- tdSql.query("select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol), count(tbcol) from stb where ts < now + 4m interval(1d)")
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 1900)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
- tdSql.checkData(0, 7, totalNum)
-
- tdLog.info("===== step 8 =====")
- tdSql.query("select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol), count(tbcol) from stb where ts < now + 4m interval(1d)")
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 1900)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
- tdSql.checkData(0, 7, totalNum)
-
- tdLog.info("===== step 9 =====")
- tdSql.waitedQuery("select * from strm_c3", 1, 120)
- tdSql.checkData(0, 1, totalNum)
- tdSql.checkData(0, 2, totalNum)
- tdSql.checkData(0, 3, totalNum)
-
- tdLog.info("===== step 10 =====")
- tdSql.waitedQuery("select * from strm_c31", 1, 30)
- for i in range(1, 10):
- tdSql.checkData(0, i, totalNum)
-
- tdLog.info("===== step 11 =====")
- tdSql.waitedQuery("select * from strm_avg", 1, 20)
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 1900)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
-
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/new.py b/tests/pytest/stream/new.py
deleted file mode 100644
index 4a0e47c01a..0000000000
--- a/tests/pytest/stream/new.py
+++ /dev/null
@@ -1,79 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def run(self):
- rowNum = 200
- tdSql.prepare()
-
- tdLog.info("=============== step1")
- tdSql.execute("create table mt(ts timestamp, tbcol int, tbcol2 float) TAGS(tgcol int)")
- for i in range(5):
- tdSql.execute("create table tb%d using mt tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute("insert into tb%d values(now + %ds, %d, %d)" % (i, j, j, j))
- time.sleep(0.1)
-
- tdLog.info("=============== step2")
- tdSql.query("select count(*), count(tbcol), count(tbcol2) from mt interval(10s)")
- tdSql.execute("create table st as select count(*), count(tbcol), count(tbcol2) from mt interval(10s)")
-
- tdLog.info("=============== step3")
- start = time.time()
- tdSql.waitedQuery("select * from st", 1, 180)
- delay = int(time.time() - start) + 80
- v = tdSql.getData(0, 3)
- if v >= 51:
- tdLog.exit("value is %d, which is larger than 51" % v)
-
- tdLog.info("=============== step4")
- for i in range(5, 10):
- tdSql.execute("create table tb%d using mt tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute("insert into tb%d values(now + %ds, %d, %d)" % (i, j, j, j))
-
- tdLog.info("=============== step5")
- maxValue = 0
- for i in range(delay):
- time.sleep(1)
- tdSql.query("select * from st order by ts desc")
- v = tdSql.getData(0, 3)
- if v > maxValue:
- maxValue = v
- if v > 51:
- break
-
- if maxValue <= 51:
- tdLog.exit("value is %d, which is smaller than 51" % maxValue)
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
-
-
diff --git a/tests/pytest/stream/parser.py b/tests/pytest/stream/parser.py
deleted file mode 100644
index 3b231d2b39..0000000000
--- a/tests/pytest/stream/parser.py
+++ /dev/null
@@ -1,182 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- '''
- def bug2222(self):
- tdSql.prepare()
- tdSql.execute("create table superreal(ts timestamp, addr binary(5), val float) tags (deviceNo binary(20))")
- tdSql.execute("create table real_001 using superreal tags('001')")
- tdSql.execute("create table tj_001 as select sum(val) from real_001 interval(1m)")
-
- t = datetime.datetime.now()
- for i in range(60):
- ts = t.strftime("%Y-%m-%d %H:%M")
- t += datetime.timedelta(minutes=1)
- sql = "insert into real_001 values('%s:0%d', '1', %d)" % (ts, 0, i)
- for j in range(4):
- sql += ",('%s:0%d', '%d', %d)" % (ts, j + 1, j + 1, i)
- tdSql.execute(sql)
- time.sleep(60 + random.random() * 60 - 30)
- '''
-
- def tbase300(self):
- tdLog.debug("begin tbase300")
-
- tdSql.prepare()
- tdSql.execute("create table mt(ts timestamp, c1 int, c2 int) tags(t1 int)")
- tdSql.execute("create table tb1 using mt tags(1)");
- tdSql.execute("create table tb2 using mt tags(2)");
- tdSql.execute("create table strm as select count(*), avg(c1), sum(c2), max(c1), min(c2),first(c1), last(c2) from mt interval(4s) sliding(2s)")
- #tdSql.execute("create table strm as select count(*), avg(c1), sum(c2), max(c1), min(c2), first(c1) from mt interval(4s) sliding(2s)")
- tdLog.sleep(10)
- tdSql.execute("insert into tb2 values(now, 1, 1)");
- tdSql.execute("insert into tb1 values(now, 1, 1)");
- tdLog.sleep(4)
- tdSql.query("select * from mt")
- tdSql.query("select * from strm")
- tdSql.execute("drop table tb1")
-
- tdSql.waitedQuery("select * from strm", 1, 100)
- if tdSql.queryRows < 1 or tdSql.queryRows > 2:
- tdLog.exit("rows should be 1 or 2")
-
- tdSql.execute("drop table tb2")
- tdSql.execute("drop table mt")
- tdSql.execute("drop table strm")
-
- def tbase304(self):
- tdLog.debug("begin tbase304")
- # we cannot reset query cache in server side, as a workaround,
- # set super table name to mt304, need to change back to mt later
- tdSql.execute("create table mt304 (ts timestamp, c1 int) tags(t1 int, t2 int)")
- tdSql.execute("create table tb1 using mt304 tags(1, 1)")
- tdSql.execute("create table tb2 using mt304 tags(1, -1)")
- time.sleep(0.1)
- tdSql.execute("create table strm as select count(*), avg(c1) from mt304 where t2 >= 0 interval(4s) sliding(2s)")
- tdSql.execute("insert into tb1 values (now,1)")
- tdSql.execute("insert into tb2 values (now,2)")
-
- tdSql.waitedQuery("select * from strm", 1, 100)
- if tdSql.queryRows < 1 or tdSql.queryRows > 2:
- tdLog.exit("rows should be 1 or 2")
-
- tdSql.checkData(0, 1, 1)
- tdSql.checkData(0, 2, 1.000000000)
- tdSql.execute("alter table mt304 drop tag t2")
- tdSql.execute("insert into tb2 values (now,2)")
- tdSql.execute("insert into tb1 values (now,1)")
- tdSql.query("select * from strm")
- tdSql.execute("alter table mt304 add tag t2 int")
- tdLog.sleep(1)
- tdSql.query("select * from strm")
-
- def wildcardFilterOnTags(self):
- tdLog.debug("begin wildcardFilterOnTag")
- tdSql.prepare()
- tdSql.execute("create table stb (ts timestamp, c1 int, c2 binary(10)) tags(t1 binary(10))")
- tdSql.execute("create table tb1 using stb tags('a1')")
- tdSql.execute("create table tb2 using stb tags('b2')")
- tdSql.execute("create table tb3 using stb tags('a3')")
- tdSql.execute("create table strm as select count(*), avg(c1), first(c2) from stb where t1 like 'a%' interval(4s) sliding(2s)")
- tdSql.query("describe strm")
- tdSql.checkRows(4)
-
- tdLog.sleep(1)
- tdSql.execute("insert into tb1 values (now, 0, 'tb1')")
- tdLog.sleep(4)
- tdSql.execute("insert into tb2 values (now, 2, 'tb2')")
- tdLog.sleep(4)
- tdSql.execute("insert into tb3 values (now, 0, 'tb3')")
-
- tdSql.waitedQuery("select * from strm", 4, 60)
- tdSql.checkRows(4)
- tdSql.checkData(0, 2, 0.000000000)
- if tdSql.getData(0, 3) == 'tb2':
- tdLog.exit("unexpected value of data03")
- if tdSql.getData(1, 3) == 'tb2':
- tdLog.exit("unexpected value of data13")
- if tdSql.getData(2, 3) == 'tb2':
- tdLog.exit("unexpected value of data23")
- if tdSql.getData(3, 3) == 'tb2':
- tdLog.exit("unexpected value of data33")
-
- tdLog.info("add table tb4 to see if stream still works correctly")
- # The vnode client needs to refresh metadata cache to allow strm calculate tb4's data.
- # But the current refreshing frequency is every 10 min
- # commented out the case below to save running time
- tdSql.execute("create table tb4 using stb tags('a4')")
- tdSql.execute("insert into tb4 values(now, 4, 'tb4')")
- tdSql.waitedQuery("select * from strm order by ts desc", 6, 60)
- tdSql.checkRows(6)
- tdSql.checkData(0, 2, 4)
- tdSql.checkData(0, 3, "tb4")
-
- tdLog.info("change tag values to see if stream still works correctly")
- tdSql.execute("alter table tb4 set tag t1='b4'")
- tdLog.sleep(3)
- tdSql.execute("insert into tb1 values (now, 1, 'tb1_a1')")
- tdLog.sleep(4)
- tdSql.execute("insert into tb4 values (now, -4, 'tb4_b4')")
- tdSql.waitedQuery("select * from strm order by ts desc", 8, 100)
- tdSql.checkRows(8)
- tdSql.checkData(0, 2, 1)
- tdSql.checkData(0, 3, "tb1_a1")
-
- def datatypes(self):
- tdLog.debug("begin data types")
- tdSql.prepare()
- tdSql.execute("create table stb3 (ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(15), c6 nchar(15), c7 bool) tags(t1 int, t2 binary(15))")
- tdSql.execute("create table tb0 using stb3 tags(0, 'tb0')")
- tdSql.execute("create table tb1 using stb3 tags(1, 'tb1')")
- tdSql.execute("create table tb2 using stb3 tags(2, 'tb2')")
- tdSql.execute("create table tb3 using stb3 tags(3, 'tb3')")
- tdSql.execute("create table tb4 using stb3 tags(4, 'tb4')")
-
- tdSql.execute("create table strm0 as select count(ts), count(c1), max(c2), min(c4), first(c5), last(c6) from stb3 where ts < now + 30s interval(4s) sliding(2s)")
- #tdSql.execute("create table strm0 as select count(ts), count(c1), max(c2), min(c4), first(c5) from stb where ts < now + 30s interval(4s) sliding(2s)")
- tdLog.sleep(1)
- tdSql.execute("insert into tb0 values (now, 0, 0, 0, 0, 'binary0', '涛思0', true) tb1 values (now, 1, 1, 1, 1, 'binary1', '涛思1', false) tb2 values (now, 2, 2, 2, 2, 'binary2', '涛思2', true) tb3 values (now, 3, 3, 3, 3, 'binary3', '涛思3', false) tb4 values (now, 4, 4, 4, 4, 'binary4', '涛思4', true) ")
-
- tdSql.waitedQuery("select * from strm0 order by ts desc", 2, 120)
- tdSql.checkRows(2)
-
- tdSql.execute("insert into tb0 values (now, 10, 10, 10, 10, 'binary0', '涛思0', true) tb1 values (now, 11, 11, 11, 11, 'binary1', '涛思1', false) tb2 values (now, 12, 12, 12, 12, 'binary2', '涛思2', true) tb3 values (now, 13, 13, 13, 13, 'binary3', '涛思3', false) tb4 values (now, 14, 14, 14, 14, 'binary4', '涛思4', true) ")
- tdSql.waitedQuery("select * from strm0 order by ts desc", 4, 120)
- tdSql.checkRows(4)
-
- def run(self):
- self.tbase300()
- self.tbase304()
- self.wildcardFilterOnTags()
- self.datatypes()
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/showStreamExecTimeisNull.py b/tests/pytest/stream/showStreamExecTimeisNull.py
deleted file mode 100644
index 8a2a09cec6..0000000000
--- a/tests/pytest/stream/showStreamExecTimeisNull.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-from util.log import *
-from util.cases import *
-from util.sql import *
-from util.dnodes import *
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug(f"start to execute {__file__}")
- tdSql.init(conn.cursor(), logSql)
-
- def insertnow(self):
-
- # timestamp list:
- # 0 -> "1970-01-01 08:00:00" | -28800000 -> "1970-01-01 00:00:00" | -946800000000 -> "1940-01-01 00:00:00"
- # -631180800000 -> "1950-01-01 00:00:00"
-
- tsp1 = 0
- tsp2 = -28800000
- tsp3 = -946800000000
- tsp4 = "1969-01-01 00:00:00.000"
-
- tdSql.execute("insert into tcq1 values (now-11d, 5)")
- tdSql.execute(f"insert into tcq1 values ({tsp1}, 4)")
- tdSql.execute(f"insert into tcq1 values ({tsp2}, 3)")
- tdSql.execute(f"insert into tcq1 values ('{tsp4}', 2)")
- tdSql.execute(f"insert into tcq1 values ({tsp3}, 1)")
-
- def waitedQuery(self, sql, expectRows, timeout):
- tdLog.info(f"sql: {sql}, try to retrieve {expectRows} rows in {timeout} seconds")
- try:
- for i in range(timeout):
- tdSql.cursor.execute(sql)
- self.queryResult = tdSql.cursor.fetchall()
- self.queryRows = len(self.queryResult)
- self.queryCols = len(tdSql.cursor.description)
- # tdLog.info("sql: %s, try to retrieve %d rows,get %d rows" % (sql, expectRows, self.queryRows))
- if self.queryRows >= expectRows:
- return (self.queryRows, i)
- time.sleep(1)
- except Exception as e:
- caller = inspect.getframeinfo(inspect.stack()[1][0])
- tdLog.notice(f"{caller.filename}({caller.lineno}) failed: sql:{sql}, {repr(e)}")
- raise Exception(repr(e))
- return (self.queryRows, timeout)
-
- def showstream(self):
- tdSql.execute(
- "create table cq1 as select avg(c1) from tcq1 interval(10d) sliding(1d)"
- )
- sql = "show streams"
- timeout = 30
- exception = "ValueError('year -292275055 is out of range')"
- try:
- for i in range(timeout):
- tdSql.cursor.execute(sql)
- self.queryResult = tdSql.cursor.fetchall()
- self.queryRows = len(self.queryResult)
- self.queryCols = len(tdSql.cursor.description)
- # tdLog.info("sql: %s, try to retrieve %d rows,get %d rows" % (sql, expectRows, self.queryRows))
- if self.queryRows >= 1:
- tdSql.query(sql)
- tdSql.checkData(0, 5, None)
- return (self.queryRows, i)
- time.sleep(1)
- except Exception as e:
- tdLog.exit(f"sql: {sql} except raise {exception}, actually raise {repr(e)} ")
- # else:
- # tdLog.exit(f"sql: {sql} except raise {exception}, actually not")
-
- def run(self):
- tdSql.execute("drop database if exists dbcq")
- tdSql.execute("create database if not exists dbcq keep 36500")
- tdSql.execute("use dbcq")
-
- tdSql.execute("create table stbcq (ts timestamp, c1 int ) TAGS(t1 int)")
- tdSql.execute("create table tcq1 using stbcq tags(1)")
-
- self.insertnow()
- self.showstream()
-
-
- def stop(self):
- tdSql.close()
- tdLog.success(f"{__file__} successfully executed")
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
diff --git a/tests/pytest/stream/stream1.py b/tests/pytest/stream/stream1.py
deleted file mode 100644
index c657379441..0000000000
--- a/tests/pytest/stream/stream1.py
+++ /dev/null
@@ -1,142 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def run(self):
- tbNum = 10
- rowNum = 20
-
- tdSql.prepare()
-
- tdLog.info("===== step1 =====")
- tdSql.execute(
- "create table stb0(ts timestamp, col1 int, col2 float) tags(tgcol int)")
- for i in range(tbNum):
- tdSql.execute("create table tb%d using stb0 tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute(
- "insert into tb%d values (now - %dm, %d, %d)" %
- (i, 1440 - j, j, j))
- time.sleep(0.1)
-
- tdLog.info("===== step2 =====")
- tdSql.query(
- "select count(*), count(col1), count(col2) from tb0 interval(1d)")
- tdSql.checkData(0, 1, rowNum)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
- tdSql.query("show tables")
- tdSql.checkRows(tbNum)
- tdSql.execute(
- "create table s0 as select count(*), count(col1), count(col2) from tb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- tdLog.info("===== step3 =====")
- tdSql.waitedQuery("select * from s0", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step4 =====")
- tdSql.execute("drop table s0")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum)
-
- tdLog.info("===== step5 =====")
- tdSql.error("select * from s0")
-
- tdLog.info("===== step6 =====")
- time.sleep(0.1)
- tdSql.execute(
- "create table s0 as select count(*), count(col1), count(col2) from tb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- tdLog.info("===== step7 =====")
- tdSql.waitedQuery("select * from s0", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step8 =====")
- tdSql.query(
- "select count(*), count(col1), count(col2) from stb0 interval(1d)")
- tdSql.checkData(0, 1, rowNum * tbNum)
- tdSql.checkData(0, 2, rowNum * tbNum)
- tdSql.checkData(0, 3, rowNum * tbNum)
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- tdSql.execute(
- "create table s1 as select count(*), count(col1), count(col2) from stb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 2)
-
- tdLog.info("===== step9 =====")
- tdSql.waitedQuery("select * from s1", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum * tbNum)
- tdSql.checkData(0, 2, rowNum * tbNum)
- tdSql.checkData(0, 3, rowNum * tbNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step10 =====")
- tdSql.execute("drop table s1")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- tdLog.info("===== step11 =====")
- tdSql.error("select * from s1")
-
- tdLog.info("===== step12 =====")
- tdSql.execute(
- "create table s1 as select count(*), count(col1), count(col2) from stb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 2)
-
- tdLog.info("===== step13 =====")
- tdSql.waitedQuery("select * from s1", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum * tbNum)
- tdSql.checkData(0, 2, rowNum * tbNum)
- tdSql.checkData(0, 3, rowNum * tbNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/stream2.py b/tests/pytest/stream/stream2.py
deleted file mode 100644
index 9b4eb8725c..0000000000
--- a/tests/pytest/stream/stream2.py
+++ /dev/null
@@ -1,164 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def run(self):
- tbNum = 10
- rowNum = 20
- totalNum = tbNum * rowNum
-
- tdSql.prepare()
-
- tdLog.info("===== step1 =====")
- tdSql.execute(
- "create table stb0(ts timestamp, col1 int, col2 float) tags(tgcol int)")
- for i in range(tbNum):
- tdSql.execute("create table tb%d using stb0 tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute(
- "insert into tb%d values (now - %dm, %d, %d)" %
- (i, 1440 - j, j, j))
- time.sleep(0.1)
-
- tdLog.info("===== step2 =====")
- tdSql.query("select count(col1) from tb0 interval(1d)")
- tdSql.checkData(0, 1, rowNum)
- tdSql.query("show tables")
- tdSql.checkRows(tbNum)
- tdSql.execute(
- "create table s0 as select count(col1) from tb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- tdLog.info("===== step3 =====")
- tdSql.waitedQuery("select * from s0", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step4 =====")
- tdSql.execute("drop table s0")
- tdSql.query("show tables")
- try:
- tdSql.checkRows(tbNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step5 =====")
- tdSql.error("select * from s0")
-
- tdLog.info("===== step6 =====")
- tdSql.execute(
- "create table s0 as select count(*), count(col1), count(col2) from tb0 interval(1d)")
- tdSql.query("show tables")
- try:
- tdSql.checkRows(tbNum + 1)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step7 =====")
- tdSql.waitedQuery("select * from s0", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
- except Exception as e:
- tdLog.info(repr(e))
-
-
- time.sleep(5)
- tdSql.query("show streams")
- tdSql.checkRows(1)
- tdSql.checkData(0, 2, 's0')
-
- tdLog.info("===== step8 =====")
- tdSql.query(
- "select count(*), count(col1), count(col2) from stb0 interval(1d)")
- try:
- tdSql.checkData(0, 1, totalNum)
- tdSql.checkData(0, 2, totalNum)
- tdSql.checkData(0, 3, totalNum)
- except Exception as e:
- tdLog.info(repr(e))
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
- tdSql.execute(
- "create table s1 as select count(*), count(col1), count(col2) from stb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 2)
-
- tdLog.info("===== step9 =====")
- tdSql.waitedQuery("select * from s1", 1, 120)
- try:
- tdSql.checkData(0, 1, totalNum)
- tdSql.checkData(0, 2, totalNum)
- tdSql.checkData(0, 3, totalNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step10 =====")
- tdSql.execute("drop table s1")
- tdSql.query("show tables")
- try:
- tdSql.checkRows(tbNum + 1)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step11 =====")
- tdSql.error("select * from s1")
-
- tdLog.info("===== step12 =====")
- tdSql.execute(
- "create table s1 as select count(col1) from stb0 interval(1d)")
- tdSql.query("show tables")
- try:
- tdSql.checkRows(tbNum + 2)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step13 =====")
- tdSql.waitedQuery("select * from s1", 1, 120)
- try:
- tdSql.checkData(0, 1, totalNum)
- #tdSql.checkData(0, 2, None)
- #tdSql.checkData(0, 3, None)
- except Exception as e:
- tdLog.info(repr(e))
-
- time.sleep(5)
- tdSql.query("show streams")
- tdSql.checkRows(2)
- tdSql.checkData(0, 2, 's1')
- tdSql.checkData(1, 2, 's0')
-
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/stream3.py b/tests/pytest/stream/stream3.py
deleted file mode 100644
index 9a5c6c9aec..0000000000
--- a/tests/pytest/stream/stream3.py
+++ /dev/null
@@ -1,108 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def run(self):
- ts = 1500000000000
- tbNum = 10
- rowNum = 20
-
- tdSql.prepare()
-
- tdLog.info("===== step1 =====")
- tdSql.execute(
- "create table stb0(ts timestamp, col1 binary(20), col2 nchar(20)) tags(tgcol int)")
- for i in range(tbNum):
- tdSql.execute("create table tb%d using stb0 tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute(
- "insert into tb%d values (%d, 'binary%d', 'nchar%d')" %
- (i, ts + 60000 * j, j, j))
- tdSql.execute("insert into tb0 values(%d, null, null)" % (ts + 10000000))
- time.sleep(0.1)
-
- tdLog.info("===== step2 =====")
- tdSql.query(
- "select count(*), count(col1), count(col2) from stb0 interval(1d)")
- tdSql.checkData(0, 1, rowNum * tbNum + 1)
- tdSql.checkData(0, 2, rowNum * tbNum)
- tdSql.checkData(0, 3, rowNum * tbNum)
-
- tdSql.query("show tables")
- tdSql.checkRows(tbNum)
- tdSql.execute(
- "create table s0 as select count(*), count(col1), count(col2) from stb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- tdLog.info("===== step3 =====")
- tdSql.waitedQuery("select * from s0", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum * tbNum + 1)
- tdSql.checkData(0, 2, rowNum * tbNum)
- tdSql.checkData(0, 3, rowNum * tbNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step4 =====")
- tdSql.execute("drop table s0")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum)
-
- tdLog.info("===== step5 =====")
- tdSql.error("select * from s0")
-
- tdLog.info("===== step6 =====")
- time.sleep(0.1)
- tdSql.execute(
- "create table s0 as select count(*), count(col1), count(col2) from tb0 interval(1d)")
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- tdLog.info("===== step7 =====")
- tdSql.waitedQuery("select * from s0", 1, 120)
- try:
- tdSql.checkData(0, 1, rowNum + 1)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
- except Exception as e:
- tdLog.info(repr(e))
-
- tdLog.info("===== step8 =====")
- tdSql.query(
- "select count(*), count(col1), count(col2) from stb0 interval(1d)")
- tdSql.checkData(0, 1, rowNum * tbNum + 1)
- tdSql.checkData(0, 2, rowNum * tbNum)
- tdSql.checkData(0, 3, rowNum * tbNum)
- tdSql.query("show tables")
- tdSql.checkRows(tbNum + 1)
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/sys.py b/tests/pytest/stream/sys.py
deleted file mode 100644
index c9a3fccfe6..0000000000
--- a/tests/pytest/stream/sys.py
+++ /dev/null
@@ -1,62 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# migrated from 'stream_on_sys.sim'
-# -*- coding: utf-8 -*-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- updatecfgDict = {'monitor': 1}
-
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
-
- def run(self):
- time.sleep(5)
- tdSql.execute("use log")
-
- tdSql.execute("create table cpustrm as select count(*), avg(cpu_taosd), max(cpu_taosd), min(cpu_taosd), avg(cpu_system), max(cpu_cores), min(cpu_cores), last(cpu_cores) from log.dn1 interval(4s)")
- tdSql.execute("create table memstrm as select count(*), avg(mem_taosd), max(mem_taosd), min(mem_taosd), avg(mem_system), first(mem_total), last(mem_total) from log.dn1 interval(4s)")
- tdSql.execute("create table diskstrm as select count(*), avg(disk_used), last(disk_used), avg(disk_total), first(disk_total) from log.dn1 interval(4s)")
- tdSql.execute("create table bandstrm as select count(*), avg(band_speed), last(band_speed) from log.dn1 interval(4s)")
- tdSql.execute("create table reqstrm as select count(*), avg(req_http), last(req_http), avg(req_select), last(req_select), avg(req_insert), last(req_insert) from log.dn1 interval(4s)")
- tdSql.execute("create table iostrm as select count(*), avg(io_read), last(io_read), avg(io_write), last(io_write) from log.dn1 interval(4s)")
-
- sqls = [
- "select * from cpustrm",
- "select * from memstrm",
- "select * from diskstrm",
- "select * from bandstrm",
- "select * from reqstrm",
- "select * from iostrm",
- ]
- for sql in sqls:
- (rows, _) = tdSql.waitedQuery(sql, 1, 240)
- if rows < 1:
- tdLog.exit("failed: sql:%s, expect at least one row" % sql)
-
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
-
diff --git a/tests/pytest/stream/table_1.py b/tests/pytest/stream/table_1.py
deleted file mode 100644
index b205491fad..0000000000
--- a/tests/pytest/stream/table_1.py
+++ /dev/null
@@ -1,89 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def createFuncStream(self, expr, suffix, value):
- tbname = "strm_" + suffix
- tdLog.info("create stream table %s" % tbname)
- tdSql.query("select %s from tb1 interval(1d)" % expr)
- tdSql.checkData(0, 1, value)
- tdSql.execute("create table %s as select %s from tb1 interval(1d)" % (tbname, expr))
-
- def checkStreamData(self, suffix, value):
- sql = "select * from strm_" + suffix
- tdSql.waitedQuery(sql, 1, 120)
- tdSql.checkData(0, 1, value)
-
- def run(self):
- tbNum = 10
- rowNum = 20
-
- tdSql.prepare()
-
- tdLog.info("===== step1 =====")
- tdSql.execute(
- "create table stb(ts timestamp, tbcol int, tbcol2 float) tags(tgcol int)")
- for i in range(tbNum):
- tdSql.execute("create table tb%d using stb tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute(
- "insert into tb%d values (now - %dm, %d, %d)" %
- (i, 1440 - j, j, j))
- time.sleep(1)
-
- self.createFuncStream("count(*)", "c1", rowNum)
- self.createFuncStream("count(tbcol)", "c2", rowNum)
- self.createFuncStream("count(tbcol2)", "c3", rowNum)
- self.createFuncStream("avg(tbcol)", "av", 9.5)
- self.createFuncStream("sum(tbcol)", "su", 190)
- self.createFuncStream("min(tbcol)", "mi", 0)
- self.createFuncStream("max(tbcol)", "ma", 19)
- self.createFuncStream("first(tbcol)", "fi", 0)
- self.createFuncStream("last(tbcol)", "la", 19)
- self.createFuncStream("stddev(tbcol)", "st", 5.766281297335398)
- self.createFuncStream("percentile(tbcol, 1)", "pe", 0.19)
- self.createFuncStream("count(tbcol)", "as", rowNum)
-
- self.checkStreamData("c1", rowNum)
- self.checkStreamData("c2", rowNum)
- self.checkStreamData("c3", rowNum)
- self.checkStreamData("av", 9.5)
- self.checkStreamData("su", 190)
- self.checkStreamData("mi", 0)
- self.checkStreamData("ma", 19)
- self.checkStreamData("fi", 0)
- self.checkStreamData("la", 19)
- self.checkStreamData("st", 5.766281297335398)
- self.checkStreamData("pe", 0.19)
- self.checkStreamData("as", rowNum)
-
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/table_n.py b/tests/pytest/stream/table_n.py
deleted file mode 100644
index 371af76977..0000000000
--- a/tests/pytest/stream/table_n.py
+++ /dev/null
@@ -1,143 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import time
-import taos
-from util.log import tdLog
-from util.cases import tdCases
-from util.sql import tdSql
-
-
-class TDTestCase:
- def init(self, conn, logSql):
- tdLog.debug("start to execute %s" % __file__)
- tdSql.init(conn.cursor(), logSql)
-
- def run(self):
- tbNum = 10
- rowNum = 20
-
- tdSql.prepare()
-
- tdLog.info("===== preparing data =====")
- tdSql.execute(
- "create table stb(ts timestamp, tbcol int, tbcol2 float) tags(tgcol int)")
- for i in range(tbNum):
- tdSql.execute("create table tb%d using stb tags(%d)" % (i, i))
- for j in range(rowNum):
- tdSql.execute(
- "insert into tb%d values (now - %dm, %d, %d)" %
- (i, 1440 - j, j, j))
- time.sleep(0.1)
-
- tdLog.info("===== step 1 =====")
- tdSql.query("select count(*), count(tbcol), count(tbcol2) from tb1 interval(1d)")
- tdSql.checkData(0, 1, rowNum)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
-
- tdLog.info("===== step 2 =====")
- tdSql.execute("create table strm_c3 as select count(*), count(tbcol), count(tbcol2) from tb1 interval(1d)")
-
- tdLog.info("===== step 3 =====")
- tdSql.execute("create table strm_c32 as select count(*), count(tbcol) as c1, count(tbcol2) as c2, count(tbcol) as c3, count(tbcol) as c4, count(tbcol) as c5, count(tbcol) as c6, count(tbcol) as c7, count(tbcol) as c8, count(tbcol) as c9, count(tbcol) as c10, count(tbcol) as c11, count(tbcol) as c12, count(tbcol) as c13, count(tbcol) as c14, count(tbcol) as c15, count(tbcol) as c16, count(tbcol) as c17, count(tbcol) as c18, count(tbcol) as c19, count(tbcol) as c20, count(tbcol) as c21, count(tbcol) as c22, count(tbcol) as c23, count(tbcol) as c24, count(tbcol) as c25, count(tbcol) as c26, count(tbcol) as c27, count(tbcol) as c28, count(tbcol) as c29, count(tbcol) as c30 from tb1 interval(1d)")
-
- tdLog.info("===== step 4 =====")
- tdSql.query("select count(*), count(tbcol) as c1, count(tbcol2) as c2, count(tbcol) as c3, count(tbcol) as c4, count(tbcol) as c5, count(tbcol) as c6, count(tbcol) as c7, count(tbcol) as c8, count(tbcol) as c9, count(tbcol) as c10, count(tbcol) as c11, count(tbcol) as c12, count(tbcol) as c13, count(tbcol) as c14, count(tbcol) as c15, count(tbcol) as c16, count(tbcol) as c17, count(tbcol) as c18, count(tbcol) as c19, count(tbcol) as c20, count(tbcol) as c21, count(tbcol) as c22, count(tbcol) as c23, count(tbcol) as c24, count(tbcol) as c25, count(tbcol) as c26, count(tbcol) as c27, count(tbcol) as c28, count(tbcol) as c29, count(tbcol) as c30 from tb1 interval(1d)")
- tdSql.checkData(0, 1, rowNum)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
-
- tdLog.info("===== step 5 =====")
- tdSql.execute("create table strm_c31 as select count(*), count(tbcol) as c1, count(tbcol2) as c2, count(tbcol) as c3, count(tbcol) as c4, count(tbcol) as c5, count(tbcol) as c6, count(tbcol) as c7, count(tbcol) as c8, count(tbcol) as c9, count(tbcol) as c10, count(tbcol) as c11, count(tbcol) as c12, count(tbcol) as c13, count(tbcol) as c14, count(tbcol) as c15, count(tbcol) as c16, count(tbcol) as c17, count(tbcol) as c18, count(tbcol) as c19, count(tbcol) as c20, count(tbcol) as c21, count(tbcol) as c22, count(tbcol) as c23, count(tbcol) as c24, count(tbcol) as c25, count(tbcol) as c26, count(tbcol) as c27, count(tbcol) as c28, count(tbcol) as c29, count(tbcol) as c30 from tb1 interval(1d)")
-
- tdLog.info("===== step 6 =====")
- tdSql.query("select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol) from tb1 interval(1d)")
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 190)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
- tdSql.execute("create table strm_avg as select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol) from tb1 interval(1d)")
-
- tdLog.info("===== step 7 =====")
- tdSql.query("select stddev(tbcol), leastsquares(tbcol, 1, 1), percentile(tbcol, 1) from tb1 interval(1d)")
- tdSql.checkData(0, 1, 5.766281297335398)
- tdSql.checkData(0, 3, 0.19)
- tdSql.execute("create table strm_ot as select stddev(tbcol), leastsquares(tbcol, 1, 1), percentile(tbcol, 1) from tb1 interval(1d)")
-
- tdLog.info("===== step 8 =====")
- tdSql.query("select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol), stddev(tbcol), percentile(tbcol, 1), count(tbcol), leastsquares(tbcol, 1, 1) from tb1 interval(1d)")
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 190)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
- tdSql.checkData(0, 7, 5.766281297335398)
- tdSql.checkData(0, 8, 0.19)
- tdSql.checkData(0, 9, rowNum)
- tdSql.execute("create table strm_to as select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol), stddev(tbcol), percentile(tbcol, 1), count(tbcol), leastsquares(tbcol, 1, 1) from tb1 interval(1d)")
-
- tdLog.info("===== step 9 =====")
- tdSql.query("select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol), stddev(tbcol), percentile(tbcol, 1), count(tbcol), leastsquares(tbcol, 1, 1) from tb1 where ts < now + 4m interval(1d)")
- tdSql.checkData(0, 9, rowNum)
- tdSql.execute("create table strm_wh as select avg(tbcol), sum(tbcol), min(tbcol), max(tbcol), first(tbcol), last(tbcol), stddev(tbcol), percentile(tbcol, 1), count(tbcol), leastsquares(tbcol, 1, 1) from tb1 where ts < now + 4m interval(1d)")
-
- tdLog.info("===== step 10 =====")
- tdSql.waitedQuery("select * from strm_c3", 1, 120)
- tdSql.checkData(0, 1, rowNum)
- tdSql.checkData(0, 2, rowNum)
- tdSql.checkData(0, 3, rowNum)
-
- tdLog.info("===== step 11 =====")
- tdSql.waitedQuery("select * from strm_c31", 1, 30)
- for i in range(1, 10):
- tdSql.checkData(0, i, rowNum)
-
- tdLog.info("===== step 12 =====")
- tdSql.waitedQuery("select * from strm_avg", 1, 20)
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 190)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
-
- tdLog.info("===== step 13 =====")
- tdSql.waitedQuery("select * from strm_ot", 1, 20)
- tdSql.checkData(0, 1, 5.766281297335398)
- tdSql.checkData(0, 3, 0.19)
-
- tdLog.info("===== step 14 =====")
- tdSql.waitedQuery("select * from strm_to", 1, 20)
- tdSql.checkData(0, 1, 9.5)
- tdSql.checkData(0, 2, 190)
- tdSql.checkData(0, 3, 0)
- tdSql.checkData(0, 4, 19)
- tdSql.checkData(0, 5, 0)
- tdSql.checkData(0, 6, 19)
- tdSql.checkData(0, 7, 5.766281297335398)
- tdSql.checkData(0, 8, 0.19)
- tdSql.checkData(0, 9, rowNum)
-
-
- def stop(self):
- tdSql.close()
- tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/stream/test1.py b/tests/pytest/stream/test1.py
new file mode 100644
index 0000000000..d3439a7bdb
--- /dev/null
+++ b/tests/pytest/stream/test1.py
@@ -0,0 +1,31 @@
+# -*- coding: utf-8 -*-
+
+import sys
+from util.log import *
+from util.cases import *
+from util.sql import *
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor(), logSql)
+
+ def run(self):
+ tdSql.prepare()
+ tdSql.execute('drop database if exists slmfvojuxt;')
+ tdSql.execute('create database if not exists slmfvojuxt vgroups 1;')
+ tdSql.execute('use slmfvojuxt;')
+ tdSql.execute('create table if not exists downsampling_stb (ts timestamp, c1 int, c2 double, c3 varchar(100), c4 bool) tags (t1 int, t2 double, t3 varchar(100), t4 bool);')
+        tdSql.execute('create table downsampling_ct1 using downsampling_stb tags(10, 10.1, "beijing", True);')
+ tdSql.execute('create table if not exists scalar_stb (ts timestamp, c1 int, c2 double, c3 binary(20)) tags (t1 int);')
+ tdSql.execute('create table scalar_ct1 using scalar_stb tags(10);')
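+        # the two streams created below are the point of this case: one continuously
+        # downsamples downsampling_stb into 10-minute windows, the other projects
+        # abs(c1)/abs(c2) for every row written to scalar_stb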
+ tdSql.execute('create stream downsampling_stream into output_downsampling_stb as select _wstartts AS start, min(c1), max(c2), sum(c1) from downsampling_stb interval(10m);')
+ tdSql.execute('create stream scalar_stream into output_scalar_stb as select ts, abs(c1) a1 , abs(c2) a2 from scalar_stb;')
+ tdSql.execute('insert into scalar_ct1 values (1653471881952, 100, 100.1, "beijing");')
+ tdSql.execute('insert into scalar_ct1 values (1653471881952+1s, -50, -50.1, "tianjin");')
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/test-all.bat b/tests/pytest/test-all.bat
new file mode 100644
index 0000000000..437472f7b8
--- /dev/null
+++ b/tests/pytest/test-all.bat
@@ -0,0 +1,25 @@
+@echo off
+SETLOCAL EnableDelayedExpansion
+for /F "tokens=1,2 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do ( set "DEL=%%a")
+set /a a=0
+@REM echo Windows Taosd Test
+@REM for /F "usebackq tokens=*" %%i in (fulltest.bat) do (
+@REM echo Processing %%i
+@REM set /a a+=1
+@REM call %%i ARG1 -w -m localhost > result_!a!.txt 2>error_!a!.txt
+@REM if errorlevel 1 ( call :colorEcho 0c "failed" &echo. && exit 8 ) else ( call :colorEcho 0a "Success" &echo. )
+@REM )
+echo Linux Taosd Test
+for /F "usebackq tokens=*" %%i in (fulltest.bat) do (
+ echo Processing %%i
+ set /a a+=1
+ call %%i ARG1 -w 1 -m %1 > result_!a!.txt 2>error_!a!.txt
+ if errorlevel 1 ( call :colorEcho 0c "failed" &echo. && exit 8 ) else ( call :colorEcho 0a "Success" &echo. )
+)
+exit
+
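+@REM colorEcho <attr> <message>: creates a temp file named after <message>, prints it in the
+@REM console color <attr> via findstr /a, then deletes the file again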
+:colorEcho
+echo off
+<nul set /p ".=%DEL%" > "%~2"
+findstr /v /a:%1 /R "^$" "%~2" nul
+del "%~2" > nul 2>&1
\ No newline at end of file
diff --git a/tests/pytest/test.py b/tests/pytest/test.py
index 97dca6be18..9d146462f2 100644
--- a/tests/pytest/test.py
+++ b/tests/pytest/test.py
@@ -35,8 +35,9 @@ if __name__ == "__main__":
logSql = True
stop = 0
restart = False
- opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghr', [
- 'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help'])
+ windows = 0
+ opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrw', [
+ 'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'windows'])
for key, value in opts:
if key in ['-h', '--help']:
tdLog.printNoPrefix(
@@ -61,7 +62,10 @@ if __name__ == "__main__":
deployPath = value
if key in ['-m', '--master']:
- masterIp = value
+ masterIp = value
+
+ if key in ['-w', '--windows']:
+ windows = 1
if key in ['-l', '--logSql']:
if (value.upper() == "TRUE"):
@@ -110,67 +114,105 @@ if __name__ == "__main__":
time.sleep(2)
tdLog.info('stop All dnodes')
-
- tdDnodes.init(deployPath)
- tdDnodes.setTestCluster(testCluster)
- tdDnodes.setValgrind(valgrind)
- tdDnodes.stopAll()
- is_test_framework = 0
- key_word = 'tdCases.addLinux'
- try:
- if key_word in open(fileName).read():
- is_test_framework = 1
- except:
- pass
- if is_test_framework:
- moduleName = fileName.replace(".py", "").replace("/", ".")
- uModule = importlib.import_module(moduleName)
- try:
- ucase = uModule.TDTestCase()
- tdDnodes.deploy(1,ucase.updatecfgDict)
- except :
- tdDnodes.deploy(1,{})
- else:
- tdDnodes.deploy(1,{})
- tdDnodes.start(1)
if masterIp == "":
host = '127.0.0.1'
else:
host = masterIp
- tdLog.info("Procedures for tdengine deployed in %s" % (host))
-
- tdCases.logSql(logSql)
-
- if testCluster:
- tdLog.info("Procedures for testing cluster")
- if fileName == "all":
- tdCases.runAllCluster()
- else:
- tdCases.runOneCluster(fileName)
- else:
+ if (windows):
+ tdCases.logSql(logSql)
tdLog.info("Procedures for testing self-deployment")
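+        # Windows flow: with an empty or localhost masterIp, deploy and start taosd locally
+        # through startWin; otherwise run test.py on the remote host first, then connect to it
+        # and execute the Windows cases against that instance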
- conn = taos.connect(
- host,
- config=tdDnodes.getSimCfgPath())
- if fileName == "all":
- tdCases.runAllLinux(conn)
- else:
- tdCases.runOneLinux(conn, fileName)
- if restart:
- if fileName == "all":
- tdLog.info("not need to query ")
- else:
- sp = fileName.rsplit(".", 1)
- if len(sp) == 2 and sp[1] == "py":
- tdDnodes.stopAll()
- tdDnodes.start(1)
- time.sleep(1)
- conn = taos.connect( host, config=tdDnodes.getSimCfgPath())
- tdLog.info("Procedures for tdengine deployed in %s" % (host))
- tdLog.info("query test after taosd restart")
- tdCases.runOneLinux(conn, sp[0] + "_" + "restart.py")
+ if masterIp == "" or masterIp == "localhost":
+ tdDnodes.init(deployPath)
+ tdDnodes.setTestCluster(testCluster)
+ tdDnodes.setValgrind(valgrind)
+ tdDnodes.stopAll()
+ is_test_framework = 0
+ key_word = 'tdCases.addWindows'
+ try:
+ if key_word in open(fileName).read():
+ is_test_framework = 1
+ except:
+ pass
+ if is_test_framework:
+ moduleName = fileName.replace(".py", "").replace(os.sep, ".")
+ uModule = importlib.import_module(moduleName)
+ try:
+ ucase = uModule.TDTestCase()
+ tdDnodes.deploy(1,ucase.updatecfgDict)
+ except :
+ tdDnodes.deploy(1,{})
else:
- tdLog.info("not need to query")
+ pass
+ tdDnodes.deploy(1,{})
+ tdDnodes.startWin(1)
+ else:
+ remote_conn = Connection("root@%s"%host)
+ with remote_conn.cd('/var/lib/jenkins/workspace/TDinternal/community/tests/pytest'):
+ remote_conn.run("python3 ./test.py")
+ tdDnodes.init(deployPath)
+ conn = taos.connect(
+ host="%s" % (host),
+ config=tdDnodes.sim.getCfgDir())
+ tdCases.runOneWindows(conn, fileName)
+ tdCases.logSql(logSql)
+ else:
+ tdDnodes.init(deployPath)
+ tdDnodes.setTestCluster(testCluster)
+ tdDnodes.setValgrind(valgrind)
+ tdDnodes.stopAll()
+ is_test_framework = 0
+ key_word = 'tdCases.addLinux'
+ try:
+ if key_word in open(fileName).read():
+ is_test_framework = 1
+ except:
+ pass
+ if is_test_framework:
+ moduleName = fileName.replace(".py", "").replace("/", ".")
+ uModule = importlib.import_module(moduleName)
+ try:
+ ucase = uModule.TDTestCase()
+ tdDnodes.deploy(1,ucase.updatecfgDict)
+ except :
+ tdDnodes.deploy(1,{})
+ else:
+ tdDnodes.deploy(1,{})
+ tdDnodes.start(1)
+
+ tdLog.info("Procedures for tdengine deployed in %s" % (host))
+
+ tdCases.logSql(logSql)
+
+ if testCluster:
+ tdLog.info("Procedures for testing cluster")
+ if fileName == "all":
+ tdCases.runAllCluster()
+ else:
+ tdCases.runOneCluster(fileName)
+ else:
+ tdLog.info("Procedures for testing self-deployment")
+ conn = taos.connect(
+ host,
+ config=tdDnodes.getSimCfgPath())
+ if fileName == "all":
+ tdCases.runAllLinux(conn)
+ else:
+ tdCases.runOneLinux(conn, fileName)
+ if restart:
+ if fileName == "all":
+ tdLog.info("not need to query ")
+ else:
+ sp = fileName.rsplit(".", 1)
+ if len(sp) == 2 and sp[1] == "py":
+ tdDnodes.stopAll()
+ tdDnodes.start(1)
+ time.sleep(1)
+ conn = taos.connect( host, config=tdDnodes.getSimCfgPath())
+ tdLog.info("Procedures for tdengine deployed in %s" % (host))
+ tdLog.info("query test after taosd restart")
+ tdCases.runOneLinux(conn, sp[0] + "_" + "restart.py")
+ else:
+ tdLog.info("not need to query")
conn.close()
diff --git a/tests/pytest/util/cases.py b/tests/pytest/util/cases.py
index 2fc1ac8515..2bfd8efdcd 100644
--- a/tests/pytest/util/cases.py
+++ b/tests/pytest/util/cases.py
@@ -34,7 +34,7 @@ class TDCases:
self.clusterCases = []
def __dynamicLoadModule(self, fileName):
- moduleName = fileName.replace(".py", "").replace("/", ".")
+ moduleName = fileName.replace(".py", "").replace(os.sep, ".")
return importlib.import_module(moduleName, package='..')
def logSql(self, logSql):
@@ -101,8 +101,12 @@ class TDCases:
for tmp in self.windowsCases:
if tmp.name.find(fileName) != -1:
case = testModule.TDTestCase()
- case.init(conn)
- case.run()
+ case.init(conn, self._logSql)
+ try:
+ case.run()
+ except Exception as e:
+ tdLog.notice(repr(e))
+ tdLog.exit("%s failed" % (fileName))
case.stop()
runNum += 1
continue
diff --git a/tests/pytest/util/dnodes.py b/tests/pytest/util/dnodes.py
index 9190943dfd..12e13c9b5c 100644
--- a/tests/pytest/util/dnodes.py
+++ b/tests/pytest/util/dnodes.py
@@ -67,17 +67,19 @@ class TDSimClient:
if os.system(cmd) != 0:
tdLog.exit(cmd)
- cmd = "mkdir -p " + self.logDir
- if os.system(cmd) != 0:
- tdLog.exit(cmd)
+ # cmd = "mkdir -p " + self.logDir
+ # if os.system(cmd) != 0:
+ # tdLog.exit(cmd)
+ os.makedirs(self.logDir)
cmd = "rm -rf " + self.cfgDir
if os.system(cmd) != 0:
tdLog.exit(cmd)
- cmd = "mkdir -p " + self.cfgDir
- if os.system(cmd) != 0:
- tdLog.exit(cmd)
+ # cmd = "mkdir -p " + self.cfgDir
+ # if os.system(cmd) != 0:
+ # tdLog.exit(cmd)
+ os.makedirs(self.cfgDir)
cmd = "touch " + self.cfgPath
if os.system(cmd) != 0:
@@ -179,17 +181,20 @@ class TDDnode:
if os.system(cmd) != 0:
tdLog.exit(cmd)
- cmd = "mkdir -p " + self.dataDir
- if os.system(cmd) != 0:
- tdLog.exit(cmd)
+ # cmd = "mkdir -p " + self.dataDir
+ # if os.system(cmd) != 0:
+ # tdLog.exit(cmd)
+ os.makedirs(self.dataDir)
- cmd = "mkdir -p " + self.logDir
- if os.system(cmd) != 0:
- tdLog.exit(cmd)
+ # cmd = "mkdir -p " + self.logDir
+ # if os.system(cmd) != 0:
+ # tdLog.exit(cmd)
+ os.makedirs(self.logDir)
- cmd = "mkdir -p " + self.cfgDir
- if os.system(cmd) != 0:
- tdLog.exit(cmd)
+ # cmd = "mkdir -p " + self.cfgDir
+ # if os.system(cmd) != 0:
+ # tdLog.exit(cmd)
+ os.makedirs(self.cfgDir)
cmd = "touch " + self.cfgPath
if os.system(cmd) != 0:
@@ -247,6 +252,8 @@ class TDDnode:
if ("packaging" not in rootRealPath):
paths.append(os.path.join(root, tool))
break
+ if (len(paths) == 0):
+ return ""
return paths[0]
def start(self):
@@ -309,6 +316,69 @@ class TDDnode:
time.sleep(10)
# time.sleep(5)
+ def startWin(self):
+ binPath = self.getPath("taosd.exe")
+
+ if (binPath == ""):
+ tdLog.exit("taosd.exe not found!")
+ else:
+ tdLog.info("taosd.exe found: %s" % binPath)
+
+ taosadapterBinPath = self.getPath("taosadapter.exe")
+ if (taosadapterBinPath == ""):
+ tdLog.info("taosAdapter.exe not found!")
+ else:
+            tdLog.info("taosAdapter.exe found in %s" % taosadapterBinPath)
+
+ if self.deployed == 0:
+ tdLog.exit("dnode:%d is not deployed" % (self.index))
+
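+        # launch taosd (and taosadapter, when the binary is found) in detached mintty
+        # sessions using the deployed config directory, then wait for the dnode to come up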
+ cmd = "mintty -h never %s -c %s" % (
+ binPath, self.cfgDir)
+
+ if (taosadapterBinPath != ""):
+ taosadapterCmd = "mintty -h never -w hide %s --monitor.writeToTD=false " % (
+ taosadapterBinPath)
+ if os.system(taosadapterCmd) != 0:
+ tdLog.exit(taosadapterCmd)
+
+ if os.system(cmd) != 0:
+ tdLog.exit(cmd)
+
+ self.running = 1
+ tdLog.debug("dnode:%d is running with %s " % (self.index, cmd))
+ if self.valgrind == 0:
+ time.sleep(0.1)
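+                # follow taosdlog.0 until the dnode reports 'from offline to online',
+                # giving up after roughly two minutes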
+ key = 'from offline to online'
+ bkey = bytes(key, encoding="utf8")
+ logFile = self.logDir + "/taosdlog.0"
+ i = 0
+ while not os.path.exists(logFile):
+                time.sleep(0.1)
+ i += 1
+ if i > 50:
+ break
+ popen = subprocess.Popen(
+ 'tail -n +0 -f ' + logFile,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ shell=True)
+ pid = popen.pid
+ # print('Popen.pid:' + str(pid))
+ timeout = time.time() + 60 * 2
+ while True:
+ line = popen.stdout.readline().strip()
+ if bkey in line:
+ popen.kill()
+ break
+ if time.time() > timeout:
+ tdLog.exit('wait too long for taosd start')
+ tdLog.debug("the dnode:%d has been started." % (self.index))
+ else:
+ tdLog.debug(
+ "wait 10 seconds for the dnode:%d to start." %
+ (self.index))
+ time.sleep(10)
def startWithoutSleep(self):
binPath = self.getPath()
@@ -475,7 +545,6 @@ class TDDnodes:
for i in range(len(self.dnodes)):
self.dnodes[i].init(self.path)
-
self.sim = TDSimClient(self.path)
def setTestCluster(self, value):
@@ -504,6 +573,10 @@ class TDDnodes:
self.check(index)
self.dnodes[index - 1].start()
+ def startWin(self, index):
+ self.check(index)
+ self.dnodes[index - 1].startWin()
+
def startWithoutSleep(self, index):
self.check(index)
self.dnodes[index - 1].startWithoutSleep()
diff --git a/tests/script/jenkins/basic.txt b/tests/script/jenkins/basic.txt
index 6cc25d7284..a8f96cccf1 100644
--- a/tests/script/jenkins/basic.txt
+++ b/tests/script/jenkins/basic.txt
@@ -55,7 +55,8 @@
./test.sh -f tsim/bnode/basic1.sim
# ---- mnode
-./test.sh -f tsim/mnode/basic1.sim
+#./test.sh -f tsim/mnode/basic1.sim
+#./test.sh -f tsim/mnode/basic2.sim
# ---- show
./test.sh -f tsim/show/basic.sim
@@ -66,6 +67,9 @@
# ---- stream
./test.sh -f tsim/stream/basic0.sim
./test.sh -f tsim/stream/basic1.sim
+./test.sh -f tsim/stream/basic2.sim
+# ./test.sh -f tsim/stream/session0.sim
+# ./test.sh -f tsim/stream/session1.sim
# ---- transaction
./test.sh -f tsim/trans/lossdata1.sim
@@ -92,6 +96,9 @@
#./test.sh -f tsim/stable/show.sim
./test.sh -f tsim/stable/values.sim
./test.sh -f tsim/stable/vnode3.sim
+./test.sh -f tsim/stable/column_add.sim
+#./test.sh -f tsim/stable/column_drop.sim
+#./test.sh -f tsim/stable/column_modify.sim
# --- for multi process mode
@@ -104,7 +111,7 @@
./test.sh -f tsim/tmq/basic3.sim -m
./test.sh -f tsim/stable/vnode3.sim -m
./test.sh -f tsim/qnode/basic1.sim -m
-./test.sh -f tsim/mnode/basic1.sim -m
+#./test.sh -f tsim/mnode/basic1.sim -m
# --- sma
./test.sh -f tsim/sma/tsmaCreateInsertData.sim
diff --git a/tests/script/sh/deploy.sh b/tests/script/sh/deploy.sh
index da295f640e..5edc0a4d3e 100755
--- a/tests/script/sh/deploy.sh
+++ b/tests/script/sh/deploy.sh
@@ -136,7 +136,7 @@ echo "qDebugFlag 143" >> $TAOS_CFG
echo "rpcDebugFlag 143" >> $TAOS_CFG
echo "tmrDebugFlag 131" >> $TAOS_CFG
echo "uDebugFlag 143" >> $TAOS_CFG
-echo "sDebugFlag 135" >> $TAOS_CFG
+echo "sDebugFlag 143" >> $TAOS_CFG
echo "wDebugFlag 143" >> $TAOS_CFG
echo "numOfLogLines 20000000" >> $TAOS_CFG
echo "statusInterval 1" >> $TAOS_CFG
diff --git a/tests/script/tsim/dnode/basic1.sim b/tests/script/tsim/dnode/basic1.sim
index d49dba60f3..d5c791e902 100644
--- a/tests/script/tsim/dnode/basic1.sim
+++ b/tests/script/tsim/dnode/basic1.sim
@@ -7,6 +7,7 @@ sql connect
print =============== show dnodes
sql show dnodes;
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
@@ -15,12 +16,9 @@ if $data00 != 1 then
return -1
endi
-# check 'vnodes' feild ?
-#if $data02 != 0 then
-# return -1
-#endi
sql show mnodes;
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
diff --git a/tests/script/tsim/mnode/basic2.sim b/tests/script/tsim/mnode/basic2.sim
new file mode 100644
index 0000000000..f1a3a8c251
--- /dev/null
+++ b/tests/script/tsim/mnode/basic2.sim
@@ -0,0 +1,112 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/deploy.sh -n dnode2 -i 2
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+sql connect
+
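+# two dnodes are started; a second mnode is created on dnode 2 and the mnode roles and
+# the created user must survive a restart of both dnodes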
+print =============== show mnodes
+sql show mnodes;
+if $rows != 1 then
+ return -1
+endi
+
+if $data00 != 1 then
+ return -1
+endi
+
+if $data02 != LEADER then
+ return -1
+endi
+
+print =============== create dnodes
+sql create dnode $hostname port 7200
+sql create dnode $hostname port 7300
+sleep 2000
+
+sql show dnodes;
+if $rows != 3 then
+ return -1
+endi
+
+sql show mnodes;
+if $rows != 1 then
+ return -1
+endi
+
+if $data00 != 1 then
+ return -1
+endi
+
+if $data02 != LEADER then
+ return -1
+endi
+
+print =============== create mnode 2
+sql create mnode on dnode 2
+sql show mnodes
+print $data(1)[0] $data(1)[1] $data(1)[2]
+print $data(2)[0] $data(2)[1] $data(2)[2]
+
+if $rows != 2 then
+ return -1
+endi
+if $data(1)[0] != 1 then
+ return -1
+endi
+if $data(1)[2] != LEADER then
+ return -1
+endi
+if $data(2)[0] != 2 then
+ return -1
+endi
+if $data(2)[2] == LEADER then
+ return -1
+endi
+
+print =============== create user
+sql create user user1 PASS 'user1'
+sql show users
+if $rows != 2 then
+ return -1
+endi
+
+#sql create database db
+#sql show databases
+#if $rows != 3 then
+# return -1
+#endi
+
+system sh/exec.sh -n dnode1 -s stop
+system sh/exec.sh -n dnode2 -s stop
+sleep 100
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+
+sql connect
+
+sql show mnodes
+if $rows != 2 then
+ return -1
+endi
+if $data(1)[0] != 1 then
+ return -1
+endi
+if $data(1)[2] != LEADER then
+ return -1
+endi
+
+sql show users
+if $rows != 2 then
+ return -1
+endi
+
+#sql show databases
+#if $rows != 3 then
+# return -1
+#endi
+
+return
+
+system sh/exec.sh -n dnode1 -s stop
+system sh/exec.sh -n dnode2 -s stop
\ No newline at end of file
diff --git a/tests/script/tsim/stable/add_column.sim b/tests/script/tsim/stable/add_column.sim
deleted file mode 100644
index 0a96a7a7d1..0000000000
--- a/tests/script/tsim/stable/add_column.sim
+++ /dev/null
@@ -1,141 +0,0 @@
-system sh/stop_dnodes.sh
-system sh/deploy.sh -n dnode1 -i 1
-system sh/exec.sh -n dnode1 -s start
-sql connect
-
-print ========== prepare stb and ctb
-sql create database db vgroups 1
-sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
-sql create table db.ctb using db.stb tags(101, 102, "103")
-sql insert into db.ctb values(now, 1, "2")
-
-sql show db.stables
-if $rows != 1 then
- return -1
-endi
-if $data[0][0] != stb then
- return -1
-endi
-if $data[0][1] != db then
- return -1
-endi
-if $data[0][3] != 3 then
- return -1
-endi
-if $data[0][4] != 3 then
- return -1
-endi
-if $data[0][6] != abd then
- return -1
-endi
-
-sql show db.tables
-if $rows != 1 then
- return -1
-endi
-if $data[0][0] != ctb then
- return -1
-endi
-if $data[0][1] != db then
- return -1
-endi
-if $data[0][3] != 3 then
- return -1
-endi
-if $data[0][4] != stb then
- return -1
-endi
-if $data[0][6] != 2 then
- return -1
-endi
-if $data[0][9] != CHILD_TABLE then
- return -1
-endi
-
-sql select * from db.stb
-if $rows != 1 then
- return -1
-endi
-if $data[0][1] != 1 then
- return -1
-endi
-if $data[0][2] != 2 then
- return -1
-endi
-if $data[0][3] != 101 then
- return -1
-endi
-
-print ========== add column c3
-sql alter table db.stb add column c3 int
-sql show db.stables
-if $data[0][3] != 4 then
- return -1
-endi
-
-sql show db.tables
-if $data[0][3] != 4 then
- return -1
-endi
-
-sql select * from db.stb
-sql select * from db.stb
-print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-if $rows != 1 then
- return -1
-endi
-if $data[0][1] != 1 then
- return -1
-endi
-if $data[0][2] != 2 then
- return -1
-endi
-if $data[0][3] != NULL then
- return -1
-endi
-if $data[0][4] != 101 then
- return -1
-endi
-
-sql insert into db.ctb values(now+1s, 1, 2, 3)
-sql select * from db.stb
-print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-
-if $rows != 2 then
- return -1
-endi
-if $data[0][1] != 1 then
- return -1
-endi
-if $data[0][2] != 2 then
- return -1
-endi
-if $data[0][3] != NULL then
- return -1
-endi
-if $data[0][4] != 101 then
- return -1
-endi
-if $data[1][1] != 1 then
- return -1
-endi
-if $data[1][2] != 2 then
- return -1
-endi
-if $data[1][3] != 3 then
- return -1
-endi
-if $data[1][4] != 101 then
- return -1
-endi
-
-print ========== add column c4
-sql alter table db.stb add column c4 bigint
-sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
-sql select * from db.stb
-sql select * from db.stb
-print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
-print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
-
diff --git a/tests/script/tsim/stable/column_add.sim b/tests/script/tsim/stable/column_add.sim
new file mode 100644
index 0000000000..a5d9b48508
--- /dev/null
+++ b/tests/script/tsim/stable/column_add.sim
@@ -0,0 +1,303 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sql connect
+
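+# add columns to the super table step by step: old rows must read back NULL for the new
+# columns while new inserts have to supply values for them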
+print ========== prepare stb and ctb
+sql create database db vgroups 1
+sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
+sql create table db.ctb using db.stb tags(101, 102, "103")
+sql insert into db.ctb values(now, 1, "2")
+
+sql show db.stables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != stb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != 3 then
+ return -1
+endi
+if $data[0][6] != abd then
+ return -1
+endi
+
+sql show db.tables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != ctb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != stb then
+ return -1
+endi
+if $data[0][6] != 2 then
+ return -1
+endi
+if $data[0][9] != CHILD_TABLE then
+ return -1
+endi
+
+sql select * from db.stb
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+
+sql_error alter table db.stb add column ts int
+sql_error alter table db.stb add column t1 int
+sql_error alter table db.stb add column t2 int
+sql_error alter table db.stb add column t3 int
+sql_error alter table db.stb add column c1 int
+
+print ========== step1 add column c3
+sql alter table db.stb add column c3 int
+sql show db.stables
+if $data[0][3] != 4 then
+ return -1
+endi
+
+sql show db.tables
+if $data[0][3] != 4 then
+ return -1
+endi
+
+sql select * from db.stb
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != NULL then
+ return -1
+endi
+if $data[0][4] != 101 then
+ return -1
+endi
+
+sql insert into db.ctb values(now+1s, 1, 2, 3)
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 2 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != NULL then
+ return -1
+endi
+if $data[0][4] != 101 then
+ return -1
+endi
+if $data[1][1] != 1 then
+ return -1
+endi
+if $data[1][2] != 2 then
+ return -1
+endi
+if $data[1][3] != 3 then
+ return -1
+endi
+if $data[1][4] != 101 then
+ return -1
+endi
+
+print ========== step2 add column c4
+sql alter table db.stb add column c4 bigint
+sql select * from db.stb
+sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+
+if $rows != 3 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != NULL then
+ return -1
+endi
+if $data[0][4] != NULL then
+ return -1
+endi
+if $data[0][5] != 101 then
+ return -1
+endi
+if $data[1][1] != 1 then
+ return -1
+endi
+if $data[1][2] != 2 then
+ return -1
+endi
+if $data[1][3] != 3 then
+ return -1
+endi
+if $data[1][4] != NULL then
+ return -1
+endi
+if $data[1][5] != 101 then
+ return -1
+endi
+if $data[2][1] != 1 then
+ return -1
+endi
+if $data[2][2] != 2 then
+ return -1
+endi
+if $data[2][3] != 3 then
+ return -1
+endi
+if $data[2][4] != 4 then
+ return -1
+endi
+if $data[2][5] != 101 then
+ return -1
+endi
+
+print ========== step3 add column c5
+sql alter table db.stb add column c5 int
+sql insert into db.ctb values(now+3s, 1, 2, 3, 4, 5)
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+
+if $rows != 4 then
+ return -1
+endi
+if $data[2][1] != 1 then
+ return -1
+endi
+if $data[2][2] != 2 then
+ return -1
+endi
+if $data[2][3] != 3 then
+ return -1
+endi
+if $data[2][4] != 4 then
+ return -1
+endi
+if $data[2][5] != NULL then
+ return -1
+endi
+if $data[2][6] != 101 then
+ return -1
+endi
+if $data[3][1] != 1 then
+ return -1
+endi
+if $data[3][2] != 2 then
+ return -1
+endi
+if $data[3][3] != 3 then
+ return -1
+endi
+if $data[3][4] != 4 then
+ return -1
+endi
+if $data[3][5] != 5 then
+ return -1
+endi
+if $data[3][6] != 101 then
+ return -1
+endi
+
+print ========== step4 add column c6
+sql alter table db.stb add column c6 int
+sql insert into db.ctb values(now+4s, 1, 2, 3, 4, 5, 6)
+sql select * from db.stb
+
+if $rows != 5 then
+ return -1
+endi
+if $data[3][1] != 1 then
+ return -1
+endi
+if $data[3][2] != 2 then
+ return -1
+endi
+if $data[3][3] != 3 then
+ return -1
+endi
+if $data[3][4] != 4 then
+ return -1
+endi
+if $data[3][5] != 5 then
+ return -1
+endi
+if $data[3][6] != NULL then
+ return -1
+endi
+if $data[3][7] != 101 then
+ return -1
+endi
+if $data[4][1] != 1 then
+ return -1
+endi
+if $data[4][2] != 2 then
+ return -1
+endi
+if $data[4][3] != 3 then
+ return -1
+endi
+if $data[4][4] != 4 then
+ return -1
+endi
+if $data[4][5] != 5 then
+ return -1
+endi
+if $data[4][6] != 6 then
+ return -1
+endi
+if $data[4][7] != 101 then
+ return -1
+endi
+
+print ========== step5 describe
+sql describe db.ctb
+if $rows != 10 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stable/column_drop.sim b/tests/script/tsim/stable/column_drop.sim
new file mode 100644
index 0000000000..af84a3ecac
--- /dev/null
+++ b/tests/script/tsim/stable/column_drop.sim
@@ -0,0 +1,209 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sql connect
+
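+# drop columns from the super table one by one and verify the schema, the surviving data
+# and which insert formats are accepted after each change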
+print ========== prepare stb and ctb
+sql create database db vgroups 1
+sql create table db.stb (ts timestamp, c1 int, c2 binary(4), c3 int, c4 bigint, c5 int, c6 int) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
+sql create table db.ctb using db.stb tags(101, 102, "103")
+sql insert into db.ctb values(now, 1, "2", 3, 4, 5, 6)
+
+sql show db.stables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != stb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 7 then
+ return -1
+endi
+if $data[0][4] != 3 then
+ return -1
+endi
+if $data[0][6] != abd then
+ return -1
+endi
+
+sql show db.tables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != ctb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 7 then
+ return -1
+endi
+if $data[0][4] != stb then
+ return -1
+endi
+if $data[0][6] != 2 then
+ return -1
+endi
+if $data[0][9] != CHILD_TABLE then
+ return -1
+endi
+
+sql select * from db.stb
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != 4 then
+ return -1
+endi
+if $data[0][5] != 5 then
+ return -1
+endi
+if $data[0][6] != 6 then
+ return -1
+endi
+if $data[0][7] != 101 then
+ return -1
+endi
+
+sql_error alter table db.stb drop column ts
+sql_error alter table db.stb drop column t1
+sql_error alter table db.stb drop column t2
+sql_error alter table db.stb drop column t3
+sql_error alter table db.stb drop column c9
+
+print ========== step1 drop column c6
+sql alter table db.stb drop column c6
+sql show db.stables
+if $data[0][3] != 6 then
+ return -1
+endi
+
+sql show db.tables
+if $data[0][3] != 6 then
+ return -1
+endi
+
+sql select * from db.stb
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != 4 then
+ return -1
+endi
+if $data[0][5] != 5 then
+ return -1
+endi
+if $data[0][6] != 101 then
+ return -1
+endi
+
+sql insert into db.ctb values(now+1s, 1, 2, 3, 4, 5)
+sql select * from db.stb
+if $rows != 2 then
+ return -1
+endi
+
+print ========== step2 drop column c5
+sql alter table db.stb drop column c5
+sql insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
+sql insert into db.ctb values(now+3s, 1, 2, 3, 4)
+sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
+
+sql select * from db.stb
+if $rows != 4 then
+ return -1
+endi
+
+print ========== step3 drop column c4
+sql alter table db.stb drop column c4
+sql select * from db.stb
+sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
+sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4)
+sql insert into db.ctb values(now+3s, 1, 2, 3)
+
+sql select * from db.stb
+if $rows != 5 then
+ return -1
+endi
+
+print ========== step4 add column c4
+sql alter table db.stb add column c4 binary(13)
+sql insert into db.ctb values(now+4s, 1, 2, 3, '4')
+sql select * from db.stb
+if $rows != 6 then
+ return -1
+endi
+if $data[1][4] != NULL then
+ return -1
+endi
+if $data[2][4] != NULL then
+ return -1
+endi
+if $data[3][4] != NULL then
+ return -1
+endi
+if $data[5][4] != 4 then
+ return -1
+endi
+
+print ========== step5 describe
+sql describe db.ctb
+if $rows != 8 then
+ return -1
+endi
+if $data[0][0] != ts then
+ return -1
+endi
+if $data[1][0] != c1 then
+ return -1
+endi
+if $data[2][0] != c2 then
+ return -1
+endi
+if $data[3][0] != c3 then
+ return -1
+endi
+if $data[4][0] != c4 then
+ return -1
+endi
+if $data[4][1] != VARCHAR then
+ return -1
+endi
+if $data[4][2] != 13 then
+ return -1
+endi
+if $data[5][0] != t1 then
+ return -1
+endi
+if $data[6][0] != t2 then
+ return -1
+endi
+if $data[7][0] != t3 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stable/column_modify.sim b/tests/script/tsim/stable/column_modify.sim
new file mode 100644
index 0000000000..732e449c4a
--- /dev/null
+++ b/tests/script/tsim/stable/column_modify.sim
@@ -0,0 +1,78 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sql connect
+
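+# modifying c2 to a wider binary(5) is allowed; shrinking it or changing column types must
+# fail, and the longer value is only accepted after the widen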
+print ========== prepare stb and ctb
+sql create database db vgroups 1
+sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
+sql create table db.ctb using db.stb tags(101, 102, "103")
+sql insert into db.ctb values(now, 1, "1234")
+
+sql_error alter table db.stb MODIFY column c2 binary(3)
+sql_error alter table db.stb MODIFY column c2 int
+sql_error alter table db.stb MODIFY column c1 int
+sql_error alter table db.stb MODIFY column ts int
+sql_error insert into db.ctb values(now, 1, "12345")
+
+print ========== step1 modify column
+sql alter table db.stb MODIFY column c2 binary(5)
+sql insert into db.ctb values(now, 1, "12345")
+
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 2 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 1234 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[1][1] != 1 then
+ return -1
+endi
+if $data[1][2] != 12345 then
+ return -1
+endi
+if $data[1][3] != 101 then
+ return -1
+endi
+
+print ========== step2 describe
+sql describe db.ctb
+if $rows != 7 then
+ return -1
+endi
+if $data[0][0] != ts then
+ return -1
+endi
+if $data[1][0] != c1 then
+ return -1
+endi
+if $data[2][0] != c2 then
+ return -1
+endi
+if $data[2][1] != VARCHAR then
+ return -1
+endi
+if $data[2][2] != 5 then
+ return -1
+endi
+if $data[3][0] != t1 then
+ return -1
+endi
+if $data[4][0] != t2 then
+ return -1
+endi
+if $data[5][0] != t3 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stream/session0.sim b/tests/script/tsim/stream/session0.sim
new file mode 100644
index 0000000000..46b343632a
--- /dev/null
+++ b/tests/script/tsim/stream/session0.sim
@@ -0,0 +1,162 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sleep 50
+sql connect
+
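+# stream aggregation over session(ts,10s) windows: later batches, including out-of-order
+# rows, must land in the right sessions and the aggregates must be recalculated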
+print =============== create database
+sql create database test vgroups 1
+sql show databases
+if $rows != 3 then
+ return -1
+endi
+
+print $data00 $data01 $data02
+
+sql use test
+
+
+sql create table t1(ts timestamp, a int, b int , c int, d double,id int);
+sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), max(a), min(d), stddev(a), last(a), first(d), max(id) s from t1 session(ts,10s);
+sql insert into t1 values(1648791213000,NULL,NULL,NULL,NULL,1);
+sql insert into t1 values(1648791223001,10,2,3,1.1,2);
+sql insert into t1 values(1648791233002,3,2,3,2.1,3);
+sql insert into t1 values(1648791243003,NULL,NULL,NULL,NULL,4);
+sql insert into t1 values(1648791213002,NULL,NULL,NULL,NULL,5) (1648791233012,NULL,NULL,NULL,NULL,6);
+
+sql select * from streamt order by s desc;
+
+# row 0
+if $data01 != 3 then
+ print ======$data01
+ return -1
+endi
+
+if $data02 != 3 then
+ print ======$data02
+ return -1
+endi
+
+if $data03 != 3 then
+ print ======$data03
+ return -1
+endi
+
+if $data04 != 2.100000000 then
+ print ======$data04
+ return -1
+endi
+
+if $data05 != 0.000000000 then
+ print ======$data05
+ return -1
+endi
+
+if $data06 != 3 then
+  print ======$data06
+  return -1
+endi
+
+if $data07 != 2.100000000 then
+  print ======$data07
+  return -1
+endi
+
+if $data08 != 6 then
+  print ======$data08
+  return -1
+endi
+
+# row 1
+
+if $data11 != 3 then
+  print ======$data11
+  return -1
+endi
+
+if $data12 != 10 then
+  print ======$data12
+  return -1
+endi
+
+if $data13 != 10 then
+  print ======$data13
+  return -1
+endi
+
+if $data14 != 1.100000000 then
+  print ======$data14
+  return -1
+endi
+
+if $data15 != 0.000000000 then
+  print ======$data15
+  return -1
+endi
+
+if $data16 != 10 then
+  print ======$data16
+  return -1
+endi
+
+if $data17 != 1.100000000 then
+  print ======$data17
+  return -1
+endi
+
+if $data18 != 5 then
+  print ======$data18
+  return -1
+endi
+
+sql insert into t1 values(1648791213000,1,2,3,1.0,7);
+sql insert into t1 values(1648791223001,2,2,3,1.1,8);
+sql insert into t1 values(1648791233002,3,2,3,2.1,9);
+sql insert into t1 values(1648791243003,4,2,3,3.1,10);
+sql insert into t1 values(1648791213002,4,2,3,4.1,11) ;
+sql insert into t1 values(1648791213002,4,2,3,4.1,12) (1648791223009,4,2,3,4.1,13);
+
+sql select * from streamt order by s desc ;
+
+# row 0
+if $data01 != 7 then
+ print ======$data01
+ return -1
+endi
+
+if $data02 != 9 then
+ print ======$data02
+ return -1
+endi
+
+if $data03 != 4 then
+ print ======$data03
+ return -1
+endi
+
+if $data04 != 1.100000000 then
+ print ======$data04
+ return -1
+endi
+
+if $data05 != 0.816496581 then
+ print ======$data05
+ return -1
+endi
+
+if $data06 != 3 then
+  print ======$data06
+  return -1
+endi
+
+if $data07 != 1.100000000 then
+  print ======$data07
+  return -1
+endi
+
+if $data08 != 13 then
+  print ======$data08
+  return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
diff --git a/tests/script/tsim/stream/session1.sim b/tests/script/tsim/stream/session1.sim
new file mode 100644
index 0000000000..a44639ba7a
--- /dev/null
+++ b/tests/script/tsim/stream/session1.sim
@@ -0,0 +1,190 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sleep 50
+sql connect
+
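+# session(ts,10s) stream with count/sum/min/max: out-of-order inserts can bridge the gap
+# between neighbouring sessions and merge them into a single window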
+print =============== create database
+sql create database test vgroups 1
+sql show databases
+if $rows != 3 then
+ return -1
+endi
+
+print $data00 $data01 $data02
+
+sql use test
+
+
+sql create table t1(ts timestamp, a int, b int , c int, d double,id int);
+sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), min(b), max(id) s from t1 session(ts,10s);
+sql insert into t1 values(1648791210000,1,1,1,1.1,1);
+sql insert into t1 values(1648791220000,2,2,2,2.1,2);
+sql insert into t1 values(1648791230000,3,3,3,3.1,3);
+sql insert into t1 values(1648791240000,4,4,4,4.1,4);
+
+sql select * from streamt order by s desc;
+
+# row 0
+if $data01 != 4 then
+ print ======$data01
+ return -1
+endi
+
+if $data02 != 10 then
+ print ======$data02
+ return -1
+endi
+
+if $data03 != 1 then
+ print ======$data03
+ return -1
+endi
+
+if $data04 != 4 then
+ print ======$data04
+ return -1
+endi
+
+sql insert into t1 values(1648791250005,5,5,5,5.1,5);
+sql insert into t1 values(1648791260006,6,6,6,6.1,6);
+sql insert into t1 values(1648791270007,7,7,7,7.1,7);
+sql insert into t1 values(1648791240005,5,5,5,5.1,8) (1648791250006,6,6,6,6.1,9);
+
+sql select * from streamt order by s desc;
+
+# row 0
+if $data01 != 8 then
+ print ======$data01
+ return -1
+endi
+
+if $data02 != 32 then
+ print ======$data02
+ return -1
+endi
+
+if $data03 != 1 then
+ print ======$data03
+ return -1
+endi
+
+if $data04 != 9 then
+ print ======$data04
+ return -1
+endi
+
+# row 1
+if $data11 != 1 then
+ print ======$data11
+ return -1
+endi
+
+if $data12 != 7 then
+ print ======$data12
+ return -1
+endi
+
+if $data13 != 7 then
+ print ======$data13
+ return -1
+endi
+
+if $data14 != 7 then
+ print ======$data14
+ return -1
+endi
+
+sql insert into t1 values(1648791280008,7,7,7,7.1,10) (1648791300009,8,8,8,8.1,11);
+sql insert into t1 values(1648791260007,7,7,7,7.1,12) (1648791290008,7,7,7,7.1,13) (1648791290009,8,8,8,8.1,14);
+sql insert into t1 values(1648791500000,7,7,7,7.1,15) (1648791520000,8,8,8,8.1,16) (1648791540000,8,8,8,8.1,17);
+sql insert into t1 values(1648791530000,8,8,8,8.1,18);
+sql insert into t1 values(1648791220000,10,10,10,10.1,19) (1648791290008,2,2,2,2.1,20) (1648791540000,17,17,17,17.1,21) (1648791500001,22,22,22,22.1,22);
+
+sql select * from streamt order by s desc;
+
+# row 0
+if $data01 != 2 then
+ print ======$data01
+ return -1
+endi
+
+if $data02 != 29 then
+ print ======$data02
+ return -1
+endi
+
+if $data03 != 7 then
+ print ======$data03
+ return -1
+endi
+
+if $data04 != 22 then
+ print ======$data04
+ return -1
+endi
+
+# row 1
+if $data11 != 3 then
+ print ======$data11
+ return -1
+endi
+
+if $data12 != 33 then
+ print ======$data12
+ return -1
+endi
+
+if $data13 != 8 then
+ print ======$data13
+ return -1
+endi
+
+if $data14 != 21 then
+ print ======$data14
+ return -1
+endi
+
+# row 2
+if $data21 != 4 then
+ print ======$data21
+ return -1
+endi
+
+if $data22 != 25 then
+ print ======$data22
+ return -1
+endi
+
+if $data23 != 2 then
+ print ======$data23
+ return -1
+endi
+
+if $data24 != 20 then
+ print ======$data24
+ return -1
+endi
+
+# row 3
+if $data31 != 10 then
+ print ======$data31
+ return -1
+endi
+
+if $data32 != 54 then
+ print ======$data32
+ return -1
+endi
+
+if $data33 != 1 then
+ print ======$data33
+ return -1
+endi
+
+if $data34 != 19 then
+ print ======$data34
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
diff --git a/tests/script/tsim/sync/insertDataByRunBack.sim b/tests/script/tsim/sync/insertDataByRunBack.sim
index c86cd3844b..00f0643b61 100644
--- a/tests/script/tsim/sync/insertDataByRunBack.sim
+++ b/tests/script/tsim/sync/insertDataByRunBack.sim
@@ -20,6 +20,8 @@ print $data[1][0] $data[1][1] $data[1][2] $data[1][3]
if $rows == 2 then
if $data[1][1] == stop then
goto end_insert
+ elif $data[0][1] == stop then
+ goto end_insert
endi
endi
@@ -47,6 +49,9 @@ endw
if $loop_cnt == 0 then
print ====> notify main to working for insert data
sql insert into interaction values (now, 'working', 0, 0);
+ sql select * from interaction
+ print $data[0][0] $data[0][1] $data[0][2] $data[0][3]
+ print $data[1][0] $data[1][1] $data[1][2] $data[1][3]
endi
$loop_cnt = $loop_cnt + 1
goto loop_insert
diff --git a/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim b/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim
index f568008a82..fc501096e6 100644
--- a/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim
+++ b/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim
@@ -155,28 +155,13 @@ while $i < $tbNum
sql create table $ctb using stb tags( $i )
$ntb = $ntbPrefix . $i
sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
-
-# $x = 0
-# while $x < $rowNum
-# $binary = ' . binary
-# $binary = $binary . $i
-# $binary = $binary . '
-#
-# sql insert into $ctb values ($tstart , $i , $x , $binary )
-# sql insert into $ntb values ($tstart , 999 , 999 , 'binary-ntb' )
-# $tstart = $tstart + 1
-# $x = $x + 1
-# endw
-
-# print ====> insert rows: $rowNum into $ctb and $ntb
-
$i = $i + 1
-# $tstart = 1640966400000
endw
$totalTblNum = $tbNum * 2
-print ====>totalTblNum:$totalTblNum
+sleep 1000
sql show tables
+print ====> expect $totalTblNum tables, but in fact $rows tables exist
if $rows != $totalTblNum then
return -1
endi
@@ -222,6 +207,9 @@ endi
$dnodeId = dnode . $dnodeId
print ====> stop $dnodeId
system sh/exec.sh -n $dnodeId -s stop -x SIGINT
+sleep 1000
+print ====> start $dnodeId
+system sh/exec.sh -n $dnodeId -s start
$loop_cnt = 0
check_vg_ready_2:
@@ -245,7 +233,7 @@ if $data[0][4] == LEADER then
if $data[0][8] != FOLLOWER then
goto check_vg_ready_2
endi
- print ---- vgroup $data[0][0] leader switch to dnode $data[0][3]
+ print ---- vgroup $dnodeId leader switch to dnode $data[0][3]
goto vg_ready_2
elif $data[0][6] == LEADER then
if $data[0][4] != FOLLOWER then
@@ -254,7 +242,7 @@ elif $data[0][6] == LEADER then
if $data[0][8] != FOLLOWER then
goto check_vg_ready_2
endi
- print ---- vgroup $data[0][0] leader switch to dnode $data[0][5]
+ print ---- vgroup $dnodeId leader switch to dnode $data[0][5]
goto vg_ready_2
elif $data[0][8] == LEADER then
if $data[0][4] != FOLLOWER then
@@ -263,7 +251,7 @@ elif $data[0][8] == LEADER then
if $data[0][6] != FOLLOWER then
goto check_vg_ready_2
endi
- print ---- vgroup $data[0][0] leader switch to dnode $data[0][7]
+ print ---- vgroup $dnodeId leader switch to dnode $data[0][7]
goto vg_ready_2
else
goto check_vg_ready_2
@@ -272,8 +260,6 @@ vg_ready_2:
$switch_loop_cnt = $switch_loop_cnt + 1
if $switch_loop_cnt < 3 then
- print ====> start $dnodeId
- system sh/exec.sh -n $dnodeId -s start
goto switch_leader_loop
endi
diff --git a/tests/script/tsim/trans/create_db.sim b/tests/script/tsim/trans/create_db.sim
index 0db5add88a..ae6b7eab16 100644
--- a/tests/script/tsim/trans/create_db.sim
+++ b/tests/script/tsim/trans/create_db.sim
@@ -64,7 +64,7 @@ if $rows != 1 then
return -1
endi
-if $data[0][0] != 2 then
+if $data[0][0] != 7 then
return -1
endi
@@ -114,7 +114,7 @@ if $rows != 1 then
return -1
endi
-if $data[0][0] != 4 then
+if $data[0][0] != 9 then
return -1
endi
@@ -137,7 +137,7 @@ endi
sql_error create database d2 vgroups 2;
print =============== kill transaction
-sql kill transaction 4;
+sql kill transaction 9;
sleep 2000
sql show transactions
diff --git a/tests/system-test/0-others/udfTest.py b/tests/system-test/0-others/udfTest.py
index 679b415098..46d0a69688 100644
--- a/tests/system-test/0-others/udfTest.py
+++ b/tests/system-test/0-others/udfTest.py
@@ -134,7 +134,7 @@ class TDTestCase:
def create_udf_function(self):
- for i in range(10):
+ for i in range(5):
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
@@ -644,16 +644,12 @@ class TDTestCase:
self.create_udf_function()
self.basic_udf_query()
self.loop_kill_udfd()
-
- self.unexpected_create()
tdSql.execute(" drop function udf1 ")
tdSql.execute(" drop function udf2 ")
self.create_udf_function()
time.sleep(2)
self.basic_udf_query()
self.test_function_name()
- self.restart_taosd_query_udf()
-
def stop(self):
diff --git a/tests/system-test/0-others/udf_create.py b/tests/system-test/0-others/udf_create.py
new file mode 100644
index 0000000000..e2c6e3c10b
--- /dev/null
+++ b/tests/system-test/0-others/udf_create.py
@@ -0,0 +1,654 @@
+from distutils.log import error
+import taos
+import sys
+import time
+import os
+
+from util.log import *
+from util.sql import *
+from util.cases import *
+from util.dnodes import *
+import subprocess
+
+class TDTestCase:
+
+ def init(self, conn, logSql):
+        tdLog.debug(f"start to execute {__file__}")
+ tdSql.init(conn.cursor(), logSql)
+
+ def getBuildPath(self):
+ selfPath = os.path.dirname(os.path.realpath(__file__))
+
+ if ("community" in selfPath):
+ projPath = selfPath[:selfPath.find("community")]
+ else:
+ projPath = selfPath[:selfPath.find("tests")]
+
+ for root, dirs, files in os.walk(projPath):
+ if ("taosd" in files):
+ rootRealPath = os.path.dirname(os.path.realpath(root))
+ if ("packaging" not in rootRealPath):
+ buildPath = root[:len(root) - len("/build/bin")]
+ break
+ return buildPath
+
+ def prepare_udf_so(self):
+ selfPath = os.path.dirname(os.path.realpath(__file__))
+
+ if ("community" in selfPath):
+ projPath = selfPath[:selfPath.find("community")]
+ else:
+ projPath = selfPath[:selfPath.find("tests")]
+ print(projPath)
+
+ libudf1 = subprocess.Popen('find %s -name "libudf1.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
+ libudf2 = subprocess.Popen('find %s -name "libudf2.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
+ os.system("mkdir /tmp/udf/")
+ os.system("cp %s /tmp/udf/ "%libudf1.replace("\n" ,""))
+ os.system("cp %s /tmp/udf/ "%libudf2.replace("\n" ,""))
+
+
+ def prepare_data(self):
+
+ tdSql.execute("drop database if exists db ")
+ tdSql.execute("create database if not exists db days 300")
+ tdSql.execute("use db")
+ tdSql.execute(
+ '''create table stb1
+ (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
+ tags (t1 int)
+ '''
+ )
+
+ tdSql.execute(
+ '''
+ create table t1
+ (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
+ '''
+ )
+ for i in range(4):
+ tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
+
+ for i in range(9):
+ tdSql.execute(
+ f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+ )
+ tdSql.execute(
+ f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+ )
+ tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+ tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+ tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+ tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+
+ tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+ tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+ tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+
+ tdSql.execute(
+ f'''insert into t1 values
+ ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
+ ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
+ ( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
+ ( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
+ ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ ( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
+ ( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
+ ( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
+ ( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
+ ( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
+ ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ '''
+ )
+
+ tdSql.execute("create table tb (ts timestamp , num1 int , num2 int, num3 double , num4 binary(30))")
+ tdSql.execute(
+ f'''insert into tb values
+ ( '2020-04-21 01:01:01.000', NULL, 1, 1, "binary1" )
+ ( '2020-10-21 01:01:01.000', 1, 1, 1.11, "binary1" )
+ ( '2020-12-31 01:01:01.000', 2, 22222, 22, "binary1" )
+ ( '2021-01-01 01:01:06.000', 3, 33333, 33, "binary1" )
+ ( '2021-05-07 01:01:10.000', 4, 44444, 44, "binary1" )
+ ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
+ ( '2021-09-30 01:01:16.000', 5, 55555, 55, "binary1" )
+ ( '2022-02-01 01:01:20.000', 6, 66666, 66, "binary1" )
+ ( '2022-10-28 01:01:26.000', 0, 00000, 00, "binary1" )
+ ( '2022-12-01 01:01:30.000', 8, -88888, -88, "binary1" )
+ ( '2022-12-31 01:01:36.000', 9, -9999999, -99, "binary1" )
+ ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
+ '''
+ )
+
+ # udf functions with join
+ ts_start = 1652517451000
+ tdSql.execute("create stable st (ts timestamp , c1 int , c2 int ,c3 double ,c4 double ) tags(ind int)")
+ tdSql.execute("create table sub1 using st tags(1)")
+ tdSql.execute("create table sub2 using st tags(2)")
+
+ for i in range(10):
+ ts = ts_start + i *1000
+ tdSql.execute(" insert into sub1 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
+ tdSql.execute(" insert into sub2 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
+
+
+ def create_udf_function(self):
+
+ for i in range(5):
+ # create scalar functions
+ tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
+
+ # create aggregate functions
+
+ tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
+
+ functions = tdSql.getResult("show functions")
+ function_nums = len(functions)
+ if function_nums == 2:
+ tdLog.info("create two udf functions success ")
+
+ # drop functions
+
+ tdSql.execute("drop function udf1")
+ tdSql.execute("drop function udf2")
+
+ functions = tdSql.getResult("show functions")
+ for function in functions:
+ if "udf1" in function[0] or "udf2" in function[0]:
+ tdLog.info("drop udf functions failed ")
+ tdLog.exit("drop udf functions failed")
+
+ tdLog.info("drop two udf functions success ")
+
+ # create scalar functions
+ tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
+
+ # create aggregate functions
+
+ tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
+
+ functions = tdSql.getResult("show functions")
+ function_nums = len(functions)
+ if function_nums == 2:
+ tdLog.info("create two udf functions success ")
+
+ def basic_udf_query(self):
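+        # note (inferred from the expected values below): udf1 is a scalar UDF that returns 88
+        # for every non-NULL input and NULL for NULL, while udf2 is an aggregate UDF that returns
+        # the square root of the sum of squares of its non-NULL inputs.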
+
+ # scalar functions
+
+ tdSql.execute("use db ")
+ tdSql.query("select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb")
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+ tdSql.checkData(0,2,1)
+ tdSql.checkData(0,3,88)
+ tdSql.checkData(0,4,1.000000000)
+ tdSql.checkData(0,5,88)
+ tdSql.checkData(0,6,"binary1")
+ tdSql.checkData(0,7,88)
+
+ tdSql.checkData(3,0,3)
+ tdSql.checkData(3,1,88)
+ tdSql.checkData(3,2,33333)
+ tdSql.checkData(3,3,88)
+ tdSql.checkData(3,4,33.000000000)
+ tdSql.checkData(3,5,88)
+ tdSql.checkData(3,6,"binary1")
+ tdSql.checkData(3,7,88)
+
+ tdSql.checkData(11,0,None)
+ tdSql.checkData(11,1,None)
+ tdSql.checkData(11,2,None)
+ tdSql.checkData(11,3,None)
+ tdSql.checkData(11,4,None)
+ tdSql.checkData(11,5,None)
+ tdSql.checkData(11,6,"binary1")
+ tdSql.checkData(11,7,88)
+
+ tdSql.query("select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1")
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+ tdSql.checkData(0,2,None)
+ tdSql.checkData(0,3,None)
+ tdSql.checkData(0,4,None)
+ tdSql.checkData(0,5,None)
+ tdSql.checkData(0,6,None)
+ tdSql.checkData(0,7,None)
+
+ tdSql.checkData(20,0,8)
+ tdSql.checkData(20,1,88)
+ tdSql.checkData(20,2,88888)
+ tdSql.checkData(20,3,88)
+ tdSql.checkData(20,4,888)
+ tdSql.checkData(20,5,88)
+ tdSql.checkData(20,6,88)
+ tdSql.checkData(20,7,88)
+
+
+ # aggregate functions
+ tdSql.query("select udf2(num1) ,udf2(num2), udf2(num3) from tb")
+ tdSql.checkData(0,0,15.362291496)
+ tdSql.checkData(0,1,10000949.553189287)
+ tdSql.checkData(0,2,168.633425216)
+
+ # Arithmetic compute
+ tdSql.query("select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb")
+ tdSql.checkData(0,0,115.362291496)
+ tdSql.checkData(0,1,10000849.553189287)
+ tdSql.checkData(0,2,16863.342521576)
+ tdSql.checkData(0,3,1.686334252)
+
+ tdSql.query("select udf2(c1) ,udf2(c6) from stb1 ")
+ tdSql.checkData(0,0,25.514701644)
+ tdSql.checkData(0,1,265.247614504)
+
+ tdSql.query("select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ")
+ tdSql.checkData(0,0,125.514701644)
+ tdSql.checkData(0,1,165.247614504)
+ tdSql.checkData(0,2,2551.470164435)
+ tdSql.checkData(0,3,2.652476145)
+
+ # # bug for crash when query sub table
+ tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1")
+ tdSql.checkData(0,0,378.215547010)
+ tdSql.checkData(0,1,353.808067460)
+ tdSql.checkData(0,2,2114.237451187)
+ tdSql.checkData(0,3,2.125468151)
+
+ tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ")
+ tdSql.checkData(0,0,490.358032462)
+ tdSql.checkData(0,1,400.460106627)
+ tdSql.checkData(0,2,2551.470164435)
+ tdSql.checkData(0,3,2.652476145)
+
+
+ # regular table with aggregate functions
+
+ tdSql.error("select udf1(num1) , count(num1) from tb;")
+ tdSql.error("select udf1(num1) , avg(num1) from tb;")
+ tdSql.error("select udf1(num1) , twa(num1) from tb;")
+ tdSql.error("select udf1(num1) , irate(num1) from tb;")
+ tdSql.error("select udf1(num1) , sum(num1) from tb;")
+ tdSql.error("select udf1(num1) , stddev(num1) from tb;")
+ tdSql.error("select udf1(num1) , mode(num1) from tb;")
+ tdSql.error("select udf1(num1) , HYPERLOGLOG(num1) from tb;")
+ # stable
+ tdSql.error("select udf1(c1) , count(c1) from stb1;")
+ tdSql.error("select udf1(c1) , avg(c1) from stb1;")
+ tdSql.error("select udf1(c1) , twa(c1) from stb1;")
+ tdSql.error("select udf1(c1) , irate(c1) from stb1;")
+ tdSql.error("select udf1(c1) , sum(c1) from stb1;")
+ tdSql.error("select udf1(c1) , stddev(c1) from stb1;")
+ tdSql.error("select udf1(c1) , mode(c1) from stb1;")
+ tdSql.error("select udf1(c1) , HYPERLOGLOG(c1) from stb1;")
+
+ # regular table with select functions
+
+ tdSql.query("select udf1(num1) , max(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select floor(num1) , max(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(num1) , min(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select ceil(num1) , min(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.error("select udf1(num1) , first(num1) from tb;")
+
+ tdSql.error("select abs(num1) , first(num1) from tb;")
+
+ tdSql.error("select udf1(num1) , last(num1) from tb;")
+
+ tdSql.error("select round(num1) , last(num1) from tb;")
+
+ tdSql.query("select udf1(num1) , top(num1,1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(num1) , bottom(num1,1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.error("select udf1(num1) , last_row(num1) from tb;")
+
+ tdSql.error("select round(num1) , last_row(num1) from tb;")
+
+
+ # stable
+ tdSql.query("select udf1(c1) , max(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select abs(c1) , max(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(c1) , min(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select floor(c1) , min(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.error("select udf1(c1) , first(c1) from stb1;")
+
+ tdSql.error("select udf1(c1) , last(c1) from stb1;")
+
+ tdSql.query("select udf1(c1) , top(c1 ,1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select abs(c1) , top(c1 ,1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(c1) , bottom(c1,1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select ceil(c1) , bottom(c1,1) from stb1;")
+ tdSql.checkRows(1)
+
+ tdSql.error("select udf1(c1) , last_row(c1) from stb1;")
+ tdSql.error("select ceil(c1) , last_row(c1) from stb1;")
+
+ # regular table with compute functions
+
+ tdSql.query("select udf1(num1) , abs(num1) from tb;")
+ tdSql.checkRows(12)
+ tdSql.query("select floor(num1) , abs(num1) from tb;")
+ tdSql.checkRows(12)
+
+ # # bug need fix
+
+ #tdSql.query("select udf1(num1) , csum(num1) from tb;")
+ #tdSql.checkRows(9)
+ #tdSql.query("select ceil(num1) , csum(num1) from tb;")
+ #tdSql.checkRows(9)
+ #tdSql.query("select udf1(c1) , csum(c1) from stb1;")
+ #tdSql.checkRows(22)
+ #tdSql.query("select floor(c1) , csum(c1) from stb1;")
+ #tdSql.checkRows(22)
+
+ # stable with compute functions
+ tdSql.query("select udf1(c1) , abs(c1) from stb1;")
+ tdSql.checkRows(25)
+ tdSql.query("select abs(c1) , ceil(c1) from stb1;")
+ tdSql.checkRows(25)
+
+ # nest query
+ tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;")
+ tdSql.checkRows(25)
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+ tdSql.checkData(1,0,88)
+ tdSql.checkData(1,1,8)
+
+ tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;")
+ tdSql.checkRows(13)
+ tdSql.checkData(0,0,88)
+ tdSql.checkData(0,1,8)
+ tdSql.checkData(1,0,88)
+ tdSql.checkData(1,1,7)
+
+ # bug fix for crash
+ # order by udf function result
+ for _ in range(50):
+ tdSql.query("select udf2(c1) from stb1 group by 1-udf1(c1)")
+ print(tdSql.queryResult)
+
+ # udf functions with filter
+
+ tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;")
+ tdSql.checkRows(3)
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+
+ tdSql.query("select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts")
+ tdSql.checkRows(3)
+ tdSql.checkData(0,0,9)
+ tdSql.checkData(0,1,88)
+ tdSql.checkData(0,2,-99.990000000)
+ tdSql.checkData(0,3,88)
+
+ tdSql.query("select sub1.c1, sub2.c2 from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,0)
+ tdSql.checkData(0,1,0)
+ tdSql.checkData(1,0,1)
+ tdSql.checkData(1,1,10)
+
+ tdSql.query("select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,88)
+ tdSql.checkData(0,1,88)
+ tdSql.checkData(1,0,88)
+ tdSql.checkData(1,1,88)
+
+ tdSql.query("select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,0)
+ tdSql.checkData(0,1,88)
+ tdSql.checkData(0,2,0)
+ tdSql.checkData(0,3,88)
+ tdSql.checkData(1,0,1)
+ tdSql.checkData(1,1,88)
+ tdSql.checkData(1,2,10)
+ tdSql.checkData(1,3,88)
+
+ tdSql.query("select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,16.881943016)
+ tdSql.checkData(0,1,168.819430161)
+ tdSql.error("select sub1.c1 , udf2(sub1.c1), sub2.c2 ,udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+
+ # udf functions with group by
+ tdSql.query("select udf1(c1) from ct1 group by c1")
+ tdSql.checkRows(10)
+ tdSql.query("select udf1(c1) from stb1 group by c1")
+ tdSql.checkRows(11)
+ tdSql.query("select c1,c2, udf1(c1,c2) from ct1 group by c1,c2")
+ tdSql.checkRows(10)
+ tdSql.query("select c1,c2, udf1(c1,c2) from stb1 group by c1,c2")
+ tdSql.checkRows(11)
+
+ tdSql.query("select udf2(c1) from ct1 group by c1")
+ tdSql.checkRows(10)
+ tdSql.query("select udf2(c1) from stb1 group by c1")
+ tdSql.checkRows(11)
+ tdSql.query("select c1,c2, udf2(c1,c6) from ct1 group by c1,c2")
+ tdSql.checkRows(10)
+ tdSql.query("select c1,c2, udf2(c1,c6) from stb1 group by c1,c2")
+ tdSql.checkRows(11)
+ tdSql.query("select udf2(c1) from stb1 group by udf1(c1)")
+ tdSql.checkRows(2)
+ tdSql.query("select udf2(c1) from stb1 group by floor(c1)")
+ tdSql.checkRows(11)
+
+ # udf mix with order by
+ tdSql.query("select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)")
+ tdSql.checkRows(11)
+
+
+ def multi_cols_udf(self):
+ tdSql.query("select num1,num2,num3,udf1(num1,num2,num3) from tb")
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,1)
+ tdSql.checkData(0,2,1.000000000)
+ tdSql.checkData(0,3,None)
+ tdSql.checkData(1,0,1)
+ tdSql.checkData(1,1,1)
+ tdSql.checkData(1,2,1.110000000)
+ tdSql.checkData(1,3,88)
+
+ tdSql.query("select c1,c6,udf1(c1,c6) from stb1 order by ts")
+ tdSql.checkData(1,0,8)
+ tdSql.checkData(1,1,88.880000000)
+ tdSql.checkData(1,2,88)
+
+ tdSql.query("select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;")
+ tdSql.checkRows(22)
+
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+
+ def try_query_sql(self):
+ udf1_sqls = [
+ "select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb" ,
+ "select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1" ,
+ "select udf1(num1) , max(num1) from tb;" ,
+ "select udf1(num1) , min(num1) from tb;" ,
+ #"select udf1(num1) , top(num1,1) from tb;" ,
+ #"select udf1(num1) , bottom(num1,1) from tb;" ,
+ "select udf1(c1) , max(c1) from stb1;" ,
+ "select udf1(c1) , min(c1) from stb1;" ,
+ #"select udf1(c1) , top(c1 ,1) from stb1;" ,
+ #"select udf1(c1) , bottom(c1,1) from stb1;" ,
+ "select udf1(num1) , abs(num1) from tb;" ,
+ #"select udf1(num1) , csum(num1) from tb;" ,
+ #"select udf1(c1) , csum(c1) from stb1;" ,
+ "select udf1(c1) , abs(c1) from stb1;" ,
+ "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;" ,
+ "select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;" ,
+ "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;" ,
+ "select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts" ,
+ "select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf1(c1) from ct1 group by c1" ,
+ "select udf1(c1) from stb1 group by c1" ,
+ "select c1,c2, udf1(c1,c2) from ct1 group by c1,c2" ,
+ "select c1,c2, udf1(c1,c2) from stb1 group by c1,c2" ,
+ "select num1,num2,num3,udf1(num1,num2,num3) from tb" ,
+ "select c1,c6,udf1(c1,c6) from stb1 order by ts" ,
+ "select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;"
+ ]
+ udf2_sqls = ["select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(c1) from stb1 group by 1-udf1(c1)" ,
+ "select udf2(num1) ,udf2(num2), udf2(num3) from tb" ,
+ "select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb" ,
+ "select udf2(c1) ,udf2(c6) from stb1 " ,
+ "select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 " ,
+ "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1" ,
+ "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 " ,
+ "select udf2(c1) from ct1 group by c1" ,
+ "select udf2(c1) from stb1 group by c1" ,
+ "select c1,c2, udf2(c1,c6) from ct1 group by c1,c2" ,
+ "select c1,c2, udf2(c1,c6) from stb1 group by c1,c2" ,
+ "select udf2(c1) from stb1 group by udf1(c1)" ,
+ "select udf2(c1) from stb1 group by floor(c1)" ,
+ "select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)" ,
+
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null"]
+
+ return udf1_sqls ,udf2_sqls
+
+
+
+ def unexpected_create(self):
+
+        tdLog.info(" create function without bufSize ")
+ tdSql.query("drop function udf1 ")
+ tdSql.query("drop function udf2 ")
+
+ # create function without buffer
+ tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int")
+ tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double")
+ udf1_sqls ,udf2_sqls = self.try_query_sql()
+
+ for scalar_sql in udf1_sqls:
+ tdSql.query(scalar_sql)
+ for aggregate_sql in udf2_sqls:
+ tdSql.error(aggregate_sql)
+
+ # create function without aggregate
+
+        tdLog.info(" create function without aggregate ")
+ tdSql.query("drop function udf1 ")
+ tdSql.query("drop function udf2 ")
+
+        # create the scalar udf as an aggregate function and the aggregate udf as a scalar function
+ tdSql.execute("create aggregate function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
+ tdSql.execute("create function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ udf1_sqls ,udf2_sqls = self.try_query_sql()
+
+ for scalar_sql in udf1_sqls:
+ tdSql.error(scalar_sql)
+ for aggregate_sql in udf2_sqls:
+ tdSql.error(aggregate_sql)
+
+ tdSql.execute(" create function db as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
+ tdSql.execute(" create aggregate function test as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
+ tdSql.error(" select db(c1) from stb1 ")
+ tdSql.error(" select db(c1,c6), db(c6) from stb1 ")
+ tdSql.error(" select db(num1,num2), db(num1) from tb ")
+ tdSql.error(" select test(c1) from stb1 ")
+ tdSql.error(" select test(c1,c6), test(c6) from stb1 ")
+ tdSql.error(" select test(num1,num2), test(num1) from tb ")
+
+
+
+ def loop_kill_udfd(self):
+
+ buildPath = self.getBuildPath()
+ if (buildPath == ""):
+ tdLog.exit("taosd not found!")
+ else:
+ tdLog.info("taosd found in %s" % buildPath)
+
+ cfgPath = buildPath + "/../sim/dnode1/cfg"
+ udfdPath = buildPath +'/build/bin/udfd'
+
+ for i in range(3):
+
+ tdLog.info(" loop restart udfd %d_th" % i)
+
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+ # stop udfd cmds
+ get_processID = "ps -ef | grep -w udfd | grep -v grep| grep -v defunct | awk '{print $2}'"
+ processID = subprocess.check_output(get_processID, shell=True).decode("utf-8")
+ stop_udfd = " kill -9 %s" % processID
+ os.system(stop_udfd)
+
+ time.sleep(2)
+
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+
+ # # start udfd cmds
+            # start_udfd = "nohup " + udfdPath + ' -c ' + cfgPath + " > /dev/null 2>&1 &"
+ # tdLog.info("start udfd : %s " % start_udfd)
+
+ def test_function_name(self):
+        tdLog.info(" creating functions named after built-in functions or keywords should fail ")
+ tdSql.execute(" drop function udf1 ")
+ tdSql.execute(" drop function udf2 ")
+ tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
+ tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
+ tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function tbname as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function function as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function stable as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function union as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function 123 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function 123db as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function mnode as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+
+ def restart_taosd_query_udf(self):
+
+ self.create_udf_function()
+
+ for i in range(5):
+ tdLog.info(" this is %d_th restart taosd " %i)
+ tdSql.execute("use db ")
+ tdSql.query("select count(*) from stb1")
+ tdSql.checkRows(1)
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+ tdDnodes.stop(1)
+ tdDnodes.start(1)
+ time.sleep(2)
+
+
+ def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
+
+ print(" env is ok for all ")
+ self.prepare_udf_so()
+ self.prepare_data()
+ self.create_udf_function()
+ self.basic_udf_query()
+ self.unexpected_create()
+
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success(f"{__file__} successfully executed")
+
+tdCases.addLinux(__file__, TDTestCase())
+tdCases.addWindows(__file__, TDTestCase())
diff --git a/tests/system-test/0-others/udf_restart_taosd.py b/tests/system-test/0-others/udf_restart_taosd.py
new file mode 100644
index 0000000000..24d3b5a9c3
--- /dev/null
+++ b/tests/system-test/0-others/udf_restart_taosd.py
@@ -0,0 +1,654 @@
+from distutils.log import error
+import taos
+import sys
+import time
+import os
+
+from util.log import *
+from util.sql import *
+from util.cases import *
+from util.dnodes import *
+import subprocess
+
+class TDTestCase:
+
+ def init(self, conn, logSql):
+        tdLog.debug(f"start to execute {__file__}")
+ tdSql.init(conn.cursor(), logSql)
+
+ def getBuildPath(self):
+ selfPath = os.path.dirname(os.path.realpath(__file__))
+
+ if ("community" in selfPath):
+ projPath = selfPath[:selfPath.find("community")]
+ else:
+ projPath = selfPath[:selfPath.find("tests")]
+
+ for root, dirs, files in os.walk(projPath):
+ if ("taosd" in files):
+ rootRealPath = os.path.dirname(os.path.realpath(root))
+ if ("packaging" not in rootRealPath):
+ buildPath = root[:len(root) - len("/build/bin")]
+ break
+ return buildPath
+
+ def prepare_udf_so(self):
+ selfPath = os.path.dirname(os.path.realpath(__file__))
+
+ if ("community" in selfPath):
+ projPath = selfPath[:selfPath.find("community")]
+ else:
+ projPath = selfPath[:selfPath.find("tests")]
+ print(projPath)
+
+ libudf1 = subprocess.Popen('find %s -name "libudf1.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
+ libudf2 = subprocess.Popen('find %s -name "libudf2.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
+ os.system("mkdir /tmp/udf/")
+ os.system("cp %s /tmp/udf/ "%libudf1.replace("\n" ,""))
+ os.system("cp %s /tmp/udf/ "%libudf2.replace("\n" ,""))
+
+
+ def prepare_data(self):
+
+ tdSql.execute("drop database if exists db ")
+ tdSql.execute("create database if not exists db days 300")
+ tdSql.execute("use db")
+ tdSql.execute(
+ '''create table stb1
+ (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
+ tags (t1 int)
+ '''
+ )
+
+ tdSql.execute(
+ '''
+ create table t1
+ (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
+ '''
+ )
+ for i in range(4):
+ tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
+
+ for i in range(9):
+ tdSql.execute(
+ f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+ )
+ tdSql.execute(
+ f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+ )
+ tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+ tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+ tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+ tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+
+ tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+ tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+ tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+
+ tdSql.execute(
+ f'''insert into t1 values
+ ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
+ ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
+ ( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
+ ( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
+ ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ ( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
+ ( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
+ ( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
+ ( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
+ ( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
+ ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ '''
+ )
+
+ tdSql.execute("create table tb (ts timestamp , num1 int , num2 int, num3 double , num4 binary(30))")
+ tdSql.execute(
+ f'''insert into tb values
+ ( '2020-04-21 01:01:01.000', NULL, 1, 1, "binary1" )
+ ( '2020-10-21 01:01:01.000', 1, 1, 1.11, "binary1" )
+ ( '2020-12-31 01:01:01.000', 2, 22222, 22, "binary1" )
+ ( '2021-01-01 01:01:06.000', 3, 33333, 33, "binary1" )
+ ( '2021-05-07 01:01:10.000', 4, 44444, 44, "binary1" )
+ ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
+ ( '2021-09-30 01:01:16.000', 5, 55555, 55, "binary1" )
+ ( '2022-02-01 01:01:20.000', 6, 66666, 66, "binary1" )
+ ( '2022-10-28 01:01:26.000', 0, 00000, 00, "binary1" )
+ ( '2022-12-01 01:01:30.000', 8, -88888, -88, "binary1" )
+ ( '2022-12-31 01:01:36.000', 9, -9999999, -99, "binary1" )
+ ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
+ '''
+ )
+
+ # udf functions with join
+ ts_start = 1652517451000
+ tdSql.execute("create stable st (ts timestamp , c1 int , c2 int ,c3 double ,c4 double ) tags(ind int)")
+ tdSql.execute("create table sub1 using st tags(1)")
+ tdSql.execute("create table sub2 using st tags(2)")
+
+ for i in range(10):
+ ts = ts_start + i *1000
+ tdSql.execute(" insert into sub1 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
+ tdSql.execute(" insert into sub2 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
+
+
+ def create_udf_function(self):
+
+ for i in range(5):
+ # create scalar functions
+ tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
+
+ # create aggregate functions
+
+ tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
+
+ functions = tdSql.getResult("show functions")
+ function_nums = len(functions)
+ if function_nums == 2:
+ tdLog.info("create two udf functions success ")
+
+ # drop functions
+
+ tdSql.execute("drop function udf1")
+ tdSql.execute("drop function udf2")
+
+ functions = tdSql.getResult("show functions")
+ for function in functions:
+ if "udf1" in function[0] or "udf2" in function[0]:
+ tdLog.info("drop udf functions failed ")
+ tdLog.exit("drop udf functions failed")
+
+ tdLog.info("drop two udf functions success ")
+
+ # create scalar functions
+ tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
+
+ # create aggregate functions
+
+ tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
+
+ functions = tdSql.getResult("show functions")
+ function_nums = len(functions)
+ if function_nums == 2:
+ tdLog.info("create two udf functions success ")
+
+ def basic_udf_query(self):
+
+ # scalar functions
+
+ tdSql.execute("use db ")
+ tdSql.query("select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb")
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+ tdSql.checkData(0,2,1)
+ tdSql.checkData(0,3,88)
+ tdSql.checkData(0,4,1.000000000)
+ tdSql.checkData(0,5,88)
+ tdSql.checkData(0,6,"binary1")
+ tdSql.checkData(0,7,88)
+
+ tdSql.checkData(3,0,3)
+ tdSql.checkData(3,1,88)
+ tdSql.checkData(3,2,33333)
+ tdSql.checkData(3,3,88)
+ tdSql.checkData(3,4,33.000000000)
+ tdSql.checkData(3,5,88)
+ tdSql.checkData(3,6,"binary1")
+ tdSql.checkData(3,7,88)
+
+ tdSql.checkData(11,0,None)
+ tdSql.checkData(11,1,None)
+ tdSql.checkData(11,2,None)
+ tdSql.checkData(11,3,None)
+ tdSql.checkData(11,4,None)
+ tdSql.checkData(11,5,None)
+ tdSql.checkData(11,6,"binary1")
+ tdSql.checkData(11,7,88)
+
+ tdSql.query("select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1")
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+ tdSql.checkData(0,2,None)
+ tdSql.checkData(0,3,None)
+ tdSql.checkData(0,4,None)
+ tdSql.checkData(0,5,None)
+ tdSql.checkData(0,6,None)
+ tdSql.checkData(0,7,None)
+
+ tdSql.checkData(20,0,8)
+ tdSql.checkData(20,1,88)
+ tdSql.checkData(20,2,88888)
+ tdSql.checkData(20,3,88)
+ tdSql.checkData(20,4,888)
+ tdSql.checkData(20,5,88)
+ tdSql.checkData(20,6,88)
+ tdSql.checkData(20,7,88)
+
+
+ # aggregate functions
+ tdSql.query("select udf2(num1) ,udf2(num2), udf2(num3) from tb")
+ tdSql.checkData(0,0,15.362291496)
+ tdSql.checkData(0,1,10000949.553189287)
+ tdSql.checkData(0,2,168.633425216)
+
+ # Arithmetic compute
+ tdSql.query("select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb")
+ tdSql.checkData(0,0,115.362291496)
+ tdSql.checkData(0,1,10000849.553189287)
+ tdSql.checkData(0,2,16863.342521576)
+ tdSql.checkData(0,3,1.686334252)
+
+ tdSql.query("select udf2(c1) ,udf2(c6) from stb1 ")
+ tdSql.checkData(0,0,25.514701644)
+ tdSql.checkData(0,1,265.247614504)
+
+ tdSql.query("select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ")
+ tdSql.checkData(0,0,125.514701644)
+ tdSql.checkData(0,1,165.247614504)
+ tdSql.checkData(0,2,2551.470164435)
+ tdSql.checkData(0,3,2.652476145)
+
+ # # bug for crash when query sub table
+ tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1")
+ tdSql.checkData(0,0,378.215547010)
+ tdSql.checkData(0,1,353.808067460)
+ tdSql.checkData(0,2,2114.237451187)
+ tdSql.checkData(0,3,2.125468151)
+
+ tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ")
+ tdSql.checkData(0,0,490.358032462)
+ tdSql.checkData(0,1,400.460106627)
+ tdSql.checkData(0,2,2551.470164435)
+ tdSql.checkData(0,3,2.652476145)
+
+
+ # regular table with aggregate functions
+
+ tdSql.error("select udf1(num1) , count(num1) from tb;")
+ tdSql.error("select udf1(num1) , avg(num1) from tb;")
+ tdSql.error("select udf1(num1) , twa(num1) from tb;")
+ tdSql.error("select udf1(num1) , irate(num1) from tb;")
+ tdSql.error("select udf1(num1) , sum(num1) from tb;")
+ tdSql.error("select udf1(num1) , stddev(num1) from tb;")
+ tdSql.error("select udf1(num1) , mode(num1) from tb;")
+ tdSql.error("select udf1(num1) , HYPERLOGLOG(num1) from tb;")
+ # stable
+ tdSql.error("select udf1(c1) , count(c1) from stb1;")
+ tdSql.error("select udf1(c1) , avg(c1) from stb1;")
+ tdSql.error("select udf1(c1) , twa(c1) from stb1;")
+ tdSql.error("select udf1(c1) , irate(c1) from stb1;")
+ tdSql.error("select udf1(c1) , sum(c1) from stb1;")
+ tdSql.error("select udf1(c1) , stddev(c1) from stb1;")
+ tdSql.error("select udf1(c1) , mode(c1) from stb1;")
+ tdSql.error("select udf1(c1) , HYPERLOGLOG(c1) from stb1;")
+
+ # regular table with select functions
+
+ tdSql.query("select udf1(num1) , max(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select floor(num1) , max(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(num1) , min(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select ceil(num1) , min(num1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.error("select udf1(num1) , first(num1) from tb;")
+
+ tdSql.error("select abs(num1) , first(num1) from tb;")
+
+ tdSql.error("select udf1(num1) , last(num1) from tb;")
+
+ tdSql.error("select round(num1) , last(num1) from tb;")
+
+ tdSql.query("select udf1(num1) , top(num1,1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(num1) , bottom(num1,1) from tb;")
+ tdSql.checkRows(1)
+ tdSql.error("select udf1(num1) , last_row(num1) from tb;")
+
+ tdSql.error("select round(num1) , last_row(num1) from tb;")
+
+
+ # stable
+ tdSql.query("select udf1(c1) , max(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select abs(c1) , max(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(c1) , min(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select floor(c1) , min(c1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.error("select udf1(c1) , first(c1) from stb1;")
+
+ tdSql.error("select udf1(c1) , last(c1) from stb1;")
+
+ tdSql.query("select udf1(c1) , top(c1 ,1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select abs(c1) , top(c1 ,1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select udf1(c1) , bottom(c1,1) from stb1;")
+ tdSql.checkRows(1)
+ tdSql.query("select ceil(c1) , bottom(c1,1) from stb1;")
+ tdSql.checkRows(1)
+
+ tdSql.error("select udf1(c1) , last_row(c1) from stb1;")
+ tdSql.error("select ceil(c1) , last_row(c1) from stb1;")
+
+ # regular table with compute functions
+
+ tdSql.query("select udf1(num1) , abs(num1) from tb;")
+ tdSql.checkRows(12)
+ tdSql.query("select floor(num1) , abs(num1) from tb;")
+ tdSql.checkRows(12)
+
+ # # bug need fix
+
+ #tdSql.query("select udf1(num1) , csum(num1) from tb;")
+ #tdSql.checkRows(9)
+ #tdSql.query("select ceil(num1) , csum(num1) from tb;")
+ #tdSql.checkRows(9)
+ #tdSql.query("select udf1(c1) , csum(c1) from stb1;")
+ #tdSql.checkRows(22)
+ #tdSql.query("select floor(c1) , csum(c1) from stb1;")
+ #tdSql.checkRows(22)
+
+ # stable with compute functions
+ tdSql.query("select udf1(c1) , abs(c1) from stb1;")
+ tdSql.checkRows(25)
+ tdSql.query("select abs(c1) , ceil(c1) from stb1;")
+ tdSql.checkRows(25)
+
+ # nest query
+ tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;")
+ tdSql.checkRows(25)
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+ tdSql.checkData(1,0,88)
+ tdSql.checkData(1,1,8)
+
+ tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;")
+ tdSql.checkRows(13)
+ tdSql.checkData(0,0,88)
+ tdSql.checkData(0,1,8)
+ tdSql.checkData(1,0,88)
+ tdSql.checkData(1,1,7)
+
+ # bug fix for crash
+ # order by udf function result
+ for _ in range(50):
+ tdSql.query("select udf2(c1) from stb1 group by 1-udf1(c1)")
+ print(tdSql.queryResult)
+
+ # udf functions with filter
+
+ tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;")
+ tdSql.checkRows(3)
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,None)
+
+ tdSql.query("select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts")
+ tdSql.checkRows(3)
+ tdSql.checkData(0,0,9)
+ tdSql.checkData(0,1,88)
+ tdSql.checkData(0,2,-99.990000000)
+ tdSql.checkData(0,3,88)
+
+ tdSql.query("select sub1.c1, sub2.c2 from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,0)
+ tdSql.checkData(0,1,0)
+ tdSql.checkData(1,0,1)
+ tdSql.checkData(1,1,10)
+
+ tdSql.query("select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,88)
+ tdSql.checkData(0,1,88)
+ tdSql.checkData(1,0,88)
+ tdSql.checkData(1,1,88)
+
+ tdSql.query("select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,0)
+ tdSql.checkData(0,1,88)
+ tdSql.checkData(0,2,0)
+ tdSql.checkData(0,3,88)
+ tdSql.checkData(1,0,1)
+ tdSql.checkData(1,1,88)
+ tdSql.checkData(1,2,10)
+ tdSql.checkData(1,3,88)
+
+ tdSql.query("select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,16.881943016)
+ tdSql.checkData(0,1,168.819430161)
+ tdSql.error("select sub1.c1 , udf2(sub1.c1), sub2.c2 ,udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+
+ # udf functions with group by
+ tdSql.query("select udf1(c1) from ct1 group by c1")
+ tdSql.checkRows(10)
+ tdSql.query("select udf1(c1) from stb1 group by c1")
+ tdSql.checkRows(11)
+ tdSql.query("select c1,c2, udf1(c1,c2) from ct1 group by c1,c2")
+ tdSql.checkRows(10)
+ tdSql.query("select c1,c2, udf1(c1,c2) from stb1 group by c1,c2")
+ tdSql.checkRows(11)
+
+ tdSql.query("select udf2(c1) from ct1 group by c1")
+ tdSql.checkRows(10)
+ tdSql.query("select udf2(c1) from stb1 group by c1")
+ tdSql.checkRows(11)
+ tdSql.query("select c1,c2, udf2(c1,c6) from ct1 group by c1,c2")
+ tdSql.checkRows(10)
+ tdSql.query("select c1,c2, udf2(c1,c6) from stb1 group by c1,c2")
+ tdSql.checkRows(11)
+ tdSql.query("select udf2(c1) from stb1 group by udf1(c1)")
+ tdSql.checkRows(2)
+ tdSql.query("select udf2(c1) from stb1 group by floor(c1)")
+ tdSql.checkRows(11)
+
+ # udf mix with order by
+ tdSql.query("select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)")
+ tdSql.checkRows(11)
+
+
+ def multi_cols_udf(self):
+ tdSql.query("select num1,num2,num3,udf1(num1,num2,num3) from tb")
+ tdSql.checkData(0,0,None)
+ tdSql.checkData(0,1,1)
+ tdSql.checkData(0,2,1.000000000)
+ tdSql.checkData(0,3,None)
+ tdSql.checkData(1,0,1)
+ tdSql.checkData(1,1,1)
+ tdSql.checkData(1,2,1.110000000)
+ tdSql.checkData(1,3,88)
+
+ tdSql.query("select c1,c6,udf1(c1,c6) from stb1 order by ts")
+ tdSql.checkData(1,0,8)
+ tdSql.checkData(1,1,88.880000000)
+ tdSql.checkData(1,2,88)
+
+ tdSql.query("select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;")
+ tdSql.checkRows(22)
+
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+
+ def try_query_sql(self):
+ udf1_sqls = [
+ "select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb" ,
+ "select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1" ,
+ "select udf1(num1) , max(num1) from tb;" ,
+ "select udf1(num1) , min(num1) from tb;" ,
+ #"select udf1(num1) , top(num1,1) from tb;" ,
+ #"select udf1(num1) , bottom(num1,1) from tb;" ,
+ "select udf1(c1) , max(c1) from stb1;" ,
+ "select udf1(c1) , min(c1) from stb1;" ,
+ #"select udf1(c1) , top(c1 ,1) from stb1;" ,
+ #"select udf1(c1) , bottom(c1,1) from stb1;" ,
+ "select udf1(num1) , abs(num1) from tb;" ,
+ #"select udf1(num1) , csum(num1) from tb;" ,
+ #"select udf1(c1) , csum(c1) from stb1;" ,
+ "select udf1(c1) , abs(c1) from stb1;" ,
+ "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;" ,
+ "select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;" ,
+ "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;" ,
+ "select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts" ,
+ "select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf1(c1) from ct1 group by c1" ,
+ "select udf1(c1) from stb1 group by c1" ,
+ "select c1,c2, udf1(c1,c2) from ct1 group by c1,c2" ,
+ "select c1,c2, udf1(c1,c2) from stb1 group by c1,c2" ,
+ "select num1,num2,num3,udf1(num1,num2,num3) from tb" ,
+ "select c1,c6,udf1(c1,c6) from stb1 order by ts" ,
+ "select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;"
+ ]
+ udf2_sqls = ["select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(c1) from stb1 group by 1-udf1(c1)" ,
+ "select udf2(num1) ,udf2(num2), udf2(num3) from tb" ,
+ "select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb" ,
+ "select udf2(c1) ,udf2(c6) from stb1 " ,
+ "select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 " ,
+ "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1" ,
+ "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 " ,
+ "select udf2(c1) from ct1 group by c1" ,
+ "select udf2(c1) from stb1 group by c1" ,
+ "select c1,c2, udf2(c1,c6) from ct1 group by c1,c2" ,
+ "select c1,c2, udf2(c1,c6) from stb1 group by c1,c2" ,
+ "select udf2(c1) from stb1 group by udf1(c1)" ,
+ "select udf2(c1) from stb1 group by floor(c1)" ,
+ "select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)" ,
+
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
+ "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null"]
+
+ return udf1_sqls ,udf2_sqls
+
+
+
+ def unexpected_create(self):
+
+        tdLog.info(" create function without bufSize ")
+ tdSql.query("drop function udf1 ")
+ tdSql.query("drop function udf2 ")
+
+ # create function without buffer
+ tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int")
+ tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double")
+ udf1_sqls ,udf2_sqls = self.try_query_sql()
+
+ for scalar_sql in udf1_sqls:
+ tdSql.query(scalar_sql)
+ for aggregate_sql in udf2_sqls:
+ tdSql.error(aggregate_sql)
+
+ # create function without aggregate
+
+        tdLog.info(" create function without aggregate ")
+ tdSql.query("drop function udf1 ")
+ tdSql.query("drop function udf2 ")
+
+        # create the scalar udf as an aggregate function and the aggregate udf as a scalar function
+ tdSql.execute("create aggregate function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
+ tdSql.execute("create function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ udf1_sqls ,udf2_sqls = self.try_query_sql()
+
+ for scalar_sql in udf1_sqls:
+ tdSql.error(scalar_sql)
+ for aggregate_sql in udf2_sqls:
+ tdSql.error(aggregate_sql)
+
+ tdSql.execute(" create function db as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
+ tdSql.execute(" create aggregate function test as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
+ tdSql.error(" select db(c1) from stb1 ")
+ tdSql.error(" select db(c1,c6), db(c6) from stb1 ")
+ tdSql.error(" select db(num1,num2), db(num1) from tb ")
+ tdSql.error(" select test(c1) from stb1 ")
+ tdSql.error(" select test(c1,c6), test(c6) from stb1 ")
+ tdSql.error(" select test(num1,num2), test(num1) from tb ")
+
+
+
+ def loop_kill_udfd(self):
+
+ buildPath = self.getBuildPath()
+ if (buildPath == ""):
+ tdLog.exit("taosd not found!")
+ else:
+ tdLog.info("taosd found in %s" % buildPath)
+
+ cfgPath = buildPath + "/../sim/dnode1/cfg"
+ udfdPath = buildPath +'/build/bin/udfd'
+
+ for i in range(3):
+
+ tdLog.info(" loop restart udfd %d_th" % i)
+
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+ # stop udfd cmds
+ get_processID = "ps -ef | grep -w udfd | grep -v grep| grep -v defunct | awk '{print $2}'"
+ processID = subprocess.check_output(get_processID, shell=True).decode("utf-8")
+ stop_udfd = " kill -9 %s" % processID
+ os.system(stop_udfd)
+
+ time.sleep(2)
+
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+
+ # # start udfd cmds
+            # start_udfd = "nohup " + udfdPath + ' -c ' + cfgPath + " > /dev/null 2>&1 &"
+ # tdLog.info("start udfd : %s " % start_udfd)
+
+ def test_function_name(self):
+        tdLog.info(" creating functions named after built-in functions or keywords should fail ")
+ tdSql.execute(" drop function udf1 ")
+ tdSql.execute(" drop function udf2 ")
+ tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
+ tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
+ tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function tbname as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function function as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function stable as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function union as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function 123 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function 123db as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+ tdSql.error("create aggregate function mnode as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
+
+ def restart_taosd_query_udf(self):
+
+ for i in range(3):
+ tdLog.info(" this is %d_th restart taosd " %i)
+ tdSql.execute("use db ")
+ tdSql.query("select count(*) from stb1")
+ tdSql.checkRows(1)
+ tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
+ tdSql.checkData(0,0,169.661427555)
+ tdSql.checkData(0,1,169.661427555)
+ tdDnodes.stop(1)
+ tdDnodes.start(1)
+ time.sleep(2)
+
+
+ def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
+
+ print(" env is ok for all ")
+ self.prepare_udf_so()
+ self.prepare_data()
+ self.create_udf_function()
+ self.basic_udf_query()
+ self.multi_cols_udf()
+ self.restart_taosd_query_udf()
+
+
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success(f"{__file__} successfully executed")
+
+tdCases.addLinux(__file__, TDTestCase())
+tdCases.addWindows(__file__, TDTestCase())
diff --git a/tests/system-test/1-insert/insertWithMoreVgroup.py b/tests/system-test/1-insert/insertWithMoreVgroup.py
index f0f35831db..8d2870fc2c 100644
--- a/tests/system-test/1-insert/insertWithMoreVgroup.py
+++ b/tests/system-test/1-insert/insertWithMoreVgroup.py
@@ -294,7 +294,7 @@ class TDTestCase:
return
def test_case3(self):
- self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 8, 1*10000)
+ self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 1, 1*10)
# self.taosBenchCreate("test209","no","db2", "stb2", 1, 8, 1*10000)
# self.taosBenchCreate("chenhaoran02","no","db1", "stb1", 1, 8, 1*10000)
@@ -349,17 +349,17 @@ class TDTestCase:
# run case
def run(self):
- # create database and tables。
- self.test_case1()
- tdLog.debug(" LIMIT test_case1 ............ [OK]")
+ # # create database and tables。
+ # self.test_case1()
+ # tdLog.debug(" LIMIT test_case1 ............ [OK]")
# # taosBenchmark : create database and table
# self.test_case2()
# tdLog.debug(" LIMIT test_case2 ............ [OK]")
- # # taosBenchmark:create database/table and insert data
- # self.test_case3()
- # tdLog.debug(" LIMIT test_case3 ............ [OK]")
+ # taosBenchmark:create database/table and insert data
+ self.test_case3()
+ tdLog.debug(" LIMIT test_case3 ............ [OK]")
# # test qnode
diff --git a/tests/system-test/1-insert/manyVgroups.json b/tests/system-test/1-insert/manyVgroups.json
index 1c9aa1f28c..5dea41476c 100644
--- a/tests/system-test/1-insert/manyVgroups.json
+++ b/tests/system-test/1-insert/manyVgroups.json
@@ -10,7 +10,7 @@
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
- "interlace_rows": 100000,
+ "interlace_rows": 0,
"num_of_records_per_req": 100,
"databases": [
{
@@ -29,8 +29,8 @@
"batch_create_tbl_num": 50000,
"data_source": "rand",
"insert_mode": "taosc",
- "insert_rows": 10,
- "interlace_rows": 100000,
+ "insert_rows": 1,
+ "interlace_rows": 0,
"insert_interval": 0,
"max_sql_len": 10000000,
"disorder_ratio": 0,
diff --git a/tests/system-test/2-query/check_tsdb.py b/tests/system-test/2-query/check_tsdb.py
new file mode 100644
index 0000000000..33bf351207
--- /dev/null
+++ b/tests/system-test/2-query/check_tsdb.py
@@ -0,0 +1,106 @@
+import taos
+import sys
+import datetime
+import inspect
+
+from util.log import *
+from util.sql import *
+from util.cases import *
+from util.dnodes import *
+
+class TDTestCase:
+ updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
+                     "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
+ "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"fnDebugFlag":143}
+ def init(self, conn, logSql):
+        tdLog.debug(f"start to execute {__file__}")
+ tdSql.init(conn.cursor(), True)
+
+ def prepare_datas(self):
+ tdSql.execute(
+ '''create table stb1
+ (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
+ tags (t1 int)
+ '''
+ )
+
+ tdSql.execute(
+ '''
+ create table t1
+ (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
+ '''
+ )
+ for i in range(4):
+ tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
+
+ for i in range(9):
+ tdSql.execute(
+ f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+ )
+ tdSql.execute(
+ f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+ )
+ tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+ tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+ tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+ tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+
+ tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+ tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+ tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+
+ tdSql.execute(
+ f'''insert into t1 values
+ ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
+ ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
+ ( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
+ ( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
+ ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ ( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
+ ( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
+ ( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
+ ( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
+ ( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
+ ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
+ '''
+ )
+
+
+ def restart_taosd_query_sum(self):
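+        # Restart dnode 1 several times and verify that the sum() results of
+        # the numeric columns of stb1 stay unchanged after each restart.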
+
+ for i in range(5):
+            tdLog.info(" this is the %d-th restart of taosd " % i)
+ os.system("taos -s ' use db ;select c6 from stb1 ; '")
+ tdSql.execute("use db ")
+ tdSql.query("select count(*) from stb1")
+ tdSql.checkRows(1)
+ tdSql.query("select sum(c1),sum(c2),sum(c3),sum(c4),sum(c5),sum(c6) from stb1;")
+ tdSql.checkData(0,0,99)
+ tdSql.checkData(0,1,499995)
+ tdSql.checkData(0,2,4995)
+ tdSql.checkData(0,3,594)
+ tdSql.checkData(0,4,49.950001001)
+ tdSql.checkData(0,5,599.940000000)
+ tdDnodes.stop(1)
+ tdDnodes.start(1)
+ time.sleep(2)
+
+
+
+ def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
+ tdSql.prepare()
+
+ tdLog.printNoPrefix("==========step1:create table ==============")
+
+ self.prepare_datas()
+
+        os.system("taos -s ' use db ; select c6 from stb1 ; '")
+ self.restart_taosd_query_sum()
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success(f"{__file__} successfully executed")
+
+tdCases.addLinux(__file__, TDTestCase())
+tdCases.addWindows(__file__, TDTestCase())
diff --git a/tests/system-test/7-tmq/subscribeStb.py b/tests/system-test/7-tmq/subscribeStb.py
index fe05d2e223..a0b3668d47 100644
--- a/tests/system-test/7-tmq/subscribeStb.py
+++ b/tests/system-test/7-tmq/subscribeStb.py
@@ -1377,9 +1377,9 @@ class TDTestCase:
self.tmqCase1(cfgPath, buildPath)
self.tmqCase2(cfgPath, buildPath)
- self.tmqCase3(cfgPath, buildPath)
- self.tmqCase4(cfgPath, buildPath)
- self.tmqCase5(cfgPath, buildPath)
+ # self.tmqCase3(cfgPath, buildPath)
+ # self.tmqCase4(cfgPath, buildPath)
+ # self.tmqCase5(cfgPath, buildPath)
def stop(self):
tdSql.close()
diff --git a/tests/system-test/7-tmq/subscribeStb0.py b/tests/system-test/7-tmq/subscribeStb0.py
new file mode 100644
index 0000000000..1d56103059
--- /dev/null
+++ b/tests/system-test/7-tmq/subscribeStb0.py
@@ -0,0 +1,1391 @@
+
+import taos
+import sys
+import time
+import socket
+import os
+import threading
+from enum import Enum
+
+from util.log import *
+from util.sql import *
+from util.cases import *
+from util.dnodes import *
+
+class actionType(Enum):
+ CREATE_DATABASE = 0
+ CREATE_STABLE = 1
+ CREATE_CTABLE = 2
+ INSERT_DATA = 3
+
+class TDTestCase:
+ hostname = socket.gethostname()
+ #rpcDebugFlagVal = '143'
+ #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
+ #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
+ #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
+ #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
+ #print ("===================: ", updatecfgDict)
+
+ def init(self, conn, logSql):
+        tdLog.debug(f"start to execute {__file__}")
+ #tdSql.init(conn.cursor())
+ tdSql.init(conn.cursor(), logSql) # output sql.txt file
+
+ def getBuildPath(self):
+ selfPath = os.path.dirname(os.path.realpath(__file__))
+
+ if ("community" in selfPath):
+ projPath = selfPath[:selfPath.find("community")]
+ else:
+ projPath = selfPath[:selfPath.find("tests")]
+
+ for root, dirs, files in os.walk(projPath):
+ if ("taosd" in files):
+ rootRealPath = os.path.dirname(os.path.realpath(root))
+ if ("packaging" not in rootRealPath):
+ buildPath = root[:len(root) - len("/build/bin")]
+ break
+ return buildPath
+
+ def newcur(self,cfg,host,port):
+ user = "root"
+ password = "taosdata"
+ con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port)
+ cur=con.cursor()
+ print(cur)
+ return cur
+
+ def initConsumerTable(self,cdbName='cdb'):
+ tdLog.info("create consume database, and consume info table, and consume result table")
+ tdSql.query("create database if not exists %s vgroups 1"%(cdbName))
+ tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
+ tdSql.query("drop table if exists %s.consumeresult "%(cdbName))
+
+ tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
+ tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)
+
+ def initConsumerInfoTable(self,cdbName='cdb'):
+ tdLog.info("drop consumeinfo table")
+ tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
+ tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
+
+ def insertConsumerInfo(self,consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifmanualcommit,cdbName='cdb'):
+ sql = "insert into %s.consumeinfo values "%cdbName
+ sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
+ tdLog.info("consume info sql: %s"%sql)
+ tdSql.query(sql)
+
+ def selectConsumeResult(self,expectRows,cdbName='cdb'):
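+        # Poll the consumeresult table until expectRows rows have been written
+        # by the consumers, then return the consumed row count of each row.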
+ resultList=[]
+ while 1:
+ tdSql.query("select * from %s.consumeresult"%cdbName)
+ #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3))
+ if tdSql.getRows() == expectRows:
+ break
+ else:
+ time.sleep(5)
+
+ for i in range(expectRows):
+ tdLog.info ("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3)))
+ resultList.append(tdSql.getData(i , 3))
+
+ return resultList
+
+ def startTmqSimProcess(self,buildPath,cfgPath,pollDelay,dbName,showMsg=1,showRow=1,cdbName='cdb',valgrind=0):
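+        # Launch the tmq_sim consumer in the background; when valgrind == 1 it
+        # runs under valgrind memcheck with the log written to
+        # ../log/valgrind-tmq.log relative to cfgPath.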
+ shellCmd = 'nohup '
+ if valgrind == 1:
+ logFile = cfgPath + '/../log/valgrind-tmq.log'
+ shellCmd = 'nohup valgrind --log-file=' + logFile
+            shellCmd += ' --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '
+
+ shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath
+ shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
+ shellCmd += "> /dev/null 2>&1 &"
+ tdLog.info(shellCmd)
+ os.system(shellCmd)
+
+ def create_database(self,tsql, dbName,dropFlag=1,vgroups=4,replica=1):
+ if dropFlag == 1:
+ tsql.execute("drop database if exists %s"%(dbName))
+
+ tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica))
+ tdLog.debug("complete to create database %s"%(dbName))
+ return
+
+ def create_stable(self,tsql, dbName,stbName):
+ tsql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(dbName, stbName))
+ tdLog.debug("complete to create %s.%s" %(dbName, stbName))
+ return
+
+ def create_ctables(self,tsql, dbName,stbName,ctbNum):
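+        # Create ctbNum child tables of stbName, flushing the create statement
+        # every 100 tables to keep the SQL length bounded.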
+ tsql.execute("use %s" %dbName)
+ pre_create = "create table"
+ sql = pre_create
+ #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
+ for i in range(ctbNum):
+ sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1)
+ if (i > 0) and (i%100 == 0):
+ tsql.execute(sql)
+ sql = pre_create
+ if sql != pre_create:
+ tsql.execute(sql)
+
+ tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName))
+ return
+
+ def insert_data(self,tsql,dbName,stbName,ctbNum,rowsPerTbl,batchNum,startTs=0):
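+        # Insert rowsPerTbl rows into each child table, batching batchNum rows
+        # per SQL statement; startTs defaults to the current time in ms.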
+ tdLog.debug("start to insert data ............")
+ tsql.execute("use %s" %dbName)
+ pre_insert = "insert into "
+ sql = pre_insert
+
+ if startTs == 0:
+ t = time.time()
+ startTs = int(round(t * 1000))
+
+ #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
+ rowsOfSql = 0
+ for i in range(ctbNum):
+ sql += " %s_%d values "%(stbName,i)
+ for j in range(rowsPerTbl):
+ sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
+ rowsOfSql += 1
+ if (j > 0) and ((rowsOfSql == batchNum) or (j == rowsPerTbl - 1)):
+ tsql.execute(sql)
+ rowsOfSql = 0
+ if j < rowsPerTbl - 1:
+ sql = "insert into %s_%d values " %(stbName,i)
+ else:
+ sql = "insert into "
+ #end sql
+ if sql != pre_insert:
+ #print("insert sql:%s"%sql)
+ tsql.execute(sql)
+ tdLog.debug("insert data ............ [OK]")
+ return
+
+ def prepareEnv(self, **parameterDict):
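+        # Dispatch on parameterDict["actionType"]: create the database, the
+        # super table, the child tables, or insert data, using a dedicated
+        # connection so it can run in a worker thread.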
+        # create a new connection for this thread
+ tsql=self.newcur(parameterDict['cfg'], 'localhost', 6030)
+
+ if parameterDict["actionType"] == actionType.CREATE_DATABASE:
+ self.create_database(tsql, parameterDict["dbName"])
+ elif parameterDict["actionType"] == actionType.CREATE_STABLE:
+ self.create_stable(tsql, parameterDict["dbName"], parameterDict["stbName"])
+ elif parameterDict["actionType"] == actionType.CREATE_CTABLE:
+ self.create_ctables(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ elif parameterDict["actionType"] == actionType.INSERT_DATA:
+ self.insert_data(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],parameterDict["batchNum"])
+ else:
+            tdLog.exit("unsupported action: ", parameterDict["actionType"])
+
+ return
+
+ def tmqCase1(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 1: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db1', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 0
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 100
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ time.sleep(5)
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("insert process end, and start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 1 end ...... ")
+
+ def tmqCase2(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 2: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db2', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ parameterDict2 = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db2', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb2', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict2['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_stable(tdSql, parameterDict2["dbName"], parameterDict2["stbName"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 0
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 100
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start create child tables of stb1 and stb2")
+ parameterDict['actionType'] = actionType.CREATE_CTABLE
+ parameterDict2['actionType'] = actionType.CREATE_CTABLE
+
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
+ prepareEnvThread2.start()
+
+ prepareEnvThread.join()
+ prepareEnvThread2.join()
+
+ tdLog.info("start insert data into child tables of stb1 and stb2")
+ parameterDict['actionType'] = actionType.INSERT_DATA
+ parameterDict2['actionType'] = actionType.INSERT_DATA
+
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
+ prepareEnvThread2.start()
+
+ prepareEnvThread.join()
+ prepareEnvThread2.join()
+
+ tdLog.info("insert process end, and start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 2 end ...... ")
+
+ def tmqCase3(self, cfgPath, buildPath):
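+        # Consume from the topic while several child tables are dropped during
+        # consumption; the consumed row count is expected to fall between the
+        # rows of the remaining tables and the original total.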
+ tdLog.printNoPrefix("======== test case 3: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db3', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 20000, \
+ 'batchNum': 50, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 0
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ time.sleep(3)
+        tdLog.info("drop some child tables of stb1")
+ dropTblNum = 4
+ tdSql.query("drop table if exists %s.%s_1"%(parameterDict["dbName"], parameterDict["stbName"]))
+ tdSql.query("drop table if exists %s.%s_2"%(parameterDict["dbName"], parameterDict["stbName"]))
+ tdSql.query("drop table if exists %s.%s_3"%(parameterDict["dbName"], parameterDict["stbName"]))
+ tdSql.query("drop table if exists %s.%s_4"%(parameterDict["dbName"], parameterDict["stbName"]))
+
+ tdLog.info("drop some child tables, then start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ remaindrowcnt = parameterDict["rowsPerTbl"] * (parameterDict["ctbNum"] - dropTblNum)
+
+ if not (totalConsumeRows < expectrowcnt and totalConsumeRows > remaindrowcnt):
+ tdLog.info("act consume rows: %d, expect consume rows: between %d and %d"%(totalConsumeRows, remaindrowcnt, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 3 end ...... ")
+
+ def tmqCase4(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 4: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db4', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt/4:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 1
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 4 end ...... ")
+
+ def tmqCase5(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 5: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db5', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 0
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt/4:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 1
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != (expectrowcnt * (1 + 1/4)):
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*(1+1/4)))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 5 end ...... ")
+
+ def tmqCase6(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 6: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db6', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt/4:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:latest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 6 end ...... ")
+
+ def tmqCase7(self, cfgPath, buildPath):
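+        # With auto.offset.reset:latest and no data inserted after subscribing,
+        # both consumers are expected to consume 0 rows.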
+ tdLog.printNoPrefix("======== test case 7: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db7', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:latest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != 0:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 1
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != 0:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 7 end ...... ")
+
+ def tmqCase8(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 8: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db8', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:latest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume 0 processor")
+ pollDelay = 10
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume 0 result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != 0:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+ tdLog.exit("tmq consume rows error!")
+
+ tdLog.info("start consume 1 processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start one new thread to insert data")
+ parameterDict['actionType'] = actionType.INSERT_DATA
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread.join()
+
+ tdLog.info("start to check consume 0 and 1 result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdLog.info("start consume 2 processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start one new thread to insert data")
+ parameterDict['actionType'] = actionType.INSERT_DATA
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread.join()
+
+ tdLog.info("start to check consume 0 and 1 and 2 result")
+ expectRows = 3
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt*2:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 8 end ...... ")
+
+ def tmqCase9(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 9: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db9', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:latest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume 0 processor")
+ pollDelay = 10
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume 0 result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != 0:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+ tdLog.exit("tmq consume rows error!")
+
+ tdLog.info("start consume 1 processor")
+ self.initConsumerInfoTable()
+ consumerId = 1
+ ifManualCommit = 0
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start one new thread to insert data")
+ parameterDict['actionType'] = actionType.INSERT_DATA
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread.join()
+
+ tdLog.info("start to check consume 0 and 1 result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdLog.info("start consume 2 processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start one new thread to insert data")
+ parameterDict['actionType'] = actionType.INSERT_DATA
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread.join()
+
+ tdLog.info("start to check consume 0 and 1 and 2 result")
+ expectRows = 3
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt*2:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 9 end ...... ")
+
+ def tmqCase10(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 10: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db10', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:latest'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume 0 processor")
+ pollDelay = 10
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume 0 result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != 0:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+ tdLog.exit("tmq consume rows error!")
+
+ tdLog.info("start consume 1 processor")
+ self.initConsumerInfoTable()
+ consumerId = 1
+ ifManualCommit = 1
+ self.insertConsumerInfo(consumerId, expectrowcnt-10000,topicList,keyList,ifcheckdata,ifManualCommit)
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start one new thread to insert data")
+ parameterDict['actionType'] = actionType.INSERT_DATA
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread.join()
+
+ tdLog.info("start to check consume 0 and 1 result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt-10000:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt-10000))
+ tdLog.exit("tmq consume rows error!")
+
+ tdLog.info("start consume 2 processor")
+ self.initConsumerInfoTable()
+ consumerId = 2
+ ifManualCommit = 1
+ self.insertConsumerInfo(consumerId, expectrowcnt+10000,topicList,keyList,ifcheckdata,ifManualCommit)
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start one new thread to insert data")
+ parameterDict['actionType'] = actionType.INSERT_DATA
+ prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+ prepareEnvThread.start()
+ prepareEnvThread.join()
+
+ tdLog.info("start to check consume 0 and 1 and 2 result")
+ expectRows = 3
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt*2:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 10 end ...... ")
+
+ def tmqCase11(self, cfgPath, buildPath):
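+        # With auto.offset.reset:none and no previously committed offset, the
+        # consumer cannot start consuming, so 0 rows are expected both times.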
+ tdLog.printNoPrefix("======== test case 11: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db11', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:none'
+ self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != 0:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:none'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != 0:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 11 end ...... ")
+
+ def tmqCase12(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 12: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db12', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 0
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt/4:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:none'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt/4:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 12 end ...... ")
+
+ def tmqCase13(self, cfgPath, buildPath):
+ tdLog.printNoPrefix("======== test case 13: ")
+
+ self.initConsumerTable()
+
+ # create and start thread
+ parameterDict = {'cfg': '', \
+ 'actionType': 0, \
+ 'dbName': 'db13', \
+ 'dropFlag': 1, \
+ 'vgroups': 4, \
+ 'replica': 1, \
+ 'stbName': 'stb1', \
+ 'ctbNum': 10, \
+ 'rowsPerTbl': 10000, \
+ 'batchNum': 100, \
+ 'startTs': 1640966400000} # 2022-01-01 00:00:00.000
+ parameterDict['cfg'] = cfgPath
+
+ self.create_database(tdSql, parameterDict["dbName"])
+ self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+ self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+ self.insert_data(tdSql,\
+ parameterDict["dbName"],\
+ parameterDict["stbName"],\
+ parameterDict["ctbNum"],\
+ parameterDict["rowsPerTbl"],\
+ parameterDict["batchNum"])
+
+ tdLog.info("create topics from stb1")
+ topicFromStb1 = 'topic_stb1'
+
+ tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+ consumerId = 0
+ expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+ topicList = topicFromStb1
+ ifcheckdata = 0
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:earliest'
+ self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("start consume processor")
+ pollDelay = 5
+ showMsg = 1
+ showRow = 1
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("start to check consume result")
+ expectRows = 1
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt/4:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 1
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:none'
+ self.insertConsumerInfo(consumerId, expectrowcnt/2,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 2
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt*(1/2+1/4):
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*(1/2+1/4)))
+ tdLog.exit("tmq consume rows error!")
+
+ self.initConsumerInfoTable()
+ consumerId = 2
+ ifManualCommit = 1
+ keyList = 'group.id:cgrp1,\
+ enable.auto.commit:false,\
+ auto.commit.interval.ms:6000,\
+ auto.offset.reset:none'
+ self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+ tdLog.info("again start consume processor")
+ self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+ tdLog.info("again check consume result")
+ expectRows = 3
+ resultList = self.selectConsumeResult(expectRows)
+ totalConsumeRows = 0
+ for i in range(expectRows):
+ totalConsumeRows += resultList[i]
+
+ if totalConsumeRows != expectrowcnt:
+ tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
+ tdLog.exit("tmq consume rows error!")
+
+ tdSql.query("drop topic %s"%topicFromStb1)
+
+ tdLog.printNoPrefix("======== test case 13 end ...... ")
+
+ def run(self):
+ tdSql.prepare()
+
+ buildPath = self.getBuildPath()
+ if (buildPath == ""):
+ tdLog.exit("taosd not found!")
+ else:
+ tdLog.info("taosd found in %s" % buildPath)
+ cfgPath = buildPath + "/../sim/psim/cfg"
+ tdLog.info("cfgPath: %s" % cfgPath)
+
+ # self.tmqCase1(cfgPath, buildPath)
+ # self.tmqCase2(cfgPath, buildPath)
+ self.tmqCase3(cfgPath, buildPath)
+ self.tmqCase4(cfgPath, buildPath)
+ self.tmqCase5(cfgPath, buildPath)
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success(f"{__file__} successfully executed")
+
+event = threading.Event()
+
+tdCases.addLinux(__file__, TDTestCase())
+tdCases.addWindows(__file__, TDTestCase())
diff --git a/tests/system-test/fulltest.sh b/tests/system-test/fulltest.sh
index 275f63007c..01b8b56903 100755
--- a/tests/system-test/fulltest.sh
+++ b/tests/system-test/fulltest.sh
@@ -8,6 +8,8 @@ python3 ./test.py -f 0-others/taosShellNetChk.py
python3 ./test.py -f 0-others/telemetry.py
python3 ./test.py -f 0-others/taosdMonitor.py
python3 ./test.py -f 0-others/udfTest.py
+python3 ./test.py -f 0-others/udf_create.py
+python3 ./test.py -f 0-others/udf_restart_taosd.py
python3 ./test.py -f 0-others/user_control.py
python3 ./test.py -f 0-others/fsync.py
@@ -25,6 +27,7 @@ python3 ./test.py -f 2-query/join.py
python3 ./test.py -f 2-query/cast.py
python3 ./test.py -f 2-query/concat.py
python3 ./test.py -f 2-query/concat_ws.py
+python3 ./test.py -f 2-query/check_tsdb.py
# python3 ./test.py -f 2-query/union.py
# python3 ./test.py -f 2-query/union2.py
# python3 ./test.py -f 2-query/union3.py
@@ -60,11 +63,13 @@ python3 ./test.py -f 2-query/arccos.py
python3 ./test.py -f 2-query/arctan.py
python3 ./test.py -f 2-query/query_cols_tags_and_or.py
python3 ./test.py -f 2-query/nestedQuery.py
+
python3 ./test.py -f 7-tmq/basic5.py
python3 ./test.py -f 7-tmq/subscribeDb.py
python3 ./test.py -f 7-tmq/subscribeDb1.py
python3 ./test.py -f 7-tmq/subscribeStb.py
+python3 ./test.py -f 7-tmq/subscribeStb0.py
python3 ./test.py -f 7-tmq/subscribeStb1.py
python3 ./test.py -f 7-tmq/subscribeStb2.py
diff --git a/tests/test/c/sdbDump.c b/tests/test/c/sdbDump.c
index 1d3eba7cde..2a19ae778f 100644
--- a/tests/test/c/sdbDump.c
+++ b/tests/test/c/sdbDump.c
@@ -16,7 +16,7 @@
#define _DEFAULT_SOURCE
#include "dmMgmt.h"
#include "mndInt.h"
-#include "sdbInt.h"
+#include "sdb.h"
#include "tconfig.h"
#include "tjson.h"
diff --git a/tests/test/c/tmqSim.c b/tests/test/c/tmqSim.c
index e0f58d052f..accd1dd080 100644
--- a/tests/test/c/tmqSim.c
+++ b/tests/test/c/tmqSim.c
@@ -321,9 +321,16 @@ int32_t saveConsumeResult(SThreadInfo* pInfo) {
TAOS* pConn = taos_connect(NULL, "root", "taosdata", NULL, 0);
assert(pConn != NULL);
+ int64_t now = taosGetTimestampMs();
+
// schema: ts timestamp, consumerid int, consummsgcnt bigint, checkresult int
- sprintf(sqlStr, "insert into %s.consumeresult values (now, %d, %" PRId64 ", %" PRId64 ", %d)", g_stConfInfo.cdbName,
- pInfo->consumerId, pInfo->consumeMsgCnt, pInfo->consumeRowCnt, pInfo->checkresult);
+ sprintf(sqlStr, "insert into %s.consumeresult values (%"PRId64", %d, %" PRId64 ", %" PRId64 ", %d)",
+ g_stConfInfo.cdbName,
+ now,
+ pInfo->consumerId,
+ pInfo->consumeMsgCnt,
+ pInfo->consumeRowCnt,
+ pInfo->checkresult);
char tmpString[128];
taosFprintfFile(g_fp, "%s, consume id %d result: %s\n", getCurrentTimeString(tmpString), pInfo->consumerId ,sqlStr);
diff --git a/tools/taos-tools b/tools/taos-tools
index 0aad27d725..a8bb88c905 160000
--- a/tools/taos-tools
+++ b/tools/taos-tools
@@ -1 +1 @@
-Subproject commit 0aad27d725f4ee6b18daf1db0c07d933aed16eea
+Subproject commit a8bb88c9056735919fc50bf9b12d9562f17e844f